Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong
2016-09-01
The osmotic pressure of glucose solution over a wide concentration range was calculated using the ASOG model and experimentally determined by our newly reported air humidity osmometry. The air humidity osmometry measurements were compared with well-established freezing point osmometry and with ASOG model calculations at low concentrations, and with ASOG model calculations alone at high concentrations, where no standard experimental method can serve as a reference. Results indicate that air humidity osmometry measurements are comparable to ASOG model calculations over a wide concentration range, while at low concentrations freezing point osmometry measurements agree more closely with the ASOG model calculations.
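The comparison above rests on a compact thermodynamic link: the water activity a_w, which a humidity-based osmometer effectively senses, maps to osmotic pressure via π = -(RT/V̄_w) ln a_w. A minimal Python sketch of that conversion follows; the activity values are illustrative placeholders, and the ASOG activity-coefficient machinery itself is not reproduced here.

```python
import math

R = 8.314       # J/(mol*K), universal gas constant
V_W = 1.8e-5    # m^3/mol, approximate molar volume of liquid water

def osmotic_pressure(a_w: float, T: float = 298.15) -> float:
    """Osmotic pressure (Pa) from water activity a_w at temperature T (K)."""
    return -(R * T / V_W) * math.log(a_w)

# Hypothetical water activities for increasingly concentrated glucose solutions
for a_w in (0.995, 0.98, 0.90):
    print(f"a_w = {a_w:5.3f} -> pi = {osmotic_pressure(a_w) / 1e6:5.2f} MPa")
```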
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, making it difficult to modify the simulation conditions, target computation resources or calculation methods. Complex biological function simulation software comprises 1) model equations, 2) boundary conditions and 3) calculation schemes. A description model file is useful for the first point and partly for the second; the third point, however, is difficult to handle for the various calculation schemes required by simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
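The abstract does not spell out the generated code, so the following Python sketch only illustrates the kind of sequential, weakly coupled calculation scheme such a generator might emit for three elementary models; all model equations, names and constants here are hypothetical.

```python
import math

# Hypothetical elementary models: each advances its own state one time step,
# reading variables produced by the others (a sequential coupling scheme).
def step_membrane(V, I_ion, dt):          # model 1: membrane potential
    return V - dt * I_ion

def step_gate(g, V, dt):                  # model 2: channel gating variable
    g_inf = 1.0 / (1.0 + math.exp(-(V + 40.0) / 10.0))
    return g + dt * (g_inf - g) / 5.0

def ionic_current(g, V):                  # model 3: algebraic current model
    return g * (V - (-85.0))

V, g, dt = -80.0, 0.1, 0.01
for _ in range(10000):                    # generated coupling loop
    I = ionic_current(g, V)               # model 3 reads models 1 and 2
    V = step_membrane(V, I, dt)           # model 1 reads model 3's output
    g = step_gate(g, V, dt)               # model 2 reads the updated model 1
print(f"V = {V:.2f}, g = {g:.3f}")
```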
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion has been widely applied in groundwater simulation. Compared to traditional forward modeling, inversion modeling offers more room for study. Zonation and cell-by-cell inversion are the conventional methods; the pilot point method lies between them. The traditional zonation method often uses software to divide the model into several zones so that only a few parameters need to be inversed; however, the resulting distribution is usually too simple, and the simulation deviates accordingly. Cell-by-cell inversion yields the most realistic parameter distribution in theory, but it greatly increases the computational burden and requires a large quantity of survey data for geostatistical simulation of the area. Compared to these methods, the pilot point method distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by Kriging, which preserves the heterogeneity of parameters within geological units. It reduces the geostatistical data requirements for the simulated area and bridges the gap between the above methods. Pilot points not only save calculation time and improve the goodness of fit, but also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we apply pilot points to a field whose structural heterogeneity and hydraulic parameters were unknown, compare the inversion results of the zonation and pilot point methods, and through comparative analysis explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field for a given geostatistical model from the description of the case site, using the software Groundwater Vistas 6. Kriging is defined to obtain the values of the field function (hydraulic conductivity) over the model domain on the basis of its values at measurement and pilot point locations; we then assign pilot points to the interpolated field, which has been divided into 4 zones, and add a range of disturbance values to the inversion targets to calculate the hydraulic conductivity. Third, through inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. After the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure formation, the results of the pilot point method are more realistic: better fitting of parameters and more stable numerical simulation (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
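As a rough illustration of the interpolation step described above, the sketch below assigns log-conductivity from a handful of pilot points to model cells with a simple Gaussian-covariance, kriging-style weighting. It is a toy stand-in for the Groundwater Vistas/PEST workflow; the coordinates, conductivity values and correlation length are invented.

```python
import numpy as np

def krige_to_cells(cell_xy, pilot_xy, pilot_logK, corr_len=250.0):
    """Kriging-style assignment of log-conductivity from pilot points to
    model cells with a Gaussian covariance model (illustrative only)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return np.exp(-(d / corr_len) ** 2)
    C_pp = cov(pilot_xy, pilot_xy) + 1e-10 * np.eye(len(pilot_xy))
    weights = cov(cell_xy, pilot_xy) @ np.linalg.inv(C_pp)  # per-cell weights
    return weights @ pilot_logK

pilots = np.array([[100.0, 200.0], [400.0, 250.0], [300.0, 600.0]])
logK = np.log10(np.array([5e-4, 2e-5, 8e-5]))       # hypothetical K values
cells = np.stack(np.meshgrid(np.arange(0.0, 500.0, 50.0),
                             np.arange(0.0, 700.0, 50.0)), -1).reshape(-1, 2)
K_field = 10 ** krige_to_cells(cells, pilots, logK)  # K for every model cell
```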
Monte Carlo based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, R.; Waris, A.; Viridi, S.
2014-09-01
There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely the macroscopic approach and the microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the process does not completely represent the real fission process in nature. The toy model is formed by Gaussian distributions of random numbers that randomize distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is repeated by changing σL and σR randomly.
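A compact numerical sketch of the overlay idea may help. The rule used below for deciding which centre "traps" a particle (closeness measured in units of each centre's own σ), the convergence test and all parameter values are our own illustrative choices; the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Compound-nucleus particles are sampled around a central point at 0.
mu_CN, sigma_CN = 0.0, 4.0
# "Smash" the centre into left and right centres with randomly varied widths.
mu_L, mu_R = -3.0, 3.0
sigma_L, sigma_R = rng.uniform(1.0, 3.0, size=2)

prev = None
for _ in range(1000):                        # iterate until (N_L, N_R) settle
    particles = rng.normal(mu_CN, sigma_CN, size=2000)
    zL = np.abs(particles - mu_L) / sigma_L  # closeness to each new centre,
    zR = np.abs(particles - mu_R) / sigma_R  # in units of its own sigma
    N_L, N_R = int(np.sum(zL < zR)), int(np.sum(zR <= zL))
    if prev and abs(N_L - prev[0]) < 5 and abs(N_R - prev[1]) < 5:
        break
    prev = (N_L, N_R)

print(N_L, N_R)    # toy "fragment" particle numbers
```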
Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location
NASA Astrophysics Data System (ADS)
Zhao, A. H.
2014-12-01
Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One of the methods for calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci of satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from the nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the focal locus segments are then calculated with the minimum traveltime tree ray-tracing algorithm by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.
Point kernel calculations of skyshine exposure rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roseberry, M.L.; Shultis, J.K.
1982-02-01
A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a ⁶⁰Co source for distances out to 700 m.
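The essence of any point-kernel estimate is the attenuated inverse-square kernel multiplied by a buildup factor. The sketch below shows that kernel for a direct path; it is not the paper's skyshine model, which additionally treats the atmospheric reflection and collimation geometry, and the source strength, attenuation coefficient and buildup form are illustrative.

```python
import math

def point_kernel_rate(S, mu, r, buildup=lambda mur: 1.0 + mur):
    """Uncollided-plus-buildup photon flux (photons/cm^2/s) at distance r (cm)
    from an isotropic point source of strength S (photons/s); mu in 1/cm.
    The linear buildup factor is a simple illustrative choice."""
    mur = mu * r
    return S * buildup(mur) * math.exp(-mur) / (4.0 * math.pi * r * r)

# Illustrative numbers only: ~1 Ci of 1.25 MeV photons attenuating in air.
S, mu = 3.7e10, 7.0e-5
for r_m in (100, 300, 700):
    phi = point_kernel_rate(S, mu, r_m * 100.0)
    print(f"r = {r_m:3d} m -> flux ~ {phi:9.3e} photons/cm^2/s")
```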
3D Surface Reconstruction and Volume Calculation of Rills
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.
2015-04-01
We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, as implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18 meter long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only saves a great deal of time; it also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from every interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, matches feature points across all image pairs, recovers the camera positions and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low contrast of the surface, too much motion blur), the sharpness-based selection yields many more matching features. Hence the point densities of the 3D models are increased, which sharpens the difference calculations.
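A derivative-based sharpness metric of the kind mentioned can be implemented in a few lines. The sketch below scores frames by mean squared intensity gradient and keeps the sharpest of every 15-frame window; the exact metric used by the authors may differ.

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Derivative-based sharpness metric: mean squared intensity gradient."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(gx * gx + gy * gy))

def pick_sharpest(frames, window=15):
    """Return the sharpest frame of every `window` consecutive frames."""
    best = []
    for i in range(0, len(frames), window):
        best.append(max(frames[i:i + window], key=sharpness))
    return best

# Hypothetical stand-in for decoded video frames (grayscale arrays).
frames = [np.random.rand(480, 640) for _ in range(60)]
keyframes = pick_sharpest(frames)      # frames passed on to VisualSfM
```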
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point source pollution and its distribution over the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified and a soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area; the calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed from the simulated results. The calculated loadings differ dramatically between hydrologic years: the loading of non-point source pollution is relatively large in the wet year and small in the dry year, since non-point source pollutants are mainly transported by runoff. Within a year, the pollution loading is mainly produced in the flood season. Because SWAT is a distributed model, its output can be viewed as it varies across the basin, so the critical areas and reaches in the study area can be identified. According to the simulation results, different land uses yield different loadings, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for Panjiakou Reservoir are presented according to the analysis of the model calculation results.
Theoretical relation between halo current-plasma energy displacement/deformation in EAST
NASA Astrophysics Data System (ADS)
Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen
2018-04-01
In this paper, a theoretical model for calculating halo current has been developed. The work is novel in that no theoretical calculation of halo current has been reported so far; this is the first use of a theoretical approach. The research started by calculating points for plasma energy in terms of poloidal and toroidal magnetic field orientations, and was extended to calculate the halo current and develop the theoretical model. Two cases were considered for analyzing the plasma energy as it flows downward/upward to the divertor. Poloidal as well as toroidal movement of the plasma energy was investigated and the corresponding mathematical formulations were derived. Two conducting points with respect to (R, Z) were calculated for the halo current calculations and derivations. At first, the halo current was established on the outer plate in the clockwise direction. The maximum halo current was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to establish a theoretical relation with experimental results so as to precautionarily evaluate plasma behavior in any tokamak.
Critical points of metal vapors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khomkin, A. L., E-mail: alhomkin@mail.ru; Shumikhin, A. S.
2015-09-15
A new method is proposed for calculating the parameters of critical points and binodals for the vapor–liquid (insulator–metal) phase transition in vapors of metals with multielectron valence shells. The method is based on a model developed earlier for the vapors of alkali metals, atomic hydrogen, and exciton gas, proceeding from the assumption that the cohesion determining the basic characteristics of metals under normal conditions is also responsible for their properties in the vicinity of the critical point. It is proposed to calculate the cohesion of multielectron atoms using well-known scaling relations for the binding energy, which are constructed for most metals in the periodic table by processing the results of many numerical calculations. The adopted model allows the parameters of critical points and binodals for the vapor–liquid phase transition in metal vapors to be calculated using published data on the properties of metals under normal conditions. The parameters of critical points have been calculated for a large number of metals and show satisfactory agreement with experimental data for alkali metals and with available estimates for all other metals. Binodals of metals have been calculated for the first time.
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1993-01-01
Distributed Point Charge Models (PCM) for CO, (H2O)2, and HS-SH molecules have been computed from analytical expressions using multi-center multipole moments. The point charges (set of charges including both atomic and non-atomic positions) exactly reproduce both molecular and segmental multipole moments, thus constituting an accurate representation of the local anisotropy of electrostatic properties. In contrast to other known point charge models, PCM can be used to calculate not only intermolecular, but also intramolecular interactions. Comparison of these results with more accurate calculations demonstrated that PCM can correctly represent both weak and strong (intramolecular) interactions, thus indicating the merit of extending PCM to obtain improved potentials for molecular mechanics and molecular dynamics computational methods.
a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods, which can only measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
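The distance computation at the core of DistMC is a nearest-neighbour query. A minimal sketch, assuming a KD-tree for the queries and leaving the paper's specific weighting scheme as a user-supplied vector:

```python
import numpy as np
from scipy.spatial import cKDTree

def dist_model_to_cloud(model_pts, cloud_pts, weights=None):
    """DistMC-style measure: (weighted) mean of nearest-neighbour distances
    from points sampled on the model to the point cloud."""
    d, _ = cKDTree(cloud_pts).query(model_pts)
    w = np.ones(len(model_pts)) if weights is None else np.asarray(weights)
    return float(np.sum(w * d) / np.sum(w))

def sim_mc(model_area, model_pts, cloud_pts, weights=None):
    """SimMC as a ratio of (weighted) model surface area to DistMC; larger
    means better agreement. The normalisation here is our own choice."""
    return model_area / dist_model_to_cloud(model_pts, cloud_pts, weights)

model_pts = np.random.rand(5000, 3)                      # samples on the model
cloud_pts = model_pts + 0.01 * np.random.randn(5000, 3)  # noisy "scan"
print(sim_mc(1.0, model_pts, cloud_pts))
```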
Flash Points of Secondary Alcohol and n-Alkane Mixtures.
Esina, Zoya N; Miroshnikov, Alexander M; Korchuganova, Margarita R
2015-11-19
The flash point is one of the most important characteristics used to assess the ignition hazard of mixtures of flammable liquids. To determine the flash points of mixtures of secondary alcohols with n-alkanes, it is necessary to calculate the activity coefficients. In this paper, we use a model that allows us to obtain enthalpy of fusion and enthalpy of vaporization data of the pure components to calculate the liquid-solid equilibrium (LSE) and vapor-liquid equilibrium (VLE). Enthalpy of fusion and enthalpy of vaporization data of secondary alcohols in the literature are limited; thus, the prediction of these characteristics was performed using the method of thermodynamic similarity. Additionally, the empirical models provided the critical temperatures and boiling temperatures of the secondary alcohols. The modeled melting enthalpy and enthalpy of vaporization as well as the calculated LSE and VLE flash points were determined for the secondary alcohol and n-alkane mixtures.
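Mixture flash points of this kind are commonly found by solving a Le Chatelier/Liaw-type criterion, Σ x_i γ_i P_i^sat(T)/P_i^sat(T_fp,i) = 1, for T. The sketch below does this with ideal-solution activity coefficients (γ_i = 1) as a stand-in for the paper's VLE/LSE-derived values; the Antoine constants and pure-component flash points are illustrative, not measured data.

```python
from scipy.optimize import brentq

def p_sat(T, A, B, C):
    """Antoine equation, T in K, pressure in mmHg (constants hypothetical)."""
    return 10 ** (A - B / (T + C))

# (Antoine A, B, C, pure-component flash point K) -- illustrative values only
comps = {"2-butanol": (7.20, 1441.0, -59.0, 297.0),
         "n-decane":  (6.94, 1495.2, -78.7, 319.0)}
x = {"2-butanol": 0.4, "n-decane": 0.6}
gamma = {k: 1.0 for k in comps}   # ideal-solution stand-in for the activity
                                  # coefficients computed from VLE/LSE models

def residual(T):
    """Le Chatelier criterion: flammable-vapour contributions sum to 1."""
    s = sum(x[n] * gamma[n] * p_sat(T, A, B, C) / p_sat(Tfp, A, B, C)
            for n, (A, B, C, Tfp) in comps.items())
    return s - 1.0

T_flash = brentq(residual, 250.0, 400.0)
print(f"estimated mixture flash point ~ {T_flash:.1f} K")
```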
NASA Astrophysics Data System (ADS)
Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna
2018-05-01
As is well known, the armature current leads the back electromotive force (back-EMF) under load in an interior permanent magnet (PM) machine. This advanced armature current produces a demagnetizing field, which can easily cause irreversible demagnetization in the PMs. To estimate the working points of the PMs more accurately and account for demagnetization in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which makes a more precise model of the flux path in the rotor pole shoe possible. The working point of each PM under each magnetic pole can be calculated accurately by the improved equivalent magnetic network model. The calculated results are compared with those obtained by FEM, and the effects of the d-axis and q-axis components of the armature current, the air-gap length and the flux barrier size on the working points of the PMs are analyzed with the improved equivalent magnetic network model.
A user-friendly modified pore-solid fractal model
Ding, Dian-yuan; Zhao, Ying; Feng, Hao; Si, Bing-cheng; Hill, Robert Lee
2016-01-01
The primary objective of this study was to evaluate a range of calculation points on water retention curves (WRC), instead of the single point at air-entry suction, in the pore-solid fractal (PSF) model, additionally considering the hysteresis effect based on PSF theory. The modified pore-solid fractal (M-PSF) model was tested using 26 soil samples from Yangling on the Loess Plateau in China and 54 soil samples from the Unsaturated Soil Hydraulic Database. The derivation results showed that the M-PSF model is user-friendly and flexible for a wide range of calculation point options. The model theoretically describes the primary differences between the soil moisture desorption and adsorption processes through the fractal dimensions. The M-PSF model demonstrated good performance, particularly at calculation points corresponding to suctions from 100 cm to 1000 cm. Furthermore, the M-PSF model, using the fractal dimension of the particle size distribution, exhibited acceptable performance in WRC prediction for soils of different textures when the suction values were ≥100 cm. To fully understand the function of hysteresis in PSF theory, the role of allowable and accessible pores must be examined. PMID:27996013
Free Energy, Enthalpy and Entropy from Implicit Solvent End-Point Simulations.
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2018-01-01
Free energy is the key quantity for describing the thermodynamics of biological systems. In this perspective we consider the calculation of free energy, enthalpy and entropy from end-point molecular dynamics simulations. Since the enthalpy may be calculated as the ensemble average over equilibrated simulation snapshots, the difficulties related to free energy calculation ultimately reduce to the calculation of the entropy of the system, in particular the solvent entropy. In the last two decades, implicit solvent models have been used to circumvent the problem and to take solvent entropy into account implicitly in the solvation terms. More recently, outstanding advances in both implicit solvent models and entropy calculations are making the goal of free energy estimation from end-point simulations more feasible than ever before. We briefly review the basic theory and discuss these advances in light of practical applications.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
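Layer-oriented CGH methods of this kind typically propagate each depth layer with an FFT-based diffraction kernel and sum the results. Below is a sketch using the angular-spectrum method, with invented layer depths and sampling parameters; the paper's exact diffraction formulation may differ.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field by distance z with the FFT-based
    angular-spectrum method, a common kernel for layer-oriented CGH."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pitch),
                         np.fft.fftfreq(ny, d=pitch))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.where(arg > 0, arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical gridded point cloud: one sparse layer per depth value.
wavelength, pitch, N = 532e-9, 8e-6, 512
layers = {0.10: np.zeros((N, N)), 0.11: np.zeros((N, N))}  # depths in metres
layers[0.10][200, 256] = 1.0   # points classified into the 0.10 m grid
layers[0.11][300, 300] = 1.0

hologram = np.zeros((N, N), dtype=complex)
for z, layer in layers.items():            # one FFT diffraction per grid
    hologram += angular_spectrum(layer.astype(complex), wavelength, pitch, z)
```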
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omar, M.S., E-mail: dr_m_s_omar@yahoo.com
2012-11-15
Graphical abstract: Three models are derived to explain the nanoparticle size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The figures shown as an example for Sn nanoparticles indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the interior, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm size nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume; it gives a good approximation of the size-dependent melting point from the bulk state down to nanoparticles of about 2 nm diameter. The values of lattice volume and melting point obtained for nanosized materials are then used to calculate the lattice thermal expansion by using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.
A frost formation model and its validation under various experimental conditions
NASA Technical Reports Server (NTRS)
Dietenberger, M. A.
1982-01-01
A numerical model used to calculate the frost properties for all regimes of frost growth is described. In the first regime of frost growth, the initial frost density and thickness were modeled from the theories of crystal growth. The 'frost point' temperature was modeled as a linear interpolation between the dew point temperature and the fog point temperature, based upon the nucleating capability of the particular condensing surface. For the second regime of frost growth, the diffusion model was adopted with the following enhancements: the generalized correlation of the water frost thermal conductivity was applied to practically all water frost layers, taking care to ensure that the calculated heat and mass transfer coefficients agreed with experimental measurements of the same coefficients.
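The frost point interpolation can be stated in one line. The sketch below encodes our reading of it, with a 0-1 "nucleating capability" weight whose convention (0 = fog point, 1 = dew point) is an assumption rather than something the abstract fixes.

```python
def frost_point(t_dew, t_fog, nucleation=0.5):
    """'Frost point' as a linear interpolation between the fog point and the
    dew point, weighted by a 0-1 nucleating capability of the condensing
    surface (0 -> fog point, 1 -> dew point). The weighting convention is
    our reading of the abstract, not a verified detail of the model."""
    return t_fog + nucleation * (t_dew - t_fog)

print(frost_point(t_dew=-8.0, t_fog=-15.0, nucleation=0.7))  # deg C example
```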
Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C
2011-09-01
An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed.
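The three-contribution structure amounts to a small linear model, which can be fitted by least squares once the descriptors are in hand. The sketch below shows that fitting step on synthetic placeholder descriptors and boiling points; it does not reproduce the paper's DFT-PCM calculations or its actual coefficients.

```python
import numpy as np

# Columns: effective-surface-area term, DFT-PCM solvation-energy term,
# planar-aromatic indicator (1/0). All values are synthetic placeholders;
# the paper derives them from structural formulae and DFT-PCM calculations.
X = np.array([[1.8, -25.0, 0],
              [2.4, -31.5, 0],
              [2.1, -48.0, 1],
              [3.0, -22.4, 0],
              [2.6, -55.1, 1]])
bp_obs = np.array([36.0, 98.0, 184.0, 126.0, 218.0])   # deg C, synthetic

A = np.hstack([X, np.ones((len(X), 1))])               # add an intercept
coef, *_ = np.linalg.lstsq(A, bp_obs, rcond=None)
bp_calc = A @ coef
r2 = 1 - np.sum((bp_obs - bp_calc)**2) / np.sum((bp_obs - bp_obs.mean())**2)
print(coef, round(r2, 3))
```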
Tight-binding modeling and low-energy behavior of the semi-Dirac point.
Banerjee, S; Singh, R R P; Pardo, V; Pickett, W E
2009-07-03
We develop a tight-binding model description of semi-Dirac electronic spectra, with highly anisotropic dispersion around point Fermi surfaces, recently discovered in electronic structure calculations of VO2-TiO2 nanoheterostructures. We contrast their spectral properties with the well-known Dirac points on the honeycomb lattice relevant to graphene layers and the spectra of bands touching each other in zero-gap semiconductors. We also consider the lowest order dispersion around one of the semi-Dirac points and calculate the resulting electronic energy levels in an external magnetic field. In spite of apparently similar electronic structures, Dirac and semi-Dirac systems support diverse low-energy physics.
Development of full wave code for modeling RF fields in hot non-uniform plasmas
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.
NASA Astrophysics Data System (ADS)
Zhang, Tianhe C.; Grill, Warren M.
2010-12-01
Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
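The point source approximation being tested reduces to the textbook monopole potential in a homogeneous isotropic volume conductor, V = I/(4πσr). A sketch evaluating it along a line of hypothetical axon nodes (the conductivity and geometry values are illustrative):

```python
import numpy as np

def v_point_source(I, sigma, xyz, src=(0.0, 0.0, 0.0)):
    """Extracellular potential (V) of a monopolar point current source I (A)
    in an infinite homogeneous isotropic medium of conductivity sigma (S/m):
    V = I / (4*pi*sigma*r)."""
    r = np.linalg.norm(np.asarray(xyz) - np.asarray(src), axis=-1)
    return I / (4.0 * np.pi * sigma * r)

# Potentials along a line of axon nodes 2 mm lateral to the source.
nodes = np.stack([np.full(21, 2e-3), np.zeros(21),
                  np.linspace(-5e-3, 5e-3, 21)], axis=-1)
print(v_point_source(I=-1e-3, sigma=0.2, xyz=nodes))  # cathodic 1 mA source
```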
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point cloud is generated automatically from the virtual model of the object. To improve the efficiency of the dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The geometric modeling capability of VRBAM was verified by simulating basic geometries, including a convex surface, a concave surface, a flat surface and their combination. The simulation results show that VRBAM is more flexible than other approaches and superior in modeling complex geometries. The computation time and dose rate results obtained with the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors.
Stability Test for Transient-Temperature Calculations
NASA Technical Reports Server (NTRS)
Campbell, W.
1984-01-01
A graphical test helps assure the numerical stability of calculations of transient temperature or diffusion in a composite medium. A rectangular grid forms the basis of a two-dimensional finite-difference model for heat conduction or other diffusion-like phenomena. The model enables calculation of transient heat transfer among up to four different materials that meet at a grid point.
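For the common explicit (FTCS) special case of a single material, the stability condition such a test guards is α·Δt·(1/Δx² + 1/Δy²) ≤ 1/2. A minimal sketch of that check; the original test covers the harder case of up to four materials meeting at one node.

```python
def stable_dt(dx, dy, alpha):
    """Largest stable time step for the explicit FTCS scheme on a 2-D grid:
    alpha*dt*(1/dx**2 + 1/dy**2) <= 1/2. A single-material special case of
    the node-by-node check the graphical test automates."""
    return 0.5 / (alpha * (1.0 / dx**2 + 1.0 / dy**2))

dx = dy = 5e-3      # m, grid spacing
alpha = 9.7e-5      # m^2/s, thermal diffusivity (roughly copper)
print(f"dt must stay below {stable_dt(dx, dy, alpha):.4f} s")
```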
Continuous description of fluctuating eccentricities
NASA Astrophysics Data System (ADS)
Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves
2014-11-01
We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ (x), whose probability distribution P [ ρ ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
Galactic cosmic ray abundances and spectra behind defined shielding.
Heinrich, W; Benton, E V; Wiegel, B; Zens, R; Rusch, G
1994-10-01
LET spectra have been measured for lunar missions and for several near Earth orbits ranging from 28 degrees to 83 degrees inclination. In some of the experiments the flux of GCR was determined separately from contributions caused by interactions in the detector material. Results of these experiments are compared to model calculations. The general agreement justifies the use of the model to calculate GCR fluxes. The magnitude of variations caused by solar modulation, geomagnetic shielding, and shielding by matter determined from calculated LET spectra is generally in agreement with experimental data. However, more detailed investigations show that there are some weak points in modeling solar modulation and shielding by material. These points are discussed in more detail.
NASA Technical Reports Server (NTRS)
Jenkins, J. M.
1979-01-01
Additional information was added to a growing data base from which estimates of finite element model complexities can be made with respect to thermal stress analysis. The manner in which temperatures were smeared to the finite element grid points was examined from the point of view of its impact on thermal stress calculations. The general comparison of calculated and measured thermal stresses is quite good, and there is little doubt that the finite element approach provided by NASTRAN results in correct thermal stress calculations. Discrepancies did exist between measured and calculated values in the skin and the skin/frame junctures. The problems with predicting skin thermal stress were attributed to inadequate temperature inputs to the structural model rather than to modeling insufficiencies. The discrepancies occurring at the skin/frame juncture were most likely due to insufficient modeling elements rather than to temperature problems.
Synthesis of Biofluidic Microsystems (SYNBIOSYS)
2007-10-01
FIGURE 41. The micro reactor is represented by a PFR network model. The calculation of reaction and convection is conducted in one column of PFRs and the calculation of diffusional mixing is conducted between two columns of PFRs.
FIGURE 42. Apply the numerical method of lines to calculate the diffusion in the channel width direction. Here, we take 10 discretized concentration points in the channel: ci1 - ci10.
Automated Mounting Bias Calibration for Airborne LIDAR System
NASA Astrophysics Data System (ADS)
Zhang, J.; Jiang, W.; Jiang, S.
2012-07-01
Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate the systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints, and two rules are defined to calculate the tie point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.
Modeling of the reburning process using sewage sludge-derived syngas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werle, Sebastian, E-mail: sebastian.werle@polsl.pl
2012-04-15
Highlights: ► Gasification provides an attractive method for sewage sludge treatment. ► Gasification generates a fuel gas (syngas) which can be used as a reburning fuel. ► The reburning potential of sewage sludge gasification gases was defined. ► Numerical simulation of co-combustion of syngas in a coal-fired boiler has been done. ► Calculations show that the analysed syngases can provide higher than 80% reduction of NOₓ. - Abstract: Gasification of sewage sludge can provide a clean and effective reburning fuel for combustion applications. The motivation of this work was to define the reburning potential of the sewage sludge gasification gas (syngas). A numerical simulation of the co-combustion of syngas in a hard coal-fired boiler was done. All calculations were performed using the Chemkin programme with a plug-flow reactor model. The calculations used the GRI-Mech 2.11 mechanism. The highest conversions of nitric oxide (NO) were obtained at temperatures of approximately 1000-1200 K. The combustion of hard coal with sewage sludge-derived syngas reduces NO emissions. The highest reduction efficiency (>90%) was achieved when the molar flow ratio of the syngas was 15%. Calculations show that the analysed syngas can provide better results than advanced reburning (connected with ammonia injection), which is a more complicated process.
NASA Astrophysics Data System (ADS)
Benedek, Judit; Papp, Gábor; Kalmár, János
2018-04-01
Beyond the rectangular prism, the polyhedron can also be used as a discrete volume element to model the density distribution inside 3D geological structures. The evaluation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs twice the runtime of the rectangular prism computations. Although the principle "the more detailed the better" is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field, at the significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes "less" can be equivalent to "more" in the statistical sense. As a consequence, the processing time of the rather complex formulae can be significantly reduced by optimizing the number of volume elements on the basis of accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by 3-3 points of each grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented. The efficiency of the static approaches may provide more than 90% reduction in computation time in favourable situations, without loss of reliability of the calculated gravity field parameters.
Nanoscale size dependence parameters on lattice thermal conductivity of Wurtzite GaN nanowires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamand, S.M., E-mail: soran.mamand@univsul.net; Omar, M.S.; Muhammad, A.J.
2012-05-15
Graphical abstract: Temperature dependence of the calculated lattice thermal conductivity of Wurtzite GaN nanowires. Highlights: ► A modified Callaway model is used to calculate the lattice thermal conductivity of Wurtzite GaN nanowires. ► A direct method is used to calculate the phonon group velocity for these nanowires. ► The Gruneisen parameter, surface roughness, and dislocations are successfully investigated. ► Dislocation densities decrease with decreasing wire diameter. -- Abstract: A detailed calculation of the lattice thermal conductivity of freestanding Wurtzite GaN nanowires with diameters ranging from 97 to 160 nm in the temperature range 2-300 K was performed using a modified Callaway model. Both longitudinal and transverse modes are taken into account explicitly in the model. A method is used to calculate the Debye and phonon group velocities for different nanowire diameters from their related melting points. The effects of the Gruneisen parameter, surface roughness, and dislocations as structure-dependent parameters are successfully used to match the calculated values of lattice thermal conductivity to the experimentally measured curves. It was observed that the Gruneisen parameter decreases with decreasing nanowire diameter. Scattering of phonons is assumed to be by nanowire boundaries, imperfections, dislocations, electrons, and other phonons via both normal and Umklapp processes. Phonon confinement and size effects as well as the role of dislocations in limiting thermal conductivity are investigated. At high temperatures and for dislocation densities greater than 10¹⁴ m⁻², the lattice thermal conductivity is limited by the dislocation density, but for dislocation densities less than 10¹⁴ m⁻², the lattice thermal conductivity is independent of it.
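The backbone of a Callaway-type calculation is a single Debye integral per phonon branch with a Matthiessen sum of scattering rates inside. The sketch below implements that backbone, omitting the normal-process correction term of the full model; every parameter value is an order-of-magnitude placeholder rather than a fitted GaN value.

```python
import numpy as np
from scipy.integrate import quad

KB, HBAR = 1.380649e-23, 1.054571817e-34

def kappa_callaway(T, v, theta_D, tau):
    """Debye/Callaway-type lattice thermal conductivity for one branch:
    kappa = kB/(2*pi^2*v) * (kB*T/hbar)^3 * integral of
    tau(x,T) * x^4 * e^x / (e^x - 1)^2 over x in [0, theta_D/T],
    with x = hbar*omega/(kB*T). Normal-process correction omitted."""
    pref = KB / (2 * np.pi**2 * v) * (KB * T / HBAR) ** 3
    f = lambda x: tau(x, T) * x**4 * np.exp(x) / np.expm1(x) ** 2
    val, _ = quad(f, 1e-6, theta_D / T)
    return pref * val

def tau_combined(x, T, v=4000.0, d=100e-9, A=1e-43, B=1e-19):
    """Matthiessen sum of boundary, point-defect and Umklapp scattering;
    all parameter values are order-of-magnitude placeholders."""
    omega = x * KB * T / HBAR
    inv = v / d + A * omega**4 + B * omega**2 * T * np.exp(-100.0 / T)
    return 1.0 / inv

for T in (10, 77, 300):
    print(T, kappa_callaway(T, v=4000.0, theta_D=600.0, tau=tau_combined))
```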
Wu, Chen; Xu, Bai-Nan; Sun, Zheng-Hui; Wang, Fu-Yu; Liu, Lei; Zhang, Xiao-Jun; Zhou, Ding-Biao
2012-01-01
An unclippable fusiform basilar trunk aneurysm is a formidable condition for surgical treatment. The aim of this study was to establish a computational model and to investigate the hemodynamic characteristics of a fusiform basilar trunk aneurysm. The three-dimensional digital model of a fusiform basilar trunk aneurysm was constructed using MIMICS, ANSYS and CFX software. Different hemodynamic modalities and boundary conditions were assigned to the model. Thirty points were selected randomly on the wall and within the aneurysm. Wall total pressure (WTP), wall shear stress (WSS), and blood flow velocity at each point were calculated, and the hemodynamic status was compared between the different modalities. The quantitative average values over the 30 points on the wall and within the aneurysm were obtained by point-by-point computational calculation. The velocity and WSS in modalities A and B were different from those of the remaining 5 modalities, and the WTP in modalities A, E and F was higher than in the remaining 4 modalities. The digital model of a fusiform basilar artery aneurysm is feasible and reliable and could provide important information for clinical treatment options.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimates. Yield calculation typically requires a large number of SPICE simulations, and the circuit SPICE simulation accounts for the largest share of the time in the yield calculation process. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over the design variables and process variables. The model is constructed by running SPICE simulations to obtain a set of sample points and training the mixture surrogate model on these points with the lasso algorithm. Experimental results show that the proposed model is able to calculate the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we devised a further accelerated algorithm to enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
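The surrogate-based flow can be summarized as: simulate a small sample, train a sparse (lasso) model, then run cheap Monte Carlo on the surrogate. A sketch with a stand-in for the SPICE response and an invented feature mixture; the paper's actual mixture surrogate is more elaborate.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def spice_like(x):
    """Stand-in for a SPICE-measured performance metric (e.g. a read margin);
    a real flow would launch circuit simulations here."""
    return 0.3 - 0.8 * x[:, 0] + 0.5 * x[:, 1]**2 - 0.2 * x[:, 0] * x[:, 1]

# 1. A limited budget of "simulations" on sampled design/process variables.
X = rng.standard_normal((200, 6))
y = spice_like(X) + 0.01 * rng.standard_normal(200)

# 2. Train a sparse surrogate on a simple feature mixture with the lasso.
features = lambda Z: np.hstack([Z, Z**2, Z[:, :1] * Z])
model = Lasso(alpha=1e-3).fit(features(X), y)

# 3. Cheap Monte Carlo on the surrogate replaces most SPICE calls.
X_mc = rng.standard_normal((200_000, 6))
fail_rate = np.mean(model.predict(features(X_mc)) < 0.0)  # fail: margin < 0
print(f"estimated failure rate ~ {fail_rate:.2e}")
```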
Band structure and orbital character of monolayer MoS2 with eleven-band tight-binding model
NASA Astrophysics Data System (ADS)
Shahriari, Majid; Ghalambor Dezfuli, Abdolmohammad; Sabaeian, Mohammad
2018-02-01
In this paper, based on a tight-binding (TB) model, we first present calculations of the eigenvalues as the band structure and then the eigenvectors as probability amplitudes for finding an electron in the atomic orbitals of monolayer MoS2 in the first Brillouin zone. These calculations consider hopping processes between the nearest-neighbor Mo-S, the next-nearest-neighbor in-plane Mo-Mo, and the next-nearest-neighbor in-plane and out-of-plane S-S atoms in a three-atom unit cell of two-dimensional rhombic MoS2. The hopping integrals are expressed in terms of Slater-Koster and crystal field parameters, which are obtained by fitting the TB model to density functional theory (DFT) calculations at the high-symmetry k-points (i.e. the K- and Γ-points). Our TB model includes all the 4d Mo orbitals and the 3p S orbitals, and a detailed analysis of the orbital character of each energy level at the main high-symmetry points of the Brillouin zone is given. In comparison with DFT calculations, the results of our TB model show very good agreement for bands near the Fermi level; for bands far from the Fermi level, some discrepancies between our TB model and the DFT calculations are observed. Given accurate Slater-Koster and crystal field parameters, our model, unlike DFT, provides enough accuracy to calculate all allowed transitions between energy bands, which are crucial for investigating the linear and nonlinear optical properties of monolayer MoS2.
An alternative method for centrifugal compressor loading factor modelling
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
In classical design methods, the loading factor at the design point is calculated by one or another empirical formula, and performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact: two points define the function - the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible only if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate, with simple and definite equations with four geometry parameters for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
Acceleration of saddle-point searches with machine learning.
Peterson, Andrew A
2016-08-21
In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.
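A toy version of the learn-search-verify loop makes the economics concrete: the saddle search runs on a cheap surrogate, and only the proposed saddle is checked with the "expensive" function, whose evaluation then augments the training data. Everything below (the 2-D toy PES, the kernel ridge surrogate, the damped Newton search) is an illustrative stand-in for the machine-learning models and ab initio force calls discussed in the paper.

```python
import numpy as np

def pes(p):                               # "expensive" toy potential surface
    x, y = p
    return x**4 - 2 * x**2 + y**2         # minima at x = +/-1, saddle at (0,0)

def fit_krr(X, y, gamma=2.0, lam=1e-8):
    """Kernel ridge regression surrogate of the PES (Gaussian kernel)."""
    K = np.exp(-gamma * np.sum((X[:, None] - X[None, :])**2, axis=2))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda p: float(np.exp(-gamma * np.sum((X - p)**2, axis=1)) @ alpha)

def newton_saddle(f, p0, h=1e-4, iters=40):
    """Search a stationary point (saddle included) of f by Newton iteration
    on grad f = 0, with finite differences -- cheap on the surrogate."""
    p = np.array(p0, dtype=float)
    E = np.eye(2)
    for _ in range(iters):
        g = np.array([(f(p + h*e) - f(p - h*e)) / (2*h) for e in E])
        H = np.array([[(f(p + h*ei + h*ej) - f(p + h*ei - h*ej)
                        - f(p - h*ei + h*ej) + f(p - h*ei - h*ej)) / (4*h*h)
                       for ej in E] for ei in E])
        step = np.linalg.lstsq(H, g, rcond=None)[0]
        p -= np.clip(step, -0.3, 0.3)     # damped step for robustness
    return p

rng = np.random.default_rng(0)
X = rng.uniform(-1.5, 1.5, (12, 2))       # initial "ab initio" evaluations
y = np.array([pes(p) for p in X])
for it in range(5):                       # learn -> search -> verify loop
    guess = newton_saddle(fit_krr(X, y), p0=(0.5, 0.5))
    X, y = np.vstack([X, guess]), np.append(y, pes(guess))  # one true call
    print(it, guess, y[-1])
```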
Dearden, John C
2003-08-01
Boiling point, vapor pressure, and melting point are important physicochemical properties in the modeling of the distribution and fate of chemicals in the environment. However, such data often are not available, and therefore must be estimated. Over the years, many attempts have been made to calculate boiling points, vapor pressures, and melting points by using quantitative structure-property relationships, and this review examines and discusses the work published in this area, and concentrates particularly on recent studies. A number of software programs are commercially available for the calculation of boiling point, vapor pressure, and melting point, and these have been tested for their predictive ability with a test set of 100 organic chemicals.
C-5M Super Galaxy Utilization with Joint Precision Airdrop System
2012-03-22
System notes: FireFly, 900-2,200, steerable parafoil; Screamer, 500-2,200, steerable parafoil w/ additional chutes to slow touchdown; Dragonfly... setting. This initial feasible solution provides the Nonlinear Program (NLP) algorithm a starting point to continue its calculations. The model continues... provides the NLP with a starting point of 1. This provides the NLP algorithm a point within the feasible region to begin its calculations in an attempt
Accuracy assessment of building point clouds automatically generated from iphone images
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2014-06-01
Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such an automatically generated point cloud against a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate that the proposed automatic 3D model generation framework is potentially usable for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection. However, further insights should first be obtained on the circumstances needed to guarantee a successful point cloud generation from smartphone images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu; Kim, Jong Oh
2016-05-15
Purpose: To develop and evaluate a fast Monte Carlo (MC) dose calculation model of an electronic portal imaging device (EPID) based on effective atomic number modeling in the XVMC code. Methods: A previously developed EPID model, based on the XVMC code by density scaling of EPID structures, was modified by additionally considering the effective atomic number (Z_eff) of each structure and adopting a phase space file from the EGSnrc code. The model was tested under various homogeneous and heterogeneous phantoms and field sizes by comparing the calculations in the model with measurements in EPID. In order to better evaluate the model, the performance of the XVMC code was separately tested by comparing calculated dose to water with ion chamber (IC) array measurements in the plane of EPID. Results: In the EPID plane, the calculated dose to water agreed with IC measurements within 1.8%. The difference was averaged across the in-field regions of the acquired profiles for all field sizes and phantoms. The maximum point difference was 2.8%, affected by the proximity of the maximum points to the penumbra and by MC noise. The EPID model agreed with measured EPID images within 1.3%, with a maximum point difference of 1.9%. The difference was reduced below that of the bare code by employing a calibration, dependent on field size and thickness, for the conversion of calculated images to measured images. Thanks to the Z_eff correction, the EPID model showed a linear trend in the calibration factors, unlike the density-only-scaled model. The phase space file from the EGSnrc code sharpened penumbra profiles significantly, improving the agreement of calculated profiles with measured profiles. Conclusions: Demonstrating high accuracy, the EPID model with the associated calibration system may be used for in vivo dosimetry of radiation therapy. Through this study, a MC model of EPID has been developed, and its performance has been rigorously investigated for transit dosimetry.
Spreadsheet Modeling of (Q,R) Inventory Policies
ERIC Educational Resources Information Center
Cobb, Barry R.
2013-01-01
This teaching brief describes a method for finding an approximately optimal combination of order quantity and reorder point in a continuous review inventory model using a discrete expected shortage calculation. The technique is an alternative to a model where expected shortage is calculated by integration, and can allow students who have not had a…
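The discrete expected-shortage idea lends itself to a short script: for a reorder point R, the expected shortage per cycle is the probability-weighted sum of (d - R) over demands d above R. The following Python sketch is a minimal illustration under an assumed Poisson lead-time demand; none of the numbers come from the teaching brief.

    import math

    def expected_shortage(reorder_point, demand_probs):
        # E[shortage per cycle] = sum over d > R of (d - R) * P(demand = d)
        return sum((d - reorder_point) * p
                   for d, p in demand_probs.items() if d > reorder_point)

    # Illustrative Poisson lead-time demand with mean 20 units (assumed)
    mean = 20.0
    probs = {d: math.exp(-mean) * mean**d / math.factorial(d) for d in range(120)}

    for R in (20, 25, 30):
        print(R, round(expected_shortage(R, probs), 4))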
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.
2015-04-01
Accurate and dense 3D models of soil surfaces can be used in various ways: they can serve as initial shapes for erosion models, as benchmark shapes for erosion model outputs, and to derive metrics such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil was placed on a tray and areas with different roughness structures were formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions were taken. From the six image sets, 3D point clouds were produced using VisualSfM. Visual inspection of the 3D models showed that all models have different areas where holes of different sizes occur, but determining a model's quality by visual inspection is obviously a subjective task. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected on a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, many points will be projected onto the same grid cell, and thus the point density depends more on the shape of the surface than on the quality of the model. Another approach uses the points resulting from Poisson Surface Reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses were performed. For all Poisson points, the distance to the closest original point cloud member was calculated, and histograms were produced showing the distribution of these point distances. As the Poisson points also make up a connected mesh, the size and distribution of single holes can also be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number, and the area of the mesh formed by each set of Poisson hole points can then be calculated. The result is a set of distinct holes and their sizes. The two approaches showed that the prevalence of holes in the point cloud depends on the soil moisture and hence the reflectivity: the distance distribution of the saturated-soil model shows the smallest number of large distances, the medium state shows more large distances, and the dry model shows the largest distances. Models resulting from indirect lighting are better than models resulting from direct light for all moisture states.
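The first of the two hole analyses, nearest-neighbour distances from the interpolated Poisson points to the original cloud, can be sketched in a few lines of Python. The clouds below are random stand-ins; with real data one would load the VisualSfM and Poisson point sets instead.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    original = rng.random((10000, 3))   # stand-in for the SfM point cloud
    poisson = rng.random((5000, 3))     # stand-in for the Poisson-reconstructed points

    # For each Poisson point, distance to its nearest original point;
    # large distances flag regions where holes were filled by interpolation.
    distances, _ = cKDTree(original).query(poisson, k=1)

    counts, edges = np.histogram(distances, bins=50)
    print(f"mean {distances.mean():.4f}, max {distances.max():.4f}")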
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images, with rejection of false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized with an application-specific integrated circuit digital design flow using 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a core of a BWR in a three-dimensional geometry model, but it has difficulties with fine geometrical modelling and demands huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to accumulate a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from this method were compared with the measured data.
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
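The reduced-variance criterion mentioned above is simple to compute from count records: the Fano factor F = Var(N)/E[N] equals 1 for a Poisson process, and F < 1 signals nonclassical (sub-Poissonian) behaviour. A toy Python check against the Poisson reference, with all numbers invented for the demo:

    import numpy as np

    rng = np.random.default_rng(1)
    # Counting statistics of an ordinary Poisson process as a reference case:
    counts = rng.poisson(lam=10.0, size=100_000)

    # Reduced variance (Fano factor): F = Var(N) / E[N].
    # F = 1 for a Poisson process; F < 1 indicates nonclassical,
    # sub-Poissonian statistics of the kind discussed above.
    fano = counts.var() / counts.mean()
    print(f"Fano factor: {fano:.3f}")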
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos-Mendez, J; Faddegon, B; Perl, J
2015-06-15
Purpose: To develop and verify an extension to TOPAS for the calculation of dose response models (TCP/NTCP). TOPAS wraps and extends Geant4. Methods: The TOPAS DICOM interface was extended to include structure contours, for subsequent calculation of DVHs and TCP/NTCP. The following dose response models were implemented: Lyman-Kutcher-Burman (LKB), critical element (CE), population-based critical volume (CV), parallel-serial, a sigmoid-based model of Niemierko for NTCP and TCP, and a Poisson-based model for TCP. For verification, results for the parallel-serial and Poisson models, with 6 MV x-ray dose distributions calculated with TOPAS and Pinnacle v9.2, were compared to data from the benchmark configuration of the AAPM Task Group 166 (TG166). We provide a benchmark configuration suitable for proton therapy along with results for the implementation of the Niemierko, CV and CE models. Results: The maximum difference in DVH calculated with Pinnacle and TOPAS was 2%. Differences between TG166 data and Monte Carlo calculations of up to 4.2%±6.1% were found for the parallel-serial model and up to 1.0%±0.7% for the Poisson model (including the uncertainty due to lack of knowledge of the point spacing in TG166). For the CE, CV and Niemierko models, the discrepancies between the Pinnacle and TOPAS results are 74.5%, 34.8% and 52.1% when using 29.7 cGy point spacing, the differences being highly sensitive to dose spacing. On the other hand, with our proposed benchmark configuration, the largest differences were 12.05%±0.38%, 3.74%±1.6%, 1.57%±4.9% and 1.97%±4.6% for the CE, CV, Niemierko and LKB models, respectively. Conclusion: Several dose response models were successfully implemented with the extension module. Reference data were calculated for future benchmarking. Dose response calculated for the different models varied much more widely for the TG166 benchmark than for the proposed benchmark, which had much lower sensitivity to the choice of DVH dose points. This work was supported by National Cancer Institute Grant R01CA140735.
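Of the models listed, the LKB formulation is compact enough to show in full: the DVH is reduced to a generalized equivalent uniform dose (gEUD), which is then pushed through a probit curve. The sketch below is a generic LKB implementation under assumed illustrative parameters, not the TG166 benchmark configuration.

    import numpy as np
    from math import erf, sqrt

    def lkb_ntcp(dose_bins, frac_volumes, td50, m, n):
        # Generalized EUD from a differential DVH, then the LKB probit:
        # NTCP = Phi((gEUD - TD50) / (m * TD50))
        d = np.asarray(dose_bins, float)
        v = np.asarray(frac_volumes, float)
        geud = float((v * d ** (1.0 / n)).sum() ** n)
        t = (geud - td50) / (m * td50)
        return 0.5 * (1.0 + erf(t / sqrt(2.0)))

    # Illustrative DVH and parameters only (not the TG166 benchmark values)
    print(lkb_ntcp([10.0, 30.0, 50.0], [0.2, 0.5, 0.3], td50=45.0, m=0.15, n=0.5))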
A Three-Dimensional Unsteady CFD Model of Compressor Stability
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.
2006-01-01
A three-dimensional unsteady CFD code called CSTALL has been developed and used to investigate compressor stability. The code solved the Euler equations through the entire annulus and all blade rows. Blade row turning, losses, and deviation were modeled using body force terms which required input data at stations between blade rows. The input data were calculated using a separate Navier-Stokes turbomachinery analysis code run at one operating point near stall, and were scaled to other operating points using overall characteristic maps. No information about the stalled characteristic was used. CSTALL was run in a 2-D throughflow mode for very fast calculations of operating maps and estimation of stall points. Calculated pressure ratio characteristics for NASA stage 35 agreed well with experimental data, and results with inlet radial distortion showed the expected loss of range. CSTALL was also run in a 3-D mode to investigate inlet circumferential distortion. Calculated operating maps for stage 35 with 120 degree distortion screens showed a loss in range and pressure rise. Unsteady calculations showed rotating stall with two part-span stall cells. The paper describes the body force formulation in detail, examines the computed results, and concludes with observations about the code.
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate the equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.
Calculation of electron Dose Point Kernel in water with GEANT4 for medical application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.
2009-06-03
The rapid insertion of new technologies in medical physics in recent years, especially in nuclear medicine, has been accompanied by a great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that contains the tools to simulate problems of particle transport through matter. In this work, GEANT4 was used to calculate the dose-point-kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.
Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
2012-01-01
A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed, and an algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, the Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas are included. The results showed that radiation from the membrane and, to a much lesser extent, conduction to the inflating gas are likely to be the controlling heat transfer mechanisms, and that the increase in gas temperature due to aerodynamic heating is of secondary importance.
Multipoint Green's functions in 1 + 1 dimensional integrable quantum field theories
Babujian, H. M.; Karowski, M.; Tsvelik, A. M.
2017-02-14
We calculate the multipoint Green functions in 1+1 dimensional integrable quantum field theories. We use the crossing formula for general models and calculate the 3- and 4-point functions taking into account only the lowest nontrivial intermediate-state contributions. We then apply the general results to the examples of the scaling Z2 Ising model, the sinh-Gordon model and the Z3 scaling Potts model, and demonstrate these calculations explicitly. The results can be applied to physical phenomena such as Raman scattering.
Evaluation of a multi-point method for determining acoustic impedance
NASA Technical Reports Server (NTRS)
Jones, Michael G.; Parrott, Tony L.
1988-01-01
An investigation was conducted to explore potential improvements provided by a Multi-Point Method (MPM) over the Standing Wave Method (SWM) and Two-Microphone Method (TMM) for determining acoustic impedance. A wave propagation model was developed to model the standing wave pattern in an impedance tube. The acoustic impedance of a test specimen was calculated from a best fit of this standing wave pattern to pressure measurements obtained along the impedance tube centerline. Three measurement spacing distributions were examined: uniform, random, and selective. Calculated standing wave patterns match the point pressure measurement distributions with good agreement for a reflection factor magnitude range of 0.004 to 0.999. Comparisons of results using 2, 3, 6, and 18 measurement points showed that the most consistent results are obtained when using at least 6 evenly spaced pressure measurements per half-wavelength. Also, data were acquired with broadband noise added to the discrete frequency noise and impedances were calculated using the MPM and TMM algorithms. The results indicate that the MPM will be superior to the TMM in the presence of significant broadband noise levels associated with mean flow.
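The core of the MPM is a least-squares fit of a modeled standing-wave pattern to pressures measured at several axial positions, from which the complex reflection factor, and hence the impedance, follows. Below is a minimal Python sketch of that fitting step, with a synthetic "measurement" standing in for real microphone data and an assumed 500 Hz tone; it is an illustration of the idea, not the paper's algorithm.

    import numpy as np
    from scipy.optimize import least_squares

    k = 2 * np.pi * 500.0 / 343.0           # wavenumber at 500 Hz in air (assumed)
    x = np.linspace(0.05, 0.50, 12)         # 12 measurement points along the tube

    def model_mag(params, x):
        # Standing wave: incident wave plus wave reflected with factor R
        R = params[0] * np.exp(1j * params[1])
        return np.abs(np.exp(-1j * k * x) + R * np.exp(1j * k * x))

    measured = model_mag([0.8, 0.6], x)     # synthetic "measurements" for the demo

    fit = least_squares(lambda p: model_mag(p, x) - measured, x0=[0.5, 0.0])
    R = fit.x[0] * np.exp(1j * fit.x[1])
    Z = (1 + R) / (1 - R)                   # normalized impedance at the sample face
    print("reflection factor:", R, "normalized impedance:", Z)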
Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben
2003-01-01
A method has been developed to reduce the numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (the confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much smaller than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step, but this results in large computer CPU requirements. In the method described in the paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point", calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this equilibrium point has the advantage of both reducing the numerical stiffness of the simulation and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
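Finding such an equilibrium point reduces to a small root-finding problem at each integration step. The following Python sketch solves for the point where taut-line tensions balance an external load; the anchor geometry, line stiffnesses and load are invented for illustration and are not taken from the paper.

    import numpy as np
    from scipy.optimize import fsolve

    # Illustrative geometry: three suspension-line attachment points (m),
    # line stiffnesses (N/m) and unstretched lengths (m) -- all assumed values.
    anchors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
    stiffness = np.full(3, 5.0e4)
    rest_len = np.full(3, 0.6)
    payload = np.array([0.0, 0.0, -500.0])  # external force on the joint (N)

    def net_force(x):
        # Static equilibrium: tension of each taut line plus the external load
        total = payload.copy()
        for a, k, L0 in zip(anchors, stiffness, rest_len):
            vec = a - x
            dist = np.linalg.norm(vec)
            tension = k * max(dist - L0, 0.0)   # lines cannot push
            total += tension * vec / dist
        return total

    confluence = fsolve(net_force, x0=np.array([0.5, 0.33, -0.5]))
    print("equilibrium confluence point:", confluence)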
Transverse spin correlations of the random transverse-field Ising model
NASA Astrophysics Data System (ADS)
Iglói, Ferenc; Kovács, István A.
2018-03-01
The critical behavior of the random transverse-field Ising model in finite-dimensional lattices is governed by infinite disorder fixed points, several properties of which have already been calculated by the use of the strong disorder renormalization-group (SDRG) method. Here we extend these studies and calculate the connected transverse-spin correlation function by a numerical implementation of the SDRG method in d = 1, 2, and 3 dimensions. At the critical point an algebraic decay of the form ~ r^(-η_t) is found, with a decay exponent η_t ≈ 2 + 2d. In d = 1 the results are related to dimer-dimer correlations in the random antiferromagnetic XX chain and have been tested by numerical calculations using free-fermionic techniques.
Wada, Yuji; Kundu, Tribikram; Nakamura, Kentaro
2014-08-01
The distributed point source method (DPSM) is extended to model wave propagation in viscous fluids. Appropriate estimation of attenuation and boundary layer formation due to fluid viscosity is necessary for ultrasonic devices used for acoustic streaming or ultrasonic levitation. The equations for DPSM modeling in viscous fluids are derived in this paper by decomposing the linearized viscous fluid equations into two components: dilatational and rotational. By considering complex P- and S-wave numbers, the acoustic fields in viscous fluids can be calculated following calculation steps similar to those used for wave propagation modeling in solids. From the calculations reported, the precision of DPSM is found to be comparable to that of the finite element method (FEM) for a fundamental ultrasonic field problem. The particle velocity parallel to the two bounding surfaces of the viscous fluid layer between two rigid plates (one in motion and one stationary) is calculated. The finite element results agree well with the DPSM results, which were generated faster than the transient FEM results.
SU-F-P-21: Study of Dosimetry Accuracy of Small Passively Scattered Proton Beam Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Gautam, A; Kerr, M
2016-06-15
Purpose: To study the accuracy of the dose distribution of very small irregular fields of passively scattered proton beams calculated by the analytical pencil beam model of the Eclipse treatment planning system (TPS). Methods: An irregular field with a narrow region (width < 1 cm), which had been used for the treatment of a small volume adjacent to a previously treated area, was chosen for this investigation. Point doses at different locations inside the field were measured with a small volume ion chamber (A26, Standard Imaging). 2-D dose distributions were measured using a 2-D ion chamber array (MatriXX, IBA). All the measurements were done in a plastic water phantom. The measured dose distributions were compared with the verification plan dose calculated in a water-like phantom for the patient treatment field without the use of the compensator. Results: Point doses measured with the ion chamber in the narrowest section of the field were found to differ by as much as 10% from the Eclipse-calculated dose at some of the points. The 2-D dose distributions measured with the MatriXX, which were validated by comparison with limited film measurements, agreed reasonably well with the TPS-calculated dose distribution at the proximal 95%, center of the spread-out Bragg peak, and distal 90% depths, with more than 92% of the pixels passing the 2%/2 mm dose/distance agreement criterion. Conclusion: The dose calculated by the pencil beam model of the Eclipse TPS for narrow irregular fields may not be accurate to within 5% at some locations of the field, especially at points close to the field edge, due to limitations of the dose calculation model. The overall accuracy of the calculated 2-D dose distribution was found to be acceptable under the 2%/2 mm dose/distance agreement criterion.
NASA Astrophysics Data System (ADS)
Svensson, Mats; Humbel, Stéphane; Morokuma, Keiji
1996-09-01
The integrated MO+MO (IMOMO) method, recently proposed for geometry optimization, is tested for accurate single point calculations. The principal idea of the IMOMO method is to reproduce the results of a high level MO calculation for a large "real" system by dividing it into a small "model" system and the rest, and applying different levels of MO theory to the two parts. Test examples are the activation barrier of the SN2 reaction of Cl- with alkyl chlorides, the C=C double bond dissociation of olefins, and the energy of reaction for epoxidation of benzene. The effects of basis set and method in the lower level calculation, as well as the effects of the choice of model system, are investigated in detail. The IMOMO method gives an approximation to the high level MO energetics of the real system, in most cases with very small errors, at a small additional cost over the low level calculation. For instance, when the MP2 (Møller-Plesset second-order perturbation) method is used as the lower level method, the IMOMO method reproduces the results of a very high level MO method within 2 kcal/mol, with less than 50% additional computer time, for the first two test examples. When the HF (Hartree-Fock) method is used as the lower level method, it is less accurate and depends more on the choice of model system, though the improvement over the HF energy is still very significant. Thus the IMOMO single point calculation provides a method for obtaining reliable local energetics such as bond energies and activation barriers for a large molecular system.
A model for the rapid assessment of the impact of aviation noise near airports.
Torija, Antonio J; Self, Rod H; Flindell, Ian H
2017-02-01
This paper introduces a simplified model [Rapid Aviation Noise Evaluator (RANE)] for the calculation of aviation noise within the context of multi-disciplinary strategic environmental assessment, where input data are both limited and constrained by compatibility requirements against other disciplines. RANE relies upon the concept of noise cylinders around defined flight-tracks, with the noise radius determined from publicly available Noise-Power-Distance curves, rather than the computationally intensive multiple point-to-point grid calculation with subsequent iso-contour interpolation adopted in the FAA's Integrated Noise Model (INM) and similar models. Preliminary results indicate that for simple single-runway scenarios, changes in airport noise contour areas can be estimated with minimal uncertainty compared against grid-point calculation methods such as INM. In situations where such outputs are all that is required for preliminary strategic environmental assessment, there are considerable benefits in reduced input data and computation requirements. Further development of the noise-cylinder-based model (such as the incorporation of lateral attenuation, engine-installation effects or horizontal track dispersion via the assumption of more complex noise surfaces formed around the flight-track) will allow more complex assessments to be carried out. RANE is intended to be incorporated into technology evaluators for the noise impact assessment of novel aircraft concepts.
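As a rough geometric reading of the noise-cylinder idea (an assumption of this note, not RANE's published formulation), the ground footprint of a cylinder of noise radius r around a straight track of length L is a rectangle with semicircular end caps:

    import math

    def cylinder_contour_area(noise_radius_m, track_length_m):
        # Footprint of a cylinder of radius r around a straight flight track:
        # a rectangle 2*r wide plus two semicircular end caps.
        r, L = noise_radius_m, track_length_m
        return 2.0 * r * L + math.pi * r**2

    # e.g. a 3 km noise radius read off an NPD curve along a 20 km track segment
    print(f"{cylinder_contour_area(3000.0, 20000.0) / 1e6:.1f} km^2")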
Scale-invariant curvature fluctuations from an extended semiclassical gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinamonti, Nicola, E-mail: pinamont@dima.unige.it; INFN Sezione di Genova, Via Dodecaneso 33, 16146 Genova; Siemssen, Daniel, E-mail: siemssen@dima.unige.it
2015-02-15
We present an extension of the semiclassical Einstein equations which couple n-point correlation functions of a stochastic Einstein tensor to the n-point functions of the quantum stress-energy tensor. We apply this extension to calculate the quantum fluctuations during an inflationary period, where we take as a model a massive conformally coupled scalar field on a perturbed de Sitter space and describe how a renormalization independent, almost-scale-invariant power spectrum of the scalar metric perturbation is produced. Furthermore, we discuss how this model yields a natural basis for the calculation of non-Gaussianities of the considered metric fluctuations.
Utilization of MAX and FAX human phantoms for space radiation exposure calculations using HZETRN
NASA Astrophysics Data System (ADS)
Qualls, Garry; Slaba, Tony; Clowdsley, Martha; Blattnig, Steve; Walker, Steven; Simonsen, Lisa
To estimate astronaut health risk due to space radiation, one must have the ability to calculate, for known radiation environments external to the body, particle spectra, LET spectra, dose, dose equivalent, or gray equivalent that are averaged over specific organs or tissue types. This may be accomplished using radiation transport software and computational human body tissue models. Historically, NASA scientists have used the HZETRN software to calculate radiation transport through both vehicle shielding materials and body tissue. The Computerized Anatomical Man (CAM) and the Computerized Anatomical Female (CAF) body models, combined with the CAMERA software, have been used for body tissue self-shielding calculations. The CAM and CAF, which were developed in 1973 and 1992, respectively, model the 50th percentile U.S. Air Force male and female and are constructed using individual quadric surfaces that combine to form thousands of solid regions that represent specific tissues and structures within the body. In order to transport an external radiation environment to a point within one of the body models using HZETRN, a directional distribution of the tissues surrounding that point is needed. The CAMERA software is used to "ray trace" the CAM and CAF models, providing the thickness of each tissue type traversed along each of a large number of rays originating at a dose point. More recently, R. Kramer of the Departamento de Energia Nuclear, Universidade Federal de Pernambuco in Brazil and his co-workers developed the Male Adult voXel (MAX) model and the Female Adult voXel (FAX) model. These voxel-based body models were developed using segmented Computed Tomography (CT) scans of adult cadavers, and the quantities and distributions of various body tissues have been adjusted to match those specified in the International Commission on Radiological Protection (ICRP) reference adult male and female. A new set of tools has been developed to facilitate space radiation exposure calculation using HZETRN and the MAX and FAX models. A new ray tracer was developed for these body models, as was a methodology for evaluating organ-averaged quantities. Both tools are described in this paper and utilized in sample calculations.
Advanced model for the prediction of the neutron-rich fission product yields
NASA Astrophysics Data System (ADS)
Rubchenya, V. A.; Gorelov, D.; Jokinen, A.; Penttilä, H.; Äystö, J.
2013-12-01
A consistent model for the description of independent fission product formation cross sections in spontaneous fission and in neutron- and proton-induced fission at energies up to 100 MeV is developed. This model is a combination of a new version of the two-component exciton model and a time-dependent statistical model for the fusion-fission process, with inclusion of dynamical effects for accurate calculations of the nucleon composition and excitation energy of the fissioning nucleus at the scission point. For each member of the compound nucleus ensemble at the scission point, the primary fission fragment characteristics - kinetic and excitation energies and yields - are calculated using the scission-point fission model with inclusion of nuclear shell and pairing effects and a multimodal approach. The charge distribution of the primary fragment isobaric chains was considered as a result of the frozen quantal fluctuations of the isovector nuclear matter density at the scission point with a finite neck radius. Model parameters were obtained from comparison of the predicted independent fission product yields with experimental results and with the neutron-rich fission product data measured with a Penning trap at the Accelerator Laboratory of the University of Jyväskylä (JYFLTRAP).
Study of Fission Barrier Heights of Uranium Isotopes by the Macroscopic-Microscopic Method
NASA Astrophysics Data System (ADS)
Zhong, Chun-Lai; Fan, Tie-Shuan
2014-09-01
Potential energy surfaces of uranium nuclei in the range of mass numbers 229 through 244 are investigated in the framework of the macroscopic-microscopic model, and the heights of static fission barriers are obtained in terms of a double-humped structure. The macroscopic part of the nuclear energy is calculated according to the Lublin-Strasbourg drop (LSD) model. Shell and pairing corrections, as the microscopic part, are calculated with a folded-Yukawa single-particle potential. The calculation is carried out in a five-dimensional parameter space of generalized Lawrence shapes. In order to extract saddle points on the potential energy surface, a new algorithm is developed which can effectively find an optimal fission path leading from the ground state to the scission point. The comparison of our results with available experimental data and other theoretical results confirms the reliability of our calculations.
MODTRAN3: Suitability as a flux-divergence code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, G.P.; Chetwynd, J.H.; Wang, J.
1995-04-01
The Moderate Resolution Atmospheric Radiance and Transmittance Model (MODTRAN3) is the developmental version of MODTRAN and MODTRAN2. The Geophysics Directorate, Phillips Laboratory, released a beta version of this model in October 1994. It encompasses all the capabilities of LOWTRAN7, the historic 20 cm^-1 resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm^-1 resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Validation against full Voigt line-by-line calculations (e.g., FASCODE) has shown excellent agreement. In addition, simple timing runs demonstrate potential improvement of more than a factor of 100 for a typical 500 cm^-1 spectral interval and comparable vertical layering. Not only is MODTRAN an excellent band model for "full path" calculations (that is, radiance and/or transmittance from point A to point B), but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard spectrally integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies of both standard and trace atmospheric species.
Nonlocal screening effects on core-level photoemission spectra investigated by large-cluster models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, K.; Kotani, A.
1995-08-15
The copper 2p core-level x-ray photoemission spectrum in CuO2 plane systems is calculated by means of large-cluster models to investigate in detail the nonlocal screening effects, which were pointed out by van Veenendaal et al. [Phys. Rev. B 47, 11462 (1993)]. Calculating the hole distributions for the initial and final states of photoemission, we show that the atomic coordination in a cluster strongly affects the accessible final states. Accordingly, we point out that the interpretation for Cu3O10 given by van Veenendaal et al. is not always general. Moreover, it is shown that the spectrum can be remarkably affected by whether or not the O 2p(π) orbitals are taken into account in the calculations. We also introduce a Hartree-Fock approximation in order to treat much larger cluster models.
Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S
2016-08-01
A dose calculation tool that combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, reported previously, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution, providing a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations, and an offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a single methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.
3D Surface Reconstruction of Rills in a Spanish Olive Grove
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Seeger, Manuel; Wirtz, Stefan; Taguas, Encarnación; Ries, Johannes B.
2016-04-01
The low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique is used for 3D surface reconstruction and difference calculation of an 18 meter long rill in southern Spain (Andalusia, Puente Genil). The images were taken with a Canon HD video camera before and after a rill experiment in an olive grove. Compared to a photo camera, recording with a video camera offers a huge time advantage, and the method also guarantees more than adequately overlapping, sharp images. For each model, approximately 20 minutes of video were taken. As SfM needs single images, the sharpest image was automatically selected from each interval of 8 frames, with sharpness estimated using a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, and recovers the camera and feature positions. Finally, by triangulation of camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models, a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The results show that rills in olive groves are highly dynamic due to the lack of vegetation cover under the trees, so that the rill can incise down to the bedrock. Another reason for the high activity is the intensive use of machinery.
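Once the pre and post meshes are scaled and interpolated onto a common raster, the erosion/accumulation bookkeeping is a cell-by-cell difference. A minimal Python sketch with invented stand-in rasters (with real data, the DEMs would come from the SfM meshes and the cell size from the reference scaling):

    import numpy as np

    cell = 0.01          # raster cell size in metres, after scaling via references

    rng = np.random.default_rng(2)
    z_pre = rng.random((200, 300)) * 0.05             # stand-in pre-experiment DEM (m)
    z_post = z_pre - rng.random((200, 300)) * 0.01    # stand-in post-experiment DEM

    dz = z_pre - z_post                                # positive where material left
    erosion = dz[dz > 0].sum() * cell**2               # eroded volume in m^3
    deposition = -dz[dz < 0].sum() * cell**2           # accumulated volume in m^3
    print(f"erosion {erosion:.4f} m^3, deposition {deposition:.4f} m^3")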
NASA Astrophysics Data System (ADS)
Purkayastha, Archak; Dhar, Abhishek; Kulkarni, Manas
2017-11-01
We investigate and map out the nonequilibrium phase diagram of a generalization of the well known Aubry-André-Harper (AAH) model. This generalized AAH (GAAH) model is known to have a single-particle mobility edge which also has an additional self-dual property akin to that of the critical point of the AAH model. By calculating the population imbalance, we get hints of a rich phase diagram. We also find a fascinating connection between single particle wave functions near the mobility edge of the GAAH model and the wave functions of the critical AAH model. By placing this model far from equilibrium with the aid of two baths, we investigate the open system transport via system size scaling of the nonequilibrium steady state (NESS) current, calculated by the fully exact nonequilibrium Green's function (NEGF) formalism. The critical point of the AAH model now generalizes to a 'critical' line separating regions of ballistic and localized transport. Like the critical point of the AAH model, current scales subdiffusively with system size on the 'critical' line (I ~ N^(-2±0.1)). However, remarkably, the scaling exponent on this line is distinctly different from that obtained for the critical AAH model (where I ~ N^(-1.4±0.05)). All these results can be understood from the above-mentioned connection between states near the mobility edge of the GAAH model and those of the critical AAH model. A very interesting high temperature nonequilibrium phase diagram of the GAAH model emerges from our calculations.
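Scaling exponents like the I ~ N^(-2) quoted above are conventionally extracted as the slope of a log-log fit of current against system size. A toy Python version with synthetic currents generated purely for the demo (not the paper's data):

    import numpy as np

    # Synthetic NESS currents at several system sizes (illustrative numbers only)
    N = np.array([64, 128, 256, 512, 1024])
    noise = 1 + 0.05 * np.random.default_rng(3).standard_normal(5)
    I = 3.0e-2 * N ** -2.0 * noise

    # The scaling exponent is the slope of log(I) versus log(N)
    slope, intercept = np.polyfit(np.log(N), np.log(I), 1)
    print(f"fitted exponent: {slope:.2f}")   # ~ -2 on the subdiffusive 'critical' line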
Simplified Model and Response Analysis for Crankshaft of Air Compressor
NASA Astrophysics Data System (ADS)
Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang
2017-11-01
The original model of the crankshaft is simplified appropriately to balance calculation precision against calculation speed, and the finite element method is then used to analyse the vibration response of the structure. In order to study the simplification and stress concentration for the crankshaft of an air compressor, this paper compares the calculated mode frequencies and the experimental mode frequencies of the air compressor crankshaft before and after the simplification, calculates the vibration response at a reference point under constraint conditions using the simplified model, and calculates the stress distribution of the original model. The results show that the error between the calculated and experimental mode frequencies is kept below 7%, that the constraints change the modal density of the system, and that stress concentration appears at the junction between the crank arm and the shaft, so this part of the crankshaft should be treated with care during manufacture.
NASA Astrophysics Data System (ADS)
Shalaginova, Z. I.
2016-03-01
The mathematical model and calculation method for the thermal-hydraulic modes of heat points, based on the theory of hydraulic circuits and being developed at the Melentiev Energy Systems Institute, are presented. A redundant circuit of the heat point was developed, into which all possible connecting circuits (CC) of the heat engineering equipment and all places of possible installation of control valves were inserted. This allows simulating the operating modes of both central heat points (CHP) and individual heat points (IHP). The configuration of the desired circuit is produced automatically by removing the unnecessary links. The following circuits connecting the heating systems (HS) are considered: the dependent circuit (direct and through a mixing elevator) and the independent one (through a heater). The following connecting circuits for the hot water supply (HWS) load were considered: an open CC (direct water intake from the pipelines of the heat network) and a closed CC with the HWS heaters connected in single-level (serial and parallel) and two-level (sequential and combined) circuits. The following connecting circuits of the ventilation systems (VS) were also considered: a dependent circuit and an independent one through a common heat exchanger with the HS load. In the heat points, water temperature regulators for the hot water supply and ventilation, and flow regulators for the heating system as well as for the inlet as a whole, are possible. According to the accepted decomposition, the model of the heat point is an integral part of the overall thermal-hydraulic model of the heat-supplying system with intermediate control stages (CHP and IHP). This makes it possible to consider the operating modes of heat networks of different levels connected with each other through CHPs, as well as those connected through the IHPs of consumers with various connecting circuits of the local heat consumption systems: heating, ventilation and hot water supply. The model is implemented in the Angara data-processing complex. An example of the multilevel calculation of the thermal-hydraulic modes of main heat networks and of the distribution networks connected to them through central heat points in Petropavlovsk-Kamchatskii is examined.
ERIC Educational Resources Information Center
Fitzsimmons, Charles P.
1986-01-01
Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)
batman: BAsic Transit Model cAlculatioN in Python
NASA Astrophysics Data System (ADS)
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
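For orientation, typical usage follows the pattern below, adapted from the package's documented interface; the parameter values are arbitrary illustrations:

    import numpy as np
    import batman

    params = batman.TransitParams()
    params.t0 = 0.0                  # time of inferior conjunction
    params.per = 1.0                 # orbital period [days]
    params.rp = 0.1                  # planet-to-star radius ratio
    params.a = 15.0                  # semi-major axis in stellar radii
    params.inc = 87.0                # orbital inclination [deg]
    params.ecc = 0.0                 # eccentricity
    params.w = 90.0                  # longitude of periastron [deg]
    params.u = [0.1, 0.3]            # quadratic limb-darkening coefficients
    params.limb_dark = "quadratic"

    t = np.linspace(-0.05, 0.05, 100)    # ~100 points in transit
    m = batman.TransitModel(params, t)   # initialize the model once
    flux = m.light_curve(params)         # relative flux at each time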
NASA Technical Reports Server (NTRS)
Krebs, R. P.
1972-01-01
The computer program described calculates the design-point characteristics of a gas generator or a turbojet lift engine for V/STOL applications. The program computes the dimensions and mass, as well as the thermodynamic performance of the model engine and its components. The program was written in FORTRAN 4 language. Provision has been made so that the program accepts input values in either SI Units or U.S. Customary Units. Each engine design-point calculation requires less than 0.5 second of 7094 computer time.
NASA Astrophysics Data System (ADS)
Pearce, Jonathan V.; Gisby, John A.; Steur, Peter P. M.
2016-08-01
A knowledge of the effect of impurities at the level of parts per million on the freezing temperature of very pure metals is essential for realisation of ITS-90 fixed points. New information has become available for use with the thermodynamic modelling software MTDATA, permitting calculation of liquidus slopes, in the low concentration limit, of a wider range of binary alloy systems than was previously possible. In total, calculated values for 536 binary systems are given. In addition, new experimental determinations of phase diagrams, in the low impurity concentration limit, have recently appeared. All available data have been combined to provide a comprehensive set of liquidus slopes for impurities in ITS-90 metal fixed points. In total, liquidus slopes for 838 systems are tabulated for the fixed points Hg, Ga, In, Sn, Zn, Al, Ag, Au, and Cu. It is shown that the value of the liquidus slope as a function of impurity element atomic number can be approximated using a simple formula, and good qualitative agreement with the existing data is observed for the fixed points Al, Ag, Au and Cu, but curiously the formula is not applicable to the fixed points Hg, Ga, In, Sn, and Zn. Some discussion is made concerning the influence of oxygen on the liquidus slopes, and some calculations using MTDATA are discussed. The BIPM’s consultative committee for thermometry has long recognised that the sum of individual estimates method is the ideal approach for assessing uncertainties due to impurities, but the community has been largely powerless to use the model due to lack of data. Here, not only is data provided, but a simple model is given to enable known thermophysical data to be used directly to estimate impurity effects for a large fraction of the ITS-90 fixed points.
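The sum of individual estimates method referred to above combines tabulated liquidus slopes with measured impurity concentrations: the overall shift of the fixed-point temperature is the sum, over all impurities, of concentration times liquidus slope in the dilute limit. A minimal Python sketch with invented concentrations and slopes (not values from this paper's tables):

    # Sum-of-individual-estimates (SIE): total freezing-point shift is the sum
    # over impurities of (mole-fraction concentration) x (liquidus slope).
    # All numbers below are purely illustrative assumptions.
    impurities = {
        "Fe": (2.0e-6, -3.0e3),   # (mole fraction, K per mole fraction)
        "Si": (1.0e-6, -1.5e3),
        "Pb": (5.0e-7, +0.8e3),
    }

    delta_T = sum(c * slope for c, slope in impurities.values())
    print(f"estimated freezing-point shift: {delta_T * 1e3:+.3f} mK")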
Optical gain coefficients of silicon: a theoretical study
NASA Astrophysics Data System (ADS)
Tsai, Chin-Yi
2018-05-01
A theoretical model is presented and an explicit formula is derived for calculating the optical gain coefficients of indirect band-gap semiconductors. The model is based on second-order time-dependent perturbation theory in quantum mechanics, incorporating all eight processes of photon/phonon emission and absorption between the band edges of the conduction and valence bands. Numerical results are given for Si. The calculated absorption coefficients agree well with the existing fitting formula for experimental data with two modes of phonons: optical phonons with an energy of 57.73 meV and acoustic phonons with an energy of 18.27 meV near (but not exactly at) the zone edge of the X-point in the phonon dispersion relation. These closely match the existing data of 57.5 meV transverse optical (TO) phonons at the X4-point and 18.6 meV transverse acoustic (TA) phonons at the X3-point of the zone edge. The calculated results show that the material optical gain of Si will overcome free-carrier absorption if the energy separation of the quasi-Fermi levels of electrons and holes exceeds 1.15 eV.
NASA Astrophysics Data System (ADS)
Tanigawa, Hiroyasu; Katoh, Yutai; Kohyama, Akira
1995-08-01
Effects of applied stress on the early stages of interstitial-type Frank loop evolution were investigated by both numerical calculation and irradiation experiments. The final objective of this research is to propose a comprehensive model of the complex stress effects on microstructural evolution under various conditions. In the experimental part of this work, microstructural analysis revealed that differences in resolved normal stress caused differences in the nucleation rates of Frank loops on the {111} family of crystallographic planes, and that the total nucleation rate of Frank loops increased with increasing external applied stress. A numerical calculation was carried out primarily to evaluate the validity of models of stress effects on the nucleation processes of Frank loop evolution. The calculation is based on rate equations which describe the evolution of point defects, small point defect clusters and Frank loops. The rate equations of Frank loop evolution were formulated for {111} planes, considering the effects of resolved normal stress on the clustering processes of small point defects and on the growth processes of Frank loops separately. The experimental results and the predictions of the numerical calculation coincided well qualitatively.
Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2017-10-01
FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an rf field, the hot plasma dielectric response is limited to a distance of a few particle Larmor radii near the magnetic field line passing through the test point. This localization of the hot plasma dielectric response results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present initial results of modeling rf waves using the FullWave code, including the calculation of the nonlocal conductivity kernel in 2D tokamak geometry, the interpolation of the conductivity kernel from test points to the adaptive cloud of computational points, and the results of self-consistent simulations of 2D rf fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.
A complex fermionic tensor model in d dimensions
NASA Astrophysics Data System (ADS)
Prakash, Shiroman; Sinha, Ritam
2018-02-01
In this note, we study a melonic tensor model in d dimensions based on three-index Dirac fermions with a four-fermion interaction. Summing the melonic diagrams at strong coupling allows one to define a formal large-N saddle point in arbitrary d and calculate the spectrum of scalar bilinear singlet operators. For d = 2 - ɛ the theory is an infrared fixed point, which we find has a purely real spectrum that we determine numerically for arbitrary d < 2, and analytically as a power series in ɛ. The theory appears to be weakly interacting when ɛ is small, suggesting that fermionic tensor models in 1 dimension can be studied in an ɛ expansion. For d > 2, the spectrum can still be calculated using the saddle point equations, which may define a formal large-N ultraviolet fixed point analogous to the Gross-Neveu model in d > 2. For 2 < d < 6, we find that the spectrum contains at least one complex scalar eigenvalue (similar to the complex eigenvalue present in the bosonic tensor model recently studied by Giombi, Klebanov and Tarnopolsky), which indicates that the theory is unstable. We also find that the fixed point is weakly interacting when d = 6 (or more generally d = 4n + 2) and has a real spectrum for 6 < d < 6.14, which we present as a power series in ɛ in 6 + ɛ dimensions.
MODELING PARTICULATE CHARGING IN ESPS
In electrostatic precipitators there is a strong interaction between the particulate space charge and the operating voltage and current of an electrical section. Calculating either the space charge or the operating point when the other is fixed is not difficult, but calculating b...
Information pricing based on trusted system
NASA Astrophysics Data System (ADS)
Liu, Zehua; Zhang, Nan; Han, Hongfeng
2018-05-01
Personal information has become a valuable commodity in today's society, so our goal is to develop realistic price points and a pricing system for it. First, we improve the existing BLP system to prevent cascading incidents and design a seven-layer model. From the cost of encryption in each layer, we derive personal-information (PI) price points. In addition, we use association rule mining algorithms from data mining to calculate the importance of information, in order to optimize the informational hierarchies of different attribute types within a multi-level trusted system. Finally, we use a normal distribution model to predict the encryption level distribution for users in different classes, and then calculate information prices through a linear programming model based on that distribution.
A fast dynamic grid adaption scheme for meteorological flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, B.H.; Trapp, R.J.
1993-10-01
The continuous dynamic grid adaption (CDGA) technique is applied to a compressible, three-dimensional model of a rising thermal. The computational cost, per grid point per time step, of using CDGA instead of a fixed, uniform Cartesian grid is about 53% of the total cost of the model with CDGA. The use of general curvilinear coordinates contributes 11.7% to this total, calculating and moving the grid 6.1%, and continually updating the transformation relations 20.7%. Costs due to calculations that involve the grid-point velocities (as well as some substantial unexplained costs) contribute the remaining 14.5%. A simple way to limit the cost of calculating the grid is presented. The grid is adapted by solving an elliptic equation for grid-point coordinates on a coarse grid and then interpolating the full finite-difference grid. In this application, the additional costs per grid point of CDGA are shown to be easily offset by the savings resulting from the reduction in the required number of grid points. In the simulation of the thermal, costs are reduced by a factor of 3 compared with those of a companion model with a fixed, uniform Cartesian grid. 8 refs., 8 figs.
ACTOMP - AUTOCAD TO MASS PROPERTIES
NASA Technical Reports Server (NTRS)
Jones, A.
1994-01-01
AutoCAD to Mass Properties (ACTOMP) was developed to facilitate quick mass properties calculations for structures having many simple elements in a complex configuration, such as trusses or sheet metal containers. Calculating the mass properties of structures of this type can be a tedious and repetitive process, so ACTOMP helps automate the calculations. The structure can be modeled in AutoCAD or a compatible CAD system in a matter of minutes using three-dimensional elements. This model provides all the geometric data necessary for a mass properties calculation of the structure. ACTOMP reads the geometric data of a drawing from the Drawing Interchange File (DXF) used in AutoCAD. The geometric entities recognized by ACTOMP include POINTs, 3DLINEs, and 3DFACEs. ACTOMP requests the mass, linear density, or area density of the elements on each layer, sums all the elements, and calculates the total mass, center of mass (CM) and mass moments of inertia (MOI). AutoCAD utilizes layers to define separate drawing planes; ACTOMP uses layers to differentiate between multiple types of similar elements. For example, if a structure is made of various types of beams, modeled as 3DLINEs, each with a different linear density, the beams can be grouped by linear density and each group placed on a separate layer. The program will request the linear density of 3DLINEs for each new layer it finds as it processes the drawing information. The same is true of POINTs and 3DFACEs. By using layers this way, a very complex model can be created. POINTs are used for point masses such as bolts, small machine parts, or small electronic boxes. 3DLINEs are used for beams, bars, rods, cables, and other similarly slender elements. 3DFACEs are used for planar elements and may be created as three- or four-point faces. Some examples of elements that might be modeled using 3DFACEs are plates, sheet metal, fabric, boxes, large-diameter hollow cylinders and evenly distributed masses. ACTOMP was written in Microsoft QuickBasic (Version 2.0). It was developed for the IBM PC microcomputer and has been implemented on an IBM PC compatible under DOS 3.21. ACTOMP was developed in 1988 and requires approximately 5K bytes to operate.
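For illustration, the summations ACTOMP performs for POINT entities reduce to the standard point-mass formulas; the following is a minimal Python sketch of that bookkeeping (the original program is QuickBasic, and 3DLINE/3DFACE elements would first be lumped into equivalent point masses):

```python
import numpy as np

def mass_properties(points, masses):
    """Total mass, center of mass, and inertia tensor of point masses.

    points : (N, 3) array of coordinates; masses : (N,) array.
    A sketch of the kind of summation ACTOMP performs for POINT
    entities; line and face elements would first be reduced to
    equivalent lumped masses.
    """
    points = np.asarray(points, dtype=float)
    masses = np.asarray(masses, dtype=float)
    m_total = masses.sum()
    cm = (masses[:, None] * points).sum(axis=0) / m_total

    # Mass moments of inertia about the center of mass.
    r = points - cm
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    ixx = (masses * (y**2 + z**2)).sum()
    iyy = (masses * (x**2 + z**2)).sum()
    izz = (masses * (x**2 + y**2)).sum()
    ixy = -(masses * x * y).sum()
    ixz = -(masses * x * z).sum()
    iyz = -(masses * y * z).sum()
    inertia = np.array([[ixx, ixy, ixz],
                        [ixy, iyy, iyz],
                        [ixz, iyz, izz]])
    return m_total, cm, inertia
```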
NASA Astrophysics Data System (ADS)
Lundberg, Oskar E.; Nordborg, Anders; Lopez Arteaga, Ines
2016-03-01
A state-dependent contact model including nonlinear contact stiffness and nonlinear contact filtering is used to calculate contact forces and rail vibrations with a time-domain wheel-track interaction model. In the proposed method, the full three-dimensional contact geometry is reduced to a point contact in order to lower the computational cost and to reduce the amount of required input roughness data. Green's functions including the linear dynamics of the wheel and the track are coupled with a point contact model, leading to a numerically efficient model for the wheel-track interaction. Nonlinear effects due to the shape and roughness of the wheel and rail surfaces are included in the point contact model by pre-calculating functions for the contact stiffness and contact filters. Numerical results are compared to field measurements of rail vibrations for passenger trains running at 200 km/h on a ballast track. Moreover, the influence of vehicle pre-load and of different degrees of roughness excitation on the resulting wheel-track interaction is studied by means of numerical predictions.
NASA Astrophysics Data System (ADS)
Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.
1994-02-01
A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.
Nuclear structure and weak rates of heavy waiting point nuclei under rp-process conditions
NASA Astrophysics Data System (ADS)
Nabi, Jameel-Un; Böyükata, Mahmut
2017-01-01
The structure and the weak-interaction-mediated rates of the heavy waiting point (WP) nuclei 80Zr, 84Mo, 88Ru, 92Pd and 96Cd along the N = Z line were studied within the interacting boson model-1 (IBM-1) and the proton-neutron quasi-particle random phase approximation (pn-QRPA). The energy levels of the N = Z WP nuclei were calculated by fitting the essential parameters of the IBM-1 Hamiltonian, and their geometric shapes were predicted by plotting potential energy surfaces (PESs). Half-lives, continuum electron capture rates, positron decay rates, electron capture cross sections of WP nuclei, energy rates of β-delayed protons and their emission probabilities were then calculated using the pn-QRPA. The calculated Gamow-Teller strength distributions were compared with previous calculations. We present positron decay and continuum electron capture rates on these WP nuclei under rp-process conditions using the same model. Under rp-process conditions, the calculated total weak rates are twice the Skyrme HF+BCS+QRPA rates for 80Zr; for the remaining nuclei the two calculations compare well. The electron capture rates are significant and compete well with the corresponding positron decay rates under rp-process conditions. The findings of the present study support the conclusion that electron capture rates form an integral part of the weak rates under rp-process conditions and play an important role in nuclear model calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, Scott E., E-mail: sedavids@utmb.edu
Purpose: A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code and the versatility of a practical analytical multisource model and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans submitted to the clinical trial community from any institution, providing a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes from 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high-dose, high-gradient, and low-dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a single methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.
Quantum critical point revisited by dynamical mean-field theory
NASA Astrophysics Data System (ADS)
Xu, Wenhu; Kotliar, Gabriel; Tsvelik, Alexei M.
2017-03-01
Dynamical mean-field theory is used to study the quantum critical point (QCP) in the doped Hubbard model on a square lattice. The QCP is characterized by a universal scaling form of the self-energy and a spin density wave instability at an incommensurate wave vector. The scaling form unifies the low-energy kink and the high-energy waterfall feature in the spectral function, while the spin dynamics includes both the critical incommensurate and high-energy antiferromagnetic paramagnons. We use the frequency-dependent four-point correlation function of spin operators to calculate the momentum-dependent correction to the electron self-energy. Comparison with calculations based on the spin-fermion model indicates that the frequency dependence of the quasiparticle-paramagnon vertices is an important factor in capturing the momentum dependence of quasiparticle scattering.
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one-transport-equation, or a two-transport-equation model. Interior grid points are computed using the explicit MacCormack scheme, with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
Planar dielectric waveguides in rotation are optical fibers: comparison with the classical model.
Peña García, Antonio; Pérez-Ocón, Francisco; Jiménez, José Ramón
2008-01-21
A novel and simpler method to calculate the main parameters in fiber optics is presented. The method is based on a planar dielectric waveguide in rotation and, as an example, is applied to calculate the turning points and the inner caustic in an optical fiber with a parabolic refractive index. The solution found using this method is shown to agree with the standard (and more complex) method, whose solutions for these points are also summarized in this paper.
On two-point boundary correlations in the six-vertex model with domain wall boundary conditions
NASA Astrophysics Data System (ADS)
Colomo, F.; Pronko, A. G.
2005-05-01
The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.
Orbital stability close to asteroid 624 Hektor using the polyhedral model
NASA Astrophysics Data System (ADS)
Jiang, Yu; Baoyin, Hexi; Li, Hengnian
2018-03-01
We investigate the orbital stability close to the unique L4-point Jupiter binary Trojan asteroid 624 Hektor. The gravitational potential of 624 Hektor is calculated using the polyhedron model with observational data comprising 2038 faces and 1021 vertices. Previous studies have presented three different density values for 624 Hektor. The equilibrium points in the gravitational potential of 624 Hektor for the different density values have been studied in detail. There are five equilibrium points in the gravitational potential of 624 Hektor regardless of the density value. The positions, Jacobian, eigenvalues, topological cases, stability, as well as the Hessian matrix of the equilibrium points are investigated. For the three density values, the number, topological cases, and stability of the equilibrium points are the same; however, their positions vary with the density of the asteroid. The outer equilibrium points move away from the asteroid's mass center as the density increases, and the inner equilibrium point moves closer to it. There exist unstable periodic orbits near the surface of 624 Hektor. We calculated an orbit near the primary's equatorial plane of this binary Trojan asteroid; the results indicate that the orbit remains stable after 28.8375 d.
GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT
NASA Astrophysics Data System (ADS)
Strubbe, David A.
GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.
Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.
Duckic, Paulina; Hayes, Robert B
2018-06-01
Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation in a shielding material. The accuracy of the point kernel method is strongly affected by how well the analyzed parameters correspond to the experimental configuration, which we attempt to simplify here. The point kernel method has not found widespread practical use for neutron shielding calculations, owing to the complexity of neutron transport through shielding materials (i.e., the variety of interaction mechanisms that neutrons may undergo while traversing the shield) and the nonlinear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma-ray transmission factors. The neutron and secondary gamma-ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties, so a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content was conducted; it showed that varying the water content of the concrete has a significant impact on both. Finally, support vector regression, a machine learning technique, was employed to build a model from the calculated data for predicting buildup factors. The developed model can predict most of the data within 20% relative error.
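The point kernel method itself reduces to a simple attenuation formula scaled by the buildup factor; below is a schematic Python sketch for the gamma-ray case (the paper's neutron treatment instead relies on MCNP6-derived transmission factors, so this is only the generic kernel):

```python
import math

def point_kernel_flux(source_strength, mu, r, buildup):
    """Point-kernel flux with a buildup factor.

    source_strength : particles/s emitted isotropically
    mu              : linear attenuation coefficient (1/cm)
    r               : source-to-dose-point distance (cm)
    buildup         : dimensionless buildup factor B(mu*r)
    Returns flux (particles/cm^2/s); multiply by a flux-to-dose
    conversion factor to obtain an ambient dose equivalent rate.
    """
    return source_strength * buildup * math.exp(-mu * r) / (4.0 * math.pi * r**2)

# Example with illustrative numbers: 1e9 photons/s, mu = 0.15 1/cm, 50 cm, B = 4.
print(point_kernel_flux(1e9, 0.15, 50.0, 4.0))
```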
Modeling and calculation of impact friction caused by corner contact in gear transmission
NASA Astrophysics Data System (ADS)
Zhou, Changjiang; Chen, Siyu
2014-09-01
Corner contact in a gear pair causes vibration and noise, which has attracted much attention. However, tooth errors and deformation make it difficult to determine the point at which corner contact occurs and to study the mechanism of tooth impact friction. Based on the mechanism of corner contact, the process of corner contact is divided into two stages, impact and scratch, and a calculation model including the gear equivalent error and combined deformation is established along the line of action. According to the distributive law, the gear equivalent error is synthesized from the base pitch error, normal backlash and tooth profile modification on the line of action. The combined tooth compliance of the first point lying in corner contact before the normal path is inverted along the line of action, on the basis of the theory of engagement and the curve of tooth synthetic compliance and load history. Combining the equivalent error with the combined deflection, a criterion for locating the point in corner contact is derived. The impact positions and forces, from the beginning to the end of corner contact before the normal path, are then calculated accurately. From these results, the lash model during corner contact is established, and the impact force and friction coefficient are quantified. A numerical example is given and the averaged impact friction coefficient based on the presented calculation method is validated. This research yields results that can be referenced to understand the complex mechanism of tooth impact friction, to quantitatively calculate the friction force and coefficient, and to support exact gear design for tribology.
Ravald, L; Fornstedt, T
2001-01-26
The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were done on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fit to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the fitted bi-Langmuir function. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model
NASA Astrophysics Data System (ADS)
Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man
2017-03-01
Computer generated holograms (CGHs) are becoming increasingly important for 3-D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated by numerical calculation on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH of a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane using a sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so the dominant CGH components can be generated rapidly from a small set of signals by sFFT. Experimental results show that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
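A layer-based CGH accumulates, per depth slice, an FFT-propagated field on the hologram plane; the sketch below uses the angular-spectrum transfer function with numpy's dense FFT standing in for the paper's sparse FFT, and all parameters are illustrative:

```python
import numpy as np

def layer_cgh(layers, depths, wavelength, pitch):
    """Accumulate a hologram from depth layers via FFT propagation.

    layers     : list of (N, N) complex object fields, one per depth slice
    depths     : list of propagation distances z (same length as layers)
    wavelength : wavelength of light
    pitch      : sampling pitch on the hologram plane
    Uses the angular-spectrum transfer function; a dense FFT is used
    here where the paper exploits sparsity with sFFT.
    """
    n = layers[0].shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    k2 = (1.0 / wavelength) ** 2 - fx**2 - fy**2
    kz = 2j * np.pi * np.sqrt(np.maximum(k2, 0.0))  # evanescent part dropped

    hologram = np.zeros((n, n), dtype=complex)
    for field, z in zip(layers, depths):
        spectrum = np.fft.fft2(field)
        hologram += np.fft.ifft2(spectrum * np.exp(kz * z))
    return hologram
```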
Investigation of the 3-D actinic flux field in mountainous terrain
Wagner, J.E.; Angelini, F.; Blumthaler, M.; Fitzka, M.; Gobbi, G.P.; Kift, R.; Kreuter, A.; Rieder, H.E.; Simic, S.; Webb, A.; Weihs, P.
2011-01-01
During three field campaigns, spectral actinic flux was measured from 290 to 500 nm under clear-sky conditions in Alpine terrain; the associated O3- and NO2-photolysis frequencies were calculated, and the measurement products were then compared with 1-D and 3-D model calculations. To do this, a 3-D radiative transfer model was adapted for actinic flux calculations in mountainous terrain, and maps of the surface actinic flux field calculated with the 3-D model are given. The differences between the 3-D and 1-D model results for selected days during the campaigns are shown, together with the ratios of the modeled actinic flux values to the measurements. In many cases the 1-D model overestimates actinic flux by more than the measurement uncertainty of 10%. The 3-D model generally gives significantly lower values and can underestimate the actinic flux by up to 30%. This case study attempts to quantify the impact of snow cover in combination with topography on spectral actinic flux. The impact of snow cover on the actinic flux was ~25% in narrow snow-covered valleys, but for snow-free areas there were no significant changes due to snow cover in the surrounding area, and the effect of snow cover at distances over 5 km from the point of interest was below 5%. Overall, the 3-D model can calculate actinic flux to the same accuracy as the 1-D model for single points, but gives a much more realistic view of the surface actinic flux field in mountains, as topography and obstruction of the horizon are taken into account. PMID:26412915
Lyman alpha initiated winds in late-type stars
NASA Technical Reports Server (NTRS)
Haisch, B. M.; Linsky, J. L.; Vanderhucht, K. A.
1979-01-01
The IUE survey of late-type stars revealed a sharp division in the HR diagram between stars with solar type spectra (chromosphere and transition region lines) and those with non-solar type spectra (only chromosphere lines). Models of both hot coronae and cool wind flows were calculated using stellar model chromospheres as starting points for stellar wind calculations in order to investigate the possibility of having a supersonic transition locus in the HR diagram dividing hot coronae from cool winds. From these models, it is concluded that the Lyman alpha flux may play an important role in determining the location of a stellar wind critical point. The interaction of Lyman alpha radiation pressure with Alfven waves in producing strong, low temperature stellar winds in the star Arcturus is examined.
The importance of the external potential on group electronegativity.
Leyssens, Tom; Geerlings, Paul; Peeters, Daniel
2005-11-03
The electronegativity of groups placed in a molecular environment is obtained using CCSD calculations of the electron affinity and ionization energy. A point charge model is used as an approximation of the molecular environment. The electronegativity values obtained in the presence of a point charge model are compared to the isolated group property to estimate the importance of the external potential on the group's electronegativity. The validity of the "group in molecule" electronegativities is verified by comparing EEM (electronegativity equalization method) charge transfer values to the explicitly calculated natural population analysis (NPA) ones, as well as by comparing the variation in electronegativity between the isolated functional group and the functional group in the presence of a modeled environment with the variation based on a perturbation expansion of the chemical potential.
Gravitational microlensing of gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1993-01-01
A Monte Carlo code is developed to calculate gravitational microlensing in three dimensions when the lensing optical depth is low or moderate (not greater than 0.25). The code calculates positions of microimages and time delays between the microimages. The majority of lensed gamma-ray bursts should show a simple double-burst structure, as predicted by a single point mass lens model. A small fraction should show complicated multiple events due to the collective effects of several point masses (black holes). Cosmological models with a significant fraction of mass density in massive compact objects can be tested by searching for microlensing events in the current BATSE data. Our catalog generated by 10,000 Monte Carlo models is accessible through the computer network. The catalog can be used to take realistic selection effects into account.
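The single point-mass lens quantities such a catalog is built from are standard; the following is a small Python sketch of the image magnifications and the time delay between the two images (the example values are hypothetical, and the Monte Carlo code additionally samples many lenses in 3-D):

```python
import math

G = 6.674e-11; C = 2.998e8; M_SUN = 1.989e30

def point_lens(u, m_solar, z_lens):
    """Magnifications and time delay for the two images of a point-mass lens.

    u       : impact parameter in Einstein radii
    m_solar : lens mass in solar masses
    z_lens  : lens redshift (redshifts the observed delay)
    Returns (mu_plus, |mu_minus|, delay_seconds); standard point-lens results.
    """
    s = math.sqrt(u * u + 4.0)
    mu_plus = (u * u + 2.0) / (2.0 * u * s) + 0.5
    mu_minus = mu_plus - 1.0          # absolute magnification of fainter image
    dt = (4.0 * G * m_solar * M_SUN / C**3) * (1.0 + z_lens) * (
        0.5 * u * s + math.log((s + u) / (s - u)))
    return mu_plus, mu_minus, dt

# A 1e6 solar-mass black hole at z = 0.5 with u = 0.5:
print(point_lens(0.5, 1e6, 0.5))      # delay of order tens of seconds
```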
Scaling in the vicinity of the four-state Potts fixed point
NASA Astrophysics Data System (ADS)
Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.
2017-08-01
We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.
A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising
NASA Astrophysics Data System (ADS)
Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua
2018-04-01
In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. First, large-scale outliers are removed using statistics of the neighboring points within radius r of each point. Then, the algorithm estimates the curvature of the point cloud data using a conicoid (paraboloid) fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is used to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach is efficient for different scales and intensities of noise in point clouds, achieves high precision, and preserves features at the same time. It is also robust to different noise models.
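One plausible form of the weighted clustering step is a fuzzy c-means update in which each point's curvature feature enters the center computation as a weight; the numpy sketch below assumes that scheme (the paper's exact weighting may differ):

```python
import numpy as np

def weighted_fcm(points, weights, n_clusters, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-means with per-point feature weights.

    points  : (N, D) array; weights : (N,) curvature-based weights
    (how the curvature enters the objective is an assumption here).
    Returns cluster centers and the (C, N) membership matrix.
    """
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Distances from every center to every point, shape (C, N).
        d = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        # Standard FCM membership update.
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0, keepdims=True)
        # Weighted center update: high-curvature points pull harder.
        wu = weights[None, :] * u**m
        centers = (wu @ points) / wu.sum(axis=1, keepdims=True)
    return centers, u
```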
Infrared image background modeling based on improved Susan filtering
NASA Astrophysics Data System (ADS)
Yuehua, Xia
2018-02-01
When the SUSAN filter is used to model the background of an infrared image, the Gaussian filter lacks directional filtering ability; after filtering, edge information in the image is not well preserved, so many edge singular points remain in the difference image, increasing the difficulty of target detection. To solve these problems, anisotropy is introduced in this paper, and an anisotropic Gaussian filter is used instead of the Gaussian filter in the SUSAN filter operator. First, an anisotropic gradient operator is used to calculate the horizontal and vertical gradients at each image point, which determine the direction of the filter's long axis. Second, the smoothness of the point's local neighborhood is used to calculate the variances along the filter's long and short axes. The first-order norm of the difference between the local gray levels and their mean then determines the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. Experimental results, evaluated with mean squared error (MSE), structural similarity (SSIM) and local signal-to-noise ratio gain (GSNR), show that compared with the traditional filtering algorithm, the improved SUSAN filter achieves better background modeling: it effectively preserves edge information in the image and enhances dim small targets in the difference image, greatly reducing the false alarm rate.
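The key ingredient is an oriented Gaussian kernel whose long axis follows the local gradient direction; here is a minimal numpy sketch of such a kernel (parameter choices are illustrative):

```python
import numpy as np

def aniso_gauss_kernel(size, sigma_long, sigma_short, theta):
    """Oriented 2-D Gaussian kernel for direction-aware smoothing.

    size        : kernel is (size x size), size odd
    sigma_long  : std. dev. along the long axis (set from the local
                  gradient direction, as in the improved SUSAN operator)
    sigma_short : std. dev. across it
    theta       : long-axis orientation in radians
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's principal axes.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma_long) ** 2 + (yr / sigma_short) ** 2))
    return k / k.sum()

# A 9x9 kernel elongated along the 45-degree direction:
print(aniso_gauss_kernel(9, 3.0, 1.0, np.pi / 4).round(3))
```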
Ricker, Martin; Peña Ramírez, Víctor M.; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions obtained by measuring the same subjects repeatedly over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T + E, where A is the initial relative growth to be estimated and E is an error term for each tree and time point. The model further involves a term of the form Ei[−b·r], with TPR being the turning point radius in a sigmoid curve, calibrated at an estimated time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radii and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth. One site (at the Popocatépetl volcano) stood out, with an estimated initial relative growth 3.9 times that of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time. PMID:25402427
Gambling scores for earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang
2010-04-01
This paper presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the points bet by the forecaster if he loses. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model, and when the reference model is the Poisson model.
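Under one natural fair rule (an assumption consistent with, but not necessarily identical to, the paper's), a bet of w points on an event the reference model assigns probability p returns w(1 − p)/p on success, so the expected gain under the reference model is zero; a minimal sketch:

```python
def gambling_gain(bet, p_ref, success):
    """Reputation change for one prediction under a fair rule.

    bet     : reputation points staked by the forecaster
    p_ref   : probability the reference model (the "house") assigns
              to the predicted event
    success : whether the event occurred as predicted
    Fairness check: p_ref * bet*(1-p_ref)/p_ref - (1-p_ref)*bet = 0,
    i.e., a forecaster no better than the reference gains nothing on average.
    """
    if success:
        return bet * (1.0 - p_ref) / p_ref
    return -bet

# Betting 1 point on an event the Poisson reference model deems
# unlikely (p_ref = 0.1) gains 9 points if the event occurs.
print(gambling_gain(1.0, 0.1, True))   # 9.0
```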
Mojto, Viliam; Rausova, Zuzana; Chrenova, Jana; Dedik, Ladislav
2015-12-01
This work aimed to evaluate the use of a four-point glucagon stimulation test of the C-peptide effect on glucose utilization in type 1 diabetic patients using a new mathematical model. A group of 32 type 1 diabetic patients and a group of 10 healthy control subjects underwent a four-point glucagon stimulation test with blood sampling at 0, 6, 15 and 30 min after a 1 mg glucagon intravenous bolus. Pharmacokinetic and pharmacokinetic/pharmacodynamic models of the C-peptide effect on glucose utilization versus area under the curve (AUC) were used. A two-sample t test and ANOVA with Bonferroni correction were used to test the significance of differences between parameters. A significant difference between the control and patient groups was observed for both the coefficient of whole-body glucose utilization and the AUC C-peptide/AUC glucose ratio (p < 0.001 and p = 0.002, respectively). The high correlation (r = 0.97) between the modeled coefficient of whole-body glucose utilization and the numerically calculated AUC C-peptide/AUC glucose ratio over the entire cohort indicates the stability of the method. The short-term four-point glucagon stimulation test thus allows the numerically calculated AUC C-peptide/AUC glucose ratio and/or the model-derived coefficient of whole-body glucose utilization to be used to diagnostically identify type 1 diabetic patients.
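The AUC ratio can be computed directly from the four sampling points with the trapezoidal rule; a short numpy sketch with hypothetical values (not data from the study):

```python
import numpy as np

# Hypothetical four-point test values, purely for illustration.
t = np.array([0.0, 6.0, 15.0, 30.0])          # sampling times, min
c_peptide = np.array([0.6, 2.1, 1.5, 1.0])    # nmol/L
glucose = np.array([5.0, 7.5, 7.0, 6.2])      # mmol/L

def auc(y, x):
    """Trapezoidal area under the curve over the sampling points."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

print(auc(c_peptide, t) / auc(glucose, t))    # AUC C-peptide / AUC glucose
```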
Quantum Critical Point revisited by the Dynamical Mean Field Theory
NASA Astrophysics Data System (ADS)
Xu, Wenhu; Kotliar, Gabriel; Tsvelik, Alexei
Dynamical mean-field theory is used to study the quantum critical point (QCP) in the doped Hubbard model on a square lattice. The QCP is characterized by a universal scaling form of the self-energy and a spin density wave instability at an incommensurate wave vector. The scaling form unifies the low-energy kink and the high-energy waterfall feature in the spectral function, while the spin dynamics includes both the critical incommensurate and high-energy antiferromagnetic paramagnons. We use the frequency-dependent four-point correlation function of spin operators to calculate the momentum-dependent correction to the electron self-energy. Our results reveal a substantial difference from calculations based on the spin-fermion model, which indicates that the frequency dependence of the quasiparticle-paramagnon vertices is an important factor. The authors are supported by the Center for Computational Design of Functional Strongly Correlated Materials and Theoretical Spectroscopy under DOE Grant DE-FOA-0001276.
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines the advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
Speed Approach for UAV Collision Avoidance
NASA Astrophysics Data System (ADS)
Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.
2018-05-01
The article presents a new approach to detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated by two or three trajectory points obtained from the ADS-B system. In the process of determining the meeting points of trajectories, two cutoff values of the critical speed range, within which a UAV collision is possible, are calculated. Since the expressions for the meeting points and the cutoff values of the critical speed are available in analytical form, even an on-board computer system with limited computational capacity can complete the calculation in far less time than it takes to receive new data from ADS-B. For this reason, the calculations can be updated at each cycle of data reception, and the trajectory approximation can be bounded by straight lines. This approach allows the development of a compact collision avoidance algorithm, even for a significant number of UAVs (more than several dozen). To prove the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.
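One way to realize such an analytical check for straight-line trajectories is to solve for the crossing point and bound the second UAV's speed by a time-separation window; the Python sketch below makes those assumptions (the ±τ conflict criterion is illustrative, not necessarily the paper's rule):

```python
import numpy as np

def critical_speed_range(p1, d1, v1, p2, d2, tau):
    """Speed range for UAV 2 that creates a conflict at the crossing point.

    p1, p2 : current positions (2-vectors); d1, d2 : unit direction vectors
    v1     : speed of UAV 1; tau : time-separation threshold at the
             meeting point (illustrative conflict criterion).
    Returns (v2_min, v2_max) or None if the straight paths do not cross.
    """
    # Solve p1 + s*d1 = p2 + t*d2 for the path parameters s, t.
    a = np.column_stack((d1, -d2))
    if abs(np.linalg.det(a)) < 1e-9:
        return None                      # parallel trajectories
    s, t = np.linalg.solve(a, p2 - p1)
    if s <= 0 or t <= 0:
        return None                      # crossing point lies behind a UAV
    t1 = s / v1                          # arrival time of UAV 1
    # UAV 2 conflicts if it arrives within [t1 - tau, t1 + tau].
    v2_max = t / max(t1 - tau, 1e-9)
    v2_min = t / (t1 + tau)
    return v2_min, v2_max

print(critical_speed_range(np.array([0., 0.]), np.array([1., 0.]), 10.0,
                           np.array([50., -50.]), np.array([0., 1.]), 2.0))
```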
Conductivity in the two-dimensional Hubbard model at weak coupling
NASA Astrophysics Data System (ADS)
Bergeron, Dominic
The two-dimensional (2D) Hubbard model is often considered the minimal model for copper-oxide high-critical-temperature superconductors (high-Tc cuprates). On a square lattice, this model exhibits the phases common to all high-Tc cuprates: the antiferromagnetic phase, the superconducting phase and the so-called pseudogap phase. It has no exact solution; however, several approximate methods allow its properties to be studied numerically. Optical and transport properties are well known in the high-Tc cuprates and are therefore good candidates for validating a theoretical model and for better understanding the physics of these materials. This thesis concerns the calculation of these properties for the 2D Hubbard model at weak to intermediate coupling. The method used is the two-particle self-consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the expression for the conductivity in the TPSC approach is presented. This expression contains the so-called vertex corrections, which account for correlations between quasiparticles. To make the numerical calculation of these corrections possible, algorithms using, among other things, fast Fourier transforms and cubic splines are developed. The calculations are carried out for the square lattice with nearest-neighbor hopping around the antiferromagnetic critical point. At dopings below the critical point, the optical conductivity displays a mid-infrared bump at low temperature, as observed in several high-Tc cuprates. In the resistivity as a function of temperature, insulating behavior is found in the pseudogap regime when vertex corrections are neglected, and metallic behavior when they are taken into account. Near the critical point, the resistivity is linear in T at low temperature and becomes progressively proportional to T² at high doping. Some results with hopping to more distant neighbors are also presented. Keywords: Hubbard, quantum critical point, conductivity, vertex corrections
NASA Astrophysics Data System (ADS)
Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.
2005-12-01
This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Determining hypocentral parameters with existing algorithms is difficult, because the results can vary with the initial velocity model. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source, to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the number and thicknesses of layers in the true and modeled structures. The computational results show that this method determines nearly exact hypocentral parameters without depending on the initial velocity model. Furthermore, accurate and nearly unique hypocentral parameters were obtained even when the number of modeled layers and their thicknesses differed from those of the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown. It also provides basic a priori information for 3-D studies. KEYWORDS: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
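A stripped-down version of the idea, with straight rays in a uniform medium standing in for the paper's layered-model two-point ray tracing, can be sketched as a small GA minimizing travel-time residuals over (x, y, z, t0); all values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Station coordinates (km) and a uniform P velocity (km/s) -- synthetic.
stations = np.array([[0., 0., 0.], [30., 5., 0.], [10., 25., 0.], [-15., 12., 0.]])
true_hypo = np.array([8.0, 10.0, 12.0, 1.0])          # x, y, z, origin time
V = 6.0

def travel_times(h):
    # Straight-ray times; the paper replaces this with two-point ray
    # tracing through a layered velocity model.
    return h[3] + np.linalg.norm(stations - h[:3], axis=1) / V

t_obs = travel_times(true_hypo)                        # noise-free arrivals

def misfit(h):
    return np.sqrt(np.mean((travel_times(h) - t_obs) ** 2))

# Minimal GA: truncation selection, blend crossover, Gaussian mutation.
lo = np.array([-50., -50., 0., -5.]); hi = np.array([50., 50., 40., 5.])
pop = rng.uniform(lo, hi, size=(100, 4))
for gen in range(200):
    fit = np.array([misfit(h) for h in pop])
    parents = pop[np.argsort(fit)[:30]]
    children = []
    while len(children) < len(pop) - 30:
        a, b = parents[rng.integers(30, size=2)]
        w = rng.uniform(0, 1, 4)
        child = w * a + (1 - w) * b                    # blend crossover
        child += rng.normal(0, 0.5, 4)                 # mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(h) for h in pop])]
print(best)          # approaches true_hypo for noise-free data
```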
Development of FullWave : Hot Plasma RF Simulation Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei
2017-10-01
A full-wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasmas is being developed. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space, without limiting approximations, by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows better resolution of plasma resonances, antenna structures and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of finite differences for the approximation of derivatives on an adaptive cloud of computational points; a model and results of nonlocal conductivity kernel calculations in tokamak geometry; results of 2-D full-wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of the hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.
Identical superdeformed bands in yrast 152Dy: a systematic description
NASA Astrophysics Data System (ADS)
Dadwal, Anshul; Mittal, H. M.
2018-06-01
The nuclear softness (NS) formula, semiclassical particle rotor model (PRM) and modified exponential model with pairing attenuation are used for a systematic study of the identical superdeformed bands in the A ∼ 150 mass region. These formulae/models are employed to study the identical superdeformed bands relative to the yrast SD band 152Dy(1): {152Dy(1), 151Tb(2)}, {152Dy(1), 151Dy(4)} (midpoint), {152Dy(1), 153Dy(2)} (quarter point), and {152Dy(1), 153Dy(3)} (three-quarter point). The parameters baseline moment of inertia (I0), alignment (i) and effective pairing parameter (Δ0) are obtained by least-squares fitting of the γ-ray transition energies in the NS formula, semiclassical PRM and modified exponential model with pairing attenuation, respectively. The calculated parameters are found to depend sensitively on the proposed baseline spin (I0).
Dynamic Analysis of Geared Rotors by Finite Elements
NASA Technical Reports Server (NTRS)
Kahraman, A.; Ozguven, H. Nevzat; Houser, D. R.; Zakrajsek, J. J.
1992-01-01
A finite element model of a geared rotor system on flexible bearings has been developed. The model includes the rotary inertia of shaft elements, axial loading on shafts, flexibility and damping of bearings, material damping of shafts, and the stiffness and damping of the gear mesh. The coupling between the torsional and transverse vibrations of the gears was considered in the model, and a constant mesh stiffness was assumed. The analysis procedure can be used for forced vibration analysis of geared rotors by calculating the critical speeds and determining the response of any point on the shafts to mass unbalances, geometric eccentricities of gears, and displacement transmission error excitation at the mesh point. The dynamic mesh forces due to these excitations can also be calculated. The model has been applied to several systems to demonstrate its accuracy and to study the effect of bearing compliances on system dynamics.
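At its core, the critical-speed step solves the generalized eigenproblem of the assembled mass and stiffness matrices; a toy two-DOF sketch of the undamped natural frequencies (the matrices are stand-ins, and a full rotor analysis adds gyroscopic and speed-dependent terms):

```python
import numpy as np
from scipy.linalg import eigh

# Undamped natural frequencies of an assembled FE model:
# solve K phi = w^2 M phi, with M, K the global mass/stiffness matrices.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])                    # toy 2-DOF stand-in
K = np.array([[600.0, -200.0],
              [-200.0, 200.0]])
w2, phi = eigh(K, M)                          # generalized eigenproblem
print(np.sqrt(w2) / (2 * np.pi))              # natural frequencies in Hz
```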
NASA Astrophysics Data System (ADS)
Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu
2018-03-01
During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which results in frequent changes to the structure and produces many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically during geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, in combination with collision detection from virtual reality technology. Then point kernels are generated by sampling within the approximate model, and once the material and radiometric attributes are input, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the geometric-progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
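The geometric-progression fitting formula referred to here has, in its common formulation, the following form; the Python sketch uses placeholder coefficients (real values are tabulated per material and photon energy, and the NBS04/concrete-specific data are not reproduced here):

```python
import math

def gp_buildup(x, b, c, a, d, xk):
    """Geometric-progression (G-P) buildup factor B(x).

    x : shield thickness in mean free paths; (b, c, a, d, xk) are the
    tabulated G-P fitting coefficients for the material and energy
    (the values used below are placeholders, not real data).
    """
    k = c * x**a + d * (math.tanh(x / xk - 2.0) - math.tanh(-2.0)) \
        / (1.0 - math.tanh(-2.0))
    if abs(k - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x
    return 1.0 + (b - 1.0) * (k**x - 1.0) / (k - 1.0)

# Placeholder coefficients, purely for illustration:
print(gp_buildup(2.0, b=1.8, c=1.1, a=-0.05, d=0.01, xk=14.0))
```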
Regional model calculations over annual cycles have pointed to the need for accurately representing impacts of long-range transport. Linking regional and global scale models have met with mixed success as biases in the global model can propagate and influence regional calculatio...
Application of the QSPR approach to the boiling points of azeotropes.
Katritzky, Alan R; Stoyanova-Slavova, Iva B; Tämm, Kaido; Tamm, Tarmo; Karelson, Mati
2011-04-21
CODESSA Pro derivative descriptors were calculated for a data set of 426 azeotropic mixtures by the centroid approximation and the weighted-contribution-factor approximation. The two approximations produced almost identical four-descriptor QSPR models relating the structural characteristics of the individual components of the azeotropes to the azeotropic boiling points. These models were supported by internal and external validations. The descriptors contributing to the QSPR models are directly related to the three components of the enthalpy (heat) of vaporization.
Metric Scale Calculation for Visual Mapping Algorithms
NASA Astrophysics Data System (ADS)
Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.
2018-05-01
Visual SLAM algorithms allow the camera to be localized by mapping its environment as a point cloud built from visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs of known metric size, are derived by projecting their image detections onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
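One plausible fusion scheme consistent with the described behavior is an inverse-variance weighted average of the individual scale values; a short numpy sketch under that assumption (the paper's actual fusion rule may differ):

```python
import numpy as np

def fuse_scales(scales, sigmas):
    """Inverse-variance weighted fusion of individual scale estimates.

    scales : metric scale values from lane width, room height, traffic
             signs, etc.; sigmas : their standard deviations.
    Returns the fused scale and its standard deviation.
    """
    scales = np.asarray(scales, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    fused = np.sum(w * scales) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))
    return fused, fused_sigma

# Three hypothetical estimates: lane width, ceiling height, sign size.
print(fuse_scales([0.052, 0.048, 0.050], [0.004, 0.006, 0.002]))
```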
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blijderveen, Maarten van; University of Twente, Department of Thermal Engineering, Drienerlolaan 5, 7522 NB Enschede; Bramer, Eddy A.
Highlights: • We model piloted ignition times of wood and plastics. • The model is applied to a packed bed. • When the air flow is above a critical level, no ignition can take place. - Abstract: To gain insight into the startup of an incinerator, this article deals with piloted ignition. A newly developed model is described to predict the piloted ignition times of wood, PMMA and PVC. The model is based on the lower flammability limit and the adiabatic flame temperature at this limit. The incoming radiative heat flux, sample thickness and moisture content are some of the variables used. Not only the ignition time can be calculated with the model, but also the mass flux and surface temperature at ignition. The ignition times for softwoods and PMMA are mainly under-predicted. For hardwoods and PVC the predicted ignition times agree well with experimental results. Due to significant scatter in the experimental data, the mass flux and surface temperature calculated with the model are hard to validate. The model is applied to the startup of a municipal waste incineration plant. For this process a maximum allowable primary air flow is derived; when the primary air flow is above this maximum, no ignition can be obtained.
Effects of damping on mode shapes, volume 1
NASA Technical Reports Server (NTRS)
Gates, R. M.
1977-01-01
Displacement, velocity, and acceleration admittances were calculated for a realistic NASTRAN structural model of the space shuttle for three conditions: liftoff, maximum dynamic pressure, and end of solid rocket booster burn. The realistic model of the orbiter, external tank, and solid rocket motors included the representation of structural joint transmissibilities by finite stiffness and damping elements. Methods were developed to incorporate structural joints and their damping characteristics into a finite element model of the space shuttle, to determine the point damping parameters required to produce realistic damping in the primary modes, and to calculate the effect of distributed damping on structural resonances through the calculation of admittances.
Analysis of data from NASA B-57B gust gradient program
NASA Technical Reports Server (NTRS)
Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.
1985-01-01
Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included the calculation of average turbulence parameters, integral length scales, probability density functions, single-point autocorrelation coefficients, two-point autocorrelation coefficients, normalized autospectra, normalized two-point autospectra, and two-point cross spectra for gust velocities. The single-point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two-point spatial turbulence correlations.
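For illustration, the single-point autocorrelation coefficients named above can be formed from a sampled gust-velocity record along the following lines; this is a generic sketch assuming uniform sampling, with names of our choosing.

```python
import numpy as np

def autocorr_coeffs(u, max_lag):
    """Single-point autocorrelation coefficients R(k) of a uniformly
    sampled gust-velocity record u."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    var = np.mean(u * u)
    return np.array([np.mean(u[:len(u) - k] * u[k:]) / var
                     for k in range(max_lag + 1)])
```

An integral length scale then follows, for example, from the mean airspeed times the integral of R over lag time.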
Three-dimensional Simulations of Pure Deflagration Models for Thermonuclear Supernovae
NASA Astrophysics Data System (ADS)
Long, Min; Jordan, George C., IV; van Rossum, Daniel R.; Diemer, Benedikt; Graziani, Carlo; Kessler, Richard; Meyer, Bradley; Rich, Paul; Lamb, Don Q.
2014-07-01
We present a systematic study of the pure deflagration model of Type Ia supernovae (SNe Ia) using three-dimensional, high-resolution, full-star hydrodynamical simulations, nucleosynthetic yields calculated using Lagrangian tracer particles, and light curves calculated using radiation transport. We evaluate the simulations by comparing their predicted light curves with many observed SNe Ia using the SALT2 data-driven model and find that the simulations may correspond to under-luminous SNe Iax. We explore the effects of the initial conditions on our results by varying the number of randomly selected ignition points from 63 to 3500, and the radius of the centered sphere they are confined in from 128 to 384 km. We find that the rate of nuclear burning depends on the number of ignition points at early times, the density of ignition points at intermediate times, and the radius of the confining sphere at late times. The results depend primarily on the number of ignition points, but we do not expect this to be the case in general. The simulations with few ignition points release more nuclear energy E_nuc, have larger kinetic energies E_K, and produce more ⁵⁶Ni than those with many ignition points, and differ in the distribution of ⁵⁶Ni, Si, and C/O in the ejecta. For these reasons, the simulations with few ignition points exhibit higher peak B-band absolute magnitudes M_B and light curves that rise and decline more quickly; their M_B and light curves resemble those of under-luminous SNe Iax, while those of simulations with many ignition points do not.
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal component analysis. Then a feature descriptor for each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source point cloud and the target point cloud. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
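The final step, recovering the rigid transformation from the optimized correspondences via singular value decomposition, is the standard Kabsch construction; the sketch below uses our own names and assumes the correspondences are already given as matched point arrays.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.
    src, dst : (N, 3) arrays of corresponding key points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # proper rotation, det = +1
    t = cd - R @ cs
    return R, t
```

The reflection guard matters in practice: without it, noisy or degenerate correspondences can yield an improper rotation with determinant −1.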
Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong
2014-01-01
The osmotic pressure of ammonium sulfate solutions has been measured by the well-established freezing point osmometry in dilute solutions and by our recently reported air humidity osmometry over a much wider range of concentration. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations by two one-sided tests (TOST) of equivalence with multiple testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between the measurements of freezing point osmometry and the calculations based on the Pitzer model at low concentrations, air humidity osmometry is the only currently available osmometry applicable to high concentrations and serves as an economical addition to standard osmometry.
NASA Astrophysics Data System (ADS)
Fan, T. S.; Wang, Z. M.; Zhu, X.; Zhu, W. J.; Zhong, C. L.
2017-09-01
In this work, the nuclear potential energy of deformed nuclei as a function of shape coordinates is calculated in a five-dimensional (5D) parameter space of axially symmetric generalized Lawrence shapes, on the basis of the macroscopic-microscopic method. The liquid-drop part of the nuclear energy is calculated according to the Myers-Swiatecki model and the Lublin-Strasbourg-drop (LSD) formula. The Woods-Saxon and folded-Yukawa potentials for deformed nuclei are used for the Strutinsky-type shell and pairing corrections. The pairing corrections are calculated at zero temperature, T, related to the excitation energy. The eigenvalues of the Hamiltonians for protons and neutrons are found by expanding the eigenfunctions in terms of harmonic-oscillator wave functions of a spheroid. BCS pairing is then applied to the smeared-out single-particle spectrum. By comparing the results obtained with different models, the most favorable macroscopic-microscopic combination is found to be the LSD formula with the folded-Yukawa potential. Potential-energy landscapes for actinide isotopes are investigated on a grid of more than 4,000,000 deformation points, and the heights of static fission barriers are obtained in terms of a double-humped structure on the full 5D parameter space. In order to locate the ground-state shapes, saddle points, scission points and the optimal fission path on the calculated 5D potential-energy surface, a falling-rain algorithm and an immersion method are designed and implemented. The comparison of our results with available experimental data and others' theoretical results confirms the reliability of our calculations.
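The immersion method is named but not spelled out; the following is a minimal two-dimensional sketch of the underlying idea (the paper works on a 5D grid), with all function and variable names ours: the barrier is the lowest "water level" at which the basin containing the ground state becomes connected to the target region.

```python
import numpy as np
from collections import deque

def barrier_height(V, start, goal, tol=1e-3):
    """Barrier height on a gridded 2D potential-energy surface V,
    estimated by bisecting on the immersion 'water level'."""
    def connected(level):
        if V[start] > level or V[goal] > level:
            return False
        seen, queue = {start}, deque([start])
        while queue:
            i, j = queue.popleft()
            if (i, j) == goal:
                return True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (i + di, j + dj)
                if (0 <= nb[0] < V.shape[0] and 0 <= nb[1] < V.shape[1]
                        and nb not in seen and V[nb] <= level):
                    seen.add(nb)
                    queue.append(nb)
        return False

    lo, hi = max(V[start], V[goal]), float(V.max())
    while hi - lo > tol:              # bisection on the water level
        mid = 0.5 * (lo + hi)
        if connected(mid):
            hi = mid
        else:
            lo = mid
    return hi - V[start]              # height above the start minimum
```

The converged level approximates the saddle-point energy along the optimal path; in higher dimensions only the neighbor stencil changes.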
Program Calculates Forces in Bolted Structural Joints
NASA Technical Reports Server (NTRS)
Buder, Daniel A.
2005-01-01
A FORTRAN 77 computer program calculates the forces in the bolts of structural joints. This program is used in conjunction with the NASTRAN finite-element structural-analysis program. A mathematical model of a structure is first created by approximating its load-bearing members with representative finite elements; NASTRAN then calculates the forces and moments that each finite element contributes to grid points located throughout the structure. The user selects the finite elements that correspond to structural members that contribute loads to the joints of interest, and identifies the grid point nearest to each such joint. This program reads the pertinent NASTRAN output, combines the forces and moments from the contributing elements to determine the resultant force and moment acting at each proximate grid point, then transforms the forces and moments from these grid points to the centroids of the affected joints. The program then uses these joint loads to obtain the axial and shear forces in the individual bolts, and identifies which bolts bear the greatest axial and/or shear loads. The program also performs a fail-safe analysis in which the foregoing calculations are repeated for a sequence of cases in which each fastener, in turn, is assumed not to transmit an axial force.
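The transformation from grid points to joint centroids is, in essence, a force summation plus a moment transport M_c = M + r × F; the sketch below shows that step in Python rather than the program's FORTRAN 77, with names of our choosing.

```python
import numpy as np

def joint_load(forces, moments, points, centroid):
    """Resultant force and moment at a joint centroid.
    forces, moments : (N, 3) element forces/moments acting at grid points
    points          : (N, 3) grid-point coordinates
    centroid        : (3,)   joint centroid coordinates"""
    F = forces.sum(axis=0)
    r = points - centroid                         # lever arms to the centroid
    M = moments.sum(axis=0) + np.cross(r, forces).sum(axis=0)
    return F, M
```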
NASA Astrophysics Data System (ADS)
Wang, Yimin; Braams, Bastiaan J.; Bowman, Joel M.; Carter, Stuart; Tew, David P.
2008-06-01
Quantum calculations of the ground vibrational state tunneling splitting of H-atom and D-atom transfer in malonaldehyde are performed on a full-dimensional ab initio potential energy surface (PES). The PES is a fit to 11 147 near basis-set-limit frozen-core CCSD(T) electronic energies. This surface properly describes the invariance of the potential with respect to all permutations of identical atoms. The saddle-point barrier for the H-atom transfer on the PES is 4.1 kcal/mol, in excellent agreement with the reported ab initio value. Model one-dimensional and "exact" full-dimensional calculations of the splitting for H- and D-atom transfer are done using this PES. The tunneling splittings in full dimensionality are calculated using the unbiased "fixed-node" diffusion Monte Carlo (DMC) method in Cartesian and saddle-point normal coordinates. The ground-state tunneling splitting is found to be 21.6 cm⁻¹ in Cartesian coordinates and 22.6 cm⁻¹ in normal coordinates, with an uncertainty of 2-3 cm⁻¹. This splitting is also calculated based on a model which makes use of the exact single-well zero-point energy (ZPE) obtained with the MULTIMODE code and the DMC ZPE; this calculation gives a tunneling splitting of 21-22 cm⁻¹. The corresponding computed splittings for the D-atom transfer are 3.0, 3.1, and 2-3 cm⁻¹. These calculated tunneling splittings agree with each other to within the standard uncertainties of the DMC method used, which are between 2 and 3 cm⁻¹, and agree well with the experimental values of 21.6 and 2.9 cm⁻¹ for the H and D transfer, respectively.
PynPoint code for exoplanet imaging
NASA Astrophysics Data System (ADS)
Amara, A.; Quanz, S. P.; Akeret, J.
2015-04-01
We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consists of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical dataset in roughly 1.5 min on a Mac Pro when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the singular value decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, co-adding over 50, takes 3 s with a peak memory usage of 600 MB; this can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without affecting the users' application programming interface. We distribute the PynPoint package under the GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
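The core PCA step can be sketched in a few lines of NumPy; this illustrates the idea only, is not PynPoint's actual API, and omits the frame derotation and stacking that angular differential imaging requires.

```python
import numpy as np

def psf_subtract(frames, n_modes):
    """PCA-style PSF subtraction on a stack of flattened, star-centred
    frames of shape (n_images, n_pixels)."""
    X = frames - frames.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt: PCs
    basis = Vt[:n_modes]
    model = X @ basis.T @ basis        # projection onto the PSF basis
    return X - model                   # residuals may reveal companions
```

Because the stellar PSF dominates every frame, a handful of leading components typically captures it, and the residuals retain the faint companion signal.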
NASA Astrophysics Data System (ADS)
Nabi, Jameel-Un; Böyükata, Mahmut
2016-03-01
We investigate even-even nuclei in the A ∼ 70 mass region within the framework of the proton-neutron quasi-particle random phase approximation (pn-QRPA) and the interacting boson model-1 (IBM-1). Our work includes calculation of the energy spectra and the potential energy surfaces V(β, γ) of Zn, Ge, Se, Kr and Sr nuclei with the same proton and neutron number, N = Z. The parametrization of the IBM-1 Hamiltonian was performed for the calculation of the energy levels in the ground-state bands. The geometric shape of the nuclei was predicted by plotting the potential energy surfaces V(β, γ) obtained from the IBM-1 Hamiltonian in the classical limit. The pn-QRPA model was later used to compute half-lives of the neutron-deficient nuclei, which were found to be in very good agreement with the measured ones. The pn-QRPA model was also used to calculate the Gamow-Teller strength distributions, which were found to be in decent agreement with the measured data. We further calculate the electron capture and positron decay rates for these N = Z waiting point (WP) nuclei in the stellar environment employing the pn-QRPA model. For rp-process conditions, our total weak rates are within a factor of two of the Skyrme HF+BCS+QRPA calculation. All calculated electron capture rates are comparable to the competing positron decay rates under rp-process conditions. Our study confirms the finding that electron capture rates form an integral part of the weak rates under rp-process conditions and should not be neglected in nuclear network calculations.
NASA Technical Reports Server (NTRS)
Glass, Christopher E.
1990-01-01
The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-specie, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.
Multiradar tracking for theater missile defense
NASA Astrophysics Data System (ADS)
Sviestins, Egils
1995-09-01
A prototype system for tracking tactical ballistic missiles using multiple radars has been developed. The tracking is based on measurement-level fusion ('true' multi-radar tracking). Strobes from passive sensors can also be used. We describe various features of the system with some emphasis on the filtering technique, which is based on the Interacting Multiple Model framework where the states are Free Flight, Drag, Boost, and Auxiliary. Measurement error modeling includes the signal-to-noise-ratio dependence; outliers and miscorrelations are handled in the same way. The launch point is calculated within one minute of the detection of the missile. The impact point, and its uncertainty region, is calculated continually by extrapolating the track state vector using the equations of planetary motion.
NASA Astrophysics Data System (ADS)
Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.
2013-12-01
Our purpose is to determine fission fragment characteristics within the framework of a scission-point model named SPY, for Scission Point Yields. This approach can be considered a theoretical laboratory for studying the fission mechanism, since it gives access to the correlation between the fragment properties and their nuclear structure, such as shell corrections, pairing, collective degrees of freedom, and odd-even effects. Which ones are dominant in the final state? What is the impact of the compound nucleus structure? The SPY model consists of a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated with fixed properties. The most important property of the model relies on the nuclear structure of the fragments, which is derived from full quantum microscopic calculations. This approach allows computing the fission final state of extremely exotic nuclei which are inaccessible to most of the fission models available.
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow-region problem to some extent; complete removal of the problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies their contributions, are compared. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
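A minimal sketch of the superposition at the heart of DPSM follows, assuming scalar free-space Green's functions; the boolean mask stands in for nulling shadow-region sources, and the directivity of true CSR sources is not modelled. Observation points must not coincide with source points.

```python
import numpy as np

def dpsm_field(targets, sources, strengths, k, active=None):
    """Superpose point-source contributions at the target points.
    targets   : (M, 3) observation points (must not coincide with sources)
    sources   : (N, 3) point-source locations
    strengths : (N,)   complex source strengths
    k         : wavenumber
    active    : (N,) boolean mask; False nulls a source, standing in for
                the removal of shadow-region contributions"""
    if active is None:
        active = np.ones(len(sources), dtype=bool)
    r = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=2)
    g = np.exp(1j * k * r) / (4.0 * np.pi * r)   # free-space Green's function
    return (g[:, active] * strengths[active]).sum(axis=1)
```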
NASA Astrophysics Data System (ADS)
Saleh, D.; Domagalski, J. L.
2012-12-01
Sources and factors affecting the transport of total nitrogen are being evaluated for a study area that covers most of California and some areas in Oregon and Nevada, using the SPARROW model (SPAtially Referenced Regression On Watershed attributes) developed by the U.S. Geological Survey. Mass loads of total nitrogen calculated for monitoring sites at stream gauging stations are regressed against land-use factors affecting nitrogen transport, including fertilizer use, recharge, atmospheric deposition, stream characteristics, and other factors, to understand how total nitrogen is transported under average conditions. SPARROW models have been used successfully in other parts of the country to understand how nutrients are transported and how management strategies can be formulated, such as with Total Maximum Daily Load (TMDL) assessments. Fertilizer use, atmospheric deposition, and climatic data were obtained for 2002, and loads for that year were calculated for monitored streams and point sources (mostly wastewater treatment plants). The stream loads were calculated using the adjusted maximum likelihood estimation (AMLE) method. River discharge and nitrogen concentrations were de-trended in these calculations in order to eliminate the effect of temporal changes on stream load. Effluent discharge information as well as total nitrogen concentrations from point sources were obtained from USEPA databases and from facility records. The model indicates that atmospheric deposition and fertilizer use account for a large percentage of the total nitrogen load in many of the larger watersheds throughout the study area. Point sources, on the other hand, are generally localized around large cities, are considered insignificant sources, and account for a small percentage of the total nitrogen loads throughout the study area.
Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA
NASA Astrophysics Data System (ADS)
Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude
2017-05-01
When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, two sets of DPKs simulated with "Geant4-DNA-CPA100" were compared - the first set using Geant4's default settings, and the second using the original CPA100 code's default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
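Scoring a DPK from track-structure output amounts to binning energy depositions in concentric spherical shells; the following is a generic sketch with names of our choosing (published kernels are often expressed per scaled radius, which is omitted here).

```python
import numpy as np

def dose_point_kernel(radii, edep, r_max, n_shells, rho=1.0):
    """Score a dose-point kernel from Monte Carlo energy-deposition events.
    radii : event distances from the point source
    edep  : energies deposited at those events
    Returns the dose (energy per shell mass) in equal-width spherical
    shells, together with the shell edges."""
    edges = np.linspace(0.0, r_max, n_shells + 1)
    e_shell, _ = np.histogram(radii, bins=edges, weights=edep)
    volumes = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    return e_shell / (rho * volumes), edges
```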
Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models
NASA Astrophysics Data System (ADS)
Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.
2017-12-01
While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust, and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms, including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned in harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load balancing, real-time monitoring, and instance cloning. We will also briefly discuss the progress achieved on the project, funded by NOAA's Big Earth Data Initiative (BEDI), to develop an API interface to our Enhanced Magnetic Model (EMM).
NASA Astrophysics Data System (ADS)
Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.
2017-01-01
We present a Python program, FREQ, for calculating the optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies obtained from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed vibrational harmonic frequencies. In order to obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a certain method and basis set in the Gaussian 09 or Gaussian 03 program. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
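The optimization underlying such a scale factor reduces, for the ZPE case, to a one-parameter least-squares fit; a sketch under the assumption that the factor λ minimizes Σᵢ (λ·ZPE_harm,i − ZPE_ref,i)², in the spirit of the Alecu et al. model (this is not FREQ's actual code).

```python
import numpy as np

def optimal_scale_factor(zpe_harm, zpe_ref):
    """Scale factor lambda minimizing sum_i (lambda*h[i] - z[i])**2
    over the reference-molecule set; the closed-form minimizer is
    lambda = sum(h*z) / sum(h*h)."""
    h = np.asarray(zpe_harm, dtype=float)
    z = np.asarray(zpe_ref, dtype=float)
    return float(np.dot(h, z) / np.dot(h, h))
```

Setting the derivative of the quadratic objective to zero gives 2Σh(λh − z) = 0, hence the closed form used above.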
Critical behavior of the spin-1 and spin-3/2 Baxter-Wu model in a crystal field.
Dias, D A; Xavier, J C; Plascak, J A
2017-01-01
The phase diagram and the critical behavior of the spin-1 and the spin-3/2 two-dimensional Baxter-Wu model in a crystal field are studied by conventional finite-size scaling and conformal invariance theory. The phase diagram of this model, for the spin-1 case, is qualitatively the same as those of the dilute 4-state Potts model and the spin-1 Blume-Capel model. However, in the present case, instead of a tricritical point one has a pentacritical point for a finite value of the crystal field, in disagreement with previous work based on finite-size calculations. On the other hand, for the spin-3/2 case, the phase diagram is much richer and can present, besides a pentacritical point, an additional multicritical end point. Our results also support that the universality class of the critical behavior of the spin-1 and spin-3/2 Baxter-Wu model in a crystal field is the same as that of the pure Baxter-Wu model, even at the multicritical points.
NASA Astrophysics Data System (ADS)
Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.
2016-12-01
The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculations of the melting point by atomic simulations face a substantial hysteresis problem. To overcome the hysteresis encountered in atomic simulations, a few independently founded melting-point determination methods are available nowadays, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations between these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining with experimental data and/or a previous melting-point determination method, we apply this model to derive the high-pressure melting curves for several lower-mantle minerals with less computational effort than using previous methods alone. In this way, some polyatomic minerals at extreme pressures that were almost intractable before can now be calculated fully from first principles.
Performance of a laser microsatellite network with an optical preamplifier.
Arnon, Shlomi
2005-04-01
Laser satellite communication (LSC) uses free space as a propagation medium for various applications, such as intersatellite communication or satellite networking. An LSC system includes a laser transmitter and an optical receiver. For communication to occur, the lines of sight of the transmitter and the receiver must be aligned. However, mechanical vibration and electronic noise in the control system reduce alignment between the transmitter laser beam and the receiver field of view (FOV), which results in pointing errors. The outcome of pointing errors is fading of the received signal, which leads to impaired link performance. An LSC system is considered in which an optical preamplifier is incorporated into the receiver, and a bit error probability (BEP) model is derived that takes into account the statistics of the pointing error as well as the optical amplifier and communication system parameters. The model and the numerical calculation results indicate that random pointing errors satisfying σχ²G > 0.05 dramatically penalize communication performance for all combinations of optical amplifier gains and noise figures that were calculated.
Modeling of microclimatic characteristics of highland area
NASA Astrophysics Data System (ADS)
Sitdikova, Iuliia; Rusin, Igor
2013-04-01
Microclimatic characteristics of highlands may vary considerably over distances of a few meters depending on slope and aspect. Estimating the components of the surface energy balance from the observations of a single station is a problem for describing highland microclimate. The aim of this paper is to develop a method that restores the microclimatic characteristics of terrain, based on observations of a single station, by physical extrapolation. The input parameters for obtaining the microclimatic characteristics are as follows: air temperature, relative humidity, and wind speed on two vertical levels, air pressure, surface temperature, direct and diffuse solar radiation, and surface albedo. The recent version of the Meteorological Radiation Model (MRM) has been used to calculate solar radiation over the area and to estimate the influence of cloudiness amounts. The height, slope and aspect were accounted for at each point using a digital elevation model. Air temperature and specific humidity were assumed to vary with altitude only. Net radiation was calculated at all points of the area. The difference between the surface temperature and the air temperature was assumed to be a linear function of net radiation, with an empirical coefficient that depends on wind speed and is adjusted for the given area. Latent and sensible fluxes are calculated using the modified Bowen ratio, which varies over the area. The method was tested in field research in the Krasnodar region (Russian Federation). Meteorological observations were made every three hours at actinometric and gradient sites. An additional gradient site with a different slope orientation was set up 400 meters from the main site. A topographic survey of an area 1 × 1.3 km in size was made for constructing a digital elevation model. Radiation and heat balance components were calculated at all points of the area. The results of the research are maps of surface temperature, net radiation, and latent and sensible fluxes. The calculations showed that the area-averaged values of the heat balance components differ significantly from the data observed at the meteorological station.
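The Bowen-ratio partition used in the last step can be sketched as follows; the ground heat flux term and all names are our assumptions, and the paper's "modified" Bowen ratio varies over the area rather than being a single constant as here.

```python
def partition_fluxes(net_radiation, bowen, ground_flux=0.0):
    """Split available energy into sensible (H) and latent (LE) heat fluxes
    using the Bowen ratio beta = H / LE:
        H  = beta / (1 + beta) * (Rn - G)
        LE =    1 / (1 + beta) * (Rn - G)
    The ground heat flux G is included for completeness; set it to zero
    if it is unavailable."""
    available = net_radiation - ground_flux
    H = bowen / (1.0 + bowen) * available
    LE = available / (1.0 + bowen)
    return H, LE
```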
NASA Technical Reports Server (NTRS)
Cline, M. C.
1981-01-01
A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high-Reynolds-number flows are included. The boundary grid points are computed using a reference-plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
KDEP: A resource for calculating particle deposition in the respiratory tract
Klumpp, John A.; Bertelli, Luiz
2017-08-01
This study presents KDEP, the authors' open-source implementation of the ICRP lung deposition model. KDEP, which is freely available to the public, can be used to calculate lung deposition values under a variety of different conditions using the ICRP methodology. The paper describes how KDEP implements this model and discusses some key points of the implementation. The published lung deposition values for intakes by workers were reproduced, and new deposition values were calculated for intakes by members of the public. KDEP can be obtained for free at github.com or by emailing the authors directly.
Tree Branching: Leonardo da Vinci's Rule versus Biomechanical Models
Minamino, Ryoko; Tateno, Masaki
2014-01-01
This study examined Leonardo da Vinci's rule (i.e., the sum of the cross-sectional area of all tree branches above a branching point at any height is equal to the cross-sectional area of the trunk or the branch immediately below the branching point) using simulations based on two biomechanical models: the uniform stress and elastic similarity models. Model calculations of the daughter/mother ratio (i.e., the ratio of the total cross-sectional area of the daughter branches to the cross-sectional area of the mother branch at the branching point) showed that both biomechanical models agreed with da Vinci's rule when the branching angles of daughter branches and the weights of lateral daughter branches were small; however, the models deviated from da Vinci's rule as the weights and/or the branching angles of lateral daughter branches increased. The calculated values of the two models were largely similar but differed in some ways. Field measurements of Fagus crenata and Abies homolepis also fit this trend, wherein models deviated from da Vinci's rule with increasing relative weights of lateral daughter branches. However, this deviation was small for a branching pattern in nature, where empirical measurements were taken under realistic measurement conditions; thus, da Vinci's rule did not critically contradict the biomechanical models in the case of real branching patterns, though the model calculations described the contradiction between da Vinci's rule and the biomechanical models. The field data for Fagus crenata fit the uniform stress model best, indicating that stress uniformity is the key constraint of branch morphology in Fagus crenata rather than elastic similarity or da Vinci's rule. On the other hand, mechanical constraints are not necessarily significant in the morphology of Abies homolepis branches, depending on the number of daughter branches. Rather, these branches were often in agreement with da Vinci's rule.
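Da Vinci's rule reduces to a simple check on the daughter/mother area ratio; a sketch assuming circular branch cross sections, so that areas scale as diameters squared.

```python
import numpy as np

def daughter_mother_ratio(daughter_diameters, mother_diameter):
    """Total daughter cross-sectional area over the mother cross-sectional
    area; da Vinci's rule predicts a value of 1."""
    d = np.asarray(daughter_diameters, dtype=float)
    return float(np.sum(d**2) / mother_diameter**2)

# Example: two daughters of diameter 7.0 and 7.1 on a mother of diameter 10
print(daughter_mother_ratio([7.0, 7.1], 10.0))   # ~0.99, close to 1
```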
TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model
NASA Astrophysics Data System (ADS)
Meurice, Y.
2007-06-01
We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large symmetry group drastically simplifies the block-spinning procedure. Several equivalent forms of the recursion formula are presented with unified notation. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points. This question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).
Nutaro, James; Kuruganti, Teja
2017-02-24
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
PyCDT: A Python toolkit for modeling point defects in semiconductors and insulators
NASA Astrophysics Data System (ADS)
Broberg, Danny; Medasani, Bharat; Zimmermann, Nils E. R.; Yu, Guodong; Canning, Andrew; Haranczyk, Maciej; Asta, Mark; Hautier, Geoffroy
2018-05-01
Point defects have a strong impact on the performance of semiconductor and insulator materials used in technological applications, spanning microelectronics to energy conversion and storage. The nature of the dominant defect types, how they vary with processing conditions, and their impact on materials properties are central aspects that determine the performance of a material in a certain application. This information is, however, difficult to access directly from experimental measurements. Consequently, computational methods, based on electronic density functional theory (DFT), have found widespread use in the calculation of point-defect properties. Here we have developed the Python Charged Defect Toolkit (PyCDT) to expedite the setup and post-processing of defect calculations with widely used DFT software. PyCDT has a user-friendly command-line interface and provides a direct interface with the Materials Project database. This allows for setting up many charged defect calculations for any material of interest, as well as post-processing and applying state-of-the-art electrostatic correction terms. Our paper serves as a documentation for PyCDT, and demonstrates its use in an application to the well-studied GaAs compound semiconductor. We anticipate that the PyCDT code will be useful as a framework for undertaking readily reproducible calculations of charged point-defect properties, and that it will provide a foundation for automated, high-throughput calculations.
Emery-Kivelson solution of the two-channel Kondo problem
NASA Astrophysics Data System (ADS)
Sengupta, Anirvan M.; Georges, Antoine
1994-04-01
We consider the two-channel Kondo model in the Emery-Kivelson approach, and calculate the total susceptibility enhancement due to the impurity, χ_imp = χ − χ_bulk. We find that χ_imp exactly vanishes at the solvable point, in a completely analogous way to the singular part of the specific heat C_imp. A perturbative calculation around the solvable point yields the generic behavior χ_imp ~ log(1/T), C_imp ~ T log T and the known universal value of the Wilson ratio R_W = 8/3. From this calculation, the Kondo temperature can be identified and is found to behave as the inverse square of the perturbation parameter. The small-field, zero-temperature behavior χ_imp ~ log(1/h) is also recovered.
Biomechanics of the incudo-malleolar-joint - Experimental investigations for quasi-static loads.
Ihrle, S; Gerig, R; Dobrev, I; Röösli, C; Sim, J H; Huber, A M; Eiber, A
2016-10-01
Under large quasi-static loads, the incudo-malleolar joint (IMJ), connecting the malleus and the incus, is highly mobile. It can be classified as a mechanical filter that decouples large quasi-static motions while transferring small dynamic excitations. This is presumed to be due to the complex geometry of the joint, which induces a spatial decoupling between the malleus and incus under large quasi-static loads. Spatial Laser Doppler Vibrometer (LDV) displacement measurements on isolated malleus-incus complexes (MICs) were performed. With the malleus firmly attached to a probe holder, the incus was excited by applying quasi-static forces at different points. For each force application point the resulting displacement was measured subsequently at different points on the incus. The locations of the force application point and the LDV measurement points were calculated in a post-processing step combining the positions of the LDV points with geometric data of the MIC. The rigid-body motion of the incus was then calculated from the multiple displacement measurements for each force application point. The contact regions of the articular surfaces for different load configurations were calculated by applying the reconstructed motion to the geometry model of the MIC and calculating the minimal distance between the articular surfaces. The reconstructed motion has a complex spatial characteristic and varies for different force application points. The motion changed with increasing load, caused by the kinematic guidance of the articular surfaces of the joint. The IMJ permits a relatively large rotation around the anterior-posterior axis through the joint when a force is applied at the lenticularis in the lateral direction before impeding the motion. This is part of the decoupling of the malleus motion from the incus motion in the case of large quasi-static loads.
Pan, Feng; Tao, Guohua
2013-03-07
Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
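The generic trick of correlating the two initial phase points can be illustrated with correlated Gaussian draws; this is only a sketch of the idea, with names and the Gaussian form as our assumptions, not the paper's full importance-sampling scheme.

```python
import numpy as np

def sample_bath_pair(n_dof, sigma, rho, rng=None):
    """Draw two correlated Gaussian initial phase points for the bath
    degrees of freedom. rho = 0 recovers conventional independent
    sampling; rho -> 1 makes the two points coincide."""
    rng = rng or np.random.default_rng()
    x1 = rng.normal(0.0, sigma, n_dof)
    x2 = rho * x1 + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma, n_dof)
    return x1, x2
```

Each draw has the same marginal distribution, so the correlation changes only the sampling efficiency, not the quantity being estimated.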
Study of the deoxidation of steel with aluminum wire injection in a gas-stirred ladle
NASA Astrophysics Data System (ADS)
Beskow, K.; Jonsson, L.; Sichen, Du; Viswanathan, N. N.
2001-04-01
In the present work, the deoxidation of liquid steel with aluminum wire injection in a gas-stirred ladle was studied by mathematical modeling using a computational fluid dynamics (CFD) approach. This was complemented by an industrial trial study conducted at Uddeholm Tooling AB (Hagfors, Sweden). The results of the industrial trials were found to be in accordance with the results of the model calculation. In order to study the nucleation of alumina, emphasis was given to the initial period of deoxidation, when aluminum wire was injected into the bath. The concentration distributions of aluminum and oxygen were calculated both with and without considering the chemical reaction. Both calculations revealed that the driving force for the nucleation of Al2O3 was very high in the region near the upper surface of the bath and close to the wire injection. The estimated nucleation rate in the vicinity of the aluminum wire injection point was much higher than the recommended value for spontaneous homogeneous nucleation, 10³ nuclei/(cm³·s). The results of the model calculation also showed that the alumina nuclei generated in the vicinity of the wire injection point are transported to other regions by the flow.
Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment
NASA Astrophysics Data System (ADS)
Nigg, D. W.; Wheeler, F. J.
1981-01-01
A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University, which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level are computed using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture-gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out of the top surface of the roof and that it is nearly as accurate as, and much less costly than, multi-dimensional techniques.
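A point-kernel estimate of this kind is built from the standard attenuation kernel with a buildup factor; a generic sketch follows (the linear buildup form and all names are our assumptions, not the report's specific choices).

```python
import numpy as np

def point_kernel_dose(S, mu, r, buildup=lambda mu_r: 1.0 + mu_r):
    """Dose rate at distance r from an isotropic point source:
        D(r) = S * B(mu*r) * exp(-mu*r) / (4 * pi * r**2)
    S  : source strength (already folded with a flux-to-dose factor)
    mu : linear attenuation coefficient of the medium
    B  : buildup factor accounting for scattered photons"""
    r = np.asarray(r, dtype=float)
    mu_r = mu * r
    return S * buildup(mu_r) * np.exp(-mu_r) / (4.0 * np.pi * r**2)
```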
Magnetic properties of single crystal alpha-benzoin oxime: An EPR study
NASA Astrophysics Data System (ADS)
Sayin, Ulku; Dereli, Ömer; Türkkan, Ercan; Ozmen, Ayhan
2012-02-01
The electron paramagnetic resonance (EPR) spectra of gamma-irradiated single crystals of alpha-benzoin oxime (ABO) were examined between 120 and 440 K. Considering the temperature dependence and the orientation dependence of the single-crystal spectra in the magnetic field, we identified two different radicals formed in the irradiated ABO single crystals. To determine the types of radicals theoretically, the most stable structure of ABO was obtained by molecular mechanics and B3LYP/6-31G(d,p) calculations. Four possible radicals were modeled, and EPR parameters were calculated for the modeled radicals using the B3LYP method and the TZVP basis set. The calculated values for two of the modeled radicals were in strong agreement with the experimental EPR parameters determined from the spectra. In addition, simulated spectra of the modeled radicals, using the calculated hyperfine coupling constants as starting points, matched the experimental spectra well.
Development of mapped stress-field boundary conditions based on a Hill-type muscle model.
Cardiff, P; Karač, A; FitzPatrick, D; Flavin, R; Ivanković, A
2014-09-01
Forces generated in the muscles and tendons actuate the movement of the skeleton. Accurate estimation and application of these musculotendon forces in a continuum model is not a trivial matter. Frequently, musculotendon attachments are approximated as point forces; however, accurate estimation of local mechanics requires a more realistic application of musculotendon forces. This paper describes the development of mapped Hill-type muscle models as boundary conditions for a finite volume model of the hip joint, where the calculated muscle fibres map continuously between attachment sites. The applied muscle forces are calculated using active Hill-type models, where input electromyography signals are determined from gait analysis. Realistic muscle attachment sites are determined directly from tomography images. The mapped muscle boundary conditions, implemented in a finite volume structural OpenFOAM (ESI-OpenCFD, Bracknell, UK) solver, are employed to simulate the mid-stance phase of gait using a patient-specific natural hip joint, and a comparison is performed with the standard point load muscle approach. It is concluded that physiological joint loading is not accurately represented by simplistic muscle point loading conditions; however, when contact pressures are of sole interest, simplifying assumptions with regard to muscular forces may be valid. Copyright © 2014 John Wiley & Sons, Ltd.
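As a rough illustration of the Hill-type force calculation underlying these boundary conditions, the sketch below combines activation with generic force-length and force-velocity factors and a passive stretch term; the curve shapes and constants are textbook placeholders, not the paper's calibrated model.

```python
# Minimal Hill-type musculotendon force sketch. The Gaussian force-length
# curve, the crude Hill force-velocity hyperbola, and the passive term
# below are generic assumptions, not the parameters used in the study.
import numpy as np

def hill_force(a, l_norm, v_norm, F_max=1000.0):
    """a: activation in [0,1]; l_norm: fibre length / optimal length;
    v_norm: shortening velocity / max velocity (positive = shortening);
    returns force in newtons."""
    f_l = np.exp(-((l_norm - 1.0) / 0.45) ** 2)                          # active force-length
    f_v = (1.0 - v_norm) / (1.0 + 4.0 * v_norm) if v_norm >= 0 else 1.5  # crude Hill curve
    f_p = np.where(l_norm > 1.0, 10.0 * (l_norm - 1.0) ** 2, 0.0)        # passive stretch
    return F_max * (a * f_l * f_v + f_p)

print(hill_force(a=0.6, l_norm=1.05, v_norm=0.1))
```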
On determining dose rate constants spectroscopically.
Rodriguez, M; Rogers, D W O
2013-01-01
We investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of (125)I and (103)Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method and the accuracy of ignoring the effects of the scattered photons in the spectra. We also investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated, (125)I and (103)Pd sources. Spectra generated by 14 (125)I and 6 (103)Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm(3) voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council of Radiation Protection and Measurements) Report 58 for the (125)I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for (103)Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. The ratio of the intensity of the 31 keV line relative to that of the main peak in (125)I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum versus that calculated with the TG-43U1 initial spectrum. The (103)Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra, with average discrepancies of 0.9% and 1.7% for the (125)I and (103)Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2% in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
Tóth, Gergely; Bodai, Zsolt; Héberger, Károly
2013-10-01
The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect, to determine uncommon points, i.e. influential points, in any data set. The term (1 − Q²)/(1 − R²) corresponds to the ratio of the predictive residual sum of squares to the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 − Q²)/(1 − R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform influence analysis, or even to change to robust modeling.
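For an ordinary least-squares model the ratio can be computed directly, since the leave-one-out residuals follow from the hat matrix; the sketch below, with synthetic data and a deliberately planted influential point, evaluates (1 − Q²)/(1 − R²) = PRESS/RSS. The F-test threshold proposed by the authors is not reproduced here.

```python
# Sketch of the (1 - Q^2)/(1 - R^2) diagnostic for a linear model. For
# least squares the LOO residuals are e_loo = e / (1 - h_ii), so the
# ratio equals PRESS/RSS. Data below are synthetic.
import numpy as np

def press_rss_ratio(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept
    H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T         # hat matrix
    e = y - H @ y                                     # residuals
    e_loo = e / (1.0 - np.diag(H))                    # leave-one-out residuals
    rss, press = e @ e, e_loo @ e_loo
    tss = np.sum((y - y.mean()) ** 2)
    r2, q2 = 1 - rss / tss, 1 - press / tss
    return (1 - q2) / (1 - r2)                        # == press / rss

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = X @ [1.0, -2.0] + rng.normal(scale=0.3, size=30)
y[0] += 5.0                                           # plant an influential point
print(press_rss_ratio(X, y))                          # ratio grows above ~1
```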
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Wenhu; Kotliar, Gabriel; Tsvelik, Alexei M.
Dynamical mean-field theory is used to study the quantum critical point (QCP) in the doped Hubbard model on a square lattice. We characterize the QCP by a universal scaling form of the self-energy and a spin density wave instability at an incommensurate wave vector. The scaling form unifies the low-energy kink and the high-energy waterfall feature in the spectral function, while the spin dynamics includes both the critical incommensurate and high-energy antiferromagnetic paramagnons. Here, we use the frequency-dependent four-point correlation function of spin operators to calculate the momentum-dependent correction to the electron self-energy. Furthermore, by comparing with calculations based on the spin-fermion model, our results indicate that the frequency dependence of the quasiparticle-paramagnon vertices is an important factor in capturing the momentum dependence of quasiparticle scattering.
Optimal solar sail planetocentric trajectories
NASA Technical Reports Server (NTRS)
Sackett, L. L.
1977-01-01
The analysis of the solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square sail model is included, and some consideration is given to a heliogyro sail model. Transfers from orbit to a subescape point and from orbit to orbit are considered. Trajectories about the four inner planets can be calculated, and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. The two-point boundary value problem which arises from the application of optimization theory is solved with a Newton procedure. Time-optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.
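The two-point boundary value structure can be illustrated with a generic shooting scheme: guess the unknown initial condition, integrate forward, and drive the terminal error to zero with Newton's method. The toy BVP below stands in for the solar-sail dynamics, which are far more elaborate.

```python
# Generic shooting-method sketch for a two-point BVP, solved by Newton
# iteration with a finite-difference slope and RK4 integration. The toy
# problem x'' = -x + 1, x(0) = 0, x(1) = 2 is illustrative only.
import numpy as np

def rk4(f, y0, t0, t1, n=200):
    h, y, t = (t1 - t0) / n, np.array(y0, float), t0
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y, t = y + h/6*(k1 + 2*k2 + 2*k3 + k4), t + h
    return y

def ode(t, y):                       # state y = [x, x']
    return np.array([y[1], -y[0] + 1.0])

def shoot(v0, T=1.0):                # terminal error as a function of x'(0)
    return rk4(ode, [0.0, v0], 0.0, T)[0] - 2.0

v, eps = 0.0, 1e-6
for _ in range(20):                  # Newton on the shooting function
    F = shoot(v)
    if abs(F) < 1e-10:
        break
    v -= F / ((shoot(v + eps) - F) / eps)
print("x'(0) =", v)
```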
Predicting Bird Response to Alternative Management Scenarios on a Ranch in Campeche, México
Paul A. Wood; Deanna K. Dawson; John R. Sauer; Marcia H. Wilson
2005-01-01
We developed models to predict the potential response of wintering Neotropical migrant and resident bird species to alternative management scenarios, using data from point counts of birds along with habitat variables measured or estimated from remotely sensed data in a Geographic Information System. Expected numbers of occurrences at points were calculated for 100...
BSM Kaon Mixing at the Physical Point
NASA Astrophysics Data System (ADS)
Boyle, Peter; Garron, Nicolas; Kettle, Julia; Khamseh, Ava; Tsang, Justus Tobias
2018-03-01
We present a progress update on the RBC-UKQCD calculation of beyond the standard model (BSM) kaon mixing matrix elements at the physical point. Simulations are performed using 2+1 flavour domain wall lattice QCD with the Iwasaki gauge action at 3 lattice spacings and with pion masses ranging from 430 MeV to the physical pion mass.
New generation of universal modeling for centrifugal compressors calculation
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
The Universal Modeling method has been in constant use since the mid-1990s. The newest, sixth version of the method is presented below. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridian configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new vaned diffuser model includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths; the modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. The sixth version of the developed computer program is already being applied successfully in design practice.
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng
2015-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and an improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid, and diamond-like liquid. For the mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions by solving the chemical equilibrium equations. A chemical equilibrium code was developed based on the theory proposed in this article and applied to the following typical calculations: (i) calculation of the detonation parameters of explosives, where the calculated detonation velocity, detonation pressure, and detonation temperature are in good agreement with the experimental ones; and (ii) calculation of the isentropic unloading line of the RDX explosive, whose starting point is the CJ point. Compared with the JWL EOS, the value of gamma calculated with the theory presented in this paper decreases monotonically, while a double-peak phenomenon appears with the JWL EOS.
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
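The kinetics step of such a pipeline evaluates the canonical transition state theory expression. A minimal sketch follows, with an assumed partition-function ratio and barrier height rather than AutoTST output.

```python
# Hedged sketch of a canonical TST rate evaluation:
#     k(T) = kappa * (kB*T/h) * (Q_TS/Q_react) * exp(-E0/(kB*T))
# The partition-function ratio and barrier below are placeholders.
import math

KB = 1.380649e-23     # J/K
H  = 6.62607015e-34   # J*s
R  = 8.314462618      # J/mol/K

def tst_rate(T, q_ratio, E0_kJmol, kappa=1.0):
    """Unimolecular TST rate constant in 1/s."""
    return kappa * (KB * T / H) * q_ratio * math.exp(-E0_kJmol * 1e3 / (R * T))

print(tst_rate(T=1000.0, q_ratio=0.1, E0_kJmol=150.0))
```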
Ricker, Martin; Peña Ramírez, Víctor M; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions fitted to repeated measurements of the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T + E, where for b ≠ 0: Q = Ei[−b·r] − Ei[−b·r₁], and for b = 0: Q = ln[r/r₁]; A is the initial relative growth to be estimated, T = t − t₁, and E is an error term for each tree and time point. Furthermore, Ei[−b·r] = ∫(exp[−b·r]/r)dr and b = −1/TPR, with TPR being the turning-point radius of the sigmoid curve, and r₁ at t₁ an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning-point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time.
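The Q transform above is straightforward to evaluate with the exponential integral; a small sketch using scipy.special.expi follows, with an invented turning-point radius rather than the fitted site values.

```python
# Sketch of the Q transform defined above, built on scipy's exponential
# integral Ei (scipy.special.expi). Radii and TPR are illustrative.
import numpy as np
from scipy.special import expi

def q_transform(r, r1, b):
    """Q = Ei[-b r] - Ei[-b r1] for b != 0, else ln(r/r1)."""
    r = np.asarray(r, float)
    if b == 0.0:
        return np.log(r / r1)
    return expi(-b * r) - expi(-b * r1)

TPR = 8.0                 # turning-point radius (cm), hypothetical
b = -1.0 / TPR            # as defined in the abstract
radii = np.array([2.0, 4.0, 8.0, 16.0])
print(q_transform(radii, r1=2.0, b=b))
```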
Modeling diffuse phosphorus emissions to assist in best management practice designing
NASA Astrophysics Data System (ADS)
Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne
2010-05-01
A diffuse emission modeling tool has been developed, which is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on the phosphorus (P) loads. PhosFate is a simple fate model to calculate diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter, and long-term (annual) average model. It has two main parts: (a) the emission and (b) the transport model. The main input data of the model are digital maps (elevation, soil types, and landuse categories), statistical data (crop yields, animal numbers, fertilizer amounts, and precipitation distribution), and point information (precipitation, meteorology, soil humus content, point source emissions, and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and the particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion, and leaching). The main outputs are the spatial distributions (cell values) of the runoff components, the soil loss, and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms are computed. In the case of base flow and subsurface P loads, only channel transport is taken into account owing to the poorly known hydrogeological conditions. During channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge and the dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built up to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes, and calculate their approximate costs. Both source- and transport-controlling measures have been involved in the planning procedure. The model also allows examination of the impacts of changes in fertilizer application, point source emissions, and climate on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared to each other. The results show that the approach is suitable for effectively designing BMP measures at the local scale. Combined application of source- and transport-controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can remarkably reduce the area demand of the necessary BMPs, and consequently the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
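The core of the transport step, accumulating emitted loads down a flow tree with retention applied on each hop, can be sketched in a few lines; the cell topology, emissions, and retention factors below are toy values, not PhosFate parameters.

```python
# Minimal flow-tree routing sketch: each cell's emitted P travels toward
# the outlet, losing a retention fraction at every cell it passes.
def route_loads(downstream, emission, retention):
    """downstream[i] -> next cell index (None at the outlet);
    returns the accumulated load arriving at each cell."""
    arriving = {i: 0.0 for i in emission}
    for cell, load in emission.items():          # follow each cell to the outlet
        i = cell
        while i is not None:
            arriving[i] += load
            load *= (1.0 - retention[i])         # in-field / in-stream retention
            i = downstream[i]
    return arriving

downstream = {0: 2, 1: 2, 2: 3, 3: None}         # two headwater cells -> outlet
emission   = {0: 1.0, 1: 0.5, 2: 0.2, 3: 0.0}    # kg P / yr emitted per cell
retention  = {0: 0.3, 1: 0.3, 2: 0.1, 3: 0.0}
print(route_loads(downstream, emission, retention)[3])  # load at the outlet
```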
Use of refinery computer model to predict fuel production
NASA Technical Reports Server (NTRS)
Flores, F. J.
1979-01-01
Several factors (crudes, refinery operation, and specifications) that affect the yields and properties of broad-specification jet fuel were parameterized, and a refinery simulation model capable of representing different types of refineries was used to make the calculations. Results obtained from the program are used to correlate yield as a function of final boiling point, hydrogen content, and freezing point for jet fuels produced in two refinery configurations, each processing a different crude mix. Refinery performances are also compared in terms of energy consumption.
The Use of Pro/Engineer CAD Software and Fishbowl Tool Kit in Ray-tracing Analysis
NASA Technical Reports Server (NTRS)
Nounu, Hatem N.; Kim, Myung-Hee Y.; Ponomarev, Artem L.; Cucinotta, Francis A.
2009-01-01
This document is designed as a manual for a user who wants to operate the Pro/ENGINEER (ProE) Wildfire 3.0 with the NASA Space Radiation Program's (SRP) custom-designed Toolkit, called 'Fishbowl', for the ray tracing of complex spacecraft geometries given by a ProE CAD model. The analysis of spacecraft geometry through ray tracing is a vital part in the calculation of health risks from space radiation. Space radiation poses severe risks of cancer, degenerative diseases and acute radiation sickness during long-term exploration missions, and shielding optimization is an important component in the application of radiation risk models. Ray tracing is a technique in which 3-dimensional (3D) vehicle geometry can be represented as the input for the space radiation transport code and subsequent risk calculations. In ray tracing a certain number of rays (on the order of 1000) are used to calculate the equivalent thickness, say of aluminum, of the spacecraft geometry seen at a point of interest called the dose point. The rays originate at the dose point and terminate at a homogenously distributed set of points lying on a sphere that circumscribes the spacecraft and that has its center at the dose point. The distance a ray traverses in each material is converted to aluminum or other user-selected equivalent thickness. Then all equivalent thicknesses are summed up for each ray. Since each ray points to a direction, the aluminum equivalent of each ray represents the shielding that the geometry provides to the dose point from that particular direction. This manual will first list for the user the contact information for help in installing ProE and Fishbowl in addition to notes on the platform support and system requirements information. Second, the document will show the user how to use the software to ray trace a Pro/E-designed 3-D assembly and will serve later as a reference for troubleshooting. The user is assumed to have previous knowledge of ProE and CAD modeling.
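The bookkeeping behind the equivalent-thickness computation is simple: along each ray, the path length in every material is scaled by its density relative to aluminum and summed. A sketch with hypothetical segment data follows; it illustrates the idea only and is not Fishbowl's actual API.

```python
# Sketch of aluminum-equivalent thickness along one ray: each material
# segment's path length is converted by areal density. Data are invented.
AL_DENSITY = 2.70                          # g/cm^3

def ray_equivalent_thickness(segments):
    """segments: list of (path_length_cm, density_g_cm3) along one ray;
    returns aluminum-equivalent thickness in cm."""
    return sum(L * rho for L, rho in segments) / AL_DENSITY

ray = [(0.3, 2.70), (1.2, 0.92), (0.15, 7.85)]   # Al wall, polyethylene, steel
print(ray_equivalent_thickness(ray))             # cm of Al seen in this direction
```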
Gruden, Maja; Andjeklović, Ljubica; Jissy, Akkarapattiakal Kuriappan; Stepanović, Stepan; Zlatar, Matija; Cui, Qiang; Elstner, Marcus
2017-09-30
Density Functional Tight Binding (DFTB) models are two to three orders of magnitude faster than ab initio and Density Functional Theory (DFT) methods and therefore are particularly attractive in applications to large molecules and condensed phase systems. To establish the applicability of DFTB models to general chemical reactions, we conduct benchmark calculations for barrier heights and reaction energetics of organic molecules using existing databases and several new ones compiled in this study. Structures for the transition states and stable species have been fully optimized at the DFTB level, making it possible to characterize the reliability of DFTB models in a more thorough fashion compared to conducting single point energy calculations as done in previous benchmark studies. The encouraging results for the diverse sets of reactions studied here suggest that DFTB models, especially the most recent third-order version (DFTB3/3OB augmented with dispersion correction), in most cases provide satisfactory description of organic chemical reactions with accuracy almost comparable to popular DFT methods with large basis sets, although larger errors are also seen for certain cases. Therefore, DFTB models can be effective for mechanistic analysis (e.g., transition state search) of large (bio)molecules, especially when coupled with single point energy calculations at higher levels of theory. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Konca, A. O.; Ji, C.; Helmberger, D. V.
2004-12-01
We observed the effect of fault finiteness in the Pnl waveforms at regional distances (4° to 12°) for the Mw 6.5 San Simeon earthquake of 22 December 2003. We aimed to include more of the high frequencies (periods of 2 s and longer) than studies that use regional data for focal solutions (periods of 5 to 8 s and longer). We calculated 1-D synthetic seismograms for the Pnl portion for both a point source and a finite fault solution. Comparison of the point source and finite fault waveforms with data shows that the first several seconds of the point source synthetics have considerably higher amplitude than the data, while the finite fault does not have a similar problem. This can be explained by reversed-polarity depth phases overlapping with the P waves from the later portion of the fault, reducing the amplitude of the beginning portion of the seismogram. This is clearly a finite fault phenomenon and therefore cannot be explained by point source calculations. Moreover, the point source synthetics, which are calculated with a focal solution from a long-period regional inversion, overestimate the amplitude by a factor of three to four relative to the data, while the finite fault waveforms have amplitudes similar to the data. Hence, a moment estimate based only on the point source solution of the regional data could have been wrong by half a magnitude unit. We have also calculated the shifts of the synthetics relative to the data needed to fit the seismograms. Our results reveal that paths from Central California to the south are faster than paths to the east and north. The P wave arrival at the TUC station in Arizona is 4 seconds earlier than predicted by the Southern California model, while most stations to the east are delayed by around 1 second. The observed higher uppermost-mantle velocities to the south are consistent with some recent tomographic models. Synthetics generated with these models significantly improve the fits and the timing at most stations. This means that regional waveform data can be used to help locate and establish source complexities for future events.
NASA Astrophysics Data System (ADS)
Hustim, M.; Arifin, Z.; Aly, S. H.; Ramli, M. I.; Zakaria, R.; Liputo, A.
2018-04-01
This research aimed to predict the noise produced by traffic in the road network of Makassar City using the ASJ-RTN Model 2008, including the contribution of horn sounds. Observations were taken at 37 roadside survey points, conducted from 06:00 to 18:00 and from 06:00 to 21:00; the vehicle classes considered were motorcycles (MC), light vehicles (LV), and heavy vehicles (HV). The observed data were traffic volume, vehicle speed, number of horns, and traffic noise, measured with a Tenmars TM-103 sound level meter. The results indicate that the noise prediction model including horn sounds produces an average noise level of 78.5 dB, with a Pearson correlation of 0.95 and an RMSE of 0.87. The ASJ-RTN Model 2008 with horn sounds included is therefore sufficiently good for predicting noise levels.
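The two agreement statistics quoted above can be reproduced in a few lines; the measured/predicted pairs below are placeholders, not the Makassar observations.

```python
# Sketch of the validation metrics: Pearson correlation and RMSE between
# measured and model-predicted noise levels. Data are illustrative.
import numpy as np

def pearson_rmse(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    r = np.corrcoef(measured, predicted)[0, 1]
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return r, rmse

meas = [77.2, 78.9, 80.1, 76.5]   # dB, hypothetical
pred = [77.8, 78.4, 79.6, 77.1]
print(pearson_rmse(meas, pred))
```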
SU-E-T-17: A Mathematical Model for PinPoint Chamber Correction in Measuring Small Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T; Zhang, Y; Li, X
2014-06-01
Purpose: For small field dosimetry, such as measuring the cone output factor for stereotactic radiosurgery, ion chambers often result in underestimation of the dose, due to both the volume averaging effect and the lack of electron equilibrium. The purpose of this work is to develop a mathematical model, specifically for the PinPoint chamber, to calculate the correction factors corresponding to different types of small fields, including single cone-based circular fields and non-standard composite fields. Methods: A PTW 0.015 cc PinPoint chamber was used in the study. Its response in a given field was modeled as the total contribution of many small beamlets, each with a different response factor depending on the relative strength, the radial distance to the chamber axis, and the beam angle. To obtain these factors, 12 cone-shaped circular fields (5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm, 60 mm) were irradiated and measured with the PinPoint chamber. For each field size, hundreds of readings were recorded, one for every 2 mm chamber shift in the horizontal plane. These readings were then compared with the theoretical doses obtained with Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to find the beamlet response factors. After the parameter fitting, the established mathematical model was validated with the same MC code for other non-circular fields. Results: The optimization algorithm used for parameter fitting was stable, and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the Monte Carlo calculation for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurement of small fields. The current model is applicable only when the beam axis is perpendicular to the chamber axis. It can be applied to non-standard composite fields. Further validation with other types of detectors is being conducted.
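The penalized least-squares step can be sketched generically: stack the readings as b ≈ A x, where x holds the beamlet response factors, and add a first-difference smoothness penalty. The matrix sizes and penalty weight below are illustrative, not the study's settings.

```python
# Penalized least-squares sketch: min ||A x - b||^2 + lam ||D x||^2,
# with D the first-difference operator enforcing smooth response factors.
# The system below is synthetic.
import numpy as np

def fit_response(A, b, lam=1e-2):
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                   # first differences
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)

rng = np.random.default_rng(2)
A = rng.uniform(0, 1, size=(200, 12))                # readings vs. 12 response bins
x_true = np.linspace(1.0, 0.6, 12)                   # smoothly decaying factors
b = A @ x_true + rng.normal(scale=0.01, size=200)
print(fit_response(A, b).round(3))                   # close to x_true
```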
Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael
2007-08-21
Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm(3) ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 +/- 1.2% and 0.5 +/- 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 +/- 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.
NASA Astrophysics Data System (ADS)
Garcia-Adeva, Angel J.; Huber, David L.
2001-07-01
In this work we generalize and subsequently apply the effective-field renormalization-group (EFRG) technique to the problem of ferro- and antiferromagnetically coupled Ising spins with local anisotropy axes in geometrically frustrated geometries (kagomé and pyrochlore lattices). In this framework, we calculate the various ground states of these systems and the corresponding critical points. Excellent agreement is found with exact and Monte Carlo results. The effects of frustration are discussed. As pointed out by other authors, it turns out that the spin-ice model can be exactly mapped to the standard Ising model, but with effective interactions of the opposite sign to those in the original Hamiltonian. Therefore, the ferromagnetic spin ice is frustrated and does not order. Antiferromagnetic spin ice (in both two and three dimensions) is found to undergo a transition to a long-range-ordered state. The thermal and magnetic critical exponents for this transition are calculated. It is found that the thermal exponent is that of the Ising universality class, whereas the magnetic critical exponent is different, as expected from the fact that the Zeeman term has a different symmetry in these systems. In addition, the recently introduced generalized constant coupling method is also applied to the calculation of the critical points and ground-state configurations. Again, a very good agreement is found with exact, Monte Carlo, and renormalization-group calculations for the critical points. Incidentally, we show that the generalized constant coupling approach can be regarded as the lowest-order limit of the EFRG technique, in which correlations outside a frustrated unit are neglected, and scaling is substituted by strict equality of the thermodynamic quantities.
NASA Technical Reports Server (NTRS)
Merrill, John T.; Rodriguez, Jose M.
1991-01-01
Trajectory and photochemical model calculations based on retrospective meteorological data for the operations areas of the NASA Pacific Exploratory Mission (PEM)-West mission are summarized. The trajectory climatology discussed here is intended to provide guidance for flight planning and initial data interpretation during the field phase of the expedition by indicating the most probable path air parcels are likely to take to reach various points in the area. The photochemical model calculations which are discussed indicate the sensitivity of the chemical environment to various initial chemical concentrations and to conditions along the trajectory. In the post-expedition analysis these calculations will be used to provide a climatological context for the meteorological conditions which are encountered in the field.
Dynamic global model of oxide Czochralski process with weighing control
NASA Astrophysics Data System (ADS)
Mamedov, V. M.; Vasiliev, M. G.; Yuferev, V. S.
2011-03-01
A dynamic model of oxide Czochralski growth with weighing control has been developed for the first time. A time-dependent approach is used for the calculation of temperature fields in different parts of the crystallization set-up and of convection patterns in the melt, while internal radiation in the crystal is considered in a quasi-steady approximation. A special algorithm is developed for calculating the displacement of the triple point and simulating the formation of the crystal surface. To calculate variations in the heat generation, a model of weighing control with a commonly used PID regulator is applied. As an example, simulation of the growth process of gallium-gadolinium garnet (GGG) crystals, starting from the seeding stage, is performed.
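The weighing-control element can be illustrated with a minimal discrete PID loop acting on the error between the programmed and measured weight-gain rate; the gains and signals below are illustrative, not tuned for a GGG puller.

```python
# Minimal discrete PID sketch of a weighing-control loop: the heater
# power correction is driven by the weight-gain error. Values are toys.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
print(pid.step(setpoint=1.20, measured=1.15))   # heater power correction
```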
NASA Astrophysics Data System (ADS)
Mikhailov, S. Ia.; Tumatov, K. I.
The paper compares the results obtained using two methods to calculate the amplitude of a short-wave signal field incident on or reflected from a perfectly conducting earth. A technique is presented for calculating the geometric characteristics of the field based on the waveguide approach. It is shown that applying an extended system of characteristic equations to calculate the field amplitude is inadmissible in models which include discontinuities in the second derivatives of the permittivity, unless a suitable treatment of the discontinuity points is applied.
Theoretical studies of dissociative recombination
NASA Technical Reports Server (NTRS)
Guberman, S. L.
1985-01-01
The calculation of dissociative recombination rates and cross sections over a wide temperature range by theoretical quantum chemical techniques is described. Model calculations on electron capture by diatomic ions are reported which illustrate the dependence of the rates and cross sections on electron energy, electron temperature, and vibrational temperature for three model crossings of neutral and ionic potential curves. It is shown that cross sections for recombination to the lowest vibrational level of the ion can vary by several orders of magnitude depending upon the position of the neutral and ionic potential curve crossing within the turning points of the v = 1 vibrational level. A new approach for calculating electron capture widths is reported. Ab initio calculations are described for recombination of O2(+) leading to excited O atoms.
Equation of state and phase diagram of carbon
NASA Astrophysics Data System (ADS)
Averin, A. B.; Dremov, V. V.; Samarin, S. I.; Sapozhnikov, A. T.
1996-05-01
A thermodynamically consistent equation of state (EOS) for graphite and diamond is proposed. The EOS satisfactorily describes experimental data on shock compression, heat capacity, thermal expansion, and phase equilibrium, and can be used in mathematical models and computer codes for the calculation of the graphite-diamond phase transition under dynamic loading. Monte Carlo calculations of diamond thermodynamic properties have been carried out to check the correctness of the EOS in the regions of the phase diagram where experimental data are absent. On the basis of the EOS and Grover's model of the liquid state, the EOS of liquid carbon has been constructed and the carbon phase diagram (graphite and diamond melting curves and the triple point) has been calculated. Comparison of calculated and experimental Hugoniots raises a question about the diamond melting curve.
Two-point spectral model for variable density homogeneous turbulence
NASA Astrophysics Data System (ADS)
Pal, Nairita; Kurien, Susan; Clark, Timothy; Aslangil, Denis; Livescu, Daniel
2017-11-01
We present a comparison between a two-point spectral closure model for buoyancy-driven variable-density homogeneous turbulence and Direct Numerical Simulation (DNS) data of the same system. We wish to understand how well a suitable spectral model might capture variable-density effects and the transition to turbulence from an initially quiescent state. Following the BHRZ model developed by Besnard et al. (1990), the spectral model computes the time evolution of two-point correlations of the density fluctuations with the momentum and the specific volume. These spatial correlations are expressed as functions of wavenumber k and denoted by a(k) and b(k), quantifying mass flux and turbulent mixing, respectively. We assess the accuracy of the model, relative to a full DNS of the complete hydrodynamical equations, using a and b as metrics. Work at LANL was performed under the auspices of the U.S. DOE Contract No. DE-AC52-06NA25396.
Temperature distribution model for the semiconductor dew point detector
NASA Astrophysics Data System (ADS)
Weremczuk, Jerzy; Gniazdowski, Z.; Jachowicz, Ryszard; Lysko, Jan M.
2001-08-01
The simulation results for the temperature distribution in a new type of silicon dew point detector are presented in this paper. Calculations were done using the SMACEF simulation program. The fabricated structures, apart from the impedance detector used for dew point detection, contained a resistive four-terminal thermometer and two heaters. Two detector structures, the first located on a silicon membrane and the second placed on the bulk material, are compared in this paper.
Elastic dipoles of point defects from atomistic simulations
NASA Astrophysics Data System (ADS)
Varvenne, Céline; Clouet, Emmanuel
2017-12-01
The interaction of point defects with an external stress field or with other structural defects is usually well described within continuum elasticity by the elastic dipole approximation. Extraction of the elastic dipoles from atomistic simulations is therefore a fundamental step to connect an atomistic description of the defect with continuum models. This can be done either by a fitting of the point-defect displacement field, by a summation of the Kanzaki forces, or by a linking equation to the residual stress. We perform here a detailed comparison of these different available methods to extract elastic dipoles, and show that they all lead to the same values when the supercell of the atomistic simulations is large enough and when the anharmonic region around the point defect is correctly handled. But, for small simulation cells compatible with ab initio calculations, only the definition through the residual stress appears tractable. The approach is illustrated by considering various point defects (vacancy, self-interstitial, and hydrogen solute atom) in zirconium, using both empirical potentials and ab initio calculations.
Determining Surface Roughness in Urban Areas Using Lidar Data
NASA Technical Reports Server (NTRS)
Holland, Donald
2009-01-01
An automated procedure has been developed to derive relevant factors that can increase the ability to produce objective, repeatable methods for determining aerodynamic surface roughness. Aerodynamic surface roughness is used in many applications, such as atmospheric dispersion models and wind-damage models. For this technique, existing lidar data originally collected for terrain analysis were used, and it was demonstrated that surface roughness values can be automatically derived and then utilized in disaster-management and homeland security models. The developed lidar-processing algorithm effectively distinguishes buildings from trees and characterizes their size, density, orientation, and spacing (see figure); all of these variables are parameters required to calculate the estimated surface roughness for a specified area. By using this algorithm, aerodynamic surface roughness values in urban areas can be extracted automatically. The user can also adjust the algorithm for local conditions and lidar characteristics, like summer/winter vegetation and dense/sparse lidar point spacing. Additionally, the user can survey variations in surface roughness due to wind direction; for example, during a hurricane, when wind direction can change dramatically, this variable can be extremely significant. In its current state, the algorithm calculates an estimated surface roughness for a square-kilometer area; techniques using the lidar data to calculate the surface roughness for a point, whereby only roughness elements upstream from the point of interest are used and the wind direction is a vital concern, are being investigated. This technological advancement will improve the reliability and accuracy of models that use and incorporate surface roughness.
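One common morphometric route from such building statistics to a roughness length is Lettau's (1969) relation z0 = 0.5·h̄·(Af/Alot); whether the described algorithm uses exactly this relation is an assumption on our part, and the inputs below are hypothetical lidar-derived values for a 1 km² tile.

```python
# Hedged sketch of Lettau's (1969) morphometric roughness estimate:
# z0 = 0.5 * mean obstacle height * (frontal silhouette area / lot area).
# Inputs are hypothetical lidar-derived values, not the paper's data.
def lettau_z0(mean_height_m, frontal_area_m2, lot_area_m2):
    """Aerodynamic roughness length z0 [m] from morphometric parameters."""
    return 0.5 * mean_height_m * frontal_area_m2 / lot_area_m2

print(lettau_z0(mean_height_m=8.0, frontal_area_m2=1.2e5, lot_area_m2=1.0e6))
```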
Fission barriers at the end of the chart of the nuclides
NASA Astrophysics Data System (ADS)
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; Iwamoto, Akira; Mumpower, Matthew
2015-02-01
We present calculated fission-barrier heights for 5239 nuclides, for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than 5 000 000 different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ɛ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ɛ, γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about 1 MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. These studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
Photochemical ozone budget during the BIBLE A and B campaigns
NASA Astrophysics Data System (ADS)
Ko, Malcolm; Hu, Wenjie; Rodríguez, José M.; Kondo, Yutaka; Koike, Makoto; Kita, Kazuyuki; Kawakami, Shuji; Blake, Donald; Liu, Shaw; Ogawa, Toshihiro
2003-02-01
Using the measured concentrations of NO, O3, H2O, CO, CH4, and NMHCs along the flight tracks, a photochemical box model is used to calculate the concentrations of the Ox radicals, the HOx radicals, and the nitrogen species at the sampling points. The calculations make use of the measurements from radiometers to scale clear sky photolysis rates to account for cloud cover and ground albedo at the sampling time/point. The concentrations of the nitrogen species in each of the sampled air parcels are computed assuming they are in instantaneous equilibrium with the measured NO and O3. The diurnally varying species concentrations are next calculated using the box model and used to estimate the diurnally averaged production and removal rates of ozone for the sampled air parcels. Clear sky photolysis rates are used in the diurnal calculations. The campaign also provided measured concentration of NOy. The observed NO/NOy ratio is usually larger than the model calculated equilibrium value. There are several possible explanations. It could be a result of recent injection of NO into the air parcel, recent removal of HNO3 from the parcel, recent rapid transport of an air parcel from another location, or a combination of all processes. Our analyses suggest that the local production rate of O3 can be used as another indicator of recent NO injection. However, more direct studies using air trajectory analyses and other collaborative evidences are needed to ascertain the roles played by individual process.
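The gross ozone production term such a box model evaluates is dominated by peroxy-radical reactions with NO; a toy sketch follows, with a typical 298 K rate constant for HO2 + NO, an assumed generic RO2 + NO rate, and placeholder concentrations rather than campaign data.

```python
# Toy sketch of instantaneous photochemical ozone production,
#     P(O3) ~ k1[HO2][NO] + k2[RO2][NO].
# k1 is a typical 298 K value; k2 and the concentrations are assumed.
K_HO2_NO = 8.1e-12    # cm^3 molecule^-1 s^-1, HO2 + NO at 298 K
K_RO2_NO = 9.0e-12    # generic RO2 + NO rate, assumed

def ozone_production(ho2, ro2, no):
    """Gross P(O3) in molecules cm^-3 s^-1; inputs in molecules cm^-3."""
    return K_HO2_NO * ho2 * no + K_RO2_NO * ro2 * no

# ~10 pptv HO2/RO2 and ~50 pptv NO at a surface number density of 2.5e19
print(ozone_production(ho2=2.5e8, ro2=2.5e8, no=1.25e9))
```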
Conformational analysis of cellobiose by electronic structure theories.
French, Alfred D; Johnson, Glenn P; Cramer, Christopher J; Csonka, Gábor I
2012-03-01
Adiabatic Φ/ψ maps for cellobiose were prepared with B3LYP density functional theory. A mixed basis set was used for minimization, followed by 6-31+G(d) single-point calculations, with and without SMD continuum solvation. Different arrangements of the exocyclic groups (38 starting geometries) were considered for each Φ/ψ point. The vacuum calculations agreed with earlier computational and experimental results on the preferred gas-phase conformation (anti-Φ(H), syn-ψ(H)), and the results from the solvated calculations were consistent with the syn-Φ(H)/ψ(H) conformations from condensed phases (crystals or solutions). Results from related studies were compared, and there is substantial dependence on the solvation model as well as on the arrangements of the exocyclic groups. New stabilizing interactions were revealed by Atoms-in-Molecules theory. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Shishlov, A. V.; Sagatelyan, G. R.; Shashurin, V. D.
2017-12-01
A mathematical model is proposed to calculate the growth rate of the thin-film coating thickness at various points on a flat substrate surface during planetary motion of the substrate, which makes it possible to calculate the expected coating thickness distribution. A corresponding software package was developed. The coefficients used for the computer simulation were determined experimentally.
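Under the common small-source assumption, the local deposition rate on a substrate plane parallel to the source follows the Knudsen cosine law, rate ∝ H²/(H² + ρ²)², with the point's horizontal offset ρ(t) set by the planetary motion. The sketch below averages this over one carousel revolution; the geometry values and the simplified kinematics are our assumptions, not the paper's coefficients.

```python
# Sketch: relative coating growth at a substrate point under planetary
# motion (carousel angle a, substrate spinning k times faster), using
# the Knudsen cosine-law rate H^2/(H^2 + rho^2)^2. Geometry is invented.
import numpy as np

def relative_thickness(r_point, R_carousel=0.20, H=0.30, k=3.0, n=20000):
    """Relative growth at radius r_point (m) from the substrate centre,
    averaged over one carousel revolution above a small source."""
    a = np.linspace(0.0, 2*np.pi, n, endpoint=False)   # carousel angle
    b = k * a                                          # substrate spin angle
    x = R_carousel*np.cos(a) + r_point*np.cos(b)       # point position over source
    y = R_carousel*np.sin(a) + r_point*np.sin(b)
    rho2 = x**2 + y**2
    return np.mean(H**2 / (H**2 + rho2)**2)

for r in (0.0, 0.02, 0.05):
    print(r, relative_thickness(r))                    # radial uniformity profile
```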
NASA Technical Reports Server (NTRS)
Krebs, R. P.
1971-01-01
The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.
NASA Astrophysics Data System (ADS)
Majkráková, Miroslava; Papčo, Juraj; Zahorec, Pavol; Droščák, Branislav; Mikuška, Ján; Marušiak, Ivan
2016-09-01
The vertical reference system in the Slovak Republic is realized by the National Levelling Network (NLN). Normal heights according to Molodensky were introduced as the reference heights in the NLN in 1957. Since then, the gravity correction, which is necessary to determine the reference heights in the NLN, has been obtained by interpolation from either the simple or the complete Bouguer anomalies. We refer to this method as the "original" one. Currently, the method based on geopotential numbers is the preferred way to unify the European levelling networks. The core of this article is an analysis of different approaches to gravity determination and their application to the calculation of geopotential numbers at the points of the NLN. The first method is based on the calculation of gravity at levelling points from the interpolated values of the complete Bouguer anomaly using the CBA2G_SK software. The second method is based on the global geopotential model EGM2008 improved by the Residual Terrain Model (RTM) approach. The calculated gravity is used to determine the normal heights according to Molodensky along parts of the levelling lines around the EVRF2007 datum point EH-V. Pitelová (UELN-1905325) and along the levelling line of the 2nd order NLN to Kráľova hoľa Mountain (the highest point measured by levelling). The results of our analysis illustrate that interpolated gravity is the better method of gravity determination when measured gravity is not available. It was shown that this method is suitable for the determination of geopotential numbers and reference heights at points of the Slovak national levelling network where gravity is not observed directly. We also demonstrated the necessity of using a precise RTM for the refinement of the results derived solely from EGM2008.
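The geopotential-number bookkeeping itself is compact: C = Σ gᵢ·Δnᵢ along the levelling line, and the normal height follows as H* = C/γ̄, with γ̄ the mean normal gravity along the plumb line. The sketch below uses invented increments and gravity values, not NLN observations.

```python
# Sketch of geopotential numbers from levelling plus gravity, and the
# Molodensky normal height H* = C / gamma_mean. Values are invented.
def geopotential_number(gravities, increments):
    """C in m^2/s^2: sum of gravity times levelled height increment."""
    return sum(g * dn for g, dn in zip(gravities, increments))

def normal_height(C, gamma_mean):
    return C / gamma_mean

g_vals  = [9.8085, 9.8083, 9.8080]       # observed/interpolated gravity, m/s^2
dn_vals = [12.402, 8.117, 15.336]        # levelled height differences, m
C = geopotential_number(g_vals, dn_vals)
print(C, normal_height(C, gamma_mean=9.8061))
```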
Tang, Céline; Giaume, Domitille; Guerlou-Demourgues, Liliane; Lefèvre, Grégory; Barboux, Philippe
2018-05-30
To design novel layered materials, a bottom-up strategy is very promising. It consists of (1) synthesizing various layered oxides, (2) exfoliating them, and then (3) restacking them in a controlled way. The last step is based on electrostatic interactions between the different layered oxides and is difficult to control. The aim of this study is to facilitate this step by predicting the isoelectric point (IEP) of exfoliated materials. The Multisite Complexation (MUSIC) model was used for this objective and was shown to be able to predict the IEP from the mean oxidation state of the metal in the (hydr)oxides as the main parameter. Moreover, the effect of exfoliation on the IEP has also been calculated. Starting from platelets with a high ratio of basal surface area to total surface area, we show that the exfoliation process has no impact on the calculated IEP value, as verified by experiments. Moreover, restacked materials containing different monometallic (hydr)oxide layers also have an IEP consistent with values calculated with the model. This study proves that the MUSIC model is a useful tool to predict the IEP of various complex metal oxides and hydroxides.
Detection and Classification of Pole-Like Objects from Mobile Mapping Data
NASA Astrophysics Data System (ADS)
Fukano, K.; Masuda, H.
2015-08-01
Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Qi-Jun, E-mail: dianerliu@yahoo.com.cn; Liu, Zheng-Tang; Feng, Li-Ping
2012-12-15
On the plane-wave ultrasoft pseudopotential technique based on the first-principles density functional theory (DFT), we calculated the structural, elastic, electronic and optical properties of the seven different phases of SrZrO{sub 3}. The obtained ground-state properties are in good agreement with previous experiments and calculations, which indicate that the most stable phase is orthorhombic Pnma structure. Seven phases of SrZrO{sub 3} are mechanically stable with cubic, tetragonal and orthorhombic structures. The mechanical and thermodynamic properties have been obtained by using the Voigt-Reuss-Hill approach and Debye-Grueneisen model. The electronic structures and optical properties are obtained and compared with the available experimental andmore » theoretical data. - Graphical abstract: Energy versus volume of seven phases SrZrO{sub 3} shows the Pnma phase has the minimum ground-state energy. Highlights: Black-Right-Pointing-Pointer We calculated the physical and chemical properties of seven SrZrO{sub 3} polymorphs. Black-Right-Pointing-Pointer The order of stability is Pnma>Imma>Cmcm>I4/mcm>P4/mbm>P4mm>Pm3-bar m. Black-Right-Pointing-Pointer The most stable phase is orthorhombic Pnma structure. Black-Right-Pointing-Pointer Seven phases of SrZrO{sub 3} are mechanically stable. Black-Right-Pointing-Pointer The relationship between n and {rho}{sub m} is n=1+0.18{rho}{sub m}.« less
An Overview of FlamMap Fire Modeling Capabilities
Mark A. Finney
2006-01-01
Computerized and manual systems for modeling wildland fire behavior have long been available (Rothermel 1983, Andrews 1986). These systems focus on one-dimensional behaviors and assume the fire geometry is a spreading line-fire (in contrast with point or area-source fires). Models included in these systems were developed to calculate fire spread rate (Rothermel 1972,...
Configuration Analysis of the ERS Points in Large-Volume Metrology System
Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin
2015-01-01
In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated, by measuring the enhanced referring system (ERS) points. This article aims to understand the influence of the configuration of the ERS points that affect the transformation matrix errors, and then optimize the deployment of the ERS points to reduce the transformation matrix errors. To optimize the deployment of the ERS points, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by the experiment implemented in the factory floor. Based on the proposed model, a group of sensitivity coefficients are derived to evaluate the quality of the configuration of the ERS points, and then several typical configurations of the ERS points are analyzed in detail with the sensitivity coefficients. Finally general guidance is established to instruct the deployment of the ERS points in the aspects of the layout, the volume size and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685
NASA Astrophysics Data System (ADS)
Wu, J. Z.; Fang, L.; Shao, L.; Lu, L. P.
2018-06-01
In order to introduce new physics to traditional two-point correlations, we define the second-order correlation of longitudinal velocity increments at three points and obtain the analytical expressions in isotropic turbulence. By introducing the Kolmogorov 4/5 law, this three-point correlation explicitly contains velocity second- and third-order moments, which correspond to energy and energy transfer respectively. The combination of them then shows additional information of non-equilibrium turbulence by comparing to two-point correlations. Moreover, this three-point correlation shows the underlying inconsistency between numerical interpolation and three-point scaling law in numerical calculations, and inspires a preliminary model to correct this problem in isotropic turbulence.
Lu, Zhen; McKellop, Harry A
2014-03-01
This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm.
Modeling Nuclear Decay: A Point of Integration between Chemistry and Mathematics.
ERIC Educational Resources Information Center
Crippen, Kent J.; Curtright, Robert D.
1998-01-01
Describes four activities that use graphing calculators to model nuclear-decay phenomena. Students ultimately develop a notion about the radioactive waste produced by nuclear fission. These activities are in line with national educational standards and allow for the integration of science and mathematics. Contains 13 references. (Author/WRM)
Second derivative in the model of classical binary system
NASA Astrophysics Data System (ADS)
Abubekerov, M. K.; Gostev, N. Yu.
2016-06-01
We have obtained an analytical expression for the second derivatives of the light curve with respect to geometric parameters in the model of eclipsing classical binary systems. These expressions are essentially efficient algorithm to calculate the numerical values of these second derivatives for all physical values of geometric parameters. Knowledge of the values of second derivatives of the light curve at some point provides additional information about asymptotical behaviour of the function near this point and can significantly improve the search for the best-fitting light curve through the use of second-order optimization method. We write the expression for the second derivatives in a form which is most compact and uniform for all values of the geometric parameters and so make it easy to write a computer program to calculate the values of these derivatives.
δ M formalism and anisotropic chaotic inflation power spectrum
NASA Astrophysics Data System (ADS)
Talebian-Ashkezari, A.; Ahmadi, N.
2018-05-01
A new analytical approach to linear perturbations in anisotropic inflation has been introduced in [A. Talebian-Ashkezari, N. Ahmadi and A.A. Abolhasani, JCAP 03 (2018) 001] under the name of δ M formalism. In this paper we apply the mentioned approach to a model of anisotropic inflation driven by a scalar field, coupled to the kinetic term of a vector field with a U(1) symmetry. The δ M formalism provides an efficient way of computing tensor-tensor, tensor-scalar as well as scalar-scalar 2-point correlations that are needed for the analysis of the observational features of an anisotropic model on the CMB. A comparison between δ M results and the tedious calculations using in-in formalism shows the aptitude of the δ M formalism in calculating accurate two point correlation functions between physical modes of the system.
Users Manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE)
NASA Technical Reports Server (NTRS)
Ruff, Gary A.; Berkowitz, Brian M.
1990-01-01
LEWICE is an ice accretion prediction code that applies a time-stepping procedure to calculate the shape of an ice accretion. The potential flow field is calculated in LEWICE using the Douglas Hess-Smith 2-D panel code (S24Y). This potential flow field is then used to calculate the trajectories of particles and the impingement points on the body. These calculations are performed to determine the distribution of liquid water impinging on the body, which then serves as input to the icing thermodynamic code. The icing thermodynamic model is based on the work of Messinger, but contains several major modifications and improvements. This model is used to calculate the ice growth rate at each point on the surface of the geometry. By specifying an icing time increment, the ice growth rate can be interpreted as an ice thickness which is added to the body, resulting in the generation of new coordinates. This procedure is repeated, beginning with the potential flow calculations, until the desired icing time is reached. The operation of LEWICE is illustrated through the use of five examples. These examples are representative of the types of applications expected for LEWICE. All input and output is discussed, along with many of the diagnostic messages contained in the code. Several error conditions that may occur in the code for certain icing conditions are identified, and a course of action is recommended. LEWICE has been used to calculate a variety of ice shapes, but should still be considered a research code. The code should be exercised further to identify any shortcomings and inadequacies. Any modifications identified as a result of these cases, or of additional experimental results, should be incorporated into the model. Using it as a test bed for improvements to the ice accretion model is one important application of LEWICE.
System-size convergence of point defect properties: The case of the silicon vacancy
NASA Astrophysics Data System (ADS)
Corsetti, Fabiano; Mostofi, Arash A.
2011-07-01
We present a comprehensive study of the vacancy in bulk silicon in all its charge states from 2+ to 2-, using a supercell approach within plane-wave density-functional theory, and systematically quantify the various contributions to the well-known finite size errors associated with calculating formation energies and stable charge state transition levels of isolated defects with periodic boundary conditions. Furthermore, we find that transition levels converge faster with respect to supercell size when only the Γ-point is sampled in the Brillouin zone, as opposed to a dense k-point sampling. This arises from the fact that defect level at the Γ-point quickly converges to a fixed value which correctly describes the bonding at the defect center. Our calculated transition levels with 1000-atom supercells and Γ-point only sampling are in good agreement with available experimental results. We also demonstrate two simple and accurate approaches for calculating the valence band offsets that are required for computing formation energies of charged defects, one based on a potential averaging scheme and the other using maximally-localized Wannier functions (MLWFs). Finally, we show that MLWFs provide a clear description of the nature of the electronic bonding at the defect center that verifies the canonical Watkins model.
Hart, Francis X; Easterly, Clay E
2004-05-01
The electric field pulse shape and change in transmembrane potential produced at various points within a sphere by an intense, ultrawideband pulse are calculated in a four stage, analytical procedure. Spheres of two sizes are used to represent the head of a human and the head of a rat. In the first stage, the pulse is decomposed into its Fourier components. In the second stage, Mie scattering analysis (MSA) is performed for a particular point in the sphere on each of the Fourier components, and the resulting electric field pulse shape is obtained for that point. In the third stage, the long wavelength approximation (LWA) is used to obtain the change in transmembrane potential in a cell at that point. In the final stage, an energy analysis is performed. These calculations are performed at 45 points within each sphere. Large electric fields and transmembrane potential changes on the order of a millivolt are produced within the brain, but on a time scale on the order of nanoseconds. The pulse shape within the brain differs considerably from that of the incident pulse. Comparison of the results for spheres of different sizes indicates that scaling of such pulses across species is complicated. Published 2004 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong
2018-03-01
This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest neighbours search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which can directly reflect the change trend of the curvature of the point cloud data in the damage region. The extracted damage region is divided into three prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical injuries and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which are useful for evaluating the damage in a further research stage.
A fast analytical undulator model for realistic high-energy FEL simulations
NASA Astrophysics Data System (ADS)
Tatchyn, R.; Cremer, T.
1997-02-01
A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.
North, Frederick; Fox, Samuel; Chaudhry, Rajeev
2016-07-20
Risk calculation is increasingly used in lipid management, congestive heart failure, and atrial fibrillation. The risk scores are then used for decisions about statin use, anticoagulation, and implantable defibrillator use. Calculating risks for patients and making decisions based on these risks is often done at the point of care and is an additional time burden for clinicians that can be decreased by automating the tasks and using clinical decision-making support. Using Morae Recorder software, we timed 30 healthcare providers tasked with calculating the overall risk of cardiovascular events, sudden death in heart failure, and thrombotic event risk in atrial fibrillation. Risk calculators used were the American College of Cardiology Atherosclerotic Cardiovascular Disease risk calculator (AHA-ASCVD risk), Seattle Heart Failure Model (SHFM risk), and CHA2DS2VASc. We also timed the 30 providers using Ask Mayo Expert care process models for lipid management, heart failure management, and atrial fibrillation management based on the calculated risk scores. We used the Mayo Clinic primary care panel to estimate time for calculating an entire panel risk. Mean provider times to complete the CHA2DS2VASc, AHA-ASCVD risk, and SHFM were 36, 45, and 171 s respectively. For decision making about atrial fibrillation, lipids, and heart failure, the mean times (including risk calculations) were 85, 110, and 347 s respectively. Even under best case circumstances, providers take a significant amount of time to complete risk assessments. For a complete panel of patients this can lead to hours of time required to make decisions about prescribing statins, use of anticoagulation, and medications for heart failure. Informatics solutions are needed to capture data in the medical record and serve up automatically calculated risk assessments to physicians and other providers at the point of care.
NASA Astrophysics Data System (ADS)
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1(CE-1) and Chang'E-2(CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of CE-1 and CE-2 CCD cameras based on push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinate of a ground point in lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of con-jugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncer-tainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) are automatically generated.
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd.. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, Damien P.; Mooij, Sander; Postma, Marieke, E-mail: dpg39@cam.ac.uk, E-mail: sander.mooij@ing.uchile.cl, E-mail: mpostma@nikhef.nl
We compute the one-loop renormalization group equations for Standard Model Higgs inflation. The calculation is done in the Einstein frame, using a covariant formalism for the multi-field system. All counterterms, and thus the betafunctions, can be extracted from the radiative corrections to the two-point functions; the calculation of higher n-point functions then serves as a consistency check of the approach. We find that the theory is renormalizable in the effective field theory sense in the small, mid and large field regime. In the large field regime our results differ slightly from those found in the literature, due to a differentmore » treatment of the Goldstone bosons.« less
Information retrieval from wide-band meteorological data - An example
NASA Technical Reports Server (NTRS)
Adelfang, S. I.; Smith, O. E.
1983-01-01
The methods proposed by Smith and Adelfang (1981) and Smith et al. (1982) are used to calculate probabilities over rectangles and sectors of the gust magnitude-gust length plane; probabilities over the same regions are also calculated from the observed distributions and a comparison is also presented to demonstrate the accuracy of the statistical model. These and other statistical results are calculated from samples of Jimsphere wind profiles at Cape Canaveral. The results are presented for a variety of wavelength bands, altitudes, and seasons. It is shown that wind perturbations observed in Jimsphere wind profiles in various wavelength bands can be analyzed by using digital filters. The relationship between gust magnitude and gust length is modeled with the bivariate gamma distribution. It is pointed out that application of the model to calculate probabilities over specific areas of the gust magnitude-gust length plane can be useful in aerospace design.
Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.
Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong
2012-10-17
We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic-orbital (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved to calculate the spin splitting. The Hamiltonian of 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion k at the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.
Computing Fault Displacements from Surface Deformations
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy
2006-01-01
Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, a forward model calculates the surface deformations) for displacements, and strains caused by a fault located in isotropic, elastic half-space. The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations. This work
Model Estimated GCR Particle Flux Variation - Assessment with CRIS Data
NASA Astrophysics Data System (ADS)
Saganti, Premkumar
We present model calculated particle flux as a function of time during the current solar cycle along with the comparisons from the ACE/CRIS data and the Mars/MARIE data. In our model calculations we make use of the NASA's HZETRN (High Z and Energy Transport) code along with the nuclear fragmentation cross sections that are described by the quantum multiple scattering (QMSFRG) model. The time dependant variation of the GCR environment is derived making use of the solar modulation potential, phi. For the past ten years, Advanced Composition Explorer (ACE) has been in orbit at the Sun- Earth libration point (L1). Data from the Cosmic Ray Isotope Spectrometer (CRIS) instrument onboard the ACE spacecraft has been available from 1997 through the present time. Our model calculated particle flux showed high degree of correlation during the earlier phase of the current solar cycle (2003) in the lower Z region within 15
Calculation of surface temperature and surface fluxes in the GLAS GOM
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Abeles, J. A.
1981-01-01
Because the GLAS model's surface fluxes of sensible and latent heat exhibit strong 2 delta t oscillations at the individual grid points as well as in the zonal hemispheric averages and because a basic weakness of the GLAS model lower evaporation over oceans and higher evaporation over land in a typical monthly simulation, the GLAS model PBL parameterization was changed to calculate the mixed layer temperature gradient by solution of a quadratic equation for a stable PBL and by a curve fit relation for an unstable PBL. The new fluxes without any 2 delta t oscillation. Also, the geographical distributions of the surface fluxes are improved. The parameterization presented is incorporated into the new GLAS climate model. Some results which compare the evaporation over land and ocean between old and new calculations are appended.
Calculation of transient potential rise on the wind turbine struck by lightning.
Xiaoqing, Zhang
2014-01-01
A circuit model is proposed in this paper for calculating the transient potential rise on the wind turbine struck by lightning. The model integrates the blade, sliding contact site, and tower and grounding system of the wind turbine into an equivalent circuit. The lightning current path from the attachment point to the ground can be fully described by the equivalent circuit. The transient potential responses are obtained in the different positions on the wind turbine by solving the circuit equations. In order to check the validity of the model, the laboratory measurement is made with a reduced-scale wind turbine. The measured potential waveform is compared with the calculated one and a better agreement is shown between them. The practical applicability of the model is also examined by a numerical example of a 2 MW Chinese-built wind turbine.
Density calculations for silicate liquids: Reply to a Critical Comment by Ghiorso and Carmichael
NASA Astrophysics Data System (ADS)
Bottinga, Y.; Weill, D. F.; Richet, P.
1984-02-01
The analysis of the liquid silicate density model recently proposed in BOTTINGAet al. (1982) by GHIORSO and CARMICHAEL (1984) is shown to be based on a combination of unwarranted mathematical assumptions, refusal to recognize experimental and theoretical evidence for the non-linear effect of composition on liquid silicate density, and a totally unrealistic view of the accuracy with which the thermal expansion of silicate liquids can be measured. As a consequence, none of the general or specific points raised by Ghiorso and Carmichael are relevant to the issue of which of the existing calculation models ( BOTTINGA and WEILL, 1970; NELSON and CARMICHAEL, 1979; MOet al., 1982; or BOTTINGAet al., 1982, 1983) should be used. As stated in BOTTINGA, RICHET and WEILL (1983), there is a problem in using a combination of the molar volume parameters from the first three of these models because they are not mutually independent. However, the set of partial molar volumes and thermal expansion constants given in BOTTINGAet al. (1982, 1983) are internally consistent and mutually compatible. We remain firmly of the opinion that our latest model is an improvement over previous attempts because it conforms to a much wider set of observations, it incorporates a larger set of melt components, it calculates density and thermal expansion more accurately, and it points the way to one possible method of accommodating a non-linear phenomenon into a nonlinear model.
Mori, Yukie; Takano, Keiko
2012-08-21
Two-dimensional potential energy surfaces (PESs) were calculated for the degenerate intramolecular proton transfer (PT) in two N-H···N hydrogen-bonded systems, (Z)-2-(2-pyridylmethylidene)-1,2-dihydropyridine (1) and monoprotonated di(2-pyridyl) ether (2), at the MP2/cc-pVDZ level of theory. The calculated PES had two minima in both cases. The energy barrier in 1 was higher than the zero-point energy (ZPE) level, while that in 2 was close to the ZPE. Vibrational wavefunctions were obtained by solving time-independent Schrödinger equations with the calculated PESs. The maximum points of the probability density were shifted from the energy minima towards the region where the covalent N-H bond was elongated and the N···N distance shortened. The effects of a polar solvent on the PES were investigated with the continuum or cluster models in such a way that the solute-solvent electrostatic interactions could be taken into account under non-equilibrated conditions. A solvated contact ion-pair was modelled by a cluster consisting of one cation 2, one chloride ion and 26 molecules of acetonitrile. The calculation with this model suggested that the bridging proton is localised in the deeper well due to the significant asymmetry of the PES and the high potential barrier.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from the optimal supplier so that the inventory level will be located at some point as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model was generated the optimal supplier and the inventory level was tracked the reference point well.
A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.
1994-01-01
A simple three DOF analytical aerodynamic model of the Langley Winged-Coned Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. This analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.
SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melchior, M; Salinas Aranda, F; 21st Century Oncology, Ft. Myers, FL
2014-06-01
Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using MatLab integrated development environment.TPS calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. Experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanningmore » system.Two methods were chosen to evaluate dose distributions fitting: gamma analysis and point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for gamma analysis are 3% and 3 mm dose and distance, respectively.Absolute dose was measured independently at points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS calculated depth dose distributions agree with measured beam data under fixed precision values at all depths analyzed. Measured beam dose profiles match TPS calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distributions fitting analysis show gamma values < 1. Relative errors at points proposed in Appendix E of TECDOC-1583 meet therein recommended tolerances.Independent absolute dose measurements at points proposed in Appendix E of TECDOC-1583 confirm software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether to accept or not a beam model for clinical use. Validation time before beam-on for clinical use was reduced to a few hours.« less
Linear ground-water flow, flood-wave response program for programmable calculators
Kernodle, John Michael
1978-01-01
Two programs are documented which solve a discretized analytical equation derived to determine head changes at a point in a one-dimensional ground-water flow system. The programs, written for programmable calculators, are in widely divergent but commonly encountered languages and serve to illustrate the adaptability of the linear model to use in situations where access to true computers is not possible or economical. The analytical method assumes a semi-infinite aquifer which is uniform in thickness and hydrologic characteristics, bounded on one side by an impermeable barrier and on the other parallel side by a fully penetrating stream in complete hydraulic connection with the aquifer. Ground-water heads may be calculated for points along a line which is perpendicular to the impermeable barrie and the fully penetrating stream. Head changes at the observation point are dependent on (1) the distance between that point and the impermeable barrier, (2) the distance between the line of stress (the stream) and the impermeable barrier, (3) aquifer diffusivity, (4) time, and (5) head changes along the line of stress. The primary application of the programs is to determine aquifer diffusivity by the flood-wave response technique. (Woodard-USGS)
On 2- and 3-person games on polyhedral sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belenky, A.S.
1994-12-31
Special classes of 3 person games are considered where the sets of players` allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of certain kind of sums of linear and bilinear functions. Necessary and sufficient conditions, which are easy to verify, for a Nash point in these games are established, and a finite method, based on these conditions, for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible tomore » reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.« less
Workshop on Engineering Turbulence Modeling
NASA Technical Reports Server (NTRS)
Povinelli, Louis A. (Editor); Liou, W. W. (Editor); Shabbir, A. (Editor); Shih, T.-H. (Editor)
1992-01-01
Discussed here is the future direction of various levels of engineering turbulence modeling related to computational fluid dynamics (CFD) computations for propulsion. For each level of computation, there are a few turbulence models which represent the state-of-the-art for that level. However, it is important to know their capabilities as well as their deficiencies in order to help engineers select and implement the appropriate models in their real world engineering calculations. This will also help turbulence modelers perceive the future directions for improving turbulence models. The focus is on one-point closure models (i.e., from algebraic models to higher order moment closure schemes and partial differential equation methods) which can be applied to CFD computations. However, other schemes helpful in developing one-point closure models, are also discussed.
Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve
2006-11-01
A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose functions at gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. For 1
Nadobny, Jacek; Fähling, Horst; Hagmann, Mark J; Turner, Paul F; Wlodarczyk, Waldemar; Gellermann, Johanna M; Deuflhard, Peter; Wust, Peter
2002-11-01
Experimental and numerical methods were used to determine the coupling of energy in a multichannel three-dimensional hyperthermia applicator (SIGMA-Eye), consisting of 12 short dipole antenna pairs with stubs for impedance matching. The relationship between the amplitudes and phases of the forward waves from the amplifiers, to the resulting amplitudes and phases at the antenna feed-points was determined in terms of interaction matrices. Three measuring methods were used: 1) a differential probe soldered directly at the antenna feed-points; 2) an E-field sensor placed near the feed-points; and 3) measurements were made at the outputs of the amplifier. The measured data were compared with finite-difference time-domain (FDTD) calculations made with three different models. The first model assumes that single antennas are fed independently. The second model simulates antenna pairs connected to the transmission lines. The measured data correlate best with the latter FDTD model, resulting in an improvement of more than 20% and 20 degrees (average difference in amplitudes and phases) when compared with the two simpler FDTD models.
A Composite Source Model With Fractal Subevent Size Distribution
NASA Astrophysics Data System (ADS)
Burjanek, J.; Zahradnik, J.
A composite source model, incorporating different sized subevents, provides a pos- sible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R-2. The subevents do not overlap with each other, and the sum of their areas equals to the area of the target event (e.g. mainshock) . The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of modeling. The nucleation point of each subevent is taken as the point closest to mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally lay- ered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cu- bic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds-up the procedure signifi- cantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distribu- tions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. The strong-ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lohrasbi, J.
Dose calculations for atmospheric radionuclide releases from the Hanford Site for calendar year (CY) 1992 were performed by Pacific Northwest Laboratory (PNL) using the approved US Environmental Protection Agency (EPA) CAP-88 computer model. Emissions from discharge points in the Hanford Site 100, 200, 300, 400, and 600 areas were calculated based on results of analyses of continuous and periodic sampling conducted at the discharge points. These calculated emissions were provided for inclusion in the CAP-88 model by area and by individual facility for those facilities having the potential to contribute more than 10 percent of the Hanford Site total ormore » to result in an impact of greater than 0.1 mrem per year to the maximally exposed individual (MEI). Also included in the assessment of offsite dose modeling are the measured radioactive emissions from all Hanford Site stacks that have routine monitoring performed. Record sampling systems have been installed on all stacks and vents that use exhaust fans to discharge air that potentially may carry airborne radioactivity. Estimation of activity from ingrowth of long-lived radioactive progeny is not included in the CAP-88 model; therefore, the Hanford Site GENII code (Napier et al. 1988) was used to supplement the CAP-88 dose calculations. When the dose to the MEI located in the Ringold area was calculated, the effective dose equivalent (EDE) from combined Hanford Site radioactive airborne emissions was shown to be 3.7E-03 mrem. This value was reported in the annual air emission report prepared for the Hanford Site (RL 1993).« less
Li, Hui
2009-11-14
Linear response and variational treatment are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods and combined discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductorlike polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, acetone S(0)-->S(1) excitation in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.
Gok, Kadir; Inal, Sermet; Gok, Arif; Gulbandilar, Eyyup
2017-05-01
In this study, biomechanical behaviors of three different screw materials (stainless steel, titanium and cobalt-chromium) have analyzed to fix with triangle fixation under axial loading in femoral neck fracture and which material is best has been investigated. Point cloud obtained after scanning the human femoral model with the three dimensional (3D) scanner and this point cloud has been converted to 3D femoral model by Geomagic Studio software. Femoral neck fracture was modeled by SolidWorks software for only triangle configuration and computer-aided numerical analyses of three different materials have been carried out by AnsysWorkbench finite element analysis (FEA) software. The loading, boundary conditions and material properties have prepared for FEA and Von-Misses stress values on upper and lower proximity of the femur and screws have been calculated. At the end of numerical analyses, the best advantageous screw material has calculated as titanium because it creates minimum stress at the upper and lower proximity of the fracture line.
NASA Astrophysics Data System (ADS)
Kolyari I., G.
2018-05-01
The proposed theoretical model allows for the perfectly elastic collision of three bodies (three mass points) to calculate: 1) the definite value of the three bodies' projected velocities after the collision with a straight line, along which the bodies moved before the collision; 2) the definite value of the scattering bodies' velocities on the plane and the definite value of the angles between the bodies' momenta (or velocities), which the bodies obtain after the collision when moving on the plane. The proposed calculation model of the velocities of the three collided bodies is consistent with the dynamic model of the same bodies' interaction during the collision, taking into account that the energy flow is conserved for the entire system before and after the collision. It is shown that under the perfectly elastic interaction during the collision of three bodies the energy flow is conserved in addition to the momentum and energy conservation.
Comparison of UWCC MOX fuel measurements to MCNP-REN calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.; Baker, M.; Jie, R.
1998-12-31
The development of neutron coincidence counting has greatly improved the accuracy and versatility of neutron-based techniques to assay fissile materials. Today, the shift register analyzer connected to either a passive or active neutron detector is widely used by both domestic and international safeguards organizations. The continued development of these techniques and detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model, as it is currently used, fails to accurately predict detector response in highly multiplying mediums such as mixed-oxide (MOX) lightmore » water reactor fuel assemblies. For this reason, efforts have been made to modify the currently used Monte Carlo codes and to develop new analytical methods so that this model is not required to predict detector response. The authors describe their efforts to modify a widely used Monte Carlo code for this purpose and also compare calculational results with experimental measurements.« less
Steady state operation simulation of the Francis-99 turbine by means of advanced turbulence models
NASA Astrophysics Data System (ADS)
Gavrilov, A.; Dekterev, A.; Minakov, A.; Platonov, D.; Sentyabov, A.
2017-01-01
The paper presents numerical simulation of the flow in hydraulic turbine based on the experimental data of the II Francis-99 workshop. The calculation domain includes the wicket gate, runner and draft tube with rotating reference frame for the runner zone. Different turbulence models such as k-ω SST, ζ-f and RSM were considered. The calculations were performed by means of in-house CFD code SigmaFlow. The numerical simulation for part load, high load and best efficiency operation points were performed.
Space resection model calculation based on Random Sample Consensus algorithm
NASA Astrophysics Data System (ADS)
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has been one of the most important content in photogrammetry. It aims at the position and attitude information of camera at the shooting point. However in some cases, the observed values for calculating are with gross errors. This paper presents a robust algorithm that using RANSAC method with DLT model can effectually avoiding the difficulties to determine initial values when using co-linear equation. The results also show that our strategies can exclude crude handicap and lead to an accurate and efficient way to gain elements of exterior orientation.
Lattice field theory applications in high energy physics
NASA Astrophysics Data System (ADS)
Gottlieb, Steven
2016-10-01
Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.
Combinatorial and High Throughput Discovery of High Temperature Piezoelectric Ceramics
2011-10-10
the known candidate piezoelectric ferroelectric perovskites. Unlike most computational studies on crystal chemistry, where the starting point is some...studies on crystal chemistry, where the starting point is some form of electronic structure calculation, we use a data driven approach to initiate our...experimental measurements reported in the literature. Given that our models are based solely on crystal and electronic structure data and did not
A thermochemical model of radiation damage and annealing applied to GaAs solar cells
NASA Technical Reports Server (NTRS)
Conway, E. J.; Walker, G. H.; Heinbockel, J. H.
1981-01-01
Calculations of the equilibrium conditions for continuous radiation damage and thermal annealing are reported. The calculations are based on a thermochemical model developed to analyze the incorporation of point imperfections in GaAs, and modified by introducing the radiation to produce native lattice defects rather than high-temperature and arsenic atmospheric pressure. The concentration of a set of defects, including vacancies, divacancies, and impurity vacancy complexes, are calculated as a function of temperature. Minority carrier lifetimes, short circuit current, and efficiency are deduced for a range of equilibrium temperatures. The results indicate that GaAs solar cells could have a mission life which is not greatly limited by radiation damage.
Models of primary runaway electron distribution in the runaway vortex regime
Guo, Zehua; Tang, Xian-Zhu; McDevitt, Christopher J.
2017-11-01
Generation of runaway electrons (RE) beams can possibly induce the most deleterious effect of tokamak disruptions. A number of recent numerical calculations have confirmed the formation of a RE bump in their energy distribution by taking into account Synchrontron radiational damping force due to RE’s gyromotions. Here, we present a detailed examination on how the bump location changes at different pitch-angle and the characteristics of the RE pitch-angle distribution. Although REs moving along the magnetic field are preferably accelerated and then populate the phase-space of larger pitch-angle mainly through diffusions, an off-axis peak can still form due to the presencemore » of the vortex structure which causes accumulation of REs at low pitch-angle. A simplified Fokker- Planck model and its semi-analytical solutions based on local expansions around the O point is used to illustrate the characteristics of RE distribution around the O point of the runaway vortex in phase-space. The calculated energy location of the O point together with the local energy and pitch-angle distributions agree with the full numerical solution.« less
A program to calculate pulse transmission responses through transversely isotropic media
NASA Astrophysics Data System (ADS)
Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei
2018-05-01
We provide a program (AOTI2D) to model the response of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method, which treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program offers basic wave parameters, including phase and group velocities, polarization, anisotropic reflection coefficients, and directivity patterns, and models the wave fields, static wave beam, and the observed signals for pulse transmission measurements, taking into account the material's elastic stiffnesses and orientations, the sample dimensions, and the size and positions of the transmitters and receivers. The program can be used to exhibit ultrasonic beam behavior in anisotropic media, such as the skew and diffraction of ultrasonic beams, and to analyze their effect on pulse transmission measurements. It should be a useful tool for designing experimental configurations and interpreting the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
Equation of state of solid, liquid and gaseous tantalum from first principles
Miljacic, Ljubomir; Demers, Steven; Hong, Qi-Jun; ...
2015-09-18
Here, we present ab initio calculations of the phase diagram and the equation of state of Ta in a wide range of volumes and temperatures, with volumes from 9 to 180 Å³/atom, temperatures as high as 20000 K, and pressures up to 7 Mbar. The calculations are based on first principles, in combination with techniques of molecular dynamics, thermodynamic integration, and statistical modeling. Multiple phases are studied, including the solid, fluid, and gas single phases, as well as two-phase coexistences. We calculate the critical point by direct molecular dynamics sampling, and extend the equation of state to very low density through virial series fitting. The accuracy of the equation of state is assessed by comparing both the predicted melting curve and the critical point with previous experimental and theoretical investigations.
A Data Snapshot Approach for Making Real-Time Predictions in Basketball.
Kayhan, Varol Onur; Watkins, Alison
2018-06-08
This article proposes a novel approach, called data snapshots, to generate real-time probabilities of winning for National Basketball Association (NBA) teams while games are being played. The approach takes a snapshot from a live game, identifies historical games that have the same snapshot, and uses the outcomes of these games to calculate the winning probabilities of the teams in this game as the game is underway. Using data obtained from 20 seasons' worth of NBA games, we build three models and compare their accuracies to a baseline accuracy. In Model 1, each snapshot includes the point difference between the home and away teams at a given second of the game. In Model 2, each snapshot includes the net team strength in addition to the point difference at a given second. In Model 3, each snapshot includes the rate of score change in addition to the point difference at a given second. The results show that all models perform better than the baseline accuracy, with Model 1 being the best model.
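A minimal sketch of the Model 1 snapshot lookup described above, assuming a flat table of historical records; the record layout and the toy history are invented for illustration.

```python
import numpy as np

def win_probability(history, second, point_diff):
    """Model 1 snapshot: P(home win) from historical games sharing the same
    (second, home-away point difference) snapshot.

    history: rows of (second, point_diff, home_won) from past seasons.
    Returns None when no historical game matches the snapshot.
    """
    mask = (history[:, 0] == second) & (history[:, 1] == point_diff)
    if not mask.any():
        return None
    return history[mask, 2].mean()

# Hypothetical usage: home team leads by 7 points 600 seconds into the game.
history = np.array([[600, 7, 1], [600, 7, 0], [600, 7, 1], [600, 3, 1]])
print(win_probability(history, second=600, point_diff=7))  # -> 0.666...
```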
NASA Astrophysics Data System (ADS)
Eschenbach, Wolfram; Budziak, Dörte; Elbracht, Jörg; Höper, Heinrich; Krienen, Lisa; Kunkel, Ralf; Meyer, Knut; Well, Reinhard; Wendland, Frank
2018-06-01
Valid models for estimating nitrate emissions from agriculture to groundwater are an indispensable forecasting tool. A major challenge for model validation is the spatial and temporal inconsistency between data from groundwater monitoring points and modelled nitrate inputs into groundwater, and the fact that many existing groundwater monitoring wells cannot be used for validation. With the help of the N2/Ar-method, groundwater monitoring wells in areas with reduced groundwater can now be used for model validation. For this purpose, 484 groundwater monitoring wells were sampled in Lower Saxony. For the first time, modelled potential nitrate concentrations in groundwater recharge (from the DENUZ model) were compared with nitrate input concentrations, which were calculated using the N2/Ar method. The results show a good agreement between both methods for glacial outwash plains and moraine deposits. Although the nitrate degradation processes in groundwater and soil merge seamlessly in areas with a shallow groundwater table, the DENUZ model only calculates denitrification in the soil zone. The DENUZ model thus predicts 27% higher nitrate emissions into the groundwater than the N2/Ar method in such areas. To account for high temporal and spatial variability of nitrate emissions into groundwater, a large number of groundwater monitoring points must be investigated for model validation.
SU-E-T-757: TMRs Calculated From PDDs Versus the Direct Measurements for Small Field SRS Cones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H; Zhong, H; Song, K
2015-06-15
Purpose: To investigate the variation of TMR for SRS cones obtained by TMR scanning, calculation from PDDs, and point measurements. The obtained TMRs were also compared to the representative data from the vendor. Methods: TMRs for conical cones of 4, 5, 7.5, 10, 12.5, 15, and 17.5 mm diameter (jaws set to 5×5 cm) were obtained for 6X FFF and 10X FFF energies on a Varian Edge linac. TMR scanning was performed with a Sun Nuclear 3D scanner and Edge detector at 100 cm SDD. TMR point measurements were made with a Wellhofer tank and Edge detector, at multiple depths from 0.5 to 20 cm and 100 cm SDD. PDDs for converting to TMR were scanned with a Wellhofer system and SFD detector. The formalism for converting PDD to TMR given in Khan's book (4th Edition, p. 161) was applied. Sp values at dmax were obtained by measuring Scp and Sc of the cones (jaws set to 5×5 cm) using the Edge detector, and normalized to the 10×10 cm field. Results: Along the central axis beyond dmax, the RMS and maximum percent differences of TMRs obtained with different methods were as follows: (a) 1.3% (max=3.5%) for the calculated TMRs from PDDs versus direct scanning; (b) 1.2% (max=3.3%) for direct scanning versus point measurement; (c) 1.8% (max=5.1%) for the calculated versus point measurements; (d) 1.0% (max=3.6%) for direct scanning versus vendor data; (e) 1.6% (max=7.2%) for the calculated versus vendor data. Conclusion: The overall accuracy of TMRs calculated from PDDs was comparable with that of direct scanning. However, the uncertainty at depths greater than 20 cm increased up to 5% when compared to point measurements. This issue must be considered when developing a beam model for small-field SRS planning using cones.
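The PDD-to-TMR conversion referenced above (Khan's formalism) can be sketched as follows; the phantom-scatter ratio is left as an explicit input, and the example numbers are hypothetical, not measured values from this work.

```python
def tmr_from_pdd(pdd, d, dmax, ssd, sp_ratio=1.0):
    """Convert PDD (%) at depth d to TMR via the standard formalism
    (Khan, The Physics of Radiation Therapy):

        TMR(d, r_d) = (PDD/100) * ((ssd + d)/(ssd + dmax))**2 * Sp(r_dmax)/Sp(r_d)

    sp_ratio is Sp(r_dmax)/Sp(r_d); for the very small cone fields discussed
    above it is close to unity, so it is left as an explicit input here
    rather than modeled.
    """
    return (pdd / 100.0) * ((ssd + d) / (ssd + dmax)) ** 2 * sp_ratio

# Hypothetical example: PDD = 67.0% at d = 10 cm, dmax = 1.4 cm, SSD = 100 cm.
print(round(tmr_from_pdd(67.0, d=10.0, dmax=1.4, ssd=100.0), 3))  # ~0.789
```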
Tight-Binding Description of Impurity States in Semiconductors
ERIC Educational Resources Information Center
Dominguez-Adame, F.
2012-01-01
Introductory textbooks in solid state physics usually present the hydrogenic impurity model to calculate the energy of carriers bound to donors or acceptors in semiconductors. This model treats the pure semiconductor as a homogeneous medium and the impurity is represented as a fixed point charge. This approach is only valid for shallow impurities…
Bradshaw, Richard T; Essex, Jonathan W
2016-08-09
Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.
Charged Particle Environment Definition for NGST: Model Development
NASA Technical Reports Server (NTRS)
Blackwell, William C.; Minow, Joseph I.; Evans, Steven W.; Hardage, Donna M.; Suggs, Robert M.
2000-01-01
NGST will operate in a halo orbit about the L2 point, 1.5 million km from the Earth, where the spacecraft will periodically travel through the magnetotail region. There are a number of tools available to calculate the high energy, ionizing radiation particle environment from galactic cosmic rays and from solar disturbances. However, space environment tools are not generally available to provide assessments of charged particle environment and its variations in the solar wind, magnetosheath, and magnetotail at L2 distances. An engineering-level phenomenology code (LRAD) was therefore developed to facilitate the definition of charged particle environments in the vicinity of the L2 point in support of the NGST program. LRAD contains models tied to satellite measurement data of the solar wind and magnetotail regions. The model provides particle flux and fluence calculations necessary to predict spacecraft charging conditions and the degradation of materials used in the construction of NGST. This paper describes the LRAD environment models for the deep magnetotail (XGSE < -100 Re) and solar wind, and presents predictions of the charged particle environment for NGST.
Mathematical models for optimization of the centrifugal stage of a refrigerating compressor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuzhdin, A.S.
1987-09-01
The authors describe a general approach to the creation of mathematical models of energy and head losses in the flow part of a centrifugal compressor. The mathematical model of the pressure head and efficiency of a two-section stage proposed in this paper is meant for determining its characteristics for the assigned geometric dimensions and for optimization by variance calculations. Characteristic points on the plot of velocity distribution over the margin of the vanes of the impeller and the diffuser of the centrifugal stage with a combined diffuser are presented. To assess the reliability of the mathematical model, the authors compared some calculated data with experimental ones.
Classical and quantum aspects of Yang-Baxter Wess-Zumino models
NASA Astrophysics Data System (ADS)
Demulder, Saskia; Driezen, Sibylle; Sevrin, Alexander; Thompson, Daniel C.
2018-03-01
We investigate the integrable Yang-Baxter deformation of the 2d Principal Chiral Model with a Wess-Zumino term. For arbitrary groups, the one-loop β-functions are calculated and display a surprising connection between classical and quantum physics: the classical integrability condition is necessary to prevent new couplings being generated by renormalisation. We show these theories admit an elegant realisation of Poisson-Lie T-duality acting as a simple inversion of coupling constants. The self-dual point corresponds to the Wess-Zumino-Witten model and is the IR fixed point under RG. We address the possibility of having supersymmetric extensions of these models showing that extended supersymmetry is not possible in general.
Deciphering chemical order/disorder and material properties at the single-atom level.
Yang, Yongsoo; Chen, Chien-Chun; Scott, M C; Ophus, Colin; Xu, Rui; Pryor, Alan; Wu, Li; Sun, Fan; Theis, Wolfgang; Zhou, Jihan; Eisenbach, Markus; Kent, Paul R C; Sabirianov, Renat F; Zeng, Hao; Ercius, Peter; Miao, Jianwei
2017-02-01
Perfect crystals are rare in nature. Real materials often contain crystal defects and chemical order/disorder such as grain boundaries, dislocations, interfaces, surface reconstructions and point defects. Such disruption in periodicity strongly affects material properties and functionality. Despite rapid development of quantitative material characterization methods, correlating three-dimensional (3D) atomic arrangements of chemical order/disorder and crystal defects with material properties remains a challenge. On a parallel front, quantum mechanics calculations such as density functional theory (DFT) have progressed from the modelling of ideal bulk systems to modelling 'real' materials with dopants, dislocations, grain boundaries and interfaces; but these calculations rely heavily on average atomic models extracted from crystallography. To improve the predictive power of first-principles calculations, there is a pressing need to use atomic coordinates of real systems beyond average crystallographic measurements. Here we determine the 3D coordinates of 6,569 iron and 16,627 platinum atoms in an iron-platinum nanoparticle, and correlate chemical order/disorder and crystal defects with material properties at the single-atom level. We identify rich structural variety with unprecedented 3D detail including atomic composition, grain boundaries, anti-phase boundaries, anti-site point defects and swap defects. We show that the experimentally measured coordinates and chemical species with 22 picometre precision can be used as direct input for DFT calculations of material properties such as atomic spin and orbital magnetic moments and local magnetocrystalline anisotropy. This work combines 3D atomic structure determination of crystal defects with DFT calculations, which is expected to advance our understanding of structure-property relationships at the fundamental level.
Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor
NASA Technical Reports Server (NTRS)
Butler, C.; Albright, D.
2007-01-01
Highly efficient, compact nuclear reactors would provide high specific impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor, which uses magnetohydrodynamic effects to generate electric power to be used for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code. However, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model and improvement of the first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report, "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
Polarizable six-point water models from computational and empirical optimization.
Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul
2014-02-13
Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to other temperatures and conditions like the melting of ice are also discussed.
Probability distribution of the entanglement across a cut at an infinite-randomness fixed point
NASA Astrophysics Data System (ADS)
Devakul, Trithep; Majumdar, Satya N.; Huse, David A.
2017-03-01
We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ∼ L^{-ψ(k)}, where k ≡ S/ln[L/L0], the large deviation function ψ(k) is found explicitly, and L0 is a nonuniversal microscopic length. We discuss the implications of such a distribution for numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as with the actual entanglement entropy distribution for the random transverse-field Ising model, which we calculate for large L via a mapping to Majorana fermions.
Excitonic structure of the optical conductivity in MoS2 monolayers
NASA Astrophysics Data System (ADS)
Ridolfi, Emilia; Lewenkopf, Caio H.; Pereira, Vitor M.
2018-05-01
We investigate the excitonic spectrum of MoS2 monolayers and calculate its optical absorption properties over a wide range of energies. Our approach takes into account the anomalous screening in two dimensions and the presence of a substrate, both cast by a suitable effective Keldysh potential. We solve the Bethe-Salpeter equation using as a basis a Slater-Koster tight-binding model parameterized to fit the ab initio MoS2 band structure calculations. The resulting optical conductivity is in good quantitative agreement with existing measurements up to ultraviolet energies. We establish that the electronic contributions to the C excitons arise not from states at the Γ point, but from a set of k points over extended portions of the Brillouin zone. Our results reinforce the advantages of approaches based on effective models to expeditiously explore the properties and tunability of excitons in TMD systems.
NASA Astrophysics Data System (ADS)
Khaimovich, I. N.
2017-10-01
The article provides calculation algorithms for blank design and die-forming tooling to produce compressor blades for aircraft engines. The design system proposed in the article generates drafts of trimming and reducing dies automatically, leading to a significant reduction in production preparation time. A detailed analysis of the structural features of the blade elements was carried out; the adopted limitations and technological solutions made it possible to formulate generalized algorithms for forming the parting face of the die over the entire contour of the impression for different configurations of die forgings. The author worked out algorithms and programs to calculate the three-dimensional locations of the points describing the configuration of the die cavity. As a result, the author obtained a generic mathematical model of the final die block in the form of a three-dimensional array of base points. This model is the basis for creating the engineering documentation of the technological equipment and the means of its control.
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R.
2015-05-01
Data from the Fermi Large Area Telescope suggest that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons, in particular the flux probability density function (PDF) of the photon counts below the point-source detection threshold, can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
Integration of Heterogenous Digital Surface Models
NASA Astrophysics Data System (ADS)
Boesch, R.; Ginzler, C.
2011-08-01
The application of extended digital surface models often reveals that, despite an acceptable global accuracy for a given dataset, the local accuracy of the model can vary over a wide range. For high-resolution applications which cover the spatial extent of a whole country, this can be a major drawback. Within the Swiss National Forest Inventory (NFI), two digital surface models are available, one derived from LiDAR point data and the other from aerial images. Automatic photogrammetric image matching with ADS80 aerial infrared images of 25 cm and 50 cm resolution is used to generate a surface model (ADS-DSM) with 1 m resolution covering the whole of Switzerland (approx. 41,000 km²). The spatially corresponding LiDAR dataset has a global point density of 0.5 points per m² and is mainly used in applications as an interpolated grid with 2 m resolution (LiDAR-DSM). Although both surface models seem to offer comparable accuracy from a global view, local analysis shows significant differences. Both datasets have been acquired over several years. Concerning the LiDAR-DSM, different flight patterns and inconsistent quality control result in a significantly varying point density. The image acquisition of the ADS-DSM is also stretched over several years, and the model generation is hampered by clouds, varying illumination and shadow effects. Nevertheless, many classification and feature extraction applications requiring high-resolution data depend on the local accuracy of the surface model used, so precise knowledge of the local data quality is essential. The commercial photogrammetric software NGATE (part of SOCET SET) generates the image-based surface model (ADS-DSM) and also delivers a map with figures of merit (FOM) of the matching process for each calculated height pixel. The FOM map contains matching codes such as high slope, excessive shift or low correlation. For the generation of the LiDAR-DSM only first- and last-pulse data were available. Therefore only the point distribution can be used to derive a local accuracy measure. For the calculation of a robust point distribution measure, a constrained triangulation of local points (within an area of 100 m²) has been implemented using the Open Source project CGAL. The area of each triangle is a measure of the spatial distribution of raw points in this local area. Combining the FOM map with the local evaluation of LiDAR points allows an appropriate local accuracy evaluation of both surface models. The currently implemented strategy ("partial replacement") uses the hypothesis that the ADS-DSM is superior due to its better global accuracy of 1 m. If the local analysis of the FOM map within the 100 m² area shows significant matching errors, the corresponding area of the triangulated LiDAR points is analyzed. If the point density and distribution are sufficient, the LiDAR-DSM will be used in favor of the ADS-DSM at this location. If the local triangulation reflects low point density or the variance of triangle areas exceeds a threshold, the investigated location will be marked as a NODATA area. In a future implementation ("anisotropic fusion") an anisotropic inverse distance weighting (IDW) will be used, which merges both surface models in the point data space by using the FOM map and the local triangulation to derive a quality weight for each of the interpolation points.
The "partial replacement" implementation and the "fusion" prototype for the anisotropic IDW make use of the Open Source projects CGAL (Computational Geometry Algorithms Library), GDAL (Geospatial Data Abstraction Library) and OpenCV (Open Source Computer Vision).
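The triangle-area measure of local point distribution described above can be sketched as follows. The text's implementation uses CGAL with a constrained triangulation; here scipy's (unconstrained) Delaunay triangulation stands in for it, and both acceptance thresholds are invented for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def local_point_quality(points_xy, max_mean_area=4.0, max_area_var=8.0):
    """Triangulate the LiDAR returns inside one ~100 m^2 tile and use the
    triangle areas as a measure of local point density and distribution.
    Returns True when the tile is good enough to replace a bad ADS-DSM cell,
    False when it should be marked NODATA. Thresholds (m^2, m^4) are
    illustrative assumptions, not the NFI production values.
    """
    if len(points_xy) < 3:
        return False
    tri = Delaunay(points_xy)
    p = points_xy[tri.simplices]          # (n_tri, 3, 2) triangle vertices
    # Triangle area from the cross product of two edge vectors.
    e1, e2 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]
    areas = 0.5 * np.abs(e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])
    return areas.mean() <= max_mean_area and areas.var() <= max_area_var

# Hypothetical tile: 0.5 pts/m^2 over a 10 m x 10 m area -> ~50 points.
rng = np.random.default_rng(1)
tile = rng.uniform(0.0, 10.0, size=(50, 2))
print(local_point_quality(tile))
```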
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys using data simulation from a reverse catalytic model assuming a reduction in the seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey of an African population assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by assuming a known or unknown change point. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR. In this situation, sample size is better determined by controlling the precision of SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but those invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
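A minimal Monte Carlo sketch of the power calculation underlying such a calculator, assuming a known change point and a fixed seroreversion rate; the SCR values, ages, and test are illustrative of the general idea, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

RHO = 0.01  # assumed seroreversion rate (per year), fixed for illustration

def seroprev(age, lam0, lam1, tau):
    """Reverse catalytic model: SCR lam0 until the change point tau years
    before sampling, lam1 afterwards; RHO is the seroreversion rate."""
    eq0, eq1 = lam0 / (lam0 + RHO), lam1 / (lam1 + RHO)
    young = eq1 * (1.0 - np.exp(-(lam1 + RHO) * age))            # born after change
    p_at_change = eq0 * (1.0 - np.exp(-(lam0 + RHO) * (age - tau)))
    old = eq1 + (p_at_change - eq1) * np.exp(-(lam1 + RHO) * tau)
    return np.where(age <= tau, young, old)

def neg_loglik(params, age, y, tau, stable):
    lam = np.exp(params)  # log-parametrization keeps rates positive
    lam0, lam1 = (lam[0], lam[0]) if stable else (lam[0], lam[1])
    p = np.clip(seroprev(age, lam0, lam1, tau), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def power(n, lam0=0.1, lam1=0.02, tau=10.0, n_sim=200, alpha=0.05, seed=0):
    """Monte Carlo power of the likelihood-ratio test for a drop in SCR."""
    rng, hits = np.random.default_rng(seed), 0
    for _ in range(n_sim):
        age = rng.uniform(1.0, 60.0, n)
        y = rng.random(n) < seroprev(age, lam0, lam1, tau)
        f0 = minimize(neg_loglik, [np.log(0.05)], args=(age, y, tau, True))
        f1 = minimize(neg_loglik, [np.log(0.05)] * 2, args=(age, y, tau, False))
        hits += 2.0 * (f0.fun - f1.fun) > chi2.ppf(1 - alpha, df=1)
    return hits / n_sim

print(power(n=300))  # power to detect an 80% reduction in SCR with 300 samples
```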
Calculation of Transient Potential Rise on the Wind Turbine Struck by Lightning
Xiaoqing, Zhang
2014-01-01
A circuit model is proposed in this paper for calculating the transient potential rise on a wind turbine struck by lightning. The model integrates the blade, sliding contact site, tower and grounding system of the wind turbine into an equivalent circuit. The lightning current path from the attachment point to the ground can be fully described by the equivalent circuit. The transient potential responses at different positions on the wind turbine are obtained by solving the circuit equations. In order to check the validity of the model, a laboratory measurement was made with a reduced-scale wind turbine. The measured potential waveform is compared with the calculated one, and good agreement is shown between them. The practical applicability of the model is also examined by a numerical example of a 2 MW Chinese-built wind turbine.
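A lumped-element toy version of such a circuit model is sketched below: the potential rise at the attachment point is estimated as V = R_g·i + L·di/dt for a double-exponential stroke current. All component values and current-waveform parameters are illustrative, not those of the paper's equivalent circuit.

```python
import numpy as np

def lightning_potential(t, R_g=10.0, L=20e-6, I0=30e3, a=1.4e4, b=6.0e6):
    """Lumped-element estimate of the potential rise at the attachment point:
    V = R_g * i + L * di/dt, with grounding resistance R_g (ohm), path
    inductance L (H), and a double-exponential stroke current
    i(t) = I0 * (exp(-a t) - exp(-b t)). All values are illustrative."""
    i = I0 * (np.exp(-a * t) - np.exp(-b * t))
    didt = I0 * (-a * np.exp(-a * t) + b * np.exp(-b * t))
    return R_g * i + L * didt

t = np.linspace(0.0, 100e-6, 5)           # first 100 microseconds of the stroke
print(lightning_potential(t) / 1e3)       # potential rise in kV
```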
NASA Astrophysics Data System (ADS)
Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.
2014-02-01
A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.
A DFT study on the failure mechanism of Al2O3 film by various point defects in solution
NASA Astrophysics Data System (ADS)
Zhang, Chuan-Hui; Chen, Bao; Jin, Ying; Sun, Dong-Bai
2018-03-01
Defects on the oxide film surface are very important; they occur when the film is peeled or scratched. Periodic DFT calculations have been performed on the Al2O3 surface to model the influence of various point defects. Three kinds of point-defect surfaces (vacancy, inversion, substitution) are considered, and the dissociation of molecular H2O and the transition states are calculated. The predicted formation energy of an O vacancy is 8.30 eV, whereas that of an Al vacancy is found to be at least 55% larger. On the vacancy point-defect surfaces, H2O molecules above the surface readily undergo chemical reaction, leaving the surfaces hydroxylated. The D-Cl-substitution-Al surface is then corroded, which suggests a Cl-adsorption-induced failure mechanism of the oxide film. Finally, the process of H2O dissociation on the OH-substitution-Al surfaces is discussed along four or five transition paths.
Fast integration-based prediction bands for ordinary differential equation models.
Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel
2016-04-15
To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties in the model's parameters and in turn to uncertainties in predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components, and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org. Supplementary data are available at Bioinformatics online.
Kohn anomalies in momentum dependence of magnetic susceptibility of some three-dimensional systems
NASA Astrophysics Data System (ADS)
Stepanenko, A. A.; Volkova, D. O.; Igoshev, P. A.; Katanin, A. A.
2017-11-01
We study the question of the presence of Kohn points, which yield at low temperatures a nonanalytic momentum dependence of the magnetic susceptibility near its maximum, in the electronic spectra of some three-dimensional systems. In particular, we consider a one-band model on the face-centered cubic lattice with hopping between nearest and next-nearest neighbors, which models some aspects of the dispersion of ZrZn2, and a two-band model on the body-centered cubic lattice, modeling the dispersion of chromium. For the former model, it is shown that Kohn points yielding maxima of the susceptibility exist in a certain (sufficiently wide) region of electronic concentrations; the dependence of the wave vectors corresponding to the maxima on the chemical potential is investigated. For the two-band model, we show the existence of lines of Kohn points, yielding maximum susceptibility, whose position agrees with the results of band structure calculations and experimental data on the wave vector of antiferromagnetism of chromium.
Local and nonlocal order parameters in the Kitaev chain
NASA Astrophysics Data System (ADS)
Chitov, Gennady Y.
2018-02-01
We have calculated order parameters for the phases of the Kitaev chain with interaction and dimerization at a special symmetric point applying the Jordan-Wigner and other duality transformations. We use string order parameters (SOPs) defined via the correlation functions of the Majorana string operators. The SOPs are mapped onto the local order parameters of some dual Hamiltonians and easily calculated. We have shown that the phase diagram of the interacting dimerized chain comprises the phases with the conventional local order as well as the phases with nonlocal SOPs. From the results for the critical indices, we infer the two-dimensional Ising universality class of criticality at the particular symmetry point where the model is exactly solvable.
Carbon Nanotube Field Emission Arrays
2011-06-01
…K, and M [14]. Using the tight-binding energy model, the energy dispersion relations for graphene can be calculated for the triangle formed from… The corresponding reciprocal lattice vectors, b1 and b2, and the Brillouin zone of graphene [14]. …graphene band structure is the six K points where the two bands are degenerate and the Fermi level passes. It has been shown through thorough calculations that at T = 0 K, the density…
Shear modulus of neutron star crust
NASA Astrophysics Data System (ADS)
Baiko, D. A.
2011-09-01
The shear modulus of solid neutron star crust is calculated by the thermodynamic perturbation theory, taking into account ion motion. At a given density, the crust is modelled as a body-centred cubic Coulomb crystal of fully ionized atomic nuclei of one type with a uniform charge-compensating electron background. Classic and quantum regimes of ion motion are considered. The calculations in the classic temperature range agree well with previous Monte Carlo simulations. At these temperatures, the shear modulus is given by the sum of a positive contribution due to the static lattice and a negative ∝ T contribution due to the ion motion. The quantum calculations are performed for the first time. The main result is that at low temperatures the contribution to the shear modulus due to the ion motion saturates at a constant value, associated with zero-point ion vibrations. Such behaviour is qualitatively similar to the zero-point ion motion contribution to the crystal energy. The quantum effects may be important for lighter elements at higher densities, where the ion plasma temperature is not entirely negligible compared to the typical Coulomb ion interaction energy. The results of numerical calculations are approximated by convenient fitting formulae. They should be used for precise neutron star oscillation modelling, a rapidly developing branch of stellar seismology.
On determining dose rate constants spectroscopically
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, M.; Rogers, D. W. O.
2013-01-15
Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of ¹²⁵I and ¹⁰³Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated ¹²⁵I and ¹⁰³Pd sources. Methods: Spectra generated by 14 ¹²⁵I and 6 ¹⁰³Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council on Radiation Protection and Measurements) Report 58 for the ¹²⁵I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for ¹⁰³Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in ¹²⁵I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum. The ¹⁰³Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra, with average discrepancies of 0.9% and 1.7% for the ¹²⁵I and ¹⁰³Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2% in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Conclusions: Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
Zvyagin, V N; Rakitin, V A; Fomina, E E
The objective of the present study was the development of a point-digital model for the scaleless interpretation of the dermatoglyphic papillary patterns on human fingers that would allow the main characteristics of the traits to be comprehensively described in digital terms and the frequency of their inheritance to be quantitatively assessed. A specially developed computer program, D.glyphic. 7-14, was used to mark the dermatoglyphic patterns on the fingerprints obtained from 30 familial triplets (father + mother + child). The values of all the traits studied for kinship diagnostics were found by calculating the ratios of the sums of differences between the traits in the parent-parent pairs to those in the respective parent-child pairs. Algorithms for the point marking of the traits and for reading out the digital information about them have been developed. Traditional dermatoglyphic traits were selected, and novel ones were applied, for use within the framework of the point-digital model for the diagnostics of consanguineous relationship. The present experimental study has demonstrated the high level of inheritance of the selected traits and the possibility of developing algorithms and computation techniques for the calculation of consanguineous relationship coefficients based on these traits.
Dai, Zuyang; Gao, Shuming; Wang, Jia; Mo, Yuxiang
2014-10-14
The torsional energy levels of CH3OH(+), CH3OD(+), and CD3OD(+) have been determined for the first time using one-photon zero kinetic energy photoelectron spectroscopy. The adiabatic ionization energies for CH3OH, CH3OD, and CD3OD are determined as 10.8396, 10.8455, and 10.8732 eV with uncertainties of 0.0005 eV, respectively. Theoretical calculations have also been performed to obtain the torsional energy levels for the three isotopologues using a one-dimensional model with approximate zero-point energy corrections of the torsional potential energy curves. The calculated values are in good agreement with the experimental data. The barrier height of the torsional potential energy without zero-point energy correction was calculated as 157 cm(-1), which is about half of that of the neutral (340 cm(-1)). The calculations showed that the cation has eclipsed conformation at the energy minimum and staggered one at the saddle point, which is the opposite of what is observed in the neutral molecule. The fundamental C-O stretch vibrational energy level for CD3OD(+) has also been determined. The energy levels for the combinational excitation of the torsional vibration and the fundamental C-O stretch vibration indicate a strong torsion-vibration coupling.
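The one-dimensional torsional model mentioned above can be sketched as a hindered rotor diagonalized in a free-rotor basis. The 157 cm⁻¹ barrier is taken from the text; the internal-rotation constant B below is a placeholder for a CH3 top, not the paper's value, and no zero-point correction of the potential curve is attempted.

```python
import numpy as np

def torsional_levels(B, V3, m_max=30, n_levels=6):
    """Eigenvalues (cm^-1) of the hindered-rotor Hamiltonian
    H = B m^2 + (V3/2) (1 - cos 3*theta) in the free-rotor basis exp(i m theta).
    cos 3*theta couples basis states with Delta m = +/-3 with matrix element 1/2,
    so the Hamiltonian matrix is banded."""
    m = np.arange(-m_max, m_max + 1)
    H = np.diag(B * m.astype(float) ** 2 + V3 / 2.0)   # kinetic + constant term
    for i in range(len(m) - 3):                        # <m|cos 3theta|m+3> = 1/2
        H[i, i + 3] = H[i + 3, i] = -V3 / 4.0
    return np.linalg.eigvalsh(H)[:n_levels]

# Illustrative numbers: barrier V3 = 157 cm^-1 from the text; B ~ 7 cm^-1 is a
# placeholder internal-rotation constant, not a value from the paper.
print(torsional_levels(B=7.0, V3=157.0))
```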
Sellers, Michael S; Lísal, Martin; Brennan, John K
2016-03-21
We present an extension of various free-energy methodologies to determine the chemical potential of the solid and liquid phases of a fully-flexible molecule using classical simulation. The methods are applied to the Smith-Bharadwaj atomistic potential representation of cyclotrimethylene trinitramine (RDX), a well-studied energetic material, to accurately determine the solid and liquid phase Gibbs free energies, and the melting point (Tm). We outline an efficient technique to find the absolute chemical potential and melting point of a fully-flexible molecule using one set of simulations to compute the solid absolute chemical potential and one set of simulations to compute the solid-liquid free energy difference. With this combination, only a handful of simulations are needed, whereby the absolute quantities of the chemical potentials are obtained, for use in other property calculations, such as the characterization of crystal polymorphs or the determination of the entropy. Using the LAMMPS molecular simulator, the Frenkel and Ladd and pseudo-supercritical path techniques are adapted to generate 3rd order fits of the solid and liquid chemical potentials. Results yield the thermodynamic melting point Tm = 488.75 K at 1.0 atm. We also validate these calculations and compare this melting point to one obtained from a typical superheated simulation technique.
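Given third-order fits of the solid and liquid chemical potentials, the melting point is the temperature at which the two curves cross; a minimal sketch follows, with invented polynomial coefficients chosen only so that the crossing lands near the reported value.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical cubic fits mu(T) (kcal/mol) for solid and liquid RDX at 1 atm;
# the coefficients are invented for illustration, not the paper's fits.
mu_solid  = np.poly1d([-1.0e-8, -2.0e-5, -0.020, -30.0])
mu_liquid = np.poly1d([-1.0e-8, -2.0e-5, -0.016, -31.956])

# Thermodynamic melting point: mu_solid(Tm) = mu_liquid(Tm).
Tm = brentq(lambda T: mu_solid(T) - mu_liquid(T), 300.0, 700.0)
print(f"Tm = {Tm:.1f} K")   # -> 489.0 K for these made-up coefficients
```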
SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties
NASA Astrophysics Data System (ADS)
Panebianco, Stefano; Dubray, Nöel; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc
2014-04-01
Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, R; Zhu, X; Li, S
Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners' experience and limited planning time. This may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550 cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then calculated an appropriate range with a 95% confidence interval for each parameter obtained, which would be used as the benchmark for evaluation of those parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the bladder and mucosa reference points than at the rectum. Most reference point doses from the verification plans fell within the predicted range, except the doses at two rectum points and two reference position A points (owing to rectal anatomical variations and clinical adjustment of prescription points, respectively). Similar results were obtained for tandem and ring dwell times, despite relatively larger uncertainties. Conclusion: This statistical tool provides an insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and identifying potential planning errors, thus improving the consistency of plan quality.
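A minimal sketch of such a benchmark-range check, using simulated doses from approved plans; a t-based prediction interval for a single new observation is used here as one reasonable reading of the 95% interval described, and all numbers are invented.

```python
import numpy as np
from scipy import stats

def benchmark_range(values, conf=0.95):
    """Acceptable range for one plan parameter (e.g. a bladder point dose in
    cGy) from previously approved plans, using a t-based interval for a
    single future observation (a prediction interval)."""
    v = np.asarray(values, dtype=float)
    n, mean, sd = len(v), v.mean(), v.std(ddof=1)
    t = stats.t.ppf(0.5 + conf / 2.0, df=n - 1)
    half = t * sd * np.sqrt(1.0 + 1.0 / n)
    return mean - half, mean + half

# Hypothetical bladder reference-point doses (cGy) from 33 approved plans.
rng = np.random.default_rng(3)
approved = rng.normal(380.0, 40.0, size=33)
lo, hi = benchmark_range(approved)
new_plan_dose = 505.0
print(f"range = ({lo:.0f}, {hi:.0f}) cGy; flag = {not lo <= new_plan_dose <= hi}")
```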
NASA Astrophysics Data System (ADS)
Hodge, R. A.; Voepel, H.; Leyland, J.; Sear, D. A.; Ahmed, S. I.
2017-12-01
The shear stress at which a grain is entrained is determined by the balance between the applied fluid forces, and the resisting forces of the grain. Recent research has tended to focus on the applied fluid forces; calculating the resisting forces requires measurement of the geometry of in-situ sediment grains which has previously been very difficult to measure. We have used CT scanning to measure the grain geometry of in-situ water-worked grains, and from these data have calculated metrics that are relevant to grain entrainment. We use these metrics to parameterise a new, fully 3D, moment-balance model of grain entrainment. Inputs to the model are grain dimensions, exposed area, elevation relative to the velocity profile, the location of grain-grain contact points, and contact area with fine matrix sediment. The new CT data and model mean that assumptions of previous grain-entrainment models (e.g. spherical grains, 1D measurements of protrusion, entrainment in the downstream direction) are no longer necessary. The model calculates the critical shear stress for each possible set of contact points, and outputs the lowest value. Consequently, metrics including pivot angle and the direction of grain entrainment are now model outputs, rather than having to be pre-determined. We use the CT data and model to calculate the critical shear stress of 1092 in-situ grains from baskets that were buried and water-worked in a flume prior to scanning. We find that there is no consistent relationship between relative grain size (D/D50) and pivot angle, whereas there is a negative relationship between D/D50 and protrusion. Out of all measured metrics, critical shear stress is most strongly controlled by protrusion. This finding suggests that grain-scale topographic data could be used to estimate grain protrusion and hence improve estimates of critical shear stress.
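For illustration, a classical two-dimensional moment balance captures the ingredients the abstract identifies (pivot angle, exposed area, and protrusion entering through the near-bed velocity); this is a simplified stand-in, not the authors' fully 3D multi-contact model.

```python
import numpy as np

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m^3), gravity

def critical_shear_stress(D, pivot_deg, exposed_area, vel_mult=8.5, Cd=0.9):
    """Classical 2D moment balance about a single downstream contact point:
    drag moment = submerged-weight moment. The grain is taken as a sphere of
    diameter D (m); the near-bed velocity is vel_mult * u_star, so protrusion
    enters through vel_mult and exposed_area. All coefficients illustrative.
    """
    V = np.pi * D**3 / 6.0                 # grain volume
    W = (RHO_S - RHO_W) * G * V            # submerged weight
    phi = np.radians(pivot_deg)
    # F_D * (D/2) cos(phi) = W * (D/2) sin(phi), with F_D = 0.5 Cd rho u^2 A_e,
    # u = vel_mult * u_star and tau = rho * u_star^2:
    return 2.0 * W * np.tan(phi) / (Cd * vel_mult**2 * exposed_area)

# Hypothetical 32 mm grain, 55 degree pivot angle, half its cross-section exposed.
D = 0.032
print(critical_shear_stress(D, 55.0, exposed_area=0.5 * np.pi * D**2 / 4))  # ~30 Pa
```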
Xia, Futing; Zhu, Hua
2011-09-01
The alkaline hydrolysis reaction of ethylene phosphate (EP) has been investigated using a supermolecule model in which several explicit water molecules are included. The structures and single-point energies for all of the stationary points are calculated in the gas phase and in solution at the B3LYP/6-31++G(df,p) and MP2/6-311++G(df,2p) levels. The effect of bulk water solvent is introduced by the polarizable continuum model (PCM). Water-attack and hydroxide-attack pathways are taken into account for the alkaline hydrolysis of EP. An associative mechanism is observed for both pathways, with a kinetically insignificant intermediate. The water-attack pathway involves a water molecule attacking and a proton transfer from the attacking water to the hydroxide in the first step, followed by an endocyclic bond cleavage to the leaving group, whereas in the first step of the hydroxide-attack pathway the nucleophile is the hydroxide anion. The calculated barriers in aqueous solution for the water-attack and hydroxide-attack pathways are both about 22 kcal/mol. The excellent agreement between the calculated and observed values demonstrates that both pathways are possible for the alkaline hydrolysis of EP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verzhbitskiy, I. A.; Chrysos, M.; Kouzov, A. P.
2010-11-15
Collision-induced Raman bandshapes and zeroth-order spectral moments are calculated both for the depolarized spectrum and for the extremely weak isotropic spectrum of the SF₆(ν₁) + N₂(ν₁) double-Raman-scattering band. A critical comparison is made with experiments conducted recently by the authors [Phys. Rev. A 81, 012702 (2010); 81, 042705 (2010)]. The study of this transition, hitherto restricted to the model framework of two point-polarizable molecules, is now completed to incorporate effects beyond the point-molecule approximation. Whereas the extended model offers a few percent improvement in the depolarized spectrum, it reveals a huge 80% increase in the isotropic spectrum and its moment, owing essentially to the polarizability anisotropy of N₂. For both spectra, agreement between quantum-mechanical calculations and our experiments is found, provided that the best ab initio data for the (hyper)polarizability parameters are used. This refined study shows clearly the need to include all mechanisms and data to a high level of accuracy and allows one to decide between alternatives on difficult and controversial issues such as the intermolecular potential or the sensitive Hamaker force constants.
Chapela, Gustavo A; Guzmán, Orlando; Díaz-Herrera, Enrique; del Río, Fernando
2015-04-21
A model of a room temperature ionic liquid can be represented as an ion attached to an aliphatic chain mixed with a counter ion. The simple model used in this work is based on a short rigid tangent square well chain with an ion, represented by a hard sphere interacting with a Yukawa potential at the head of the chain, mixed with a counter ion represented as well by a hard sphere interacting with a Yukawa potential of the opposite sign. The length of the chain and the depth of the intermolecular forces are investigated in order to understand which of these factors are responsible for the lowering of the critical temperature. It is the large difference between the ionic and the dispersion potentials which explains this lowering of the critical temperature. Calculation of liquid-vapor equilibrium orthobaric curves is used to estimate the critical points of the model. Vapor pressures are used to obtain an estimate of the triple point of the different models in order to calculate the span of temperatures where they remain a liquid. Surface tensions and interfacial thicknesses are also reported.
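The ingredients of the model, a hard-sphere Yukawa interaction of either sign for the ionic head and counterion plus square-well segments for the tangent chain, can be sketched as pair potentials in reduced units; the screening length, well width, and depths below are illustrative choices.

```python
import numpy as np

def yukawa_hs(r, sigma=1.0, eps=1.0, kappa=1.8, sign=+1):
    """Hard sphere plus screened-Coulomb (Yukawa) tail in reduced units:
    u(r) = inf for r < sigma, otherwise
    sign * eps * sigma * exp(-kappa (r - sigma)) / r.
    sign = +1 repels like ions, sign = -1 attracts counterions.
    kappa and eps are illustrative, not fitted values."""
    r = np.asarray(r, dtype=float)
    u = sign * eps * sigma * np.exp(-kappa * (r - sigma)) / r
    return np.where(r < sigma, np.inf, u)

def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Square-well segment-segment interaction for the tangent rigid chain:
    hard core for r < sigma, well of depth -eps out to lam*sigma, zero beyond."""
    r = np.asarray(r, dtype=float)
    return np.where(r < sigma, np.inf, np.where(r < lam * sigma, -eps, 0.0))

r = np.linspace(0.9, 3.0, 5)
print(yukawa_hs(r, sign=-1))   # ionic head / counterion attraction
print(square_well(r))          # aliphatic tail segments
```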
Four pi calibration and modeling of a bare germanium detector in a cylindrical field source
NASA Astrophysics Data System (ADS)
Dewberry, R. A.; Young, J. E.
2012-05-01
In this paper we describe a 4π cylindrical field acquisition configuration surrounding a bare (unshielded, uncollimated) high purity germanium detector. We perform an efficiency calibration with a flexible planar source and model the configuration in the 4π cylindrical field. We then use exact calculus to model the flux on the cylindrical sides and end faces of the detector. We demonstrate that the model accurately represents the experimental detection efficiency compared to that of a point source and to Monte Carlo N-particle (MCNP) calculations of the flux. The model sums over the entire source surface area and the entire detector surface area including both faces and the detector's cylindrical sides. Agreement between the model and both experiment and the MCNP calculation is within 8%.
MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.E.; Baker, M.C.
1999-07-25
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP) predict neutron detector response without using the pointmore » reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.« less
Testing the Simple Biosphere model (SiB) using point micrometeorological and biophysical data
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Dorman, J. L.
1987-01-01
The suitability of the Simple Biosphere (SiB) model of Sellers et al. (1986) for calculation of the surface fluxes for use within general circulation models is assessed. The structure of the SiB model is described, and its performance is evaluated in terms of its ability to realistically and accurately simulate biophysical processes over a number of test sites, including Ruthe (Germany), South Carolina (U.S.), and Central Wales (UK), for which point biophysical and micrometeorological data were available. The model produced simulations of the energy balances of barley, wheat, maize, and Norway Spruce sites over periods ranging from 1 to 40 days. Generally, it was found that the model reproduced time series of latent, sensible, and ground-heat fluxes and surface radiative temperature comparable with the available data.
Radiation absorbed dose to bladder walls from positron emitters in the bladder content.
Powell, G F; Chen, C T
1987-01-01
A method to calculate absorbed doses at depths in the walls of a static spherical bladder from a positron emitter in the bladder content has been developed. The beta ray dose component is calculated for a spherical model by employing the solutions to the integration of Loevinger and Bochkarev point source functions over line segments and a line segment source array technique. The gamma ray dose is determined using the specific gamma ray constant. As an example, absorbed radiation doses to the bladder walls from F-18 in the bladder content are presented for static spherical bladder models having radii of 2.0 and 3.5 cm, respectively. Experiments with ultra-thin thermoluminescent dosimeters (TLD's) were performed to verify the results of the calculations. Good agreement between TLD measurements and calculations was obtained.
Optical model calculations of heavy-ion target fragmentation
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Norbury, J. W.
1986-01-01
The fragmentation of target nuclei by relativistic protons and heavy ions is described within the context of a simple abrasion-ablation-final-state interaction model. Abrasion is described by a quantum mechanical formalism utilizing an optical model potential approximation. Nuclear charge distributions of the excited prefragments are calculated by both a hypergeometric distribution and a method based upon the zero-point oscillations of the giant dipole resonance. Excitation energies are estimated from the excess surface energy resulting from the abrasion process and the additional energy deposited by frictional spectator interactions of the abraded nucleons. The ablation probabilities are obtained from the EVA-3 computer program. Isotope production cross sections for the spallation of copper targets by relativistic protons and for the fragmenting of carbon targets by relativistic carbon, neon, and iron projectiles are calculated and compared with available experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos-Mendez, J; Faddegon, B; Paganetti, H
2015-06-15
Purpose: We used TOPAS (TOPAS wraps and extends Geant4 for medical physicists) to compare Geant4 physics models with published data for neutron shielding calculations. Subsequently, we calculated the source terms and attenuation lengths (shielding data) of the total ambient dose equivalent (TADE) in concrete for neutrons produced by protons in brass. Methods: Stage1: The Bertini and Binary nuclear models available in Geant4 were compared with published attenuation at depth of the TADE in concrete and iron. Stage2: Shielding data of the TADE in concrete was calculated for 50– 200 MeV proton beams on brass. Stage3: Shielding data from Stage2 wasmore » extrapolated for 235 MeV proton beams. This data was used in a point-line-source analytical model to calculate the ambient dose per unit therapeutic dose at two locations inside one treatment room at the Francis H Burr Proton Therapy Center. Finally, we compared these results with experimental data and full TOPAS simulations. Results: At larger angles (∼130o) the TADE in concrete calculated with the Bertini model was about 9 times larger than that calculated with the Binary model. The attenuation length in concrete calculated with the Binary model agreed with published data within 7%±0.4% (statistical uncertainty) for the deepest regions and 5%±0.1% for shallower regions. For iron the agreement was within 3%±0.1%. The ambient dose per therapeutic dose calculated with the Binary model, relative to the experimental data, was a ratio of 0.93±0.16 and 1.23±0.24 for two locations. The analytical model overestimated the dose by four orders of magnitude. These differences are attributed to the complexity of the geometry. Conclusion: The Binary and Bertini models gave comparable results, with the Binary model giving the best agreement with published data at large angle. Shielding data we calculated using the Binary model is useful for fast shielding calculations with other analytical models. This work was supported by National Cancer Institute Grant R01CA140735.« less
The effect of Reynolds number and turbulence on airfoil aerodynamics at -90 degrees incidence
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1993-01-01
A method has been developed for calculating the viscous flow about airfoils in with and without deflected flaps at -90 deg incidence. This method provides for the solution of the unsteady incompressible Navier-Stokes equations by means of an implicit technique. The solution is calculated on a body-fitted computational mesh using a staggered grid method. The vorticity is defined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and the continuity equation at the mesh-cell centers. The method provides for the direct solution of the flow field and satisfies the continuity equation to machine zero at each time-step. The method is evaluated in terms of its ability to predict two-dimensional flow about an airfoil at -90 degrees incidence for varying Reynolds number and different boundary layer models. A laminar and a turbulent boundary layer models. A laminar and a turbulent boundary layer model are considered in the evaluation of the method. The variation of the average loading and surface pressure distribution due to flap deflection, Reynolds number, and laminar or turbulent flow are presented and compared with experimental results. The comparisons indicate that the calculated drag and drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results at a similar Reynolds number.
NASA Astrophysics Data System (ADS)
Elwood, Teri; Simmons-Potter, Kelly
2017-08-01
Quantification of the effect of temperature on photovoltaic (PV) module efficiency is vital to the correct interpretation of PV module performance under varied environmental conditions. However, previous work has demonstrated that PV module arrays in the field are subject to significant location-based temperature variations associated with, for example, local heating/cooling and array edge effects. Such thermal non-uniformity can potentially lead to under-prediction or over-prediction of PV array performance due to an incorrect interpretation of individual module temperature de-rating. In the current work, a simulated method for modeling the thermal profile of an extended PV array has been investigated through extensive computational modeling utilizing ANSYS, a high-performance computational fluid dynamics (CFD) software tool. Using the local wind speed as an input, simulations were run to determine the velocity at particular points along modular strings corresponding to the locations of temperature sensors along strings in the field. The point velocities were utilized along with laminar flow theories in order to calculate Nusselt's number for each point. These calculations produced a heat flux profile which, when combined with local thermal and solar radiation profiles, were used as inputs in an ANSYS Thermal Transient model that generated a solar string operating temperature profile. A comparison of the data collected during field testing, and the data fabricated by ANSYS simulations, will be discussed in order to authenticate the accuracy of the model.
NASA Astrophysics Data System (ADS)
Elkhateeb, Esraa
2018-01-01
We consider a cosmological model based on a generalization of the equation of state proposed by Nojiri and Odintsov (2004) and Štefančić (2005, 2006). We argue that this model works as a dark fluid model which can interpolate between dust equation of state and the dark energy equation of state. We show how the asymptotic behavior of the equation of state constrained the parameters of the model. The causality condition for the model is also studied to constrain the parameters and the fixed points are tested to determine different solution classes. Observations of Hubble diagram of SNe Ia supernovae are used to further constrain the model. We present an exact solution of the model and calculate the luminosity distance and the energy density evolution. We also calculate the deceleration parameter to test the state of the universe expansion.
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Ramsey, J.; Moser, A.
1975-01-01
A very general method for calculating compressible three-dimensional laminar and turbulent boundary layers on arbitrary wings is described. The method utilizes a nonorthogonal coordinate system for the boundary-layer calculations and includes a geometry package that represents the wing analytically. In the calculations all the geometric parameters of the coordinate system are accounted for. The Reynolds shear-stress terms are modeled by an eddy-viscosity formulation developed by Cebeci. The governing equations are solved by a very efficient two-point finite-difference method used earlier by Keller and Cebeci for two-dimensional flows and later by Cebeci for three-dimensional flows.
Wavelength dependence in radio-wave scattering and specular-point theory
NASA Technical Reports Server (NTRS)
Tyler, G. L.
1976-01-01
Radio-wave scattering from natural surfaces contains a strong quasispecular component that at fixed wavelengths is consistent with specular-point theory, but often has a strong wavelength dependence that is not predicted by physical optics calculations under the usual limitations of specular-point models. Wavelength dependence can be introduced by a physical approximation that preserves the specular-point assumptions with respect to the radii of curvature of a fictitious, effective scattering surface obtained by smoothing the actual surface. A uniform low-pass filter model of the scattering process yields explicit results for the effective surface roughness versus wavelength. Interpretation of experimental results from planetary surfaces indicates that the asymptotic surface height spectral densities fall at least as fast as an inverse cube of spatial frequency. Asymptotic spectral densities for Mars and portions of the lunar surface evidently decrease more rapidly.
FATE 5: A natural attenuation calibration tool for groundwater fate and transport modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nevin, J.P.; Connor, J.A.; Newell, C.J.
1997-12-31
A new groundwater attenuation modeling tool (FATE 5) has been developed to assist users with determining site-specific natural attenuation rates for organic constituents dissolved in groundwater. FATE 5 is based on and represents an enhancement to the Domenico analytical groundwater transport model. These enhancements include use of an optimization routine to match results from the Domenico model to actual measured site concentrations, an extensive database of chemical property data, and calculation of an estimate of the length of time needed for a plume to reach steady state conditions. FATE 5 was developed in Microsoft{reg_sign} Excel and is controlled by meansmore » of a simple, user-friendly graphical interface. Using the Solver routine built into Excel, FATE 5 is able to calibrate the attenuation rate used by the Domenico model to match site-specific data. By calibrating the decay rate to site-specific measurements, FATE 5 can yield accurate predictions of long-term natural attenuation processes within a groundwater within a groundwater plume. In addition, FATE 5 includes a formulation of the transient Domenico solution used to help the user determine if the steady-state assumptions employed by the model are appropriate. The calibrated groundwater flow model can then be used either to (i) predict upper-bound constituent concentrations in groundwater, based on an observed source zone concentration, or (ii) back-calculate a lower-bound SSTL value, based on a user-specified exposure point concentration at the groundwater point of exposure (POE). This paper reviews the major elements of the FATE 5 model - and gives results for real-world applications. Key modeling assumptions and summary guidelines regarding calculation procedures and input parameter selection are also addressed.« less
Ohno, Shotaro; Takahashi, Kana; Inoue, Aimi; Takada, Koki; Ishihara, Yoshiaki; Tanigawa, Masaru; Hirao, Kazuki
2017-12-01
This study aims to examine the smallest detectable change (SDC) and test-retest reliability of the Center for Epidemiologic Studies Depression Scale (CES-D), General Self-Efficacy Scale (GSES), and 12-item General Health Questionnaire (GHQ-12). We tested 154 young adults at baseline and 2 weeks later. We calculated the intra-class correlation coefficients (ICCs) for test-retest reliability with a two-way random effects model for agreement. We then calculated the standard error of measurement (SEM) for agreement using the ICC formula. The SEM for agreement was used to calculate SDC values at the individual level (SDC ind ) and group level (SDC group ). The study participants included 137 young adults. The ICCs for all self-reported outcome measurement scales exceeded 0.70. The SEM of CES-D was 3.64, leading to an SDC ind of 10.10 points and SDC group of 0.86 points. The SEM of GSES was 1.56, leading to an SDC ind of 4.33 points and SDC group of 0.37 points. The SEM of GHQ-12 with bimodal scoring was 1.47, leading to an SDC ind of 4.06 points and SDC group of 0.35 points. The SEM of GHQ-12 with Likert scoring was 2.44, leading to an SDC ind of 6.76 points and SDC group of 0.58 points. To confirm that the change was not a result of measurement error, a score of self-reported outcome measurement scales would need to change by an amount greater than these SDC values. This has important implications for clinicians and epidemiologists when assessing outcomes. © 2017 John Wiley & Sons, Ltd.
Butlitsky, M A; Zelener, B B; Zelener, B V
2014-07-14
A two-component plasma model, which we called a "shelf Coulomb" model has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamics properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to classical two-component (electron-proton) model where charges with zero size interact via a classical Coulomb law. With important difference for interaction of opposite charges: electrons and protons interact via the Coulomb law for large distances between particles, while interaction potential is cut off on small distances. The cut off distance is defined by an arbitrary ɛ parameter, which depends on system temperature. All the thermodynamics properties of the model depend on dimensionless parameters ɛ and γ = βe(2)n(1/3) (where β = 1/kBT, n is the particle's density, kB is the Boltzmann constant, and T is the temperature) only. In addition, it has been shown that the virial theorem works in this model. All the calculations were carried over a wide range of dimensionless ɛ and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of a model system. The system is observed to undergo a first order gas-liquid type phase transition with the critical point being in the vicinity of ɛ(crit) ≈ 13(T(*)(crit) ≈ 0.076), γ(crit) ≈ 1.8(v(*)(crit) ≈ 0.17), P(*)(crit) ≈ 0.39, where specific volume v* = 1/γ(3) and reduced temperature T(*) = ɛ(-1).
Bi-local holography in the SYK model: Perturbations
Jevicki, Antal; Suzuki, Kenta
2016-11-08
We continue the study of the Sachdev-Ye-Kitaev model in the Large N limit. Following our formulation in terms of bi-local collective fields with dynamical reparametrization symmetry, we perform perturbative calculations around the conformal IR point. As a result, these are based on an ε expansion which allows for analytical evaluation of correlators and finite temperature quantities.
N-point functions in rolling tachyon background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jokela, Niko; Keski-Vakkuri, Esko; Department of Physics, P.O. Box 64, FIN-00014, University of Helsinki
2009-04-15
We study n-point boundary correlation functions in timelike boundary Liouville theory, relevant for open string multiproduction by a decaying unstable D brane. We give an exact result for the one-point function of the tachyon vertex operator and show that it is consistent with a previously proposed relation to a conserved charge in string theory. We also discuss when the one-point amplitude vanishes. Using a straightforward perturbative expansion, we find an explicit expression for a tachyon n-point amplitude for all n, however the result is still a toy model. The calculation uses a new asymptotic approximation for Toeplitz determinants, derived bymore » relating the system to a Dyson gas at finite temperature.« less
Ma, Songling; Hwang, Sungbo; Lee, Sehan; Acree, William E; No, Kyoung Tai
2018-04-23
To describe the physically realistic solvation free energy surface of a molecule in a solvent, a generalized version of the solvation free energy density (G-SFED) calculation method has been developed. In the G-SFED model, the contribution from the hydrogen bond (HB) between a solute and a solvent to the solvation free energy was calculated as the product of the acidity of the donor and the basicity of the acceptor of an HB pair. The acidity and basicity parameters of a solute were derived using the summation of acidities and basicities of the respective acidic and basic functional groups of the solute, and that of the solvent was experimentally determined. Although the contribution of HBs to the solvation free energy could be evenly distributed to grid points on the surface of a molecule, the G-SFED model was still inadequate to describe the angle dependency of the HB of a solute with a polarizable continuum solvent. To overcome this shortcoming of the G-SFED model, the contribution of HBs was formulated using the geometric parameters of the grid points described in the HB coordinate system of the solute. We propose an HB angle dependency incorporated into the G-SFED model, i.e., the G-SFED-HB model, where the angular-dependent acidity and basicity densities are defined and parametrized with experimental data. The G-SFED-HB model was then applied to calculate the solvation free energies of organic molecules in water, various alcohols and ethers, and the log P values of diverse organic molecules, including peptides and a protein. Both the G-SFED model and the G-SFED-HB model reproduced the experimental solvation free energies with similar accuracy, whereas the distributions of the SFED on the molecular surface calculated by the G-SFED and G-SFED-HB models were quite different, especially for molecules having HB donors or acceptors. Since the angle dependency of HBs was included in the G-SFED-HB model, the SFED distribution of the G-SFED-HB model is well described as compared to that of the G-SFED model.
Generalized contact and improved frictional heating in the material point method
NASA Astrophysics Data System (ADS)
Nairn, J. A.; Bardenhagen, S. G.; Smith, G. D.
2017-09-01
The material point method (MPM) has proved to be an effective particle method for computational mechanics modeling of problems involving contact, but all prior applications have been limited to Coulomb friction. This paper generalizes the MPM approach for contact to handle any friction law with examples given for friction with adhesion or with a velocity-dependent coefficient of friction. Accounting for adhesion requires an extra calculation to evaluate contact area. Implementation of velocity-dependent laws usually needs numerical methods to find contacting forces. The friction process involves work which can be converted into heat. This paper provides a new method for calculating frictional heating that accounts for interfacial acceleration during the time step. The acceleration terms is small for many problems, but temporal convergence of heating effects for problems involving vibrations and high contact forces is improved by the new method. Fortunately, the new method needs few extra calculations and therefore is recommended for all simulations.
Generalized contact and improved frictional heating in the material point method
NASA Astrophysics Data System (ADS)
Nairn, J. A.; Bardenhagen, S. G.; Smith, G. D.
2018-07-01
The material point method (MPM) has proved to be an effective particle method for computational mechanics modeling of problems involving contact, but all prior applications have been limited to Coulomb friction. This paper generalizes the MPM approach for contact to handle any friction law with examples given for friction with adhesion or with a velocity-dependent coefficient of friction. Accounting for adhesion requires an extra calculation to evaluate contact area. Implementation of velocity-dependent laws usually needs numerical methods to find contacting forces. The friction process involves work which can be converted into heat. This paper provides a new method for calculating frictional heating that accounts for interfacial acceleration during the time step. The acceleration terms is small for many problems, but temporal convergence of heating effects for problems involving vibrations and high contact forces is improved by the new method. Fortunately, the new method needs few extra calculations and therefore is recommended for all simulations.
Calculation of the wetting parameter from a cluster model in the framework of nanothermodynamics.
García-Morales, V; Cervera, J; Pellicer, J
2003-06-01
The critical wetting parameter omega(c) determines the strength of interfacial fluctuations in critical wetting transitions. In this Brief Report, we calculate omega(c) from considerations on critical liquid clusters inside a vapor phase. The starting point is a cluster model developed by Hill and Chamberlin in the framework of nanothermodynamics [Proc. Natl. Acad. Sci. USA 95, 12779 (1998)]. Our calculations yield results for omega(c) between 0.52 and 1.00, depending on the degrees of freedom considered. The findings are in agreement with previous experimental results and give an idea of the universal dynamical behavior of the clusters when approaching criticality. We suggest that this behavior is a combination of translation and vortex rotational motion (omega(c)=0.84).
Cooper, Michael William D.; Fitzpatrick, M. E.; Tsoukalas, L. H.; ...
2016-06-06
ThO 2 is a candidate material for use in nuclear fuel applications and as such it is important to investigate its materials properties over a range of temperatures and pressures. In the present study molecular dynamics calculations are used to calculate elastic and expansivity data. These are used in the framework of a thermodynamic model, the cBΩ model, to calculate the oxygen self-diffusion coefficient in ThO 2 over a range of pressures (–10–10 GPa) and temperatures (300–1900 K). As a result, increasing the hydrostatic pressure leads to a significant reduction in oxygen self-diffusion. Conversely, negative hydrostatic pressure significantly enhances oxygenmore » self-diffusion.« less
System and method for measuring residual stress
Prime, Michael B.
2002-01-01
The present invention is a method and system for determining the residual stress within an elastic object. In the method, an elastic object is cut along a path having a known configuration. The cut creates a portion of the object having a new free surface. The free surface then deforms to a contour which is different from the path. Next, the contour is measured to determine how much deformation has occurred across the new free surface. Points defining the contour are collected in an empirical data set. The portion of the object is then modeled in a computer simulator. The points in the empirical data set are entered into the computer simulator. The computer simulator then calculates the residual stress along the path which caused the points within the object to move to the positions measured in the empirical data set. The calculated residual stress is then presented in a useful format to an analyst.
Development of a coordinate measuring machine (CMM) touch probe using a multi-axis force sensor
NASA Astrophysics Data System (ADS)
Park, Jae-jun; Kwon, Kihwan; Cho, Nahmgyoo
2006-09-01
Traditional touch trigger probes are widely used on most commercial coordinate measuring machines (CMMs). However, the CMMs with these probes have a systematic error due to the shape of the probe tip and elastic deformation of the stylus resulting from contact pressure with the specimen. In this paper, a new touch probe with a three degrees-of-freedom force sensor is proposed. From relationships between an obtained contact force vector and the geometric shape of the probe, it is possible to calculate the coordinates of the exact probe-specimen contact points. An empirical model of the probe is applied to calculate the coordinates of the contact points and the amount of pretravel. With the proposed probing system, the measuring error induced by the indeterminateness of the probe-specimen contact point and the pretravel can be estimated and compensated for successfully.
An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter
NASA Astrophysics Data System (ADS)
Chang, M.; Kang, Z.
2017-09-01
Based on the frame of ORB-SLAM in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which priori information matrix and information vector are calculated. The motion update of multi-feature extended information filter is then realized. According to the point cloud data formed by depth image, ICP algorithm was used to extract the point features of the point cloud data in the scene and built an observation model while calculating a-posteriori information matrix and information vector, and weakening the influences caused by the error accumulation in the positioning process. Furthermore, this paper applied ORB-SLAM frame to realize autonomous positioning in real time in interior unknown environment. In the end, Lidar was used to get data in the scene in order to estimate positioning accuracy put forward in this paper.
SU-E-T-276: Dose Calculation Accuracy with a Standard Beam Model for Extended SSD Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisling, K; Court, L; Kirsner, S
2015-06-15
Purpose: While most photon treatments are delivered near 100cm SSD or less, a subset of patients may benefit from treatment at SSDs greater than 100cm. A proposed rotating chair for upright treatments would enable isocentric treatments at extended SSDs. The purpose of this study was to assess the accuracy of the Pinnacle{sup 3} treatment planning system dose calculation for standard beam geometries delivered at extended SSDs with a beam model commissioned at 100cm SSD. Methods: Dose to a water phantom at 100, 110, and 120cm SSD was calculated with the Pinnacle {sup 3} CC convolve algorithm for 6x beams formore » 5×5, 10×10, 20×20, and 30×30cm{sup 2} field sizes (defined at the water surface for each SSD). PDDs and profiles (depths of 1.5, 12.5, and 22cm) were compared to measurements in water with an ionization chamber. Point-by-point agreement was analyzed, as well as agreement in field size defined by the 50% isodose. Results: The deviations of the calculated PDDs from measurement, analyzed from depth of maximum dose to 23cm, were all within 1.3% for all beam geometries. In particular, the calculated PDDs at 10cm depth were all within 0.7% of measurement. For profiles, the deviations within the central 80% of the field were within 2.2% for all geometries. The field sizes all agreed within 2mm. Conclusion: The agreement of the PDDs and profiles calculated by Pinnacle3 for extended SSD geometries were within the acceptability criteria defined by Van Dyk (±2% for PDDs and ±3% for profiles). The accuracy of the calculation of more complex beam geometries at extended SSDs will be investigated to further assess the feasibility of using a standard beam model commissioned at 100cm SSD in Pinnacle3 for extended SSD treatments.« less
Radiation pattern of a borehole radar antenna
Ellefsen, K.J.; Wright, D.L.
2002-01-01
To understand better how a borehole antenna radiates radar waves into a formation, this phenomenon is simulated numerically using the finite-difference, time-domain method. The simulations are of two different antenna models that include features like a driving point fed by a coaxial cable, resistive loading of the antenna, and a water-filled borehole. For each model, traces are calculated in the far-field region, and then, from these traces, radiation patterns are calculated. The radiation patterns show that the amplitude of the radar wave is strongly affected by its frequency, its propagation direction, and the resistive loading of the antenna.
Monte Carlo calculations of lunar regolith thickness distributions.
NASA Technical Reports Server (NTRS)
Oberbeck, V. R.; Quaide, W. L.; Mahan, M.; Paulson, J.
1973-01-01
It is pointed out that none of the existing models of lunar regolith evolution take into account the relationship between regolith thickness, crater shape, and volume of debris ejected. The results of a Monte Carlo computer simulation of regolith evolution are presented. The simulation was designed to consider the full effect of the buffering regolith through calculation of the amount of debris produced by any given crater as a function of the amount of debris present at the site of the crater at the time of crater formation. The method is essentially an improved version of the Oberbeck and Quaide (1968) model.
The quadrupole model for rigid-body gravity simulations
NASA Astrophysics Data System (ADS)
Dobrovolskis, Anthony R.; Korycansky, D. G.
2013-07-01
We introduce two new models for gravitational simulations of systems of non-spherical bodies, such as comets and asteroids. In both models, one body (the "primary") may be represented by any convenient means, to arbitrary accuracy. In our first model, all of the other bodies are represented by small gravitational "molecules" consisting of a few point masses, rigidly linked together. In our second model, all of the other bodies are treated as point quadrupoles, with gravitational potentials including spherical harmonic terms up to the third degree (rather than only the first degree, as for ideal spheres or point masses). This quadrupole formulation may be regarded as a generalization of MacCullagh's approximation. Both models permit the efficient calculation of the interaction energy, the force, and the torque acting on a small body in an arbitrary external gravitational potential. We test both models for the cases of a triaxial ellipsoid, a rectangular parallelepiped, and "duplex" combinations of two spheres, all in a point-mass potential. These examples were chosen in order to compare the accuracy of our technique with known analytical results, but the ellipsoid and duplex are also useful models for comets and asteroids. We find that both approaches show significant promise for more efficient gravitational simulations of binary asteroids, for example. An appendix also describes the duplex model in detail.
Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O
1997-10-13
In the present work a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed, a linear model OD = b0 + b1.log(titer) and a nonlinear log(titer) = alpha.OD beta, similar to the Dopotka-Giesendorf's model. The above two proposed models adequately fit the dependence of the optical density values at a single point dilution, and titers achieved by the end point dilution method (EPDM). Nevertheless, the nonlinear model better fits the experimental data, according to residuals analysis. Classical EPDM was compared with the new single point dilution method (SPDM) using both models. The best correlation between titers calculated using both models and titers achieved by EPDM was obtained with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model and this reduced the day-to-day variation of titer values. In general, SPDM saves time, reagents and is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method which improve the results of one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective to improve the spatial resolution of a two-dimensional reconstruction of topographic image obtained with larger interval of measurement points. Near-infrared topography with the reconstruction method potentially obtains an accurate distribution of absorption change in the brain even if the size of absorption change is less than 10 mm.
A dynamic model for tumour growth and metastasis formation.
Haustein, Volker; Schumacher, Udo
2012-07-05
A simple and fast computational model to describe the dynamics of tumour growth and metastasis formation is presented. The model is based on the calculation of successive generations of tumour cells and enables one to describe biologically important entities like tumour volume, time point of 1st metastatic growth or number of metastatic colonies at a given time. The model entirely relies on the chronology of these successive events of the metastatic cascade. The simulation calculations were performed for two embedded growth models to describe the Gompertzian like growth behaviour of tumours. The initial training of the models was carried out using an analytical solution for the size distribution of metastases of a hepatocellular carcinoma. We then show the applicability of our models to clinical data from the Munich Cancer Registry. Growth and dissemination characteristics of metastatic cells originating from cells in the primary breast cancer can be modelled thus showing its ability to perform systematic analyses relevant for clinical breast cancer research and treatment. In particular, our calculations show that generally metastases formation has already been initiated before the primary can be detected clinically.
A dynamic model for tumour growth and metastasis formation
2012-01-01
A simple and fast computational model to describe the dynamics of tumour growth and metastasis formation is presented. The model is based on the calculation of successive generations of tumour cells and enables one to describe biologically important entities like tumour volume, time point of 1st metastatic growth or number of metastatic colonies at a given time. The model entirely relies on the chronology of these successive events of the metastatic cascade. The simulation calculations were performed for two embedded growth models to describe the Gompertzian like growth behaviour of tumours. The initial training of the models was carried out using an analytical solution for the size distribution of metastases of a hepatocellular carcinoma. We then show the applicability of our models to clinical data from the Munich Cancer Registry. Growth and dissemination characteristics of metastatic cells originating from cells in the primary breast cancer can be modelled thus showing its ability to perform systematic analyses relevant for clinical breast cancer research and treatment. In particular, our calculations show that generally metastases formation has already been initiated before the primary can be detected clinically. PMID:22548735
A reflection model for eclipsing binary stars
NASA Technical Reports Server (NTRS)
Wood, D. B.
1973-01-01
A highly accurate reflection model has been developed which emphasizes efficiency of computer calculation. It is assumed that the heating of the irradiated star must depend upon the following properties of the irradiating star: (1) effective temperature; (2) apparent area as seen from a point on the surface of the irradiated star; (3) limb darkening; and (4) zenith distance of the apparent centre as seen from a point on the surface of the irradiated star. The algorithm eliminates the need to integrate over the irradiating star while providing a highly accurate representation of the integrated bolometric flux, even for gravitationally distorted stars.
Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.
Hedin, Emma; Bäck, Anna
2013-09-06
Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.
Entanglement in a model for Hawking radiation: An application of quadratic algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bambah, Bindu A., E-mail: bbsp@uohyd.ernet.in; Mukku, C., E-mail: mukku@iiit.ac.in; Shreecharan, T., E-mail: shreecharan@gmail.com
2013-03-15
Quadratic polynomially deformed su(1,1) and su(2) algebras are utilized in model Hamiltonians to show how the gravitational system consisting of a black hole, infalling radiation and outgoing (Hawking) radiation can be solved exactly. The models allow us to study the long-time behaviour of the black hole and its outgoing modes. In particular, we calculate the bipartite entanglement entropies of subsystems consisting of (a) infalling plus outgoing modes and (b) black hole modes plus the infalling modes, using the Janus-faced nature of the model. The long-time behaviour also gives us glimpses of modifications in the character of Hawking radiation. Finally, wemore » study the phenomenon of superradiance in our model in analogy with atomic Dicke superradiance. - Highlights: Black-Right-Pointing-Pointer We examine a toy model for Hawking radiation with quantized black hole modes. Black-Right-Pointing-Pointer We use quadratic polynomially deformed su(1,1) algebras to study its entanglement properties. Black-Right-Pointing-Pointer We study the 'Dicke Superradiance' in black hole radiation using quadratically deformed su(2) algebras. Black-Right-Pointing-Pointer We study the modification of the thermal character of Hawking radiation due to quantized black hole modes.« less
Acoustic intensity calculations for axisymmetrically modeled fluid regions
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.; Everstine, Gordon C.
1992-01-01
An algorithm for calculating acoustic intensities from a time harmonic pressure field in an axisymmetric fluid region is presented. Acoustic pressures are computed in a mesh of NASTRAN triangular finite elements of revolution (TRIAAX) using an analogy between the scalar wave equation and elasticity equations. Acoustic intensities are then calculated from pressures and pressure derivatives taken over the mesh of TRIAAX elements. Intensities are displayed as vectors indicating the directions and magnitudes of energy flow at all mesh points in the acoustic field. A prolate spheroidal shell is modeled with axisymmetric shell elements (CONEAX) and submerged in a fluid region of TRIAAX elements. The model is analyzed to illustrate the acoustic intensity method and the usefulness of energy flow paths in the understanding of the response of fluid-structure interaction problems. The structural-acoustic analogy used is summarized for completeness. This study uncovered a NASTRAN limitation involving numerical precision issues in the CONEAX stiffness calculation causing large errors in the system matrices for nearly cylindrical cones.
Yoon, Yongjin; Puria, Sunil; Steele, Charles R
2009-09-05
In our previous work, the basilar membrane velocity V(BM) for a gerbil cochlea was calculated and compared with physiological measurements. The calculated V(BM) showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the V(BM) for the gerbil cochlea was calculated and compared with animal measurements, The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase.
YOON, YONGJIN; PURIA, SUNIL; STEELE, CHARLES R.
2010-01-01
In our previous work, the basilar membrane velocity VBM for a gerbil cochlea was calculated and compared with physiological measurements. The calculated VBM showed excessive phase excursion and, in the active case, a best-frequency place shift of approximately two fifths of an octave higher. Here we introduce a refined model that uses the time-averaged Lagrangian for the conservative system to resolve the phase excursion issues. To improve the overestimated best-frequency place found in the previous feed-forward active model, we implement in the new model a push-pull mechanism from the outer hair cells and phalangeal process. Using this new model, the VBM for the gerbil cochlea was calculated and compared with animal measurements, The results show excellent agreement for mapping the location of the maximum response to frequency, while the agreement for the response at a fixed point as a function of frequency is excellent for the amplitude and good for the phase. PMID:20485540
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steadystate information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steadystate information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
Calculation methods study on hot spot stress of new girder structure detail
NASA Astrophysics Data System (ADS)
Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing
2017-10-01
To study modeling calculation methods of new girder structure detail's hot spot stress, based on surface extrapolation method among hot spot stress method, a few finite element analysis models of this welded detail were established by finite element software ANSYS. The influence of element type, mesh density, different local modeling methods of the weld toe and extrapolation methods was analyzed on hot spot stress calculation results at the toe of welds. The results show that the difference of the normal stress in the thickness direction and the surface direction among different models is larger when the distance from the weld toe is smaller. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds and non-weld shell models tends to be consistent along the surface direction. Therefore, it is recommended that the extrapolated point should be selected outside the 0.5t for new girder welded detail. According to the results of the calculation and analysis, shell models have good grid stability, and extrapolated hot spot stress of solid models is smaller than that of shell models. So it is suggested that formula 2 and solid45 should be carried out during the hot spot stress extrapolation calculation of this welded detail. For each finite element model under different shell modeling methods, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually with the increase of the number of layers in the thickness direction of the main plate, and the variation range is within 7.5%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pantyukhova, Olga, E-mail: Pantyukhova@list.ru; Starenchenko, Vladimir, E-mail: star@tsuab.ru; Starenchenko, Svetlana, E-mail: sve-starenchenko@yandex.ru
2016-01-15
The dependences of the point defect concentration (interstitial atoms and vacancies) on the deformation degree were calculated for the L1{sub 2} alloys with the high and low antiphase boundaries (APB) energy in terms of the mathematical model of the work and thermal strengthening of the alloys with the L1{sub 2} structure; the concentration of the point defects generated and annihilated in the process of deformation was estimated. It was found that the main part of the point defects generating during plastic deformation annihilates, the residual density of the deformation point defects does not exceed 10{sup −5}.
The Bounce of SL-9 Impact Ejecta Plumes on Re-Entry
NASA Astrophysics Data System (ADS)
Deming, L. D.; Harrington, J.
1996-09-01
We have generated synthetic light curves of the re-entry of SL-9 ejecta plumes into Jupiter's atmosphere and have modeled the periodic oscillation of the observed R plume light curves (P. D. Nicholson et al. 1995, Geophys. Res. Lett. 22, 1613--1616) as a hydrodynamic bounce. Our model is separated into plume and atmospheric components. The plume portion of the model is a ballistic Monte Carlo calculation (Harrington and Deming, this meeting). In this paper we describe the atmospheric portion of the model. The infalling plume is divided over a spatial grid (in latitude/longitude). The plume is layered, and joined to a 1-D Lagrangian radiative-hydrodynamic model of the atmosphere, at each grid point. The radiative-hydrodynamic code solves the momentum, energy, and radiative transfer equations for both the infalling plume layers and the underlying atmosphere using an explicit finite difference scheme. It currently uses gray opacities for both the plume and the atmosphere, and the calculations indicate that a much greater opacity is needed for the plume than for the atmosphere. We compute the emergent infrared intensity at each grid point, and integrate spatially to yield a synthetic light curve. These curves exhibit many features in common with observed light curves, including a rapid rise to maximum light followed by a gradual decline due to radiative damping. Oscillatory behavior (the ``bounce'') is a persistent feature of the light curves, and is caused by the elastic nature of the plume impact. In addition to synthetic light curves, the model also calculates temperature profiles for the jovian atmosphere as heated by the plume infall.
NASA Astrophysics Data System (ADS)
Lee, Sinyoung; Koike, Takuji
2018-05-01
The inner hair cells (IHCs) in the cochlea transduce mechanical vibration of the basilar membrane (BM), caused by sound pressure, to electrical signals that are transported along the acoustic nerve to the brain. The mechanical vibration of the BM and the ionic behaviors of the IHCs have been investigated. However, consideration of the ionic behavior of the IHCs related to mechanical vibration is necessary to investigate the mechano-electrical transduction of the cochlea. In this study, a finite-element model of the BM, which takes into account the non-linear activities of the outer hair cells (OHCs), and an ionic current model of IHC were combined. The amplitudes and phases of the vibration at several points on the BM were obtained from the finite-element model by applying sound pressure. These values were fed into the ionic current model, and changes in membrane potential and calcium ion concentration of the IHCs were calculated. The membrane potential of the IHC at the maximum amplitude point (CF point) was higher than that at the non-CF points. The calcium ion concentration at the CF point was also higher than that at the non-CF points. These results suggest that the cochlea achieves its good frequency discrimination ability through mechano-electrical transduction.
Filling the voids in the SRTM elevation model — A TIN-based delta surface approach
NASA Astrophysics Data System (ADS)
Luedeling, Eike; Siebert, Stefan; Buerkert, Andreas
The Digital Elevation Model (DEM) derived from NASA's Shuttle Radar Topography Mission is the most accurate near-global elevation model that is publicly available. However, it contains many data voids, mostly in mountainous terrain. This problem is particularly severe in the rugged Oman Mountains. This study presents a method to fill these voids using a fill surface derived from Russian military maps. For this we developed a new method based on Triangular Irregular Networks (TINs). For each void, we extracted points around the edge of the void from the SRTM DEM and the fill surface. TINs were calculated from these points and converted to a base surface for each dataset. The fill base surface was subtracted from the fill surface, and the result added to the SRTM base surface. The fill surface could then seamlessly be merged with the SRTM DEM. For validation, we compared the resulting DEM to the original SRTM surface, to the fill DEM and to a surface calculated by the International Center for Tropical Agriculture (CIAT) from the SRTM data. We calculated the differences between measured GPS positions and the respective surfaces for 187,500 points throughout the mountain range (ΔGPS). Comparison of the means and standard deviations of these values showed that for the void areas, the fill surface was most accurate, with a standard deviation of the ΔGPS from the mean ΔGPS of 69 m, and little accuracy was lost by merging it with the SRTM DEM (standard deviation of 76 m). The CIAT model was much less accurate in these areas (standard deviation of 128 m). The results show that our method is capable of transferring the relative vertical accuracy of a fill surface to the void areas in the SRTM model, without introducing uncertainties about the absolute elevation of the fill surface. It is well suited for datasets with varying altitude biases, which is a common problem of older topographic information.
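The delta-surface construction lends itself to a compact sketch. The following Python fragment is a minimal illustration, not the authors' code: scipy's Delaunay-based linear interpolator stands in for the TIN base surfaces, and the array names (`srtm`, `fill`, `void_mask`) are assumptions.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.ndimage import binary_dilation

def fill_void(srtm, fill, void_mask):
    """Fill one SRTM void with a TIN-based delta surface.
    srtm, fill: 2-D elevation grids of equal shape; void_mask: True inside
    the void. Void cells outside the interpolation hull return NaN."""
    # Edge pixels: valid SRTM cells bordering the void.
    edge = binary_dilation(void_mask) & ~void_mask
    edge_pts = np.argwhere(edge)

    # TIN-like base surfaces built from the edge points of each dataset.
    srtm_base = LinearNDInterpolator(edge_pts, srtm[edge])
    fill_base = LinearNDInterpolator(edge_pts, fill[edge])

    # Delta surface: subtract the fill base, add the SRTM base, so the
    # fill's relative relief is re-referenced to the SRTM edge elevations.
    void_pts = np.argwhere(void_mask)
    out = srtm.astype(float).copy()
    out[void_mask] = fill[void_mask] - fill_base(void_pts) + srtm_base(void_pts)
    return out
```

Because the fill relief is re-referenced to the SRTM edge elevations, a constant altitude bias in the fill source cancels out, which is the property the validation against ΔGPS exploits.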
NASA Technical Reports Server (NTRS)
August, Richard; Kaza, Krishna Rao V.
1988-01-01
An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results show that the SR7A propfan simulates the SR7L characteristics.
Inflection point in running kinetic term inflation
NASA Astrophysics Data System (ADS)
Gao, Tie-Jun; Xiu, Wu-Tao; Yang, Xiu-Yi
2017-04-01
In this work, we calculate the general form of the scalar potential with a polynomial superpotential in the framework of running kinetic term inflation, then focus on a polynomial superpotential with two terms and obtain an inflection point inflationary model. We study the inflationary dynamics and show that the predicted values of the scalar spectral index and tensor-to-scalar ratio can lie within the 1σ confidence region allowed by the results of Planck 2015.
Bed inventory overturn in a circulating fluid bed riser with pant-leg structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jinjing Li; Wei Wang; Hairui Yang
2009-05-15
The special phenomenon, termed bed inventory overturn, in a circulating fluid bed (CFB) riser with pant-leg structure was studied with model calculations and experimental work. A compound pressure-drop mathematical model was developed and validated with the experimental data from a cold experimental test rig. The model calculation results agree well with the measured data. In addition, the intensity of bed inventory overturn is directly proportional to the fluidizing velocity and inversely proportional to the branch point height. The results of the present study provide significant information for the design and operation of a CFB boiler with pant-leg structure. 15 refs., 10 figs., 1 tab.
Evaluation of FSK models for radiative heat transfer under oxyfuel conditions
NASA Astrophysics Data System (ADS)
Clements, Alastair G.; Porter, Rachael; Pranzitelli, Alessandro; Pourkashanian, Mohamed
2015-01-01
Oxyfuel is a promising technology for carbon capture and storage (CCS) applied to combustion processes. It would be highly advantageous for the deployment of CCS to be able to model and optimise oxyfuel combustion; however, the increased concentrations of CO2 and H2O under oxyfuel conditions modify several fundamental processes of combustion, including radiative heat transfer. This study uses benchmark narrow band radiation models to evaluate the influence of assumptions in global full-spectrum k-distribution (FSK) models, and whether they are suitable for modelling radiation in computational fluid dynamics (CFD) calculations of oxyfuel combustion. The statistical narrow band (SNB) and correlated-k (CK) models are used to calculate benchmark data for the radiative source term and heat flux, which are then compared to the results calculated from FSK models. Both the full-spectrum correlated k (FSCK) and the full-spectrum scaled k (FSSK) models are applied using up-to-date spectral data. The results show that the FSCK and FSSK methods achieve good agreement in the test cases. The FSCK method using a five-point Gauss quadrature scheme is recommended for CFD calculations in oxyfuel conditions; however, there are still potential inaccuracies in cases with very wide variations in the ratio between CO2 and H2O concentrations.
Observational Role of Dark Matter in f(R) Models for Structure Formation
NASA Astrophysics Data System (ADS)
Verma, Murli Manohar; Yadav, Bal Krishna
The fixed points for the dynamical system in the phase space have been calculated with dark matter in the f(R) gravity models. The stability conditions of these fixed points are obtained in the ongoing accelerated phase of the universe, and the values of the Hubble parameter and Ricci scalar are obtained for various evolutionary stages of the universe. We present a range of some modifications of general relativistic action consistent with the ΛCDM model. We elaborate upon the fact that the upcoming cosmological observations would further constrain the bounds on the possible forms of f(R) with greater precision that could in turn constrain the search for dark matter in colliders.
Self-Avoiding Walks on the Random Lattice and the Random Hopping Model on a Cayley Tree
NASA Astrophysics Data System (ADS)
Kim, Yup
Using a field theoretic method based on the replica trick, it is proved that the three-parameter renormalization group for an n-vector model with quenched randomness reduces to a two-parameter one in the limit n → 0, which corresponds to self-avoiding walks (SAWs). This is also shown by explicit calculation of the renormalization group recursion relations to second order in ε. From this reduction we find that SAWs on the random lattice are in the same universality class as SAWs on the regular lattice. By analogy with the case of the n-vector model with cubic anisotropy in the limit n → 1, the fixed-point structure of the n-vector model with randomness is analyzed in the SAW limit, so that a physical interpretation of the unphysical fixed point is given. Corrections to the previously published values of the critical exponents of the unphysical fixed point are also given. Next we formulate an integral equation and recursion relations for the configurationally averaged one-particle Green's function of the random hopping model on a Cayley tree of coordination number (σ + 1). This formalism is tested by applying it successfully to the nonrandom model. Using this scheme for 1 ≪ σ < ∞ we calculate the density of states of this model with a Gaussian distribution of hopping matrix elements in the range of energy E² > E_c², where E_c is a critical energy described below. The singularity in the Green's function which occurs at energy E₁⁽⁰⁾ for σ = ∞ is shifted to complex energy E₁ (on the unphysical sheet of energy E) for small σ⁻¹. This calculation shows that the density of states is a smooth function of energy E around the critical energy E_c = Re E₁, in accord with Wegner's theorem. In this formulation the density of states has no sharp phase transition on the real axis of E because E₁ has developed an imaginary part. Using the Lifschitz argument, we calculate the density of states near the band edge for the model when the hopping matrix elements are governed by a bounded probability distribution. It is also shown within the dynamical system language that the density of states of the model with a bounded distribution never vanishes inside the band, and we suggest a theoretical mechanism for the formation of energy bands.
Deciphering chemical order/disorder and material properties at the single-atom level
Yang, Yongsoo; Chen, Chien-Chun; Scott, M. C.; ...
2017-02-01
Perfect crystals are rare in nature. Real materials often contain crystal defects and chemical order/disorder such as grain boundaries, dislocations, interfaces, surface reconstructions and point defects. Such disruption in periodicity strongly affects material properties and functionality. Despite rapid development of quantitative material characterization methods, correlating three-dimensional (3D) atomic arrangements of chemical order/disorder and crystal defects with material properties remains a challenge. On a parallel front, quantum mechanics calculations such as density functional theory (DFT) have progressed from the modelling of ideal bulk systems to modelling ‘real’ materials with dopants, dislocations, grain boundaries and interfaces; but these calculations rely heavily on average atomic models extracted from crystallography. To improve the predictive power of first-principles calculations, there is a pressing need to use atomic coordinates of real systems beyond average crystallographic measurements. Here we determine the 3D coordinates of 6,569 iron and 16,627 platinum atoms in an iron-platinum nanoparticle, and correlate chemical order/disorder and crystal defects with material properties at the single-atom level. We identify rich structural variety with unprecedented 3D detail including atomic composition, grain boundaries, anti-phase boundaries, anti-site point defects and swap defects. We show that the experimentally measured coordinates and chemical species with 22 picometre precision can be used as direct input for DFT calculations of material properties such as atomic spin and orbital magnetic moments and local magnetocrystalline anisotropy. The work presented here combines 3D atomic structure determination of crystal defects with DFT calculations, which is expected to advance our understanding of structure–property relationships at the fundamental level.
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer-Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and splines calculate the required number of points which, when displayed on a computer screen, yield a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures such as human and animal bodies, whose complex structure cannot be defined by a regular rectangular grid. On the other hand, surface subdivision methods, which derive from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that during the last fifteen years subdivision methods have taken the lead over regular spline methods in all areas of modeling in both industry and research. The cost of software developed to read control points and calculate the surface lies in its run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs implementing subdivision surfaces have been developed; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. Catmull-Clark, the most popular of the subdivision methods, has been employed to illustrate the algorithm.
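For orientation, here is a minimal Python sketch of one Catmull-Clark refinement step on a closed quad mesh; it is not the paper's algorithm, and the input names `verts` and `faces` are assumptions. It computes the three point sets the scheme is built from: face points, edge points, and repositioned original vertices.

```python
import numpy as np
from collections import defaultdict

def catmull_clark_step(verts, faces):
    """One Catmull-Clark step on a closed quad mesh.
    verts: (n, 3) array; faces: list of 4-tuples of vertex indices."""
    face_pts = np.array([verts[list(f)].mean(axis=0) for f in faces])

    # Undirected edge -> indices of the (two, on a closed mesh) faces sharing it.
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[frozenset((a, b))].append(fi)

    # Edge point: average of the edge endpoints and the adjacent face points.
    edge_pts = {}
    for e, fl in edge_faces.items():
        edge_pts[e] = (verts[list(e)].sum(axis=0) +
                       face_pts[fl].sum(axis=0)) / (2 + len(fl))

    # Vertex rule: (Q + 2R + (n - 3) P) / n, with Q the mean adjacent face
    # point, R the mean adjacent edge midpoint and n the vertex valence.
    vert_faces, vert_edges = defaultdict(set), defaultdict(list)
    for fi, f in enumerate(faces):
        for v in f:
            vert_faces[v].add(fi)
    for e in edge_faces:
        for v in e:
            vert_edges[v].append(e)

    new_verts = verts.astype(float).copy()
    for v, edges in vert_edges.items():
        n = len(edges)
        Q = face_pts[list(vert_faces[v])].mean(axis=0)
        R = np.mean([verts[list(e)].mean(axis=0) for e in edges], axis=0)
        new_verts[v] = (Q + 2 * R + (n - 3) * verts[v]) / n
    return face_pts, edge_pts, new_verts
```

A production implementation would also assemble the refined face list and handle boundary edges; the sketch stops at the point computations because the surrounding data structures are what dominate the run time the paper addresses.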
Pseudo-critical point in anomalous phase diagrams of simple plasma models
NASA Astrophysics Data System (ADS)
Chigvintsev, A. Yu; Iosilevskiy, I. L.; Noginova, L. Yu
2016-11-01
Anomalous phase diagrams in a subclass of simplified ("non-associative") Coulomb models are under discussion. The common feature of this subclass is the absence, by definition, of individual correlations for charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible ("rigid") background, OCP(#), two new phase transitions with upper critical points, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the OCP(∼) phase diagram becomes anomalous at high enough values of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) in the interval Z₁ < Z < Z₂. Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z₁ ≈ 35.5 and Z = Z₂ ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both these pseudo-critical points. In this study we have improved our previous calculations and utilized a more complicated equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941).
NASA Astrophysics Data System (ADS)
Ikeda, K.; Goldfarb, E. J.; Tisato, N.
2017-12-01
Digital rock physics (DRP) allows performing common laboratory experiments on numerical models to estimate, for example, rock hydraulic permeability. The standard procedure of DRP involves turning a rock sample into a numerical array using X-ray micro computed tomography (micro-CT). Each element of the array bears a value proportional to the X-ray attenuation of the rock at that element (voxel). However, the traditional DRP methodology, which includes segmentation, over-predicts rock moduli by significant amounts (e.g., 100%). Recently, a new methodology - the segmentation-less approach - has been proposed, leading to more accurate DRP estimates of elastic moduli. This new method is based on homogenization theory. Typically, the segmentation-less approach requires calibration points from objects of known density, known as targets. Not all micro-CT datasets have these reference points. Here, we describe how we perform segmentation- and target-less DRP to estimate the elastic properties of rocks (i.e., elastic moduli), which are crucial parameters for subsurface modeling. We calculate the elastic properties of a Berea sandstone sample that was scanned at a resolution of 40 microns per voxel. We transformed the CT images into density matrices using a polynomial fitting curve with four calibration points: the whole rock, the centers of quartz grains, the centers of iron oxide grains, and the centers of air-filled volumes. The first calibration point is obtained by assigning the density of the whole rock to the average of all CT numbers in the dataset. Then, we locate the center of each phase by finding local extrema in the dataset. The average CT numbers of these center points are assigned densities equal to either the pristine minerals (quartz and iron oxide) or air. Next, the density matrices are transformed to porosity and moduli matrices by means of an effective medium theory. Finally, the effective static bulk and shear moduli are numerically calculated using a Matlab code derived from the elas3D NIST code. The calculated quasi-static P- and S-wave speeds overestimate the laboratory results by 37% and 5%, respectively. In fact, our approach predicts wave speeds more accurately than traditional DRP methods. Nevertheless, the presented methodology needs to be further investigated and improved.
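The calibration step is compact enough to sketch. In the Python fragment below, the CT numbers and densities are placeholders rather than the paper's values, and `np.polyfit` stands in for the polynomial fitting curve; a cubic through four calibration points is fixed exactly.

```python
import numpy as np

# Hypothetical calibration pairs for the four reference points named in the
# abstract: whole-rock mean, quartz centres, iron-oxide centres, air centres.
ct_cal  = np.array([1850.0, 2100.0, 3050.0, 20.0])    # assumed mean CT numbers
rho_cal = np.array([2.30,   2.65,   5.24,  0.0012])   # densities in g/cm^3

# Cubic mapping CT number -> density (four points determine it exactly).
coeffs = np.polyfit(ct_cal, rho_cal, deg=3)

# Demo volume standing in for the 40-micron Berea scan.
ct_volume = np.random.default_rng(0).normal(1850.0, 400.0, (64, 64, 64))
density = np.polyval(coeffs, ct_volume)

# Voxel porosity against a quartz-dominated solid phase (an assumption).
porosity = np.clip(1.0 - density / 2.65, 0.0, 1.0)
```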
Infinite-disorder critical points of models with stretched exponential interactions
NASA Astrophysics Data System (ADS)
Juhász, Róbert
2014-09-01
We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ e^(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi
We present calculated fission-barrier heights for 5239 nuclides, for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model (FRLDM) with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than five million different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ϵ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ϵ,γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about one MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. In addition, these studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
Khalid Hussein
2012-02-01
This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). Then the temperature due to solar radiation was calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature with the temperature due to solar radiation subtracted, was used to identify thermally anomalous areas. Areas with residual temperature greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map. Areas with residual temperature between 1σ and 2σ were classified as ASTER-modeled "warm modeled surface temperature" and are shown in yellow on the map. This map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").
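A short sketch of the residual-temperature classification, assuming (as the map's legend suggests) that the thresholds are standard deviations of the residual; the function and array names are hypothetical.

```python
import numpy as np

def classify_anomalies(aster_temp, solar_temp):
    """Classify thermally anomalous pixels from the residual temperature
    (ASTER temperature minus insolation-modelled temperature)."""
    residual = aster_temp - solar_temp
    z = (residual - residual.mean()) / residual.std()   # residual in sigma units
    labels = np.full(residual.shape, "background", dtype=object)
    labels[(z > 1) & (z <= 2)] = "warm"        # 1-2 sigma: yellow on the map
    labels[z > 2] = "very warm"                # > 2 sigma: red on the map
    return labels
```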
Kawamura, Kazuya; Kobayashi, Yo; Fujie, Masakatsu G
2007-01-01
Medical technology has advanced with the introduction of robot technology, making previously very difficult medical treatments far more feasible. However, operation of a surgical robot demands substantial training and continual practice on the part of the surgeon, because it requires difficult techniques different from those of traditional surgical procedures. We focused on a simulation technology based on the physical characteristics of organs. In this research, we proposed the development of a surgical simulation, based on a physical model, for intra-operative navigation by a surgeon. In this paper, we describe the design of our system, in particular our organ deformation calculator. The proposed simulation system consists of an organ deformation calculator and virtual slave manipulators. We obtained adequate experimental results for a target node at a point near the interaction, because this point ensures better accuracy for our simulation model. The next research step would be to focus on a surgical environment in which internal organ models are integrated into a slave simulation system.
NASA Astrophysics Data System (ADS)
Feng, Xiao-Li; Li, Yu-Xiao; Gu, Jian-Zhong; Zhuo, Yi-Zhong
2009-10-01
The relaxation properties of both the Eigen model and the Crow-Kimura model with a single-peak fitness landscape are studied from a phase-transition point of view. We first analyze the eigenvalue spectra of the replication-mutation matrices. For sufficiently long sequences, the near-crossing point between the largest and second-largest eigenvalues locates the error threshold, at which critical slowing down appears. We calculate the critical exponent in the limit of infinite sequence length and compare it with the result from numerical curve fitting at sufficiently long sequences. We find that for both models the relaxation time diverges with exponent 1 at the error (mutation) threshold point. Results obtained from both methods agree quite well. The divergent correlation length further confirms the first-order phase transition. Finally, with linear stability theory, we show that the two model systems are stable for all ranges of mutation rate. The Eigen model is asymptotically stable in terms of mutant classes, and the Crow-Kimura model is completely stable.
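A sketch of the eigenvalue analysis on a small binary sequence space; the sequence length and master-sequence fitness below are assumed demo values, not the paper's parameters. The near-crossing of the two leading eigenvalues marks the error threshold, where the relaxation time ~ 1/(λ₁ − λ₂) diverges.

```python
import numpy as np

L, f0 = 8, 10.0   # sequence length and master-sequence fitness (assumed)

def leading_eigenvalues(q):
    """Two largest eigenvalues of the Eigen replication-mutation matrix
    W[i, j] = f_j * q^(L - d(i, j)) * (1 - q)^d(i, j) on the full binary
    sequence space; q is the per-site copying fidelity."""
    n = 2 ** L
    # Hamming distance between all sequence pairs via XOR popcount.
    d = np.array([[bin(a ^ b).count("1") for b in range(n)] for a in range(n)])
    f = np.ones(n)
    f[0] = f0                                  # single-peak fitness landscape
    W = (q ** (L - d)) * ((1 - q) ** d) * f    # fitness of template j on column j
    ev = np.sort(np.linalg.eigvals(W).real)[::-1]
    return ev[0], ev[1]

# The gap shrinks toward the error threshold as fidelity degrades.
for q in (0.999, 0.99, 0.9, 0.8):
    l1, l2 = leading_eigenvalues(q)
    print(f"q = {q}: gap = {l1 - l2:.4f}")
```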
Yu, Miao; Wei, Chenhui; Niu, Leilei; Li, Shaohua; Yu, Yongjun
2018-01-01
Tensile strength and fracture toughness, important parameters of rock for engineering applications, are difficult to measure. This paper therefore selected three kinds of granite samples (grain sizes of 1.01 mm, 2.12 mm and 3 mm), used combined physical experiments and numerical simulation (RFPA-DIP version) to conduct three-point-bending (3-p-b) tests with different notches, and introduced an acoustic emission monitoring system to analyze the fracture mechanism around the notch tips. To study the effects of grain size on the tensile strength and toughness of rock samples, a modified fracture model was established linking the fictitious crack to the grain size, so that the microstructure of the specimens and fictitious crack growth can be considered together. The fractal method was introduced to represent the microstructure of the three granites and used to determine the length of the fictitious crack. It is a simple and novel method to calculate the tensile strength and fracture toughness directly. Finally, the theoretical model was verified by comparison with the numerical experiments through calculation of the nominal strength σn and maximum loads Pmax.
SU-F-T-48: Clinical Implementation of Brachytherapy Planning System for COMS Eye Plaques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira, C; Islam, M; Ahmad, S
Purpose: To commission the Brachytherapy Planning (BP) system (Varian, Palo Alto, CA) for the Collaborative Ocular Melanoma Study (COMS) eye plaques by evaluating dose differences against original plans from Nucletron Planning System (NPS). Methods: NPS system is the primary planning software for COMS-plaques at our facility; however, Brachytherapy Planning 11.0.47 (Varian Medical Systems) is used for secondary check and for seed placement configurations not originally commissioned. Dose comparisons of BP and NPS plans were performed for prescription of 8500 cGy at 5 mm depth and doses to normal structures: opposite retina, inner sclera, macula, optic disk and lens. Plans were calculated for Iodine-125 seeds (OncoSeeds, Model 6711) using COMS-plaques of 10, 12, 14, 16, 18 and 20 mm diameters. An in-house program based on inverse-square was utilized to calculate point doses for comparison as well. Results: The highest dose difference between BP and NPS was 3.7% for the prescription point for all plaques. Doses for BP were higher than doses reported by NPS for all points. The largest percent differences for apex, opposite retina, inner sclera, macula, optic disk, and lens were 3.2%, 0.9%, 13.5%, 20.5%, 15.7% and 2.2%, respectively. The dose calculated by the in-house program was 1.3% higher at the prescription point, and was as high as 42.1% for points away from the plaque (i.e. opposite retina) when compared to NPS. Conclusion: Doses to the tumor, lens, retina, and optic nerve are paramount for a successful treatment and vision preservation. Both systems are based on TG-43 calculations and assume tissue homogeneity (ρe = 1, water medium). Variations seen may result from the different task group versions and/or mathematical algorithms of the software. BP was commissioned to serve as a backup system and it also enables dose calculation in cases where seeds do not follow conventional placement configuration.
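A sketch along the lines of the in-house inverse-square check mentioned above: point sources with no anisotropy, attenuation or tissue corrections. The seed geometry and dose-rate constant are invented for the example, not taken from the study.

```python
import numpy as np

def point_dose(seed_positions, point, rate_per_seed):
    """Inverse-square point-dose estimate: sum of rate / r^2 over seeds.
    seed_positions: (n, 3) array in cm; point: (3,) in cm;
    rate_per_seed: reference dose rate at 1 cm (arbitrary units)."""
    r2 = np.sum((np.asarray(seed_positions) - np.asarray(point)) ** 2, axis=1)
    return np.sum(rate_per_seed / r2)

# Example: 13 seeds of a 12 mm plaque approximated on a ring (hypothetical),
# dose evaluated at 5 mm depth on the plaque axis.
theta = np.linspace(0, 2 * np.pi, 13, endpoint=False)
seeds = np.c_[0.5 * np.cos(theta), 0.5 * np.sin(theta), np.zeros(13)]
print(point_dose(seeds, (0.0, 0.0, 0.5), rate_per_seed=50.0))
```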
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data-driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface, including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with variable density but constant velocity (3000 m/s). Along the surface of this model (z = 0), in both the x and y directions, are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source at (1200 m, 500 m, 400 m) to the surface. For comparison, the true solution is given in figure (c), and shows a good match with figure (b). While these redatuming and imaging methods are still in their infancy, having first been developed for 2D media in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts are a realistic possibility.
Experimental Estimating Deflection of a Simple Beam Bridge Model Using Grating Eddy Current Sensors
Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui
2012-01-01
A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring.
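Recovering absolute deflections from the relative unit measurements amounts to a small linear solve. The sketch below assumes equally spaced measuring points, the relation r_i = d_i - (d_{i-1} + d_{i+1}) / 2 for each three-point unit (the mid point's offset from the chord formed by the connecting bar), and known deflections at the two span ends; all names are hypothetical.

```python
import numpy as np

def absolute_deflections(rel, d_left=0.0, d_right=0.0):
    """Solve the tridiagonal system d_i - 0.5 d_{i-1} - 0.5 d_{i+1} = r_i
    for the interior measuring points, given the end deflections
    (zero at the supports of a simply supported span)."""
    n = len(rel)
    A = np.zeros((n, n))
    b = np.asarray(rel, dtype=float).copy()
    for i in range(n):
        A[i, i] = 1.0
        if i > 0:
            A[i, i - 1] = -0.5
        if i < n - 1:
            A[i, i + 1] = -0.5
    b[0] += 0.5 * d_left      # move the known end deflections to the RHS
    b[-1] += 0.5 * d_right
    return np.linalg.solve(A, b)

# Example: five interior points on a simply supported span (values invented).
print(absolute_deflections([-0.8, -1.2, -1.4, -1.1, -0.7]))
```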
Acceptance and commissioning of a treatment planning system based on Monte Carlo calculations.
Lopez-Tarjuelo, J; Garcia-Molla, R; Juan-Senabre, X J; Quiros-Higueras, J D; Santos-Serra, A; de Marco-Blancas, N; Calzada-Feliu, S
2014-04-01
The Monaco Treatment Planning System (TPS), based on a virtual energy fluence model of the photon beam head components of the linac and a dose computation engine using the X-Ray Voxel Monte Carlo (XVMC) algorithm, has been tested before being put into clinical use. An Elekta Synergy with 6 MV was characterized using routine equipment. After the machine's model was installed, a set of functionality, geometric, dosimetric and data transfer tests was performed. The dosimetric tests included dose calculations in water, heterogeneous phantoms and Intensity Modulated Radiation Therapy (IMRT) verifications. Data transfer tests were run for every imaging device, TPS and the electronic medical record linked to Monaco. Functionality and geometric tests ran properly. Dose calculations in water were in accordance with measurements, such that in 95% of cases differences were at most 1.9%. Dose calculations in heterogeneous media showed the expected results found in the literature. IMRT verification with an ionization chamber led to dose differences lower than 2.5% for points inside a standard gradient. When a 2-D array was used, all the fields passed the γ (3%, 3 mm) test with a percentage of succeeding points of at least 90-95%, and the majority of fields had between 95% and 100% of points succeeding. Data transfer caused problems that had to be solved by changing our workflow. In general, the tests led to satisfactory results. Monaco's performance complied with published international recommendations and scored highly in the dosimetric domain. However, the problems detected when the TPS was put to work together with our current equipment showed that this kind of product must be completely commissioned, without neglecting the data workflow, before treating the first patient.
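A brute-force sketch of the global γ (3%, 3 mm) pass-rate computation used for such verifications; this is not Monaco's implementation, and the grid names are assumptions. Each evaluated point's γ is the minimum, over all reference points, of the combined dose-difference and distance-to-agreement metric.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dta=3.0, dd=0.03):
    """Global 2-D gamma pass rate (percent of points with gamma <= 1).
    ref, ev: measured and calculated dose grids (same shape);
    spacing: grid spacing in mm; dta in mm; dd as a fraction of max dose."""
    ny, nx = ref.shape
    y, x = np.mgrid[0:ny, 0:nx] * spacing
    pts = np.c_[y.ravel(), x.ravel()]
    dmax = ref.max()                               # global normalisation
    gamma = np.empty(ev.size)
    for k, (p, d_ev) in enumerate(zip(pts, ev.ravel())):
        dist2 = np.sum((pts - p) ** 2, axis=1) / dta ** 2
        dose2 = ((ref.ravel() - d_ev) / (dd * dmax)) ** 2
        gamma[k] = np.sqrt(np.min(dist2 + dose2))
    return 100.0 * np.mean(gamma <= 1.0)
```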
Surface Meteorology, Barrow, Alaska, Area A, B, C and D, Ongoing from 2012
Bob Busey; Larry Hinzman; William Cable; Vladimir Romanovsky
2014-12-04
Meteorological data are being collected at several points within four intensive study areas in Barrow. These data assist in the calculation of the energy balance at the land surface and are also useful as inputs into modeling activities.
NASA Astrophysics Data System (ADS)
Leirião, Sílvia; He, Xin; Christiansen, Lars; Andersen, Ole B.; Bauer-Gottwein, Peter
2009-02-01
Total water storage change in the subsurface is a key component of the global, regional and local water balances. It is partly responsible for temporal variations of the earth's gravity field in the micro-Gal (1 μGal = 10⁻⁸ m s⁻²) range. Measurements of temporal gravity variations can thus be used to determine the water storage change in the hydrological system. A numerical method for the calculation of temporal gravity changes from the output of hydrological models is developed. Gravity changes due to incremental prismatic mass storage in the hydrological model cells are determined to give an accurate 3D gravity effect. The method is implemented in MATLAB and can be used jointly with any hydrological simulation tool. The method is composed of three components: the prism formula, the MacMillan formula and the point-mass approximation. With increasing normalized distance between the storage prism and the measurement location the algorithm switches first from the prism equation to the MacMillan formula and finally to the simple point-mass approximation. The method was used to calculate the gravity signal produced by an aquifer pump test. Results are in excellent agreement with the direct numerical integration of the Theis well solution and the semi-analytical results presented in [Damiata, B.N., and Lee, T.-C., 2006. Simulated gravitational response to hydraulic testing of unconfined aquifers. Journal of Hydrology 318, 348-359]. However, the presented method can be used to forward calculate hydrology-induced temporal variations in gravity from any hydrological model, provided earth curvature effects can be neglected. The method allows for the routine assimilation of ground-based gravity data into hydrological models.
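The far-field branch of the algorithm reduces to summing point-mass contributions. A sketch of that branch only, with the prism and MacMillan formulas omitted and all input names assumed; in the full method, the formula used for each cell would depend on its normalized distance from the station.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_change_point_mass(cell_centres, d_storage, station):
    """Vertical gravity change (microGal) from water-storage increments,
    each cell treated as a point mass: dg = G * dM * dz / r^3.
    cell_centres: (n, 3) array in m; d_storage: mass change per cell in kg;
    station: (3,) measurement location in m; z axis positive downward."""
    r_vec = np.asarray(cell_centres) - np.asarray(station)
    r = np.linalg.norm(r_vec, axis=1)
    dz = r_vec[:, 2]                     # vertical offset toward the mass
    dg = G * np.asarray(d_storage) * dz / r ** 3
    return 1e8 * dg.sum()                # 1 microGal = 1e-8 m s^-2
```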
NASA Technical Reports Server (NTRS)
Demarest, H. H., Jr.
1972-01-01
The elastic constants and the entire frequency spectrum were calculated up to high pressure for the alkali halides in the NaCl lattice, based on an assumed functional form of the inter-atomic potential. The quasiharmonic approximation is used to calculate the vibrational contribution to the pressure and the elastic constants at arbitrary temperature. By explicitly accounting for the effect of thermal and zero-point motion, the adjustable parameters in the potential are determined to a high degree of accuracy from the elastic constants and their pressure derivatives measured at zero pressure. The calculated Gruneisen parameter, elastic constants and their pressure derivatives are in good agreement with experimental results up to about 600 K. The model predicts that for some alkali halides the Gruneisen parameter may decrease monotonically with pressure, while for others it may increase with pressure after an initial decrease.
Calculation of Tectonic Strain Release from an Explosion in a Three-Dimensional Stress Field
NASA Astrophysics Data System (ADS)
Stevens, J. L.; O'Brien, M. S.
2012-12-01
We have developed a 3D nonlinear finite element code designed for calculation of explosions in 3D heterogeneous media and have incorporated the capability to perform explosion calculations in a prestressed medium. The effect of tectonic prestress on explosion-generated surface waves has been discussed since the 1960s. In most of these studies tectonic release was described as the superposition of a tectonic source modeled as a double couple, multipole or moment tensor, plus a point explosion source. The size of the tectonic source was determined by comparison with the observed Love waves and the Rayleigh wave radiation pattern. Day et al. (1987) first attempted to perform numerical modeling of tectonic release through an axisymmetric calculation of the explosion Piledriver. To the best of our knowledge no one has previously performed numerical calculations for an explosion in a three-dimensional stress field. Calculation of tectonic release depends on a realistic representation of the stress state in the earth. In general the vertical stress is equal to the overburden weight of the material above at any given point. The horizontal stresses may be larger or smaller than this value, up to the point where failure due to frictional sliding relieves the stress. In our calculations, we use the normal overburden calculation to determine the vertical stress, and then set the horizontal stresses to some fraction of the frictional limit. This is the initial stable state of the calculation prior to introduction of the explosion. Note that although the vertical stress is still equivalent to the overburden weight, the pressure is not, and it may be either increased or reduced by the tectonic stresses. Since material strength increases with pressure, this can also substantially affect the seismic source. In general, normal faulting regimes will amplify seismic signals, while reverse faulting regimes will decrease seismic signals; strike-slip regimes may do either. We performed a 3D calculation of the Shoal underground nuclear explosion including tectonic prestress. Shoal was a 12.5 kiloton nuclear explosion detonated near Fallon, Nevada. This event had strong heterogeneity in near-field waveforms and is in a region under primarily extensional tectonic stress. There were three near-field shot-level recording stations, located in three different directions, each at about 590 meters from the shot. Including prestress consistent with the regional stress field causes variations in the calculated near-field waveforms similar to those observed in the Shoal data.
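The initial stress state described above can be sketched in a few lines, assuming a Mohr-Coulomb frictional limit with zero cohesion on optimally oriented faults; the density, friction coefficient and stress fraction are placeholder values, not those of the Shoal calculation.

```python
import numpy as np

def initial_stress(depth, rho=2400.0, g=9.81, mu=0.6, frac=0.5, extension=True):
    """Vertical stress from overburden; horizontal stress moved a fraction
    `frac` of the way from lithostatic toward the frictional limit.
    For friction mu with zero cohesion, sliding bounds the stress ratio at
    sigma1/sigma3 = (sqrt(mu^2 + 1) + mu)^2."""
    sv = rho * g * np.asarray(depth, dtype=float)       # overburden, Pa
    k = (np.sqrt(mu**2 + 1) + mu) ** 2
    # Extension: sigma_v is sigma1, so sigma_h is bounded below by sv / k;
    # compression: sigma_h is bounded above by k * sv.
    sh_limit = sv / k if extension else k * sv
    sh = sv + frac * (sh_limit - sv)
    return sv, sh

sv, sh = initial_stress(depth=np.linspace(0.0, 600.0, 7))
print(np.c_[sv, sh])   # extensional regime: sh < sv at all depths
```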
Theoretical calculations of oxygen relaxation in YBa2Cu3O6+x ceramics
NASA Astrophysics Data System (ADS)
Mi, Y.; Schaller, R.; Sathish, S.; Benoit, W.
1991-12-01
A two-dimensional theoretical model of stress-induced point-defect relaxation in a layered structure is presented, with a detailed discussion of the special case of YBa2Cu3O6+x. The experimental results of oxygen relaxation in YBa2Cu3O6+x can be explained qualitatively by this model.
NASA Astrophysics Data System (ADS)
Janpaule, Inese; Haritonova, Diana; Balodis, Janis; Zarins, Ansis; Silabriedis, Gunars; Kaminskis, Janis
2015-03-01
Development of a digital zenith telescope prototype, improved zenith camera construction, and analysis of experimental vertical deflection measurements for the improvement of the Latvian geoid model have been performed at the Institute of Geodesy and Geoinformatics (GGI), University of Latvia. GOCE satellite data were used to compute a geoid model for the Riga region, and the European gravimetric geoid model EGG97 and 102 data points of GNSS/levelling were used as input data in the calculations of the Latvian geoid model.
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
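A flat-earth sketch of the quadrature idea, evaluating the vertical gravity of a homogeneous rectangular body as a weighted sum of equivalent point masses at Gauss-Legendre nodes; the spherical-earth geometry and variable property distributions of the paper are not reproduced, and the input names are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

G = 6.674e-11  # m^3 kg^-1 s^-2

def gravity_anomaly(station, bounds, rho, n=16):
    """g_z at `station` from a homogeneous box, by 3-D Gauss-Legendre
    quadrature. bounds = ((x0, x1), (y0, y1), (z0, z1)) in m; rho in kg/m^3."""
    nodes, weights = leggauss(n)
    axes, wts = [], []
    for lo, hi in bounds:
        # Map nodes and weights from [-1, 1] to [lo, hi].
        axes.append(0.5 * (hi - lo) * nodes + 0.5 * (hi + lo))
        wts.append(0.5 * (hi - lo) * weights)
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    W = wts[0][:, None, None] * wts[1][None, :, None] * wts[2][None, None, :]
    dx, dy, dz = X - station[0], Y - station[1], Z - station[2]
    r3 = (dx**2 + dy**2 + dz**2) ** 1.5
    # Each quadrature node acts as an equivalent point mass rho * W.
    return G * rho * np.sum(W * dz / r3)        # m/s^2, positive toward +z

print(gravity_anomaly((0.0, 0.0, -100.0),
                      ((-500, 500), (-500, 500), (200, 700)), rho=300.0))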
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, either GPU multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
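A one-dimensional sketch of the central identity: since the convolution of two Gaussians is a Gaussian whose variances add, deconvolution reduces to a linear solve for the control-point weights. The function and parameter names are assumptions, and the PSF is taken to have unit area.

```python
import numpy as np

def grbf_deconvolve(blurred, sigma_rbf, sigma_psf, spacing=1):
    """1-D GRBF deconvolution sketch. The blurred signal is modelled with
    Gaussians of std sqrt(sigma_rbf^2 + sigma_psf^2); the weights are
    recovered by least squares, then the sharp signal is re-synthesised
    with the original RBF width sigma_rbf."""
    n = len(blurred)
    centers = np.arange(0, n, spacing, dtype=float)
    xs = np.arange(n, dtype=float)

    def design(sig):
        return np.exp(-(xs[:, None] - centers[None, :]) ** 2 / (2.0 * sig**2))

    s_blur = np.hypot(sigma_rbf, sigma_psf)   # variances of Gaussians add
    w, *_ = np.linalg.lstsq(design(s_blur), blurred, rcond=None)
    w *= s_blur / sigma_rbf                   # undo the convolution amplitude
    return design(sigma_rbf) @ w
```

Increasing `spacing` shrinks the design matrix, which is the control-point-spacing speed-up the abstract mentions, at the cost of a coarser model of the image.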
Material point method of modelling and simulation of reacting flow of oxygen
NASA Astrophysics Data System (ADS)
Mason, Matthew; Chen, Kuan; Hu, Patrick G.
2014-07-01
Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.
Critical points of the O(n) loop model on the martini and the 3-12 lattices
NASA Astrophysics Data System (ADS)
Ding, Chengxiang; Fu, Zhe; Guo, Wenan
2012-06-01
We derive the critical line of the O(n) loop model on the martini lattice as a function of the loop weight n, based on the critical points on the honeycomb lattice conjectured by Nienhuis [Phys. Rev. Lett. 49, 1062 (1982)]. In the limit n → 0 we prove the connective constant μ = 1.7505645579⋯ of self-avoiding walks on the martini lattice. A finite-size scaling analysis based on transfer matrix calculations is also performed. The numerical results coincide with the theoretical predictions to very high accuracy. Using similar numerical methods, we also study the O(n) loop model on the 3-12 lattice. We obtain similarly precise agreement with the critical points given by Batchelor [J. Stat. Phys. 92, 1203 (1998)].
NASA Astrophysics Data System (ADS)
Berthinier, C.; Rado, C.; Chatillon, C.; Hodaj, F.
2013-02-01
The self- and chemical diffusion of oxygen in the non-stoichiometric domain of the UO2 compound is analyzed from the point of view of experimental determinations and of modeling based on Frenkel pair defects. The correlation between the self-diffusion and chemical diffusion coefficients is analyzed using the Darken coefficient calculated from a thermodynamic description of the UO2±x phase. This description was obtained from an optimization of thermodynamic and phase diagram data and modeling with different point defects, including the Frenkel pair point defects. The proposed diffusion coefficients correspond to the 300-2300 K temperature range and to the full composition range of the non-stoichiometric UO2 compound. These values will be used for the simulation of the oxidation and ignition of uranium carbide in different oxygen atmospheres, which starts at temperatures as low as 400 K.
Mueller, Silke C; Drewelow, Bernd
2013-05-01
The area under the concentration-time curve (AUC) after oral midazolam administration is commonly used for cytochrome P450 (CYP) 3A phenotyping studies. The aim of this investigation was to evaluate a limited sampling strategy for the prediction of AUC with oral midazolam. A total of 288 concentration-time profiles from 123 healthy volunteers who participated in four previously performed drug interaction studies with intense sampling after a single oral dose of 7.5 mg midazolam were available for evaluation. Of these, 45 profiles served for model building, which was performed by stepwise multiple linear regression, and the remaining 243 datasets served for validation. Mean prediction error (MPE), mean absolute error (MAE) and root mean squared error (RMSE) were calculated to determine bias and precision. The one- to four-sampling-point models with the best coefficients of correlation were the one-sampling-point model (8 h; r² = 0.84), the two-sampling-point model (0.5 and 8 h; r² = 0.93), the three-sampling-point model (0.5, 2, and 8 h; r² = 0.96), and the four-sampling-point model (0.5, 1, 2, and 8 h; r² = 0.97). However, the one- and two-sampling-point models were unable to predict the midazolam AUC due to unacceptable bias and precision. Only the four-sampling-point model predicted the very low and very high midazolam AUCs of the validation dataset with acceptable precision and bias. The four-sampling-point model was also able to predict the geometric mean ratio of the treatment phase over baseline (with 90% confidence interval) results of three drug interaction studies in the categories of strong, moderate, and mild induction, as well as no interaction. A four-sampling-point limited sampling strategy to predict the oral midazolam AUC for CYP3A phenotyping is proposed. The one-, two- and three-sampling-point models were not able to predict midazolam AUC accurately.
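A sketch of how such a limited-sampling model is fitted and scored; the concentrations and coefficients below are synthetic stand-ins, not the study's data, and ordinary least squares stands in for the stepwise regression.

```python
import numpy as np

# Synthetic training data: 45 profiles, concentrations at 0.5, 1, 2 and 8 h.
rng = np.random.default_rng(1)
C = rng.lognormal(mean=2.0, sigma=0.4, size=(45, 4))
auc_ref = C @ np.array([0.8, 1.1, 2.0, 5.5]) + rng.normal(0.0, 2.0, 45)

# Four-sampling-point model: AUC ~ intercept + four concentrations.
X = np.c_[np.ones(len(C)), C]
beta, *_ = np.linalg.lstsq(X, auc_ref, rcond=None)
auc_pred = X @ beta

# Bias and precision metrics as used in the study.
err = auc_pred - auc_ref
mpe = 100.0 * np.mean(err / auc_ref)            # mean prediction error, %
mae = 100.0 * np.mean(np.abs(err) / auc_ref)    # mean absolute error, %
rmse = np.sqrt(np.mean(err ** 2))               # root mean squared error
print(f"MPE {mpe:.1f}%  MAE {mae:.1f}%  RMSE {rmse:.2f}")
```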
Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings
NASA Astrophysics Data System (ADS)
Hunt, H. E. M.
1996-05-01
A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated and it has application to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The track and building are infinite and parallel to each other, and the forces applied are statistically stationary in space, so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of application of random process theory, but fully three-dimensional for calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which is itself a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled, and it is assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent, which is inversely proportional to the length of the subevent, is included in the model. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former were comparable to those of the latter for velocity waveforms and envelopes, and much smaller for Fourier spectra. These results indicate the usefulness of the pseudo point-source model. (Figure caption: Comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are composites of the two horizontal components, smoothed with a Parzen window of 0.05 Hz band width.)
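As a rough illustration of the spectral recipe described above, the sketch below builds an omega-square subevent spectrum and multiplies it by placeholder path and site terms; the moment, corner frequency and attenuation constants are invented for the example, not taken from the Tohoku model.

import numpy as np

def omega_square_spectrum(f, m0, fc):
    # Moment-rate amplitude spectrum: flat at m0 below the corner
    # frequency fc, falling off as f**-2 above it.
    return m0 / (1.0 + (f / fc) ** 2)

f = np.linspace(0.2, 10.0, 500)      # frequency band (Hz)
m0, fc = 1.0e19, 0.2                 # hypothetical moment (N m) and corner (Hz)
source = omega_square_spectrum(f, m0, fc)

# Placeholder path and site terms; a real calculation would use an
# attenuation model and an empirical site amplification factor.
r, q, beta = 100e3, 300.0, 3500.0            # distance (m), Q, shear speed (m/s)
path_effect = np.exp(-np.pi * f * r / (q * beta))
site_amp = np.ones_like(f)
site_spectrum = source * path_effect * site_amp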
Finger muscle attachments for an OpenSim upper-extremity model.
Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L
2015-01-01
We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D.) of the measured moment arm (mean RMS error = 1.5 mm < measured S.D. = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements.
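A minimal sketch of the two agreement metrics quoted above, with invented moment arm samples standing in for the measured and modeled values.

import numpy as np

def rms_error(measured, modeled):
    return np.sqrt(np.mean((measured - modeled) ** 2))

def vaf(measured, modeled):
    # Variance accounted for: 100% means the model reproduces the data exactly.
    return (1.0 - np.var(measured - modeled) / np.var(measured)) * 100.0

# Hypothetical moment arms (mm) for one muscle across joint angles:
measured = np.array([8.1, 8.6, 9.0, 9.2, 9.1, 8.7])
modeled = np.array([8.3, 8.5, 8.9, 9.4, 9.0, 8.5])
print(rms_error(measured, modeled), vaf(measured, modeled))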
Stress field modelling from digital geological map data
NASA Astrophysics Data System (ADS)
Albert, Gáspár; Barancsuk, Ádám; Szentpéteri, Krisztián
2016-04-01
To create a model of the lithospheric stress, a functional geodatabase is required which contains spatial and geodynamic parameters. A digital structural-geological map is such a geodatabase, and it usually contains enough attributes to create a stress field model. Such a model is not accurate enough for engineering-geological purposes, because simplifications are always present in a map, but in many cases maps are the only sources for a tectonic analysis. The method presented here is designed for field geologists who want to see a possible realization of the stress field over the area on which they are working. This study presents an application which can produce a map of 3D stress vectors from a kml file. The core application logic is implemented on top of a spatially aware relational database management system. This allows rapid and geographically accurate analysis of the imported geological features, taking advantage of standardized spatial algorithms and indexing. After pre-processing the map features in a GIS according to the Type-Property-Orientation naming system, which was described in a previous study (Albert et al. 2014), the first stage of the algorithm generates an irregularly spaced point cloud by emitting a pattern of points within a user-defined buffer zone around each feature. For each point generated, a component-wise approximation of the tensor field at the point's position is computed, derived from the original feature's geodynamic properties. In a second stage, a weighted moving average method calculates the stress vectors in a regular grid. Results can be exported as geospatial data for further analysis or cartographic visualization. Computation of the tensor field's components is based on the Mohr diagram of a compressional model, which uses a Coulomb fracture criterion. Using the general assumption that the main principal stress must be greater than the stress from the overburden, the differential stress is calculated from the fracture criterion. The calculation includes the gravitational acceleration, the average density of rocks and an empirical fracture angle of 60 degrees measured from the normal of the fault plane. In this way, the stress tensors are calculated as absolute pressures (force per square meter) on both sides of the faults. If the stress from the overburden is greater than 1 bar (i.e. the faults are buried), confined compression would be present. Modelling this state of stress may result in a confusing pattern of vectors, because in a confined position the horizontal stress vectors may point towards structures primarily associated with extension. To overcome this, and to highlight the variability in the stress field, the model calculates the vectors directly from the differential stress (in practice subtracting the minimum principal stress from the critical stress). The result of the modelling is a vector map which theoretically represents the minimum tectonic pressure at the moment the rock body breaks from an initial state. This map, together with the original fault map, is suitable for determining those areas where unrevealed tectonic, sedimentary and lithological structures are possibly present (e.g. faults, sub-basins and intrusions). By modelling different deformational phases in the same area, changes in the stress vectors can be detected, which reveals not only the varying directions of the principal stresses but also the tectonically driven sedimentation patterns.
The decrease in the critical stress required for a possible reactivation of a fault in a subsequent deformation phase can be handled by down-ranking the structural elements concerned. Reference: Albert G., Ungvári Zs., Szentpéteri K. 2014: Modeling the present day stress field of the Pannonian Basin from neotectonic maps - In: Beqiraj A, Ionescu C, Christofides G, Uta A, Beqiraj Goga E, Marku S (eds.) Proceedings XX Congress of the Carpathian-Balkan Geological Association. Tirana: p. 2.
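A minimal sketch of the differential-stress step described above, assuming a dry Coulomb criterion tau = C0 + mu*sigma_n, a fracture plane at 60 degrees from the fault-plane normal, and illustrative rock parameters; the paper's GIS pipeline and tensor bookkeeping are not reproduced.

import numpy as np

rho, g, depth = 2700.0, 9.81, 500.0   # rock density (kg/m^3), g (m/s^2), depth (m)
c0, mu = 10e6, 0.6                    # cohesion (Pa), friction coefficient
theta = np.radians(60.0)              # fracture angle from the plane normal

sigma_3 = rho * g * depth             # overburden as the minimum principal stress
# Resolve the criterion on the fracture plane and solve for the critical
# maximum principal stress sigma_1, then form the differential stress.
sin2t, cos2t = np.sin(2 * theta), np.cos(2 * theta)
sigma_1 = (2 * c0 + sigma_3 * (mu * (1 - cos2t) + sin2t)) / (sin2t - mu * (1 + cos2t))
print((sigma_1 - sigma_3) / 1e6, "MPa")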
Quantitative comparison of the application accuracy between NDI and IGT tracking systems
NASA Astrophysics Data System (ADS)
Li, Qinghang; Zamorano, Lucia J.; Jiang, Charlie Z. W.; Gong, JianXing; Diaz, Fernando
1999-07-01
Application accuracy is a crucial factor for stereotactic surgical localization systems, in which the space digitization system is one of the most important pieces of equipment. In this study we compared the application accuracy of the OPTOTRAK space digitization system (OPTOTRAK 3020, Northern Digital, Waterloo, CAN) and the FlashPoint Model 3000 and 5000 3-D digitizer systems (Image Guided Surgery Technology Inc., Boulder, CO 80301, USA) for interactive localization of intracranial lesions. A phantom was mounted with the implantable frameless marker system (Fischer-Leibinger, Freiburg, Germany), with markers randomly distributed over the surface of the phantom. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points were used as the deviation from the 'true point'. The root mean square was calculated to summarize the deviation vectors. A paired t-test was used to analyze the results. The phantom results showed root mean square errors of 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3-D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 3-D digitizer system in the 1 mm sections of CT scan. These preliminary results showed no significant difference between the tracking systems; both can be used for image-guided surgery procedures.
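A minimal sketch of the error statistic used above: the Euclidean deviation of each digitized target from its reference point, summarized as mean +/- standard deviation; the coordinates are invented.

import numpy as np

# Each row is a target point (x, y, z) in mm.
reference = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0], [40.0, 10.0, 22.0]])
digitized = np.array([[10.4, 20.3, 29.6], [15.5, 24.6, 35.4], [39.7, 10.6, 21.5]])

errors = np.linalg.norm(digitized - reference, axis=1)  # deviation per target
print(errors.mean(), errors.std(ddof=1))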
NASA Astrophysics Data System (ADS)
Alexeev, A. Yu.; Krivosheeva, A. V.; Shaposhnikov, V. L.; Borisenko, V. E.
2017-09-01
A model for ab initio calculation of the phonon properties of three-component solid solutions of refractory-metal dichalcogenides was developed based on the assumption that displacements of the same type of chalcogen atoms and decoupled displacements of the metal atoms were identical. The calculated phonon frequencies at the Γ-point for monomolecular layers of MoS2-xSex and MoS2-xTex agreed with existing experimental Raman spectra.
Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order
Favalli, Andrea; Croft, Stephen; Santi, Peter
2015-06-15
Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time correlation analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates. We call these singlets, doublets, triplets, etc. Within the point reactor model, the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations - the so-called point model equations. Solving, or inverting, the point model equations using experimentally calibrated model parameters is how assay of unknown items is performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher order multiplets using the probability generating functions approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining whether higher order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
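A minimal SymPy sketch of the generating-function machinery referred to above: reduced factorial moments obtained by differentiating a probability generating function at z = 1. The neutron multiplicity distribution is an invented example, and the composition of PGFs (where the Faà di Bruno formula enters) is not shown.

import sympy as sp

z = sp.symbols('z')
# Hypothetical PGF of the number of neutrons emitted per fission (0..4):
pgf = 0.1 + 0.25*z + 0.3*z**2 + 0.25*z**3 + 0.1*z**4

def reduced_factorial_moment(g, k):
    # nu_k = (1/k!) d^k G/dz^k evaluated at z = 1
    return sp.diff(g, z, k).subs(z, 1) / sp.factorial(k)

singlet, doublet, triplet = (reduced_factorial_moment(pgf, k) for k in (1, 2, 3))
print(singlet, doublet, triplet)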
Tahmasbi, Amir; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function. PMID:25837101
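A minimal one-dimensional sketch of the idea described above, assuming Poisson pixel statistics: fit a spline to sampled image data, then use its derivative to form the Fisher information and hence the Cramér-Rao bound on the location estimate. The Gaussian stand-in for the measured point spread function and the photon count are illustrative.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(-3.0, 3.0, 61)
samples = np.exp(-x**2 / 2)              # stand-in for a measured 1D PSF
spline = CubicSpline(x, samples)

photons = 1000.0
norm = spline(x).sum()
mu = photons * spline(x) / norm          # expected counts per pixel
dmu = photons * spline(x, 1) / norm      # derivative w.r.t. source location

fisher = np.sum(dmu**2 / mu)             # Fisher information (Poisson data)
print(1.0 / np.sqrt(fisher))             # best possible localization accuracy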
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rout, G. C., E-mail: siva1987@iopb.res.in, E-mail: skp@iopb.res.in, E-mail: gcr@iopb.res.in; Sahu, Sivabrata; Panda, S. K.
2016-04-13
We report here a microscopic tight-binding model calculation for AB-stacked bilayer graphene in the presence of a biasing potential between the two layers and of impurity effects, to study the evolution of the total density of states with special emphasis on the opening of a band gap near the Dirac point. We have calculated the electron Green's functions for both the A and B sub-lattices by the Zubarev technique. The imaginary part of the Green's function gives the partial and total density of states of electrons. The density of states is computed numerically for 1000 × 1000 grid points of the electron momentum. The evolution of the opening of the band gap near the van Hove singularities as well as near the Dirac point is investigated by varying the different interlayer hoppings and the biasing potentials. The interlayer hopping splits the density of states at the van Hove singularities and produces a V-shaped gap near the Dirac point. Further, the biasing potential introduces a U-shaped gap near the Dirac point with a density minimum at the applied potential (i.e. at V/2).
NASA Astrophysics Data System (ADS)
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, where some trajectory is transformed to an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
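A minimal NEB iteration on a toy two-dimensional landscape, shown only to make the transverse-force-plus-spring construction concrete; the gradient function below is invented and has nothing to do with the IRI ionosphere.

import numpy as np

def grad(p):
    # gradient of an illustrative optical-path density at p = (x, z)
    return np.array([0.2 * p[0], 0.4 * (p[1] - 1.0)])

n_img, k, step = 20, 1.0, 0.05
path = np.linspace([0.0, 0.0], [10.0, 0.0], n_img)  # initial straight chain

for _ in range(500):
    for i in range(1, n_img - 1):
        tangent = path[i + 1] - path[i - 1]
        tangent /= np.linalg.norm(tangent)
        g = grad(path[i])
        g_perp = g - (g @ tangent) * tangent        # transverse component only
        spring = k * (np.linalg.norm(path[i + 1] - path[i])
                      - np.linalg.norm(path[i] - path[i - 1]))
        path[i] += step * (spring * tangent - g_perp)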
2016-02-01
[Report excerpt; only figure captions are recoverable] Figure 46: Return period analysis at Sewell's Point (across the mouth of the James River from both Langley AFB and Fort Eustis) with sea level rise projections. The fill tool takes a digital elevation model as input and calculates the water level necessary to fill each grid cell.
The Modulus of Rupture from a Mathematical Point of View
NASA Astrophysics Data System (ADS)
Quintela, P.; Sánchez, M. T.
2007-04-01
The goal of this work is to present a complete mathematical study of three-point bending experiments and the modulus of rupture of brittle materials. We present the mathematical model associated with three-point bending experiments and use the asymptotic expansion method to obtain a new formula for calculating the modulus of rupture. We compare the modulus of rupture of porcelain obtained with this formula with that obtained using the classic theoretical formula. Finally, we also present one- and three-dimensional numerical simulations to compute the modulus of rupture.
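For contrast with the corrected formula derived in the paper (not reproduced here), the classic three-point bending expression is sigma = 3FL/(2bd^2); a worked example with illustrative specimen values:

# failure load F (N), support span L (m), width b (m), depth d (m)
F, L, b, d = 850.0, 0.06, 0.01, 0.005
modulus_of_rupture = 3 * F * L / (2 * b * d ** 2)   # Pa
print(modulus_of_rupture / 1e6, "MPa")              # about 306 MPa here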
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magnucka-Blandzi, Ewa
The study is devoted to the stability of a simply supported beam under axial compression. The beam is subjected to an axial load located at any point along the axis of the beam. The buckling problem has been described and solved mathematically, and critical loads have been calculated. In the particular case, the Euler buckling load is obtained. Explicit solutions are given. The values of the critical loads are collected in tables and shown in a figure. The relation between the point of load application and the critical load is presented.
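A quick check of the particular case mentioned above: for a pinned-pinned column loaded at its end, the critical load reduces to the Euler value P = pi^2 EI / L^2. Material and section values are illustrative.

import math

E = 210e9      # Young's modulus (Pa)
I = 8.0e-9     # second moment of area (m^4)
L = 2.0        # beam length (m)
print(math.pi ** 2 * E * I / L ** 2 / 1e3, "kN")   # about 4.1 kN here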
Non-Fermi Liquid Behavior in the Single-Impurity Mixed Valence Problem
NASA Astrophysics Data System (ADS)
Zhang, Guang-Ming; Su, Zhao-Bin; Yu, Lu
An effective Hamiltonian of the Anderson single-impurity model with finite-range Coulomb interactions is derived near a particular limit, which is analogous to the Toulouse limit of the ordinary Kondo problem, and the physical properties around the mixed valence quantum critical point are calculated. At this quantum critical point, the local moment is only partially quenched and X-ray edge singularities are exhibited. Around this point, a new type of non-Fermi liquid behavior is predicted with an extra specific heat C_imp ~ T^{1/4} + A T ln T and spin susceptibility χ_imp ~ T^{-3/4} + B ln T.
van der Waals model for the surface tension of liquid 4He near the λ point
NASA Astrophysics Data System (ADS)
Tavan, Paul; Widom, B.
1983-01-01
We develop a phenomenological model of the 4He liquid-vapor interface. With it we calculate the surface tension of liquid helium near the λ point and compare with the experimental measurements by Magerlein and Sanders. The model is a form of the van der Waals surface-tension theory, extended to apply to a phase equilibrium in which the simultaneous variation of two order parameters (here the superfluid order parameter and the total density) is essential. The properties of the model are derived analytically above the λ point and numerically below it. Just below the λ point the superfluid order parameter is found to approach its bulk-superfluid-phase value very slowly with distance on the liquid side of the interface (the characteristic distance being the superfluid coherence length), and to vanish rapidly with distance on the vapor side, while the total density approaches its bulk-phase values rapidly and nearly symmetrically on the two sides. Below the λ point the surface tension has a |ε|^{3/2} singularity (ε ~ T − T_λ) arising from the temperature dependence of the spatially varying superfluid order parameter. This is the mean-field form of the more general |ε|^μ singularity predicted by Sobyanin and by Hohenberg, in which μ (which is in reality close to 1.35 at the λ point of helium) is the exponent with which the interfacial tension between two critical phases vanishes. Above the λ point the surface tension in this model is analytic in ε. A singular term |ε|^μ may in reality be present in the surface tension above as well as below the λ point, although there should still be a pronounced asymmetry. The variation with temperature of the model surface tension is overall much like that in experiment.
Optimum design point for a closed-cycle OTEC system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikegami, Yasuyuki; Uehara, Haruo
1994-12-31
Performance analysis is carried out for the optimum design point of a closed-cycle Ocean Thermal Energy Conversion (OTEC) system. Calculations are made for an OTEC model plant with a gross power of 100 MW, designed by the optimization method proposed by Uehara and Ikegami for design conditions of 21 °C to 29 °C warm sea water temperature and 4 °C cold sea water temperature. Ammonia is used as the working fluid. Plate-type evaporators and condensers are used as heat exchangers. The length of the cold sea water pipe is 1,000 m. This model plant is a floating-type OTEC plant. The objective function for the optimum design point is defined as the total heat transfer area of the heat exchangers divided by the annual net power.
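A minimal sketch of the objective function named above, with placeholder numbers (not the 100 MW plant's design data):

area_evaporator = 1.2e5    # m^2
area_condenser = 1.4e5     # m^2
gross_power = 100e6        # W
pump_power = 28e6          # W, sea water and working-fluid pumps (assumed)
hours_per_year = 8000.0

annual_net_energy = (gross_power - pump_power) * hours_per_year / 1e3  # kWh
objective = (area_evaporator + area_condenser) / annual_net_energy     # m^2 per kWh
print(objective)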
Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potentials of gravity and magnetic fields and their spatial derivatives on a spherical Earth were calculated for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
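A minimal flat-earth sketch of the quadrature idea: the body is replaced by point sources at Gauss-Legendre nodes and the vertical gravity integral is summed with the corresponding weights. The spherical-earth geometry of the paper is not reproduced, and all dimensions are illustrative.

import numpy as np

G = 6.674e-11                                  # gravitational constant (SI)
nodes, weights = np.polynomial.legendre.leggauss(8)

def gz(obs, bounds, rho):
    (x0, x1), (y0, y1), (z0, z1) = bounds
    total = 0.0
    for xi, wi in zip(nodes, weights):
        x = 0.5 * ((x1 - x0) * xi + x1 + x0)
        for yj, wj in zip(nodes, weights):
            y = 0.5 * ((y1 - y0) * yj + y1 + y0)
            for zk, wk in zip(nodes, weights):
                z = 0.5 * ((z1 - z0) * zk + z1 + z0)
                r = np.array([x, y, z]) - obs
                total += wi * wj * wk * r[2] / np.linalg.norm(r) ** 3
    jacobian = 0.125 * (x1 - x0) * (y1 - y0) * (z1 - z0)
    return G * rho * jacobian * total          # m/s^2, positive downward

print(gz(np.array([0.0, 0.0, -1000.0]),
         [(-500.0, 500.0), (-500.0, 500.0), (0.0, 400.0)], 300.0))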
Photon migration in non-scattering tissue and the effects on image reconstruction
NASA Astrophysics Data System (ADS)
Dehghani, H.; Delpy, D. T.; Arridge, S. R.
1999-12-01
Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.
Modeling of the metallic port in breast tissue expanders for photon radiotherapy.
Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui
2018-03-30
The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations and ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models by using a simplified anthropomorphic phantom and thermoluminescent dosimeter (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to 7.5 g/cm^3, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found the TPS-calculated point doses based on the new model were in agreement with TLD measurements within 5.0% and were more accurate than doses calculated based on the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on the chest wall, while the port introduced a significant dose shadow in the skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after the treatment. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
DTM Generation with Uav Based Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Polat, N.; Uysal, M.
2017-11-01
Nowadays Unmanned Aerial Vehicles (UAVs) are widely used in many applications for different purposes. Their benefits, however, are not fully realized without the integration of other equipment such as digital cameras, GPS, or laser scanners. The main scope of this paper is to evaluate the performance of a camera-equipped UAV for geomatic applications by means of Digital Terrain Model (DTM) generation in a small area. For this purpose, 7 ground control points were surveyed with RTK and 420 photographs were captured. Over 30 million georeferenced points were used in the DTM generation process. The accuracy of the DTM was evaluated with 5 check points; the root mean square error is 17.1 cm for a flight altitude of 100 m. In addition, a LiDAR-derived DTM was used as a reference in order to calculate the correlation: the UAV-based DTM has a 94.5% correlation with the reference DTM. The outcomes of the study show that it is possible to use UAV photogrammetry data for map production, surveying, and other engineering applications, with the advantages of low cost, time savings, and minimal field work.
Generic effective source for scalar self-force calculations
NASA Astrophysics Data System (ADS)
Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter
2012-05-01
A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.
Transverse liquid fuel jet breakup, burning, and ignition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hsi-shang
1990-01-01
An analytical/numerical study of the breakup, burning, and ignition of liquid fuels injected transversely into a hot air stream is conducted. The non-reacting liquid jet breakup location is determined by the local sonic point criterion first proposed by Schetz, et al. (1980). Two models, one employing analysis of an elliptical jet cross-section and the other employing a two-dimensional blunt body to represent the transverse jet, have been used for sonic point calculations. An auxiliary criterion based on surface tension stability is used as a separate means of determining the breakup location. For the reacting liquid jet problem, a diffusion flame supported by a one-step chemical reaction within the gaseous boundary layer is solved along the ellipse surface in subsonic crossflow. Typical flame structures and concentration profiles have been calculated for various locations along the jet cross-section as a function of upstream Mach numbers. The integrated reaction rate along the jet cross-section is used to predict ignition position, which is found to be situated near the stagnation point. While a multi-step reaction is needed to represent the ignition process more accurately, the present calculation does yield reasonable predictions concerning ignition along a curved surface.
Transverse liquid fuel jet breakup, burning, and ignition. M.S. Thesis
NASA Technical Reports Server (NTRS)
Li, Hsi-Shang
1990-01-01
An analytical study of the breakup, burning, and ignition of liquid fuels injected transversely into a hot air stream is conducted. The non-reacting liquid jet breakup location is determined by the local sonic point criterion. Two models, one employing analysis of an elliptical jet cross-section and the other employing a two-dimensional blunt body to represent the transverse jet, were used for sonic point calculations. An auxiliary criterion based on surface tension stability is used as a separate means of determining the breakup location. For the reacting liquid jet problem, a diffusion flame supported by a one-step chemical reaction within the gaseous boundary layer is solved along the ellipse surface in subsonic cross flow. Typical flame structures and concentration profiles were calculated for various locations along the jet cross-section as a function of upstream Mach numbers. The integrated reaction rate along the jet cross-section is used to predict ignition position, which is found to be situated near the stagnation point. While a multi-step reaction is needed to represent the ignition process more accurately, the present calculation does yield reasonable predictions concerning ignition along a curved surface.
Simulating sunflower canopy temperatures to infer root-zone soil water potential
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Idso, S. B.
1983-01-01
A soil-plant-atmosphere model for sunflower (Helianthus annuus L.), together with clear sky weather data for several days, is used to study the relationship between canopy temperature and root-zone soil water potential. Considering the empirical dependence of stomatal resistance on insolation, air temperature and leaf water potential, a continuity equation for water flux in the soil-plant-atmosphere system is solved for the leaf water potential. The transpirational flux is calculated using Monteith's combination equation, while the canopy temperature is calculated from the energy balance equation. The simulation shows that, at high soil water potentials, canopy temperature is determined primarily by air and dew point temperatures. These results agree with an empirically derived linear regression equation relating canopy-air temperature differential to air vapor pressure deficit. The model predictions of leaf water potential are also in agreement with observations, indicating that measurements of canopy temperature together with a knowledge of air and dew point temperatures can provide a reliable estimate of the root-zone soil water potential.
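A minimal sketch of the two equations named above, Monteith's combination equation and the canopy energy balance, with illustrative midday values; the full model's stomatal-resistance and soil-water coupling is omitted.

rn = 500.0              # net radiation less soil heat flux (W/m^2)
vpd = 1500.0            # air vapor pressure deficit (Pa)
ra, rs = 30.0, 80.0     # aerodynamic and stomatal resistances (s/m)
rho_cp = 1.2 * 1010.0   # air density times specific heat (J m^-3 K^-1)
delta = 145.0           # slope of saturation vapor pressure curve (Pa/K)
gamma = 66.0            # psychrometric constant (Pa/K)

# Combination equation for latent heat flux, then the energy balance for the
# canopy-air temperature difference.
latent = (delta * rn + rho_cp * vpd / ra) / (delta + gamma * (1 + rs / ra))
print(ra * (rn - latent) / rho_cp)   # canopy minus air temperature (K)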
Li, Yuanzheng; Xu, Haiyang; Liu, Weizhen; Yang, Guochun; Shi, Jia; Liu, Zheng; Liu, Xinfeng; Wang, Zhongqiang; Tang, Qingxin; Liu, Yichun
2017-05-01
It is very important to obtain a deeper understanding of the carrier dynamics of indirect-bandgap multilayer MoS2 and to make further improvements to its luminescence efficiency. Herein, an anomalous luminescence behavior of multilayer MoS2 is reported: its exciton emission is significantly enhanced at high temperatures. Temperature-dependent Raman studies and electronic structure calculations reveal that this experimental observation cannot be fully explained by the common mechanism of thermal-expansion-induced interlayer decoupling. Instead, a new model involving the intervalley transfer of thermally activated carriers from the Λ/Γ point to the K point is proposed to explain the high-temperature luminescence enhancement of multilayer MoS2. Steady-state and transient-state fluorescence measurements show that both the lifetime and the intensity of the exciton emission increase with increasing temperature. These two experimental observations, together with a calculation of the carrier population, provide strong support for the proposed model. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling of roll/pitch determination with horizon sensors - Oblate Earth
NASA Astrophysics Data System (ADS)
Hablani, Hari B.
Model calculations are presented of roll/pitch determination with horizon sensors for an oblate Earth. Two arrangements of a pair of horizon sensors are considered: left and right of the velocity vector (i.e., along the pitch axis), and aft and forward (along the roll axis). Two approaches are used to obtain the roll/pitch oblateness corrections: (1) the crossing point approach, where the two crossings of the horizon sensor's scan and the earth's horizon are determined, and (2) decomposing the angular deviation of the geocentric normal from the geodetic normal into roll and pitch components. It is shown that the two approaches yield essentially the same corrections if two sensors are used simultaneously. However, if the spacecraft is outfitted with only one sensor, the oblateness correction about one axis is far different from that predicted by the geocentric/geodetic angular deviation approach. In this case, the corrections may be calculated on the ground for the sensor location under consideration and stored in the flight computer, using the crossing point approach.
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2009-10-01
The purpose of this paper is twofold: (i) to advance and extend the statistical two-point models of pair dispersion and particle clustering in isotropic turbulence that were previously proposed by Zaichik and Alipchenkov (2003 Phys. Fluids 15, 1776-87; 2007 Phys. Fluids 19, 113308) and (ii) to present some applications of these models. The models developed are based on a kinetic equation for the two-point probability density function of the relative velocity distribution of two particles. These models predict the pair relative velocity statistics and the preferential accumulation of heavy particles in stationary and decaying homogeneous isotropic turbulent flows. Moreover, the models are applied to predict the effect of particle clustering on turbulent collisions, sedimentation and intensity of microwave radiation as well as to calculate the mean filtered subgrid stress of the particulate phase. Model predictions are compared with direct numerical simulations and experimental measurements.
Long-range Coulomb forces and localized bonds.
Preiser; Lösel; Brown; Kunz; Skowron
1999-10-01
The ionic model is shown to be applicable to all compounds in which the atoms carry a net charge and their electron density is spherically symmetric regardless of the covalent character of the bonding. By examining the electric field generated by an array of point charges placed at the positions of the ions in over 40 inorganic compounds, we show that the Coulomb field naturally partitions itself into localized regions (bonds) which are characterized by the electric flux that links neighbouring ions of opposite charge. This flux is identified with the bond valence, and Gauss' law with the valence-sum rule, providing a secure theoretical foundation for the bond-valence model. The localization of the Coulomb field provides an unambiguous definition of coordination number and our calculations show that, in addition to the expected primary coordination sphere, there are a number of weak bonds between cations and the anions in the second coordination sphere. Long-range Coulomb interactions are transmitted through the crystal by the application of Gauss' law at each of the intermediate atoms. Bond fluxes have also been calculated for compounds containing ions with non-spherical electron densities (e.g. cations with stereoactive lone electron pairs). In these cases the point-charge model continues to describe the distant field, but multipoles must be added to the point charges to give the correct local field.
Stability and bifurcation in a model for the dynamics of stem-like cells in leukemia under treatment
NASA Astrophysics Data System (ADS)
Rǎdulescu, I. R.; Cândea, D.; Halanay, A.
2012-11-01
A mathematical model for the dynamics of leukemic cells during treatment is introduced. Delay differential equations are used to model cells' evolution and are based on the Mackey-Glass approach, incorporating Goldie-Coldman law. Since resistance is propagated by cells that have the capacity of self-renewal, a population of stem-like cells is studied. Equilibrium points are calculated and their stability properties are investigated.
COMSOL in the Academic Environment at USNA
2009-10-01
[Report excerpt; only fragments are recoverable] One figure shows the calculated electric field and another the electron density at one point in time. Section 3.3 concerns acoustic detection of landmines. Packages such as Maya, Zbrush, Mudbox and others excel at this type of modeling, as do tools like Sketch-Up, Maya or AutoCAD; an extensive library of pre-built models would include all of the Platonic solids and combinations of them.
1992-12-01
[Report excerpt; only fragments are recoverable] In this study, the proposed model consists of a thick-walled, highly deformable elastic tube in which the blood flow is described by linearized equations. A mechanical model consisting of linearized Navier-Stokes and finite elasticity equations is presented to predict blood pooling under acceleration stress, along with a linear multielement model of the cardiovascular system which can calculate blood pressures and flows at any point in the cardiovascular system.
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large scale transports of heat are sensitive to the (uncertain) subgrid scale parameterizations. This raises the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
Gençaslan, Mustafa; Keskin, Mustafa
2012-02-14
We combine the modified Tompa model with the van der Waals equation to study critical lines for unequal sizes of molecules in a binary gas-liquid mixture around the van Laar point. The term van Laar point was coined by Meijer, and it is the only point at which the mathematical double point curve is stable; it is the intersection of the tricritical point and the double critical end point. We calculate the critical lines as a function of χ(1) and χ(2), the densities of type I and type II molecules, for various values of the system parameters; hence the global phase diagrams are presented and discussed in the density-density plane. We also investigate the connectivity of critical lines at the van Laar point and its vicinity and discuss these connections according to the Scott and van Konynenburg classification. It is also found that the critical lines and phase behavior are extremely sensitive to small modifications in the system parameters. © 2012 American Institute of Physics.
An accelerated hologram calculation using the wavefront recording plane method and wavelet transform
NASA Astrophysics Data System (ADS)
Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-06-01
Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a long distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
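A minimal sketch of the two WRP steps (not of WASABI): spherical waves from object points are accumulated on a virtual plane near the object, and that plane is then propagated to the hologram with an FFT-based angular spectrum method. Wavelength, pitch and the point list are illustrative.

import numpy as np

wl = 532e-9                      # wavelength (m)
k = 2 * np.pi / wl
n, pitch = 512, 8e-6             # grid size and pixel pitch (m)

xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Step 1: superpose point-source waves on the WRP (distances are small).
wrp = np.zeros((n, n), dtype=complex)
for px, py, pz in [(0.0, 0.0, 2e-3), (1e-4, -5e-5, 2.5e-3)]:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    wrp += np.exp(1j * k * r) / r

# Step 2: diffract the WRP to the hologram plane at distance d.
d = 0.1
fx = np.fft.fftfreq(n, pitch)
FX, FY = np.meshgrid(fx, fx)
kz = k * np.sqrt(np.maximum(0.0, 1 - (wl * FX) ** 2 - (wl * FY) ** 2))
hologram = np.fft.ifft2(np.fft.fft2(wrp) * np.exp(1j * kz * d))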
Dense mesh sampling for video-based facial animation
NASA Astrophysics Data System (ADS)
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
The paper describes an approach for the selection of feature points on a three-dimensional triangle mesh obtained using various techniques from several video footages. This approach has a dual purpose. First, it minimizes the data stored for the purpose of facial animation, so that instead of storing the position of each vertex in each frame, one stores only a small subset of vertices for each frame and calculates the positions of the others from the subset. The second purpose is to select feature points that can be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond that which can be achieved using marker-based performance capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured light scanner, and models constructed from video footage using stereophotogrammetry.
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called interpolation. Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical Kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, Kriging is the optimal interpolation method in statistical terms. The Kriging algorithm produces an unbiased prediction, as well as the spatial distribution of uncertainty, allowing one to estimate the error of the interpolation at any particular point. Kriging is nevertheless not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard Kriging techniques. In this paper, improvements are introduced to a directional Kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the Kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory, which makes the technique feasible on almost any computer processor. Comparison between Kriging and other standard interpolation methods demonstrated more accurate estimations in less dense data files.
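A minimal ordinary-kriging sketch for a handful of neighbors, assuming an exponential variogram; a production grid implementation would exploit the regular spacing exactly as the text describes.

import numpy as np

def variogram(h, sill=1.0, rng=3.0):
    return sill * (1.0 - np.exp(-h / rng))

known_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
known_z = np.array([10.0, 12.0, 11.0, 13.0])
target = np.array([0.4, 0.3])

n = len(known_xy)
# Ordinary-kriging system: variogram matrix bordered by the unbiasedness
# (Lagrange multiplier) row and column of ones.
A = np.ones((n + 1, n + 1))
A[:n, :n] = variogram(np.linalg.norm(
    known_xy[:, None, :] - known_xy[None, :, :], axis=2))
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = variogram(np.linalg.norm(known_xy - target, axis=1))

w = np.linalg.solve(A, b)
estimate = w[:n] @ known_z
kriging_variance = w @ b     # spatial uncertainty at the target point
print(estimate, kriging_variance)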
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2017-07-01
We study the effect of hindered aggregation on the island formation process in a one- (1D) and two-dimensional (2D) point-island model for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by a length la. For la = 0 the islands behave as perfect sinks, while for la → ∞ they behave as reflecting boundaries. For intermediate values of la, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and la. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model, which is based on an approximate description of nucleation inside the gaps for 1D and the CZs for 2D. In both cases, the nucleation is described by using two different physically rooted probabilities, which are related to the microscopic parameters of the model (i and la). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes the statistical behavior of the system excellently for arbitrary values of la and i = 1, 2, and 3.
Numerical modeling on carbon fiber composite material in Gaussian beam laser based on ANSYS
NASA Astrophysics Data System (ADS)
Luo, Ji-jun; Hou, Su-xia; Xu, Jun; Yang, Wei-jun; Zhao, Yun-fang
2014-02-01
Based on heat transfer theory and the finite element method, a macroscopic ablation model of a surface irradiated by a Gaussian laser beam is built, and the temperature field and the development of thermal ablation are calculated and analyzed using the finite element software ANSYS. The calculation results show that the ablation behavior of the material differs under different irradiation conditions. The laser-irradiated surface is a curved surface rather than a flat one, with its lowest point at the location of highest power density. The research shows that the higher the laser power density absorbed by the material surface, the faster the irradiated surface regresses.
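A minimal sketch of the Gaussian irradiation profile that drives such a thermal model, which is why the crater's lowest point coincides with the highest flux; power and beam radius are illustrative.

import numpy as np

P, w = 1000.0, 2e-3    # beam power (W) and 1/e^2 radius (m)

def intensity(r):
    # Gaussian beam power density, peaking on the axis at 2P/(pi w^2).
    return 2 * P / (np.pi * w ** 2) * np.exp(-2 * r ** 2 / w ** 2)

print(intensity(np.linspace(0, 3 * w, 7)))   # W/m^2, maximum at r = 0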
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents a simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which involves the coupling of two models, an analytical EM model and an analytical UT model, has been developed to build EMAT models and analyse the beam directivity of Rayleigh waves. For a specific sensor configuration, Lorentz forces are calculated using the analytical EM method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driving point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line coil on the beam directivity of the Rayleigh waves is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei
2014-10-01
Full wave 3-D modeling of RF fields in a hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such a plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies numerical solution of the full wave 3-D problem. Preliminary results of a feasibility analysis of numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches to the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.
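A minimal RK4 sketch of the orbit-integration ingredient mentioned above; the nonuniform field model and the electron's initial conditions are illustrative, and the Vlasov integration along the orbit is not shown.

import numpy as np

q_m = -1.758820e11                  # electron charge-to-mass ratio (C/kg)

def b_field(x):
    return np.array([0.0, 0.0, 1.0 + 0.1 * x[0]])   # toy nonuniform B (T)

def deriv(s):
    x, v = s[:3], s[3:]
    return np.concatenate([v, q_m * np.cross(v, b_field(x))])

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2)
    k4 = deriv(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

state = np.array([0.0, 0.0, 0.0, 1e6, 0.0, 1e5])   # position (m), velocity (m/s)
dt = 1e-13                                         # resolves the gyromotion
for _ in range(1000):
    state = rk4_step(state, dt)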
Research on axial thrust of the waterjet pump based on CFD under cavitation conditions
NASA Astrophysics Data System (ADS)
Shen, Z. H.; Pan, Z. Y.
2015-01-01
Based on the RANS equations, the performance of a contra-rotating axial-flow waterjet pump under non-cavitating conditions was obtained in combination with the shear stress transport turbulence model. Its hydrodynamic performance under cavitation was then calculated and analysed with a mixture homogeneous-flow cavitation model based on the Rayleigh-Plesset equations. The results show that cavitation causes the axial thrust of the waterjet pump to drop. Furthermore, the axial thrust and head cavitation characteristic curves are similar; however, the drop point of the axial thrust is postponed by 5.1% compared with that of the head, and the critical point of the axial thrust is postponed by 2.6%.
NASA Technical Reports Server (NTRS)
Li, Q.; Zamorano, L.; Jiang, Z.; Gong, J. X.; Pandya, A.; Perez, R.; Diaz, F.
1999-01-01
Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0. 54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures. Copyright 1999 Wiley-Liss, Inc.
Spacing distribution functions for 1D point island model with irreversible attachment
NASA Astrophysics Data System (ADS)
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D; O’Connell, D; Lamb, J
Purpose: To demonstrate real-time dose calculation of free-breathing MRI-guided Co-60 treatments, using a motion model and Monte-Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte-Carlo dose calculation was performed every 0.25s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte-Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold- and hot-spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte-Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D-cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and assessing the effectiveness of gated treatments.
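A minimal sketch of the warp-and-accumulate step, assuming the motion model is reduced to per-segment displacement fields defined on the reference grid; the helper name `accumulate_dose`, the trilinear interpolation, and the toy arrays are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(segment_doses, displacement_fields):
    """Warp each 0.25 s segment dose back to the reference image and sum.

    segment_doses: list of 3D arrays, dose computed on the instantaneous anatomy.
    displacement_fields: list of (3, nx, ny, nz) arrays mapping reference voxels
    to their instantaneous positions (a stand-in for the motion model output).
    """
    total = np.zeros_like(segment_doses[0])
    grid = np.indices(total.shape).astype(float)
    for dose, dvf in zip(segment_doses, displacement_fields):
        coords = grid + dvf  # where each reference voxel sits at this time point
        total += map_coordinates(dose, coords, order=1, mode='nearest')
    return total

# Toy example: two segments on a small grid with a 1-voxel shift in x.
dose = np.random.rand(16, 16, 16)
shift = np.zeros((3, 16, 16, 16))
shift[0] = 1.0
accumulated = accumulate_dose([dose, dose], [np.zeros_like(shift), shift])
```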
KNGEOID14: A national hybrid geoid model in Korea
NASA Astrophysics Data System (ADS)
Kang, S.; Sung, Y. M.; KIM, H.; Kim, Y. S.
2016-12-01
This study briefly describes the construction of a national hybrid geoid model in Korea, KNGEOID14, which can be used as an accurate vertical datum in and around Korea. The hybrid geoid model is determined by fitting the gravimetric geoid to the geometric geoid undulations from GNSS/Leveling data, which represent the local vertical datum. For the gravimetric geoid model, we determined all frequency parts (long-, middle- and short-frequency) of the gravimetric geoid using all available data with an optimal remove-restore technique based on the EGM2008 reference surface. In the remove-restore technique, the EGM2008 model to degree 360 and the RTM reduction method were used to calculate the long- and short-frequency parts of the gravimetric geoid, respectively, while the middle-frequency part was modeled from gravity data. The gravity data compiled for modeling the middle-frequency part, the residual geoid, contain 8,866 gravity points on land and ocean areas. DEM data gridded at 100m×100m were used for the short-frequency part, the topographic effect on the geoid generated by the RTM method. The accuracy of the gravimetric geoid model, evaluated by comparison with GNSS/Leveling data, was about -0.362m ± 0.055m. Finally, we developed the national hybrid geoid model in Korea, KNGEOID14, by correcting the gravimetric geoid with a correction term fitted to about 1,200 GNSS/Leveling points on Korean bench marks. The correction term is modeled using the difference between GNSS/Leveling-derived geoidal heights and gravimetric geoidal heights. The stochastic model used in the calculation of the correction term is the LSC technique based on a second-order Markov covariance function. The post-fit error (mean and std. dev.) of the KNGEOID14 model was evaluated as 0.001m ± 0.033m. Based on the results of this study, the accurate orthometric height at any point in Korea can be easily and precisely calculated by combining the geoidal height from KNGEOID14 with the ellipsoidal height from GPS observations.
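A minimal sketch of the LSC correction-term step, assuming a second-order Markov covariance C(d) = C0(1 + d/L)exp(-d/L) and planar distances; the residuals and hyperparameters below are invented for illustration (the paper's fitted covariance parameters are not given in the abstract).

```python
import numpy as np

def markov2_cov(d, c0, L):
    """Second-order Markov covariance model, C(d) = C0 (1 + d/L) exp(-d/L)."""
    return c0 * (1.0 + d / L) * np.exp(-d / L)

def lsc_predict(xy_obs, l_obs, xy_new, c0, L, noise_var):
    """Least-squares collocation of GNSS/Leveling-minus-gravimetric residuals."""
    d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=-1)
    d_no = np.linalg.norm(xy_new[:, None] - xy_obs[None, :], axis=-1)
    C_oo = markov2_cov(d_oo, c0, L) + noise_var * np.eye(len(xy_obs))
    C_no = markov2_cov(d_no, c0, L)
    return C_no @ np.linalg.solve(C_oo, l_obs)

# Toy residuals (m) at benchmark positions (km); parameters are illustrative.
xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
res = np.array([0.02, 0.03, 0.01, 0.025])
grid = np.array([[5, 5]], float)
print(lsc_predict(xy, res, grid, c0=4e-4, L=20.0, noise_var=1e-5))
```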
Study and comparison of different sensitivity models for a two-plane Compton camera.
Muñoz, Enrique; Barrio, John; Bernabéu, José; Etxebeste, Ane; Lacasta, Carlos; Llosá, Gabriela; Ros, Ana; Roser, Jorge; Oliver, Josep F
2018-06-25
Given the strong variations in the sensitivity of Compton cameras for the detection of events originating from different points in the field of view (FoV), sensitivity correction is often necessary in Compton image reconstruction. Several approaches for the calculation of the sensitivity matrix have been proposed in the literature. While most of these models are easily implemented and can be useful in many cases, they usually assume high angular coverage of the scattered photon, which is not the case for our prototype. In this work, we have derived an analytical model that allows us to calculate a detailed sensitivity matrix, which has been compared to other sensitivity models in the literature. Specifically, the proposed model describes the probability of measuring a useful event in a two-plane Compton camera, including the most relevant physical processes involved. The model has been used to obtain an expression for the system and sensitivity matrices for iterative image reconstruction. These matrices have been validated taking Monte Carlo simulations as a reference. In order to study the impact of the sensitivity, images reconstructed with our sensitivity model and with other models have been compared. Images have been reconstructed from several simulated sources, including point-like sources and extended distributions of activity, and also from experimental data measured with 22Na sources. Results show that our sensitivity model is the best suited for our prototype. Although other models in the literature perform successfully in many scenarios, they are not applicable in all the geometrical configurations of interest for our system. In general, our model allows us to effectively recover the intensity of point-like sources at different positions in the FoV and to reconstruct regions of homogeneous activity with minimal variance. Moreover, it can be employed for all Compton camera configurations, including those with low angular coverage of the scatterer.
Diamond, Kevin R; Farrell, Thomas J; Patterson, Michael S
2003-12-21
Steady-state diffusion theory models of fluorescence in tissue have been investigated for recovering fluorophore concentrations and fluorescence quantum yield. Spatially resolved fluorescence, excitation and emission reflectance were calculated using Monte Carlo simulations, and measured using a multi-fibre probe on tissue-simulating phantoms containing either aluminium phthalocyanine tetrasulfonate (AlPcS4), Photofrin, or meso-tetra-(4-sulfonatophenyl)-porphine dihydrochloride (TPPS4). The accuracy of the fluorophore concentration and fluorescence quantum yield recovered by three different models of spatially resolved fluorescence was compared. The models were based on: (a) a weighted difference of the excitation and emission reflectance, (b) fluorescence due to a point excitation source or (c) fluorescence due to a pencil beam excitation source. When literature values for the fluorescence quantum yield were used for each of the fluorophores, the fluorophore absorption coefficient (and hence concentration) at the excitation wavelength (mu(a,x,f)) was recovered with a root-mean-square accuracy of 11.4% using the point source model of fluorescence and 8.0% using the more complicated pencil beam excitation model. The accuracy was calculated over a broad range of optical properties and fluorophore concentrations. The weighted difference of reflectance model performed poorly, with a root-mean-square error in concentration of about 50%. Monte Carlo simulations suggest that there are some situations where the weighted difference of reflectance is as accurate as the other two models, although this was not confirmed experimentally. Estimates of the fluorescence quantum yield in multiple scattering media were also made by determining mu(a,x,f) independently from the fitted absorption spectrum and applying the various diffusion theory models. The fluorescence quantum yields for AlPcS4 and TPPS4 were calculated to be 0.59 +/- 0.03 and 0.121 +/- 0.001 respectively using the point source model, and 0.63 +/- 0.03 and 0.129 +/- 0.002 using the pencil beam excitation model. These results are consistent with published values.
How much hydrogen is there in a white dwarf?
NASA Technical Reports Server (NTRS)
Macdonald, James; Vennes, Stephane
1991-01-01
Stratified hydrogen/helium envelope models in diffusive equilibrium are calculated for a 0.6-solar-mass white dwarf for effective temperatures between 10,000 and 80,000 K in order to investigate the observational constraints placed on the total hydrogen mass. Convective mixing is included ab initio in the calculations, and synthetic spectra are used for comparing these models with observations. It is shown that evolutionary changes in the surface composition of white dwarfs cannot be explained by a model in which a small amount of hydrogen floats to the surface after initially being mixed into the outer parts of a helium envelope. It is pointed out that the shape of the hydrogen lines can be used to constrain theories of convective overshoot.
Concentrator optical characterization using computer mathematical modelling and point source testing
NASA Technical Reports Server (NTRS)
Dennison, E. W.; John, S. L.; Trentelman, G. F.
1984-01-01
The optical characteristics of a paraboloidal solar concentrator are analyzed using the intercept factor curve (a format for image data) to describe the results of a mathematical model and to represent reduced data from experimental testing. This procedure makes it possible not only to test an assembled concentrator, but also to evaluate single optical panels or to conduct non-solar tests of an assembled concentrator. The use of three-dimensional ray tracing computer programs to calculate the mathematical model is described. These ray tracing programs can include any type of optical configuration from simple paraboloids to arrays of spherical facets, and can be adapted to microcomputers or larger computers, which can graphically display real-time comparison of calculated and measured data.
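A minimal ray-tracing sketch in the spirit of the intercept-factor analysis: axial rays reflect off a paraboloid whose normals carry Gaussian slope errors, and the intercept factor is the fraction of rays landing within a given receiver radius on the focal plane. All numbers are illustrative, not the concentrator studied in the report.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 3.0          # focal length (m), illustrative
R = 2.5          # concentrator rim radius (m)
sigma = 2e-3     # surface slope error (rad), illustrative

n_rays = 100_000
r = R * np.sqrt(rng.random(n_rays))        # uniform sampling over aperture area
phi = 2 * np.pi * rng.random(n_rays)
x, y = r * np.cos(phi), r * np.sin(phi)
z = r**2 / (4 * f)                          # paraboloid surface point

# Surface normal of z - (x^2 + y^2)/(4f) = 0, perturbed by slope errors.
n = np.stack([-x / (2*f), -y / (2*f), np.ones(n_rays)], axis=1)
n += sigma * rng.standard_normal(n.shape)
n /= np.linalg.norm(n, axis=1, keepdims=True)

d = np.tile([0.0, 0.0, -1.0], (n_rays, 1))              # incoming axial rays
d_ref = d - 2 * np.sum(d * n, axis=1, keepdims=True) * n  # specular reflection

# Propagate to the focal plane z = f and bin the landing radii.
t = (f - z) / d_ref[:, 2]
rho = np.hypot(x + t * d_ref[:, 0], y + t * d_ref[:, 1])
for r_rec in (0.01, 0.02, 0.05):
    print(f"intercept factor (r <= {r_rec} m): {np.mean(rho <= r_rec):.3f}")
```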
Comment on ``Spectroscopy of samarium isotopes in the sdg interacting boson model''
NASA Astrophysics Data System (ADS)
Kuyucak, Serdar; Lac, Vi-Sieu
1993-04-01
We point out that the data used in the sdg boson model calculations by Devi and Kota [Phys. Rev. C 45, 2238 (1992)] can be equally well described by the much simpler sd boson model. We present additional data for the Sm isotopes which cannot be explained in the sd model and hence may justify such an extension to the sdg bosons. We also comment on the form of the Hamiltonian and the transition operators used in this paper.
Modeling of Hydrate Formation Mode in Raw Natural Gas Air Coolers
NASA Astrophysics Data System (ADS)
Scherbinin, S. V.; Prakhova, M. Yu; Krasnov, A. N.; Khoroshavina, E. A.
2018-05-01
Air cooling units (ACU) are used at all gas fields for cooling natural gas after compression. When using ACUs on raw (wet) gas in low-temperature conditions, there is a danger of hydrate plug formation in the heat exchanging tubes of the ACU. To predict possible hydrate formation, the mathematical model of the air cooler's thermal behavior used in the control system must adequately calculate not only the gas temperature at the cooler's outlet, but also the dew point, the temperature at which condensation onsets, as well as the gas hydrate formation point. This paper proposes a mathematical model that allows one to determine the pressure in the air cooler at which hydrate formation becomes possible for a given gas composition.
Li, Yongxiu; Gao, Ya; Zhang, Xuqiang; Wang, Xingyu; Mou, Lirong; Duan, Lili; He, Xiao; Mei, Ye; Zhang, John Z H
2013-09-01
Main chain torsions of alanine dipeptide are parameterized into coupled 2-dimensional Fourier expansions based on quantum mechanical (QM) calculations at the M06-2X/aug-cc-pVTZ//HF/6-31G** level. The solvation effect is considered by employing a polarizable continuum model. Utilization of the M06-2X functional leads to a precise potential energy surface that is comparable to or even better than the MP2 level, but with much lower computational demand. Parameterization of the 2D expansions is carried out against the full main chain torsion space instead of just a few low-energy conformations. This procedure is similar to that used for the development of the AMBER03 force field, except that a uniform weighting factor was assigned to all the grid points. To avoid inconsistency between the quantum mechanical calculations and molecular modeling, the model peptide is further optimized at the molecular mechanics level with the main chain dihedral angles fixed before the conformational energy is calculated at the molecular mechanics level at each grid point, during which a generalized Born model is employed. The difference in solvation models at the quantum mechanics and molecular mechanics levels makes this parameterization procedure less straightforward. All force field parameters other than the main chain torsions are taken from the existing AMBER force field. With these new main chain torsion terms, we have studied the main chain dihedral distributions of ALA dipeptide and pentapeptide in aqueous solution. The results demonstrate that the 2D main chain torsion is effective in delineating the energy variation associated with rotations along the main chain dihedrals. This work points to the necessity of a more accurate description of main chain torsions in the future development of ab initio force fields, and it also raises a challenge for the development of quantum mechanical methods, especially quantum mechanical solvation models.
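A minimal sketch of fitting a coupled 2-D Fourier expansion to grid-point energy differences by weighted least squares; the basis truncation, coupling terms, and synthetic surface below are placeholders for the paper's actual functional form.

```python
import numpy as np

def design_matrix(phi, psi, order=4):
    """Coupled 2-D Fourier basis in the main-chain dihedrals (phi, psi)."""
    cols = [np.ones_like(phi)]
    for m in range(1, order + 1):
        cols += [np.cos(m*phi), np.sin(m*phi), np.cos(m*psi), np.sin(m*psi)]
    for m in range(1, order + 1):            # simple cross-coupling terms
        for n in range(1, order + 1):
            cols += [np.cos(m*phi)*np.cos(n*psi), np.sin(m*phi)*np.sin(n*psi)]
    return np.column_stack(cols)

def fit_torsion_map(phi, psi, dE, weights, order=4):
    """Weighted least-squares fit of QM-minus-MM energies on a (phi, psi) grid."""
    A = design_matrix(phi, psi, order)
    w = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(w[:, None] * A, w * dE, rcond=None)
    return coef

# Toy grid and synthetic target surface (kcal/mol), uniform weights.
g = np.deg2rad(np.arange(-180, 180, 15))
phi, psi = map(np.ravel, np.meshgrid(g, g))
dE = 1.5*np.cos(phi) - 0.8*np.sin(psi) + 0.3*np.cos(phi)*np.cos(psi)
coef = fit_torsion_map(phi, psi, dE, np.ones_like(dE))
```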
NASA Astrophysics Data System (ADS)
Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia
2015-12-01
The transit route choice model is a key technology for public transit systems planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior, because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT) can be used to describe travelers' decision-making process under uncertain transit supply and the risk preferences of multiple types of travelers. The method used to calibrate the reference point, a key parameter of a CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation with field investigation results. Comparing the proposed method with the traditional method shows that the new method improves the quality of the CPT-based model by more accurately simulating travelers' route choice behavior, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance to rational transit planning and management, and to some extent remedies the defect that obtaining the reference point was previously based solely on qualitative analysis.
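A minimal sketch of how a calibrated reference point enters a CPT route evaluation, using the standard Tversky-Kahneman value function with a simplified (non-cumulative) probability weighting for brevity; all parameter values and outcomes are illustrative, not the paper's calibration.

```python
import numpy as np

def cpt_value(outcomes, probs, ref, alpha=0.88, beta=0.88, lam=2.25, gamma=0.61):
    """CPT-style subjective value of a route relative to reference point `ref`."""
    gains = np.asarray(outcomes, float) - ref   # positive = better than reference
    a = np.abs(gains)
    v = np.where(gains >= 0, a**alpha, -lam * a**beta)  # S-shaped value function

    def w(p):  # inverse-S probability weighting (Tversky-Kahneman form)
        p = np.asarray(p, float)
        return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

    return float(np.sum(w(probs) * v))

# Two hypothetical routes, outcomes as minutes saved relative to the reference:
print(cpt_value([5, -10], [0.7, 0.3], ref=0.0))  # often faster, but unreliable
print(cpt_value([1, -1], [0.5, 0.5], ref=0.0))   # steady route
```

Because losses loom larger than gains (lam > 1), shifting the reference point changes which route the modeled traveler prefers, which is why its calibration matters so much.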
Poet, T S; Schlosser, P M; Rodriguez, C E; Parod, R J; Rodwell, D E; Kirman, C R
2016-04-01
The developmental effects of NMP are well studied in Sprague-Dawley rats following oral, inhalation, and dermal routes of exposure. Short-term and chronic occupational exposure limit (OEL) values were derived using an updated physiologically based pharmacokinetic (PBPK) model for NMP, along with benchmark dose modeling. Two suitable developmental endpoints were evaluated for human health risk assessment: (1) for acute exposures, the increased incidence of skeletal malformations, an effect noted only at oral doses that were toxic to the dam and fetus; and (2) for repeated exposures to NMP, changes in fetal/pup body weight. Where possible, data from multiple studies were pooled to increase the predictive power of the dose-response data sets. For the purposes of internal dose estimation, the window of susceptibility was estimated for each endpoint, and was used in the dose-response modeling. A point of departure value of 390 mg/L (in terms of peak NMP in blood) was calculated for skeletal malformations based on pooled data from oral and inhalation studies. Acceptable dose-response model fits were not obtained using the pooled data for fetal/pup body weight changes. These data sets were also assessed individually, from which the geometric mean value obtained from the inhalation studies (470 mg*hr/L), was used to derive the chronic OEL. A PBPK model for NMP in humans was used to calculate human equivalent concentrations corresponding to the internal dose point of departure values. Application of a net uncertainty factor of 20-21, which incorporates data-derived extrapolation factors, to the point of departure values yields short-term and chronic occupational exposure limit values of 86 and 24 ppm, respectively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
High-fidelity modeling and impact footprint prediction for vehicle breakup analysis
NASA Astrophysics Data System (ADS)
Ling, Lisa
For decades, vehicle breakup analysis had been performed for space missions that used nuclear heater or power units in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative to assess possible environmental impacts, obtain launch approval, and for launch contingency planning. In order to accurately perform a vehicle breakup analysis, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses and influences. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify the additional dynamics modeling and capabilities for the analysis tool with the objectives to (1) have the capability to predict impact point and footprint, (2) increase the fidelity in the prediction of vehicle breakup, and (3) reduce the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degrees-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of impact footprint. The functions to increase the fidelity in the prediction of vehicle breakup included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrary-shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria. The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarchalski, M.; Pytel, K.; Wroblewska, M.
2015-07-01
Precise computational determination of nuclear heating, which consists predominantly of gamma heating (more than 80%), is one of the challenges in material testing reactor exploitation. Due to the sophisticated construction and conditions of the experimental programs planned in JHR, it became essential to use the most accurate and precise gamma heating model available. Before the JHR starts to operate, gamma heating evaluation methods need to be developed and qualified in other experimental reactor facilities. This is done inter alia using the OSIRIS, MINERVE or EOLE research reactors in France. Furthermore, MARIA - the Polish material testing reactor - has been chosen to contribute to the qualification of gamma heating calculation schemes/tools. This reactor has some characteristics close to those of JHR (beryllium usage, fuel element geometry). To evaluate gamma heating in the JHR and MARIA reactors, both simulation tools and an experimental program have been developed. For gamma heating simulation, a new calculation scheme and gamma heating model of MARIA have been developed using the TRIPOLI4 and APOLLO2 codes. The calculation outcome has been verified by comparison to experimental measurements in the MARIA reactor. To obtain more precise calculation results, the model of MARIA in TRIPOLI4 was built using the whole geometry of the core. This has been done for the first time in the history of the MARIA reactor and was complex due to the cut-cone shape of all its elements. The material composition of burnt fuel elements has been implemented from APOLLO2 calculations. An experiment for nuclear heating measurements and calculation verification was carried out in September 2014. This involved neutron, photon and nuclear heating measurements at selected locations in the MARIA reactor using, in particular, Rh SPND, Ag SPND, Ionization Chamber (all three from CEA), KAROLINA calorimeter (NCBJ) and Gamma Thermometer (CEA/SCK CEN) detectors. Measurements were done in forty points using four channels. The maximal nuclear heating evaluated from the measurements is of the order of 2.5 W/g at half of the possible MARIA power - 15 MW. The approach and the detailed program for experimental verification of calculations will be presented. The following points will be discussed: - Development of a gamma heating model of the MARIA reactor with TRIPOLI4 (coupled neutron-photon mode) and an APOLLO2 model taking into account the key parameters (configuration of the core, experimental loading, control rod location, reactor power, fuel depletion); - Design of specific measurement tools for the MARIA experiments including, for instance, a new single-cell calorimeter called the KAROLINA calorimeter; - MARIA experimental program description and a preliminary analysis of results; - Comparison of calculations for the JHR and MARIA cores with experimental verification analysis, calculation behavior and n-γ 'environments'. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, A; Contee, C; White, B
Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60Gy, 2Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4–75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps) using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pre-therapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52–92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66Gy (range: 1.2–7.2Gy). |ΔD| was significantly reduced by 0.53Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39Gy/mm) and SD-dose (2.23Gy/Gy). Conclusion: An average error of <4Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and the dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.
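A minimal sketch of the regression step; the per-landmark data are synthetic (seeded to echo the reported effect sizes), and the model form |ΔD| ~ E + SD-dose + algorithm is our reading of the abstract, not the authors' code.

```python
import numpy as np

# Synthetic per-landmark data: registration error E (mm), local dose standard
# deviation SD-dose (Gy), algorithm flag (0 = Plastimatch demons,
# 1 = Fraunhofer MEVIS), and absolute dose difference |dD| (Gy).
rng = np.random.default_rng(1)
E = rng.uniform(0, 10, 200)
sd_dose = rng.uniform(0, 2, 200)
alg = rng.integers(0, 2, 200)
dD = np.abs(0.39*E + 2.23*sd_dose + 0.53*alg + rng.normal(0, 0.5, 200))

# Linear model |dD| ~ E + SD-dose + algorithm, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(E), E, sd_dose, alg])
beta, *_ = np.linalg.lstsq(X, dD, rcond=None)
print("intercept, Gy/mm, Gy/Gy, algorithm offset (Gy):", np.round(beta, 2))
```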
Duan, Yong; Wu, Chun; Chowdhury, Shibasish; Lee, Mathew C; Xiong, Guoming; Zhang, Wei; Yang, Rong; Cieplak, Piotr; Luo, Ray; Lee, Taisung; Caldwell, James; Wang, Junmei; Kollman, Peter
2003-12-01
Molecular mechanics models have been applied extensively to study the dynamics of proteins and nucleic acids. Here we report the development of a third-generation point-charge all-atom force field for proteins. Following the earlier approach of Cornell et al., the charge set was obtained by fitting to the electrostatic potentials of dipeptides calculated using B3LYP/cc-pVTZ//HF/6-31G** quantum mechanical methods. The main-chain torsion parameters were obtained by fitting to the energy profiles of Ace-Ala-Nme and Ace-Gly-Nme dipeptides calculated using MP2/cc-pVTZ//HF/6-31G** quantum mechanical methods. All other parameters were taken from the existing AMBER database. The major departure from previous force fields is that all quantum mechanical calculations were done in the condensed phase with continuum solvent models and an effective dielectric constant of epsilon = 4. We anticipate that this force field parameter set will address certain critical shortcomings of previous force fields in condensed-phase simulations of proteins. Initial tests on peptides demonstrated a high degree of similarity between the calculated and the statistically measured Ramachandran maps for both Ace-Gly-Nme and Ace-Ala-Nme dipeptides. Some highlights of our results include (1) a well-preserved balance between the extended and helical region distributions, and (2) a favorable type-II poly-proline helical region in agreement with recent experiments. Backward compatibility between the new and Cornell et al. charge sets, as judged by overall agreement between dipole moments, allows a smooth transition to the new force field in the area of ligand-binding calculations. Test simulations on a large set of proteins are also discussed. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1999-2012, 2003
Influence of different dose calculation algorithms on the estimate of NTCP for lung complications
Bäck, Anna
2013-01-01
Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose‐volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient‐specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm‐specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction‐based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman‐Kutcher‐Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm‐specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types. PACS numbers: 87.53.‐j, 87.53.Kn, 87.55.‐x, 87.55.dh, 87.55.kd PMID:24036865
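For reference, a minimal sketch of the LKB NTCP calculation named above, computing gEUD from a differential DVH; the DVH and the TD50, m, n values are illustrative literature-style lung parameters, not the algorithm-specific fits derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def gEUD(dose_bins, volume_fractions, n):
    """Generalized equivalent uniform dose from a differential DVH."""
    return np.sum(volume_fractions * dose_bins**(1.0 / n))**n

def ntcp_lkb(dose_bins, volume_fractions, TD50, m, n):
    """Lyman-Kutcher-Burman NTCP: Phi((gEUD - TD50) / (m * TD50))."""
    t = (gEUD(dose_bins, volume_fractions, n) - TD50) / (m * TD50)
    return norm.cdf(t)

# Toy lung DVH; parameter values are illustrative, not the paper's fits.
d = np.array([5.0, 15.0, 25.0, 40.0])       # dose bins (Gy)
v = np.array([0.4, 0.3, 0.2, 0.1])          # fractional volumes, sum to 1
print(f"NTCP = {ntcp_lkb(d, v, TD50=30.8, m=0.37, n=0.99):.3f}")
```

Re-deriving TD50, m, n per dose calculation algorithm, as the study does, amounts to refitting these parameters so that the same complication data are reproduced from each algorithm's DVHs.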
Program to Optimize Simulated Trajectories (POST). Volume 1: Formulation manual
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Cornick, D. E.; Habeger, A. R.; Petersen, F. M.; Stevenson, R.
1975-01-01
A general purpose FORTRAN program for simulating and optimizing point mass trajectories (POST) of aerospace vehicles is described. The equations and the numerical techniques used in the program are documented. Topics discussed include: coordinate systems, planet model, trajectory simulation, auxiliary calculations, and targeting and optimization.
Modeling Interfacial Thermal Boundary Conductance of Engineered Interfaces
2014-08-31
melting/recrystallization of the subsurface Ag/Cu interface. Observed the formation of a novel, lattice-mismatched interfacial microstructure... calculations were converged within 1 × 10−4 Ryd with respect to wave function cutoff energy, energy density cutoff, and k-point sampling. The A-EAM
Geometry-dependent distributed polarizability models for the water molecule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loboda, Oleksandr; Ingrosso, Francesca; Ruiz-López, Manuel F.
2016-01-21
Geometry-dependent distributed polarizability models have been constructed by fits to ab initio calculations at the coupled cluster level of theory with up to noniterative triple excitations in an augmented triple-zeta quality basis set for the water molecule in the field of a point charge. The investigated models include (i) charge-flow polarizabilities between chemically bonded atoms, (ii) isotropic or anisotropic dipolar polarizabilities on the oxygen atom or on all atoms, and (iii) combinations of models (i) and (ii). For each model, the polarizability parameters have been optimized to reproduce the induction energy of a water molecule polarized by a point charge successively occupying a grid of points surrounding the molecule. The quality of the models is ascertained by examining their ability to reproduce these induction energies as well as the molecular dipolar and quadrupolar polarizabilities. The geometry dependence of the distributed polarizability models has been explored by changing bond lengths and the HOH angle to generate 125 molecular structures (reduced to 75 symmetry-unique ones). For each considered model, the distributed polarizability components have been fitted as a function of the geometry by a Taylor expansion in monomer coordinate displacements up to the sum of powers equal to 4.
Modeling and simulation for fewer-axis grinding of complex surface
NASA Astrophysics Data System (ADS)
Li, Zhengjian; Peng, Xiaoqiang; Song, Ci
2017-10-01
As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vector of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and coordinate system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, an interference analysis was simulated to find the right position and posture of the workpiece for grinding. Positioning errors of the workpiece, including the translation positioning error and the rotation positioning error, were then analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
Third-Order Memristive Morris-Lecar Model of Barnacle Muscle Fiber
NASA Astrophysics Data System (ADS)
Rajamani, Vetriveeran; Sah, Maheshwar Pd.; Mannan, Zubaer Ibna; Kim, Hyongsuk; Chua, Leon
This paper presents a detailed analysis of various oscillatory behaviors observed in relation to the calcium and potassium ions in the third-order Morris-Lecar model of giant barnacle muscle fiber. Since both the calcium and potassium ions exhibit all of the characteristics of memristor fingerprints, we claim that the time-varying calcium and potassium ions in the third-order Morris-Lecar model are actually time-invariant calcium and potassium memristors in the third-order memristive Morris-Lecar model. We confirmed the existence of a small unstable limit cycle oscillation in both the second-order and the third-order Morris-Lecar models by numerically calculating the basin of attraction of the asymptotically stable equilibrium point associated with two subcritical Hopf bifurcation points. We also describe a comprehensive analysis of the generation of oscillations in the third-order memristive Morris-Lecar model via small-signal circuit analysis and a subcritical Hopf bifurcation phenomenon.
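A minimal sketch of integrating the classic (second-order) Morris-Lecar equations and sweeping the injected current near the subcritical Hopf point; the parameter values are the commonly quoted set for barnacle muscle fiber and are used only for illustration (the paper's memristive third-order variant adds a further state variable).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Commonly quoted Morris-Lecar parameters for barnacle muscle fiber.
C, gL, gCa, gK = 20.0, 2.0, 4.0, 8.0          # uF/cm^2, mS/cm^2
VL, VCa, VK = -60.0, 120.0, -84.0             # mV
V1, V2, V3, V4, phi = -1.2, 18.0, 12.0, 17.4, 0.067

def morris_lecar(t, y, I_ext):
    V, w = y
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))   # fast Ca activation
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))   # K recovery variable
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I_ext - gL*(V - VL) - gCa*m_inf*(V - VCa) - gK*w*(V - VK)) / C
    dw = phi * (w_inf - w) / tau_w
    return [dV, dw]

# Sweep the injected current and look for sustained oscillations vs. decay
# to the stable equilibrium, the behavior organized by the Hopf points.
for I in (85.0, 95.0):
    sol = solve_ivp(morris_lecar, (0, 500), [-30.0, 0.1], args=(I,), max_step=0.5)
    print(f"I = {I}: V range [{sol.y[0].min():.1f}, {sol.y[0].max():.1f}] mV")
```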
Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data
NASA Astrophysics Data System (ADS)
Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.
2015-08-01
Rubble mound breakwater maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds from the emerged part of the structure that can be modelled to make the data more useful and easier to handle. This work introduces a methodology for the automatic modelling of breakwaters with armour units of cube shape. The algorithm is divided into three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. The number of cubes detected is around 56% of the total physical cubes for two of the point clouds and 32% for the third. Accuracy assessment is done by comparing with manually drawn cubes and calculating the differences between the vertices; it ranges between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s, and increases with the number of cubes and the requirements of collision detection.
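The cube reconstruction step rests on intersecting three perpendicular planes; a minimal sketch of that vertex computation (the helper names are hypothetical):

```python
import numpy as np

def plane_intersection(normals, ds):
    """Vertex at the intersection of three planes n_i . x = d_i.

    normals: (3, 3) array of (approximately unit) plane normals.
    ds: length-3 offsets. Raises if the planes are close to parallel.
    """
    N = np.asarray(normals, float)
    if abs(np.linalg.det(N)) < 1e-6:
        raise ValueError("planes are nearly parallel; no well-defined vertex")
    return np.linalg.solve(N, np.asarray(ds, float))

# Three mutually perpendicular faces of a cube with edge length 1.2 m.
normals = np.eye(3)
vertex = plane_intersection(normals, [0.0, 0.0, 0.0])
opposite = plane_intersection(normals, [1.2, 1.2, 1.2])
print(vertex, opposite)   # remaining vertices follow from the edge length
```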
correlcalc: Two-point correlation function from redshift surveys
NASA Astrophysics Data System (ADS)
Rohin, Yeluripati
2017-11-01
correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA) and Declination (DEC) data of galaxy and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally it makes healpix maps of the survey, providing visualization.
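A minimal sketch of a BallTree-based estimator in the same spirit, using scikit-learn's pair-count helper and the Landy-Szalay estimator on Cartesian positions; correlcalc's own pipeline (redshift-to-distance conversion, survey masks, parallelisation) is not reproduced here.

```python
import numpy as np
from sklearn.neighbors import BallTree

def landy_szalay(data_xyz, rand_xyz, r_edges):
    """Landy-Szalay xi(r) from 3-D comoving positions using BallTree counts."""
    td, tr = BallTree(data_xyz), BallTree(rand_xyz)
    # Cumulative pair counts within each radius; differencing gives bin counts
    # (and cancels the constant self-pair contribution).
    DD = np.diff(td.two_point_correlation(data_xyz, r_edges))
    RR = np.diff(tr.two_point_correlation(rand_xyz, r_edges))
    DR = np.diff(tr.two_point_correlation(data_xyz, r_edges))
    nd, nr = len(data_xyz), len(rand_xyz)
    dd = DD / (nd * (nd - 1))
    rr = RR / (nr * (nr - 1))
    dr = DR / (nd * nr)
    return (dd - 2*dr + rr) / rr

rng = np.random.default_rng(2)
data = rng.random((2000, 3)) * 100.0    # toy "galaxies" in a 100 Mpc/h box
rand = rng.random((8000, 3)) * 100.0    # unclustered random catalog
r_edges = np.linspace(1.0, 20.0, 11)
print(np.round(landy_szalay(data, rand, r_edges), 3))  # ~0 for random data
```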
Terrain Referenced Navigation Using SIFT Features in LiDAR Range-Based Data
2014-12-26
[Extraction residue: table-of-contents and body fragments on geocentric vs. geodetic representations of a point on Earth's surface, the datum-provided ellipsoid height h, the ECEF (geocentric) frame with origin at the Earth's calculated center, the local navigation frame, and ellipsoidal Earth models, noting that geocentric ECEF coordinates can be cumbersome for describing points on or inside the Earth.]
First-principles study of point defects at a semicoherent interface
Metsanurk, E.; Tamm, A.; Caro, A.; ...
2014-12-19
Most of the atomistic modeling of semicoherent metal-metal interfaces has so far been based on the use of semiempirical interatomic potentials. Here, we show that key conclusions drawn from previous studies are in contradiction with more precise ab initio calculations. In particular, we find that single point defects do not delocalize, but remain compact near the interfacial plane in Cu-Nb multilayers. Lastly, we give a simple qualitative explanation for this difference on the basis of the well-known limited transferability of empirical potentials.
Neck formation and deformation effects in a preformed cluster model of exotic cluster decays
NASA Astrophysics Data System (ADS)
Kumar, Satish; Gupta, Raj K.
1997-01-01
Using the nuclear proximity approach and the two center nuclear shape parametrization, the interaction potential between two deformed and pole-to-pole oriented nuclei forming a necked configuration in the overlap region is calculated and its role is studied for the cluster decay half-lives. The barrier is found to move to a larger relative separation, with its proximity minimum lying in the neighborhood of the Q value of decay and its height and width reduced considerably. For cluster decay calculations in the preformed cluster model of Malik and Gupta, due to deformations and orientations of nuclei, the (empirical) preformation factor is found to get reduced considerably and agrees nicely with other model calculations known to be successful for their predictions of cluster decay half-lives. Comparison with the earlier case of nuclei treated as spheres suggests that the effects of both deformations and neck formation get compensated by choosing the position of cluster preformation and the inner classical turning point for penetrability calculations at the touching configuration of spherical nuclei.
de Lima, Guilherme Ferreira; Duarte, Hélio Anderson; Pliego, Josefredo R
2010-12-09
A new dynamical discrete/continuum solvation model was tested for NH4+ and OH- ions in water solvent. The method is similar to continuum solvation models in the sense that the linear response approximation is used. However, unlike pure continuum models, explicit solvent molecules are included in the inner shell, which allows adequate treatment of the specific solute-solvent interactions present in the first solvation shell, the main drawback of continuum models. Molecular dynamics calculations coupled with the SCC-DFTB method are used to generate the configurations of the solute in a box with 64 water molecules, while the interaction energies are calculated at the DFT level. We tested the convergence of the method using a variable number of explicit water molecules and found that even a small number of waters (as low as 14) is able to produce converged values. Our results also point out that the Born model, often used for long-range correction, is not reliable, and our method should be applied for more accurate calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guille, Émilie; Vallverdu, Germain, E-mail: germain.vallverdu@univ-pau.fr; Baraille, Isabelle
2014-12-28
We present first-principles calculations of core-level binding energies for the study of insulating, bulk-phase compounds, based on the Slater-Janak transition state model. These calculations were performed in order to find a reliable model of the amorphous LixPOyNz solid electrolyte that is able to reproduce its electronic properties gathered from X-ray photoemission spectroscopy (XPS) experiments. As a starting point, Li2PO2N models were investigated. These models, proposed by Du et al. on the basis of thermodynamic and vibrational properties, were the first structural models of LixPOyNz. Thanks to chemical and structural modifications applied to Li2PO2N structures, which demonstrate the relevance of our computational approach, we raise an issue concerning the possibility of encountering a non-bridging kind of nitrogen atom (=N−) in LixPOyNz compounds.
Energy-harvesting potential of automobile suspension
NASA Astrophysics Data System (ADS)
Múčka, Peter
2016-12-01
This study aims to quantify the power dissipated in the dampers of an automobile suspension in order to predict the energy-harvesting potential of a passenger car more accurately. Field measurements of power dissipation in a regenerative damper are still rare. The novelty lies in using a broad database of real road profiles, a 9 degrees-of-freedom full-car model with real parameters, and a tyre-enveloping contact model. Results were presented as a function of road surface type, velocity and road roughness characterised by the International Roughness Index. Results were calculated for 1600 test sections with a total length of about 253.5 km. The root mean square of dissipated power was calculated to be from 19 to 46 W for all four suspension dampers at a velocity of 60 km/h, and from 24 to 58 W at 90 km/h. Results were compared for a full-car model with tyre-enveloping road contact, and for full-car and quarter-car models with tyre-road point contact. The mean difference in calculated power among the three models was a few per cent.
Fission barriers at the end of the chart of the nuclides
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; ...
2015-02-12
We present calculated fission-barrier heights for 5239 nuclides, for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model (FRLDM) with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than five million different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ϵ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ϵ,γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about one MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. In addition, these studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Errington, Jeffrey R.
2003-06-01
An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation, attempted transitions between states along the Markov chain are monitored, as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensembles. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
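A minimal sketch of the transition-matrix bookkeeping: attempted N → N±1 moves update a collection matrix, from which the macrostate probability distribution follows by detailed balance. The collection-matrix entries below are synthetic; in practice coexistence is then located by histogram reweighting of lnΠ until the liquid and vapor peaks enclose equal areas.

```python
import numpy as np

def lnpi_from_collection(C):
    """Macrostate log-probabilities from a TMMC collection matrix.

    C[N, j] accumulates acceptance probabilities for attempted transitions
    N -> N + (j - 1), with j in {0, 1, 2} for moves down, stay, up.
    """
    row = C.sum(axis=1, keepdims=True)
    P = C / np.where(row > 0, row, 1.0)   # estimated transition probabilities
    lnpi = np.zeros(len(C))
    for N in range(len(C) - 1):
        # Detailed balance: Pi(N) P(N -> N+1) = Pi(N+1) P(N+1 -> N)
        lnpi[N + 1] = lnpi[N] + np.log(P[N, 2] / P[N + 1, 0])
    return lnpi - np.log(np.sum(np.exp(lnpi)))   # normalize

# Toy collection matrix for particle numbers N = 0..4 (synthetic numbers).
C = np.array([[0, 40, 60],
              [30, 40, 50],
              [35, 45, 45],
              [45, 40, 35],
              [60, 35, 25]], float)
print(np.round(lnpi_from_collection(C), 3))
```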
NASA Technical Reports Server (NTRS)
Maskew, B.
1976-01-01
A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
NASA Astrophysics Data System (ADS)
Salmahaminati; Azis, Muhlas Abdul; Purwiandono, Gani; Arsyik Kurniawan, Muhammad; Rubiyanto, Dwiarso; Darmawan, Arif
2017-11-01
In this research, modeling of several alkyl p-methoxy cinnamates (APMC) based on electronic transitions is performed using semiempirical quantum mechanical ZINDO/s calculations. Alkyl cinnamates of the C1 (methyl) up to C7 (heptyl) homologs, with 1-5 example structures of each homolog, are used as materials. The quantum chemistry package Hyperchem 8.0 is used for drawing the structures, geometry optimization with the semiempirical Austin Model 1 algorithm, and single point calculations employing the semiempirical ZINDO/s technique. The ZINDO/s calculations use singly excited Configuration Interaction (CI) with defined criteria: the gap of the HOMO-LUMO energy transition and the maximum degeneracy level are 7 and 2, respectively. Moreover, analysis of the theoretical spectra is focused on the UV-B (290-320 nm) and UV-C (200-290 nm) areas. The results show that modeling of the compounds can be used to predict the type of UV protection activity depending on the electronic transitions in the UV area. Modification of the alkyl homolog does not appreciably change the wavelength of absorption that indicates the UV protection activity. Alkyl cinnamate compounds are predicted to act as UV-B and UV-C sunscreens.
A Global Optimization Method to Calculate Water Retention Curves
NASA Astrophysics Data System (ADS)
Maggi, S.; Caputo, M. C.; Turturro, A. C.
2013-12-01
Water retention curves (WRC) have a key role in the hydraulic characterization of soils and rocks. The behaviour of the medium is defined by relating the unsaturated water content to the matric potential. The experimental determination of WRCs requires an accurate and detailed measurement of the dependence of matric potential on water content, a time-consuming and error-prone process, in particular for rocky media. A complete experimental WRC needs at least a few tens of data points, distributed more or less uniformly from full saturation to oven dryness. Since each measurement requires waiting for steady-state conditions (from a few tens of minutes for soils up to several hours or days for rocks or clays), the whole process can take a few months. The experimental data are fitted to the most appropriate parametric model, such as the widely used models of Van Genuchten, Brooks and Corey, and Rossi-Nimmo, to obtain the analytic WRC. We present here a new method for the determination of the parameters that best fit the models to the available experimental data. The method is based on differential evolution, an evolutionary computation algorithm particularly useful for multidimensional real-valued global optimization problems. With this method it is possible to strongly reduce the number of measurements necessary to optimize the model parameters that accurately describe the WRC of the samples, allowing a decrease in the time needed to adequately characterize the medium. In the present work, we have applied our method to calculate the WRCs of sedimentary carbonatic rocks of marine origin, belonging to the 'Calcarenite di Gravina' Formation (Middle Pliocene - Early Pleistocene) and coming from two different quarry districts in Southern Italy. [Figure caption: WRC curves calculated using the Van Genuchten model by simulated annealing (dashed curve) and differential evolution (solid curve); the curves are calculated using 10 experimental data points randomly extracted from the full experimental dataset. Simulated annealing is not able to find the optimal solution with this reduced data set.]
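A minimal sketch of the global-optimization step, assuming the van Genuchten retention model and SciPy's differential evolution; the bounds and the ten synthetic data points are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import differential_evolution

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Van Genuchten water retention curve, theta as a function of suction h."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h)**n)**m

def fit_wrc(h_obs, theta_obs):
    """Global fit of the four VG parameters by differential evolution."""
    def sse(p):
        return np.sum((van_genuchten(h_obs, *p) - theta_obs)**2)
    bounds = [(0.0, 0.2), (0.2, 0.6), (1e-4, 1.0), (1.05, 8.0)]
    return differential_evolution(sse, bounds, seed=0, tol=1e-10).x

# Ten synthetic data points (suction in cm, volumetric water content).
h = np.array([1, 3, 10, 30, 100, 300, 1e3, 3e3, 1e4, 1e5], float)
theta = van_genuchten(h, 0.05, 0.41, 0.02, 1.8)
print(np.round(fit_wrc(h, theta), 4))   # recovers ~[0.05, 0.41, 0.02, 1.8]
```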
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kudoyarova, V. Kh., E-mail: kudoyarova@mail.ioffe.ru; Tolmachev, V. A.; Gushchina, E. V.
2013-03-15
Rutherford backscattering, IR spectroscopy, ellipsometry, and atomic-force microscopy are used to perform an integrated study of the composition, structure and optical properties of a-Si1-xCx:H⟨Er⟩ amorphous films. The technique employed to obtain the a-Si1-xCx:H⟨Er⟩ amorphous films includes the high-frequency decomposition of a mixture of gases, (SiH4)a + (CH4)b, and the simultaneous thermal evaporation of a complex compound, Er(pd)3. It is demonstrated that raising the amount of CH4 in the gas mixture results in an increase in the carbon content of the films under study and an increase in the optical gap Eg^opt from 1.75 to 2.2 eV. Changes in the composition of a-Si1-xCx:H⟨Er⟩ amorphous films, accompanied, in turn, by changes in the optical constants, are observed in the IR spectra. The ellipsometric spectra obtained are analyzed in terms of multiple-parameter models. The conclusion is made on the basis of this analysis that the experimental and calculated spectra coincide well when variation in the composition of the amorphous films with that of the gas mixture is taken into account. The existence of a thin (6-8 nm) silicon-oxide layer on the surface of the films under study and the validity of using the double-layer model in ellipsometric calculations are confirmed by the results of structural analyses by atomic-force microscopy.
NASA Technical Reports Server (NTRS)
Panda, J.; Roozeboom, N. H.; Ross, J. C.
2016-01-01
The recent advancement in fast-response Pressure-Sensitive Paint (PSP) allows time-resolved measurements of unsteady pressure fluctuations from a dense grid of spatial points on a wind tunnel model. This capability allows for direct calculations of the wavenumber-frequency (k-ω) spectrum of pressure fluctuations. Such data, useful for the vibro-acoustics analysis of aerospace vehicles, are difficult to obtain otherwise. For the present work, time histories of pressure fluctuations on a flat plate subjected to vortex shedding from a rectangular bluff-body were measured using PSP. The light intensity levels in the photographic images were then converted to instantaneous pressure histories by applying calibration constants, which were calculated from a few dynamic pressure sensors placed at selective points on the plate. Fourier transform of the time-histories from a large number of spatial points provided k-ω spectra for pressure fluctuations. The data provide a first glimpse into the possibility of creating detailed forcing functions for vibro-acoustics analysis of aerospace vehicles, albeit for a limited frequency range.
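A minimal sketch of forming a k-ω spectrum by a 2-D FFT over space and time, applied to a synthetic convecting pressure wave; the grid spacing, sampling rate, and windowing are illustrative choices, not the experiment's values.

```python
import numpy as np

def k_omega_spectrum(p, dx, dt):
    """Wavenumber-frequency spectrum of p(x, t) sampled on a uniform grid.

    p: 2-D array, axis 0 = spatial points along a line, axis 1 = time samples.
    Returns (k, omega, power) with zero frequency centered.
    """
    nx, nt = p.shape
    window = np.hanning(nt)                       # taper in time
    P = np.fft.fft2((p - p.mean()) * window[None, :])
    power = np.fft.fftshift(np.abs(P)**2) / (nx * nt)
    k = 2*np.pi*np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    omega = 2*np.pi*np.fft.fftshift(np.fft.fftfreq(nt, d=dt))
    return k, omega, power

# Synthetic convecting field p = cos(k0 x - w0 t) + noise: the spectrum
# should peak along the convection line omega = Uc * k.
x = np.arange(256) * 0.005                        # 5 mm spacing
t = np.arange(1024) * 1e-4                        # 10 kHz sampling
k0, w0 = 200.0, 2*np.pi*500.0                     # Uc = w0/k0 ~ 15.7 m/s
noise = 0.1 * np.random.default_rng(3).standard_normal((256, 1024))
p = np.cos(k0*x[:, None] - w0*t[None, :]) + noise
k, omega, power = k_omega_spectrum(p, dx=0.005, dt=1e-4)
```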
Zero-moment point determination of worst-case manoeuvres leading to vehicle wheel lift
NASA Astrophysics Data System (ADS)
Lapapong, S.; Brown, A. A.; Swanson, K. S.; Brennan, S. N.
2012-01-01
This paper proposes a method to evaluate vehicle rollover propensity based on a frequency-domain representation of the zero-moment point (ZMP). Unlike other rollover metrics such as the static stability factor, which is based on the steady-state behaviour, and the load transfer ratio, which requires the calculation of tyre forces, the ZMP is based on a simplified kinematic model of the vehicle and the analysis of the contact point of the vehicle relative to the edge of the support polygon. Previous work has validated the use of the ZMP experimentally in its ability to predict wheel lift in the time domain. This work explores the use of the ZMP in the frequency domain to allow a chassis designer to understand how operating conditions and vehicle parameters affect rollover propensity. The ZMP analysis is then extended to calculate worst-case sinusoidal manoeuvres that lead to untripped wheel lift, and the analysis is tested across several vehicle configurations and compared with that of the standard Toyota J manoeuvre.
A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.
1984-01-01
This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two spool, two stream (turbofan) engines. DIGTEM was developed to support the development of a real time multiprocessor based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user written subroutine (TMRSP). Closed loop controls can also be simulated. DIGTEM is generalized in the aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, make it a very useful tool for developing a model of a specific turbofan engine.
A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.
1983-01-01
This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two spool, two stream (turbofan) engines. DIGTEM was developed to support the development of a real time multiprocessor based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user written subroutine (TMRSP). Closed loop controls can also be simulated. DIGTEM is generalized in the aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, make it a very useful tool for developing a model of a specific turbofan engine.
Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image
NASA Astrophysics Data System (ADS)
Demir, N.; Kaynarca, M.; Oy, S.
2016-06-01
Coastlines are important features for water resources, sea products, energy resources etc. Coastlines are changed dynamically, thus automated methods are necessary for analysing and detecting the changes along the coastlines. In this study, Sentinel-1 C band SAR image has been used to extract the coastline with fuzzy logic approach. The used SAR image has VH polarisation and 10x10m. spatial resolution, covers 57 sqkm area from the south-east of Puerto-Rico. Additionally, radiometric calibration is applied to reduce atmospheric and orbit error, and speckle filter is used to reduce the noise. Then the image is terrain-corrected using SRTM digital surface model. Classification of SAR image is a challenging task since SAR and optical sensors have very different properties. Even between different bands of the SAR sensors, the images look very different. So, the classification of SAR image is difficult with the traditional unsupervised methods. In this study, a fuzzy approach has been applied to distinguish the coastal pixels than the land surface pixels. The standard deviation and the mean, median values are calculated to use as parameters in fuzzy approach. The Mean-standard-deviation (MS) Large membership function is used because the large amounts of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied with 1000 to easify the calculations. The mean is calculated as 23 and the standard deviation is calculated as 12 for the whole image. The multiplier parameters are selected as a: 0.58, b: 0.05 to maximize the land surface membership. The result is evaluated using airborne LIDAR data, only for the areas where LIDAR dataset is available and secondly manually digitized coastline. The laser points which are below 0,5 m are classified as the ocean points. The 3D alpha-shapes algorithm is used to detect the coastline points from LIDAR data. Minimum distances are calculated between the LIDAR points of coastline with the extracted coastline. The statistics of the distances are calculated as following; the mean is 5.82m, standard deviation is 5.83m and the median value is 4.08 m. Secondly, the extracted coastline is also evaluated with manually created lines on SAR image. Both lines are converted to dense points with 1 m interval. Then the closest distances are calculated between the points from extracted coastline and manually created coastline. The mean is 5.23m, standard deviation is 4.52m. and the median value is 4.13m for the calculated distances. The evaluation values are within the accuracy of used SAR data for both quality assessment approaches.
Liu, Jian; Liang, Huawei; Wang, Zhiling; Chen, Xiangcheng
2015-01-01
The quick and accurate understanding of the ambient environment, which is composed of road curbs, vehicles, pedestrians, etc., is critical for developing intelligent vehicles. The road elements included in this work are road curbs and dynamic road obstacles that directly affect the drivable area. A framework for the online modeling of the driving environment using a multi-beam LIDAR, i.e., a Velodyne HDL-64E LIDAR, which describes the 3D environment in the form of a point cloud, is reported in this article. First, ground segmentation is performed via multi-feature extraction of the raw data grabbed by the Velodyne LIDAR to satisfy the requirement of online environment modeling. Curbs and dynamic road obstacles are detected and tracked in different manners. Curves are fitted for curb points, and points are clustered into bundles whose form and kinematics parameters are calculated. The Kalman filter is used to track dynamic obstacles, whereas the snake model is employed for curbs. Results indicate that the proposed framework is robust under various environments and satisfies the requirements for online processing. PMID:26404290
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
In view of the current point cloud registration software has high hardware requirements, heavy workload and moltiple interactive definition, the source of software with better processing effect is not open, a two--step registration method based on normal vector distribution feature and coarse feature based iterative closest point (ICP) algorithm is proposed in this paper. This method combines fast point feature histogram (FPFH) algorithm, define the adjacency region of point cloud and the calculation model of the distribution of normal vectors, setting up the local coordinate system for each key point, and obtaining the transformation matrix to finish rough registration, the rough registration results of two stations are accurately registered by using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large amount of point clouds.
Computation of wind tunnel model deflections. [for transport type solid wing
NASA Technical Reports Server (NTRS)
Mehrotra, S. C.; Gloss, B. B.
1981-01-01
The experimental deflections for a transport type solid wing model were measured for several single point load conditions. These deflections were compared with those obtained by structural modeling of the wing by using plate and solid elements of Structural Performance Analysis and Redesign (SPAR) program. The solid element representation of the wing showed better agreement with the experimental deflections than the plate representation. The difference between the measured and calculated deflections is about 5 percent.
Vanishing Point Extraction and Refinement for Robust Camera Calibration
Tsai, Fuan
2017-01-01
This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley
2015-01-15
Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps)more » using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d{sub E}) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d{sub E}, dose (D), dose standard deviation (SD{sub dose}) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d{sub E} across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d{sub E} (0.42 Gy/mm), D (0.05 Gy/Gy), SD{sub dose} (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD{sub dose}). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.« less
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
Burns, Angus; Dowling, Adam H; Garvey, Thérèse M; Fleming, Garry J P
2014-10-01
To investigate the inter-examiner variability of contact point displacement measurements (used to calculate the overall Little's Irregularity Index (LII) score) from digital models of the maxillary arch by four independent examiners. Maxillary orthodontic pre-treatment study models of ten patients were scanned using the Lava(tm) Chairside Oral Scanner (LCOS) and 3D digital models were created using Creo(®) computer aided design (CAD) software. Four independent examiners measured the contact point displacements of the anterior maxillary teeth using the software. Measurements were recorded randomly on three separate occasions by the examiners and the measurements (n=600) obtained were analysed using correlation analyses and analyses of variance (ANOVA). LII contact point displacement measurements for the maxillary arch were reproducible for inter-examiner assessment when using the digital method and were highly correlated between examiner pairs for contact point displacement measurements >2mm. The digital measurement technique showed poor correlation for smaller contact point displacement measurements (<2mm) for repeated measurements. The coefficient of variation (CoV) of the digital contact point displacement measurements highlighted 348 of the 600 measurements differed by more than 20% of the mean compared with 516 of 600 for the same measurements performed using the conventional LII measurement technique. Although the inter-examiner variability of LII contact point displacement measurements on the maxillary arch was reduced using the digital compared with the conventional LII measurement methodology, neither method was considered appropriate for orthodontic research purposes particularly when measuring small contact point displacements. Copyright © 2014 Elsevier Ltd. All rights reserved.
Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman
2008-04-24
We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and B97-1, OLYP, and TPSS functionals with 6-31G and 6-31G* basis sets. Only hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (DeltaEel), in several cases, resulting in the inversion of the sign of DeltaEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311G++(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on DeltaEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with small 6-31G and 6-31G* basis sets. Deviation of the computed frequency of the Fe-Im stretching mode from the experimental value with the basis set decreased in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results insignificantly for iron(II) porphyrin coordinated with imidazole. Poor performance of a "locally dense" basis set with a large number of basis functions on the Fe center was observed in calculation of quintet-triplet gaps. Our results lead to a series of suggestions for density functional theory calculations of quintet-triplet energy gaps in ferrohemes with a single axial imidazole; these suggestions are potentially applicable for other transition-metal complexes.
30 CFR 203.86 - What is in a G&G report?
Code of Federal Regulations, 2014 CFR
2014-07-01
... establishing reservoir porosity or labeled points showing values used in calculating reservoir porosity such as... BOE) and oil fraction for your field computed by the resource module of our RSVP model; (2) A description of anticipated hydrocarbon quality (i.e., specific gravity); and (3) The ranges within the...
30 CFR 203.86 - What is in a G&G report?
Code of Federal Regulations, 2013 CFR
2013-07-01
... establishing reservoir porosity or labeled points showing values used in calculating reservoir porosity such as... BOE) and oil fraction for your field computed by the resource module of our RSVP model; (2) A description of anticipated hydrocarbon quality (i.e., specific gravity); and (3) The ranges within the...
30 CFR 203.86 - What is in a G&G report?
Code of Federal Regulations, 2012 CFR
2012-07-01
... establishing reservoir porosity or labeled points showing values used in calculating reservoir porosity such as... BOE) and oil fraction for your field computed by the resource module of our RSVP model; (2) A description of anticipated hydrocarbon quality (i.e., specific gravity); and (3) The ranges within the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jevicki, Antal; Suzuki, Kenta
We continue the study of the Sachdev-Ye-Kitaev model in the Large N limit. Following our formulation in terms of bi-local collective fields with dynamical reparametrization symmetry, we perform perturbative calculations around the conformal IR point. As a result, these are based on an ε expansion which allows for analytical evaluation of correlators and finite temperature quantities.
Quantum criticality and first-order transitions in the extended periodic Anderson model
NASA Astrophysics Data System (ADS)
Hagymási, I.; Itai, K.; Sólyom, J.
2013-03-01
We investigate the behavior of the periodic Anderson model in the presence of d-f Coulomb interaction (Udf) using mean-field theory, variational calculation, and exact diagonalization of finite chains. The variational approach based on the Gutzwiller trial wave function gives a critical value of Udf and two quantum critical points (QCPs), where the valence susceptibility diverges. We derive the critical exponent for the valence susceptibility and investigate how the position of the QCP depends on the other parameters of the Hamiltonian. For larger values of Udf, the Kondo regime is bounded by two first-order transitions. These first-order transitions merge into a triple point at a certain value of Udf. For even larger Udf valence skipping occurs. Although the other methods do not give a critical point, they support this scenario.
One-point fitting of the flux density produced by a heliostat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collado, Francisco J.
Accurate and simple models for the flux density reflected by an isolated heliostat should be one of the basic tools for the design and optimization of solar power tower systems. In this work, the ability and the accuracy of the Universidad de Zaragoza (UNIZAR) and the DLR (HFCAL) flux density models to fit actual energetic spots are checked against heliostat energetic images measured at Plataforma Solar de Almeria (PSA). Both the fully analytic models are able to acceptably fit the spot with only one-point fitting, i.e., the measured maximum flux. As a practical validation of this one-point fitting, the interceptmore » percentage of the measured images, i.e., the percentage of the energetic spot sent by the heliostat that gets the receiver surface, is compared with the intercept calculated through the UNIZAR and HFCAL models. As main conclusions, the UNIZAR and the HFCAL models could be quite appropriate tools for the design and optimization, provided the energetic images from the heliostats to be used in the collector field were previously analyzed. Also note that the HFCAL model is much simpler and slightly more accurate than the UNIZAR model. (author)« less
Threshold-free method for three-dimensional segmentation of organelles
NASA Astrophysics Data System (ADS)
Chan, Yee-Hung M.; Marshall, Wallace F.
2012-03-01
An ongoing challenge in the field of cell biology is to how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower zboundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, and then parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 micron diameter) with less than 10% error, and validation using model convolution methods produce similar results. Thus, this method provides an accurate, automated method for measuring the size and morphology of organelles and can be generalized to measure cells and other objects on biologically relevant length-scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butlitsky, M. A.; Zelener, B. V.; Zelener, B. B.
A two-component plasma model, which we called a “shelf Coulomb” model has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamics properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The “shelf Coulomb” model can be compared to classical two-component (electron-proton) model where charges with zero size interact via a classical Coulomb law. With important difference for interaction of opposite charges: electrons and protons interact via the Coulomb law for largemore » distances between particles, while interaction potential is cut off on small distances. The cut off distance is defined by an arbitrary ε parameter, which depends on system temperature. All the thermodynamics properties of the model depend on dimensionless parameters ε and γ = βe{sup 2}n{sup 1/3} (where β = 1/k{sub B}T, n is the particle's density, k{sub B} is the Boltzmann constant, and T is the temperature) only. In addition, it has been shown that the virial theorem works in this model. All the calculations were carried over a wide range of dimensionless ε and γ parameters in order to find the phase transition region, critical point, spinodal, and binodal lines of a model system. The system is observed to undergo a first order gas-liquid type phase transition with the critical point being in the vicinity of ε{sub crit}≈13(T{sub crit}{sup *}≈0.076),γ{sub crit}≈1.8(v{sub crit}{sup *}≈0.17),P{sub crit}{sup *}≈0.39, where specific volume v* = 1/γ{sup 3} and reduced temperature T{sup *} = ε{sup −1}.« less
The influence of track modelling options on the simulation of rail vehicle dynamics
NASA Astrophysics Data System (ADS)
Di Gialleonardo, Egidio; Braghin, Francesco; Bruni, Stefano
2012-09-01
This paper investigates the effect of different models for track flexibility on the simulation of railway vehicle running dynamics on tangent and curved track. To this end, a multi-body model of the rail vehicle is defined including track flexibility effects on three levels of detail: a perfectly rigid pair of rails, a sectional track model and a three-dimensional finite element track model. The influence of the track model on the calculation of the nonlinear critical speed is pointed out and it is shown that neglecting the effect of track flexibility results in an overestimation of the critical speed by more than 10%. Vehicle response to stochastic excitation from track irregularity is also investigated, analysing the effect of track flexibility models on the vertical and lateral wheel-rail contact forces. Finally, the effect of the track model on the calculation of dynamic forces produced by wheel out-of-roundness is analysed, showing that peak dynamic loads are very sensitive to the track model used in the simulation.
Ab Initio Crystal Field for Lanthanides.
Ungur, Liviu; Chibotaru, Liviu F
2017-03-13
An ab initio methodology for the first-principle derivation of crystal-field (CF) parameters for lanthanides is described. The methodology is applied to the analysis of CF parameters in [Tb(Pc) 2 ] - (Pc=phthalocyanine) and Dy 4 K 2 ([Dy 4 K 2 O(OtBu) 12 ]) complexes, and compared with often used approximate and model descriptions. It is found that the application of geometry symmetrization, and the use of electrostatic point-charge and phenomenological CF models, lead to unacceptably large deviations from predictions based on ab initio calculations for experimental geometry. It is shown how the predictions of standard CASSCF (Complete Active Space Self-Consistent Field) calculations (with 4f orbitals in the active space) can be systematically improved by including effects of dynamical electronic correlation (CASPT2 step) and by admixing electronic configurations of the 5d shell. This is exemplified for the well-studied Er-trensal complex (H 3 trensal=2,2',2"-tris(salicylideneimido)trimethylamine). The electrostatic contributions to CF parameters in this complex, calculated with true charge distributions in the ligands, yield less than half of the total CF splitting, thus pointing to the dominant role of covalent effects. This analysis allows the conclusion that ab initio crystal field is an essential tool for the decent description of lanthanides. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment
NASA Astrophysics Data System (ADS)
Bernauer, J. C.; Diefenbach, J.; Elbakian, G.; Gavrilov, G.; Goerrissen, N.; Hasell, D. K.; Henderson, B. S.; Holler, Y.; Karyan, G.; Ludwig, J.; Marukyan, H.; Naryshkin, Y.; O'Connor, C.; Russell, R. L.; Schmidt, A.; Schneekloth, U.; Suvorov, K.; Veretennikov, D.
2016-07-01
The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations
NASA Astrophysics Data System (ADS)
Jayaram, V.; Crain, K.; Keller, G. R.
2011-12-01
We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single, multi-core CPU and graphical processing units (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid size typically in the range of 1m to 30m. The large high-resolution data grids in our studies employ a pre-filtered mipmap pyramid type representation for the grid data known as the Geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 to do fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on-the-fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies including crustal-scale models derived from complex geologic interpretations. For example, we used a 1KM Sphere model consisting of 105000 cells at 10m resolution with 100000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just its single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply appropriate formulation to those data points. We will present and compare the computational performance of the traditional prism method versus the line element approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired. This algorithm can be used for all fast forward model calculations of 3D geologic interpretations for data from airborne, space and submarine gravity, and FTG instrumentation.
NASA Astrophysics Data System (ADS)
Yurtseven, Hamit; Yılmaz, Aygül
2016-06-01
We study the temperature dependence of the heat capacity Cp for the pure CH4 and the coadsorbed CH4/CCl4 on graphite near the melting point. The heat capacity peaks are analyzed using the experimental data from the literature by means of the power-law formula. The critical exponents for the heat capacity are deduced below and above the melting point for CH4 (Tm = 104.8 K) and CH4/CCl4 (Tm = 99.2 K). Our exponent values are larger as compared with the predicted values of some theoretical models exhibiting second order transition. Our analyses indicate that the pure methane shows a nearly second order (weak discontinuity in the heat capacity peak), whereas the transition in coadsorbed CH4/CCl4 is of first order (apparent discontinuity in Cp). We also study the T - X phase diagram of a two-component system of CH3CCl3+CCl4 using the Landau phenomenological model. Phase lines of the R+L (rhombohedral+liquid) and FCC+L (face-centred cubic + liquid) are calculated using the observed T - X phase diagram of this binary mixture. Our results show that the Landau mean field theory describes the observed behavior of CH3CCl3+CCl4 adequately. From the calculated T - X phase diagram, critical behavior of some thermodynamic quantities can be predicted at various temperatures and concentrations (CCl4) for a binary mixture of CH3CCl3+CCl4.
NASA Astrophysics Data System (ADS)
Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc
2018-05-01
The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
NASA Astrophysics Data System (ADS)
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activities, large ejections of energetic electrons from radiation belts are deposited in the upper polar atmosphere where they play important roles in its physical and chemical processes, including VLF signals subionospheric propagation. Electron deposition can affect D-Region ionization, which are estimated based on ionization rates derived from energy depositions. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for the explicit electron tracking in magnetic fields. By expressing those results using the ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profile computations in the range of 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and IDL routines library is written to provide an end-user interface to the model.
Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges
2009-02-13
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
NASA Astrophysics Data System (ADS)
De Lucas, Javier
2015-03-01
A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented by using simple numerical tools, programmed in Microsoft Visual Basic for Application and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points, where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
Statistical time-dependent model for the interstellar gas
NASA Technical Reports Server (NTRS)
Gerola, H.; Kafatos, M.; Mccray, R.
1974-01-01
We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass
NASA Astrophysics Data System (ADS)
Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard
2013-06-01
The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant CQ ∝ |Vzz| and the asymmetry parameter ηQ that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.
Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass.
Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard
2013-06-26
The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant C(Q) is proportional to |V(zz)| and the asymmetry parameter η(Q) that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.
Georeferenced model simulations efficiently support targeted monitoring
NASA Astrophysics Data System (ADS)
Berlekamp, Jürgen; Klasmeier, Jörg
2010-05-01
The European Water Framework Directive (WFD) demands the good ecological and chemical status of surface waters. To meet the definition of good chemical status of the WFD surface water concentrations of priority pollutants must not exceed established environmental quality standards (EQS). Surveillance of the concentrations of numerous chemical pollutants in whole river basins by monitoring is laborious and time-consuming. Moreover, measured data do often not allow for immediate source apportionment which is a prerequisite for defining promising reduction strategies to be implemented within the programme of measures. In this context, spatially explicit model approaches are highly advantageous because they provide a direct link between local point emissions (e.g. treated wastewater) or diffuse non-point emissions (e.g. agricultural runoff) and resulting surface water concentrations. Scenario analyses with such models allow for a priori investigation of potential positive effects of reduction measures such as optimization of wastewater treatment. The geo-referenced model GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers) has been designed to calculate spatially resolved averaged concentrations for different flow conditions (e.g. mean or low flow) based on emission estimations for local point source emissions such as treated effluents from wastewater treatment plants. The methodology was applied to selected pharmaceuticals (diclofenac, sotalol, metoprolol, carbamazepin) in the Main river basin in Germany (approx. 27,290 km²). Average concentrations of the compounds were calculated for each river reach in the whole catchment. Simulation results were evaluated by comparison with available data from orienting monitoring and used to develop an optimal monitoring strategy for the assessment of water quality regarding micropollutants at the catchment scale.
Groundwater recharge from point to catchment scale
NASA Astrophysics Data System (ADS)
Leterme, Bertrand; Di Ciacca, Antoine; Laloy, Eric; Jacques, Diederik
2016-04-01
Accurate estimation of groundwater recharge is a challenging task as only a few devices (if any) can measure it directly. In this study, we discuss how groundwater recharge can be calculated at different temporal and spatial scales in the Kleine Nete catchment (Belgium). A small monitoring network is being installed, that is aimed to monitor the changes in dominant processes and to address data availability as one goes from the point to the catchment scale. At the point scale, groundwater recharge is estimated using inversion of soil moisture and/or water potential data and stable isotope concentrations (Koeniger et al. 2015). At the plot scale, it is proposed to monitor the discharge of a small drainage ditch in order to calculate the field groundwater recharge. Electrical conductivity measurements are necessary to separate shallow from deeper groundwater contribution to the ditch discharge (see Di Ciacca et al. poster in session HS8.3.4). At this scale, two or three-dimensional process-based vadose zone models will be used to model subsurface flow. At the catchment scale though, using a mechanistic, process-based model to estimate groundwater recharge is debatable (because of, e.g., the presence of numerous drainage ditches, mixed land use pixels, etc.). We therefore investigate to which extent various types of surrogate models can be used to make the necessary upscaling from the plot scale to the scale of the whole Kleine Nete catchment. Ref. Koeniger P, Gaj M, Beyer M, Himmelsbach T (2015) Review on soil water isotope based groundwater recharge estimations. Hydrological Processes, DOI: 10.1002/hyp.10775
Wide-Field Imaging System and Rapid Direction of Optical Zoom (WOZ)
2011-03-25
COMSOL Multiphysics, and ZEMAX optical design. The multiphysics design tool is nearing completion. We have demonstrated the ability to create a model in...and mechanical modeling to calculate the deformation resulting from the applied voltages. Finally, the deformed surface can be exported to ZEMAX via...MatLab. From ZEMAX , various analyses can be conducted to determine important parameters such as focal point, aberrations, and wavefront distortion
Multipole correction of atomic monopole models of molecular charge distribution. I. Peptides
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Keller, D. A.; Ornstein, R. L.; Rein, R.
1993-01-01
The defects in atomic monopole models of molecular charge distribution have been analyzed for several model-blocked peptides and compared with accurate quantum chemical values. The results indicate that the angular characteristics of the molecular electrostatic potential around functional groups capable of forming hydrogen bonds can be considerably distorted within various models relying upon isotropic atomic charges only. It is shown that these defects can be corrected by augmenting the atomic point charge models by cumulative atomic multipole moments (CAMMs). Alternatively, sets of off-center atomic point charges could be automatically derived from respective multipoles, providing approximately equivalent corrections. For the first time, correlated atomic multipoles have been calculated for N-acetyl, N'-methylamide-blocked derivatives of glycine, alanine, cysteine, threonine, leucine, lysine, and serine using the MP2 method. The role of the correlation effects in the peptide molecular charge distribution are discussed.
Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).
Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie
2017-01-01
This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.
Julien, Maxime; Gilbert, Alexis; Yamada, Keita; Robins, Richard J; Höhener, Patrick; Yoshida, Naohiro; Remaud, Gérald S
2018-01-01
The enrichment factor (ε) is a common way to express Isotope Effects (IEs) associated with a phenomenon. Many studies determine ε using a Rayleigh-plot, which needs multiple data points. More recent articles describe an alternative method using the Rayleigh equation that allows the determination of ε using only one experimental point, but this method is often subject to controversy. However, a calculation method using two points (one experimental point and one at t 0 ) should lead to the same results because the calculation is derived from the Rayleigh equation. But, it is frequently asked "what is the valid domain of use of this two point calculation?" The primary aim of the present work is a systematic comparison of results obtained with these two methodologies and the determination of the conditions required for the valid calculation of ε. In order to evaluate the efficiency of the two approaches, the expanded uncertainty (U) associated with determining ε has been calculated using experimental data from three published articles. The second objective of the present work is to describe how to determine the expanded uncertainty (U) associated with determining ε. Comparative methodologies using both Rayleigh-plot and two point calculation are detailed and it is clearly demonstrated that calculation of ε using a single data point can give the same result as a Rayleigh-plot provided one strict condition is respected: that the experimental value is measured at a small fraction of unreacted substrate (f < 30%). This study will help stable isotope users to present their results in a more rigorous expression: ε ± U and therefore to define better the significance of an experimental results prior interpretation. Capsule: Enrichment factor can be determined through two different methods and the calculation of associated expanded uncertainty allows checking its significance. Copyright © 2017 Elsevier B.V. All rights reserved.
Xu, Y.; Xia, J.; Miller, R.D.
2007-01-01
The need for incorporating the traction-free condition at the air-earth boundary for finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations for the air-earth boundary were undertaken using the (2,2) (the finite-difference operators are second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, the scheme of modifying material properties based on transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The method proposed achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy; 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersive analysis of simulated seismographs from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. ?? 2007 Society of Exploration Geophysicists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ni, Yicun; Skinner, J. L.
2016-06-07
Supercooled water exhibits many thermodynamic anomalies, and several scenarios have been proposed to interpret them, among which the liquid-liquid critical point (LLCP) hypothesis is the most commonly discussed. We investigated Widom lines and the LLCP of deeply supercooled water, by using molecular dynamics simulation with a newly reparameterized water model that explicitly includes three-body interactions. Seven isobars are studied from ambient pressure to 2.5 kbar, and Widom lines are identified by calculating maxima in the coefficient of thermal expansion and the isothermal compressibility (both with respect to temperature). From these data we estimate that the LLCP of the new watermore » model is at 180 K and 2.1 kbar. The oxygen radial distribution function is calculated along the 2 kbar isobar. It shows a steep change in the height of its second peak between 180 and 185 K, which indicates a transition between the high-density liquid and low-density liquid phases and which is consistent with the ascribed location of the critical point. The good agreement of the height of the second peak of the radial distribution function between simulation and experiment at 1 bar, as a function of temperature, supports the validity of the model. The location of the LLCP within the model is close to the kink in the experimental homogeneous nucleation line. We use existing experimental data to argue that the experimental LLCP is at 168 K and 1.95 kbar and speculate how this LLCP and its Widom line might be responsible for the kink in the homogeneous nucleation line.« less
Meshless deformable models for 3D cardiac motion and strain analysis from tagged MRI.
Wang, Xiaoxu; Chen, Ting; Zhang, Shaoting; Schaerer, Joël; Qian, Zhen; Huh, Suejung; Metaxas, Dimitris; Axel, Leon
2015-01-01
Tagged magnetic resonance imaging (TMRI) provides a direct and noninvasive way to visualize the in-wall deformation of the myocardium. Due to the through-plane motion, the tracking of 3D trajectories of the material points and the computation of 3D strain field call for the necessity of building 3D cardiac deformable models. The intersections of three stacks of orthogonal tagging planes are material points in the myocardium. With these intersections as control points, 3D motion can be reconstructed with a novel meshless deformable model (MDM). Volumetric MDMs describe an object as point cloud inside the object boundary and the coordinate of each point can be written in parametric functions. A generic heart mesh is registered on the TMRI with polar decomposition. A 3D MDM is generated and deformed with MR image tagging lines. Volumetric MDMs are deformed by calculating the dynamics function and minimizing the local Laplacian coordinates. The similarity transformation of each point is computed by assuming its neighboring points are making the same transformation. The deformation is computed iteratively until the control points match the target positions in the consecutive image frame. The 3D strain field is computed from the 3D displacement field with moving least squares. We demonstrate that MDMs outperformed the finite element method and the spline method with a numerical phantom. Meshless deformable models can track the trajectory of any material point in the myocardium and compute the 3D strain field of any particular area. The experimental results on in vivo healthy and patient heart MRI show that the MDM can fully recover the myocardium motion in three dimensions. Copyright © 2014. Published by Elsevier Inc.
Analysis of motion of the body of a motor car hit on its side by another passenger car
NASA Astrophysics Data System (ADS)
Gidlewski, M.; Prochowski, L.
2016-09-01
Based on an analysis of the course of several experimental crash tests, a physical model and then a mathematical model were prepared to describe the motion of the vehicle bodies during the impact phase. The motion was analysed in a global coordinate system attached to the road surface. Local coordinate systems were also adopted, with their origins placed at the centres of mass of the vehicles. Equations of motion of the model were derived. The calculation results made it possible to determine the influence of the location of the point of impact on the vehicle side on, among other things:
- the time history of the impact force exerted by the impacting car (A) on the impacted car (B), as well as characteristic values of this force and of the impulse of the impact force;
- the time histories of changes in the velocity of the centre of vehicle mass and in the angle of deviation of the velocity vector from the direction of motion of the impacted vehicle before the collision;
- the trajectory of the centre of mass and the angle of rotation of the body of the impacted vehicle.
The calculations were focused on the initial period of motion of the body of the impacted vehicle, up to 200 ms after the start of the collision process. After this time, the vehicles separate from each other and move independently. The results obtained for this initial period make it possible to determine the initial values of the parameters needed for further calculations of the free post-impact motion of the cars.
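As a rough illustration of the bookkeeping these equations imply, the sketch below integrates a placeholder impact-force history to obtain the impulse and, from it, the changes in the velocity of the centre of mass and in the yaw rate of the impacted car. All numerical values (mass, yaw inertia, lever arm, force pulse) are assumed for the example, not taken from the reported crash tests.

```python
import numpy as np

# Placeholder force pulse over the 0-200 ms contact phase described above.
t = np.linspace(0.0, 0.200, 401)               # time (s)
F = 120e3 * np.sin(np.pi * t / 0.200) ** 2     # toy lateral force pulse (N)

m_B = 1300.0   # mass of impacted car B (kg), assumed
I_z = 2100.0   # yaw moment of inertia of car B (kg m^2), assumed
d = 0.8        # lever arm: impact point ahead of the centre of mass (m), assumed

# Trapezoidal integration of the force history gives the impulse of the impact force.
J = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(t))

dv = J / m_B          # lateral velocity change of the centre of mass (m/s)
domega = d * J / I_z  # yaw-rate change about the vertical axis (rad/s)

print(f"impulse J = {J / 1e3:.1f} kN s, dv = {dv:.2f} m/s, domega = {domega:.2f} rad/s")
```

The velocity and yaw-rate changes at 200 ms are exactly the starting values that the abstract says feed the subsequent free-motion calculation.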
Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools
NASA Astrophysics Data System (ADS)
Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.
2017-12-01
Evaluating the spatial variability of an aquifer's water table provides useful information for water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram is very important for the optimal performance of the method. This work compares three different criteria for assessing how well a theoretical variogram fits the experimental one: the least squares sum method, the Akaike Information Criterion and Cressie's indicator. Moreover, distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis metrics are applied to calculate the distance between the observation and prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic system is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the power-law variogram model and the Manhattan distance metric within ordinary Kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at the local scale. Finally, maps of hydraulic head spatial variability and of prediction uncertainty are constructed for the area with the two approaches, comparing their advantages and drawbacks.
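As an illustration of the variogram-selection step, the sketch below fits a power-law and a Gaussian model to a placeholder experimental variogram and ranks them by the least-squares sum and AIC, two of the three criteria compared in the work. The lag and semivariance values are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(h, c, a):
    # gamma(h) = c * h**a, with 0 < a < 2 (unbounded, fractional-Brownian-like)
    return c * h ** a

def gaussian(h, s, r):
    # gamma(h) = s * (1 - exp(-(h/r)**2)), bounded with sill s and range r
    return s * (1.0 - np.exp(-(h / r) ** 2))

# Placeholder experimental variogram (lags in km, semivariance in m^2).
lags = np.array([0.5, 1, 2, 3, 4, 6, 8, 10])
gamma = np.array([0.8, 1.5, 2.7, 3.8, 4.6, 6.1, 7.2, 8.0])

def aic(resid, k):
    # Gaussian-likelihood AIC for a least-squares fit with k parameters.
    n = len(resid)
    return n * np.log(np.sum(resid ** 2) / n) + 2 * k

for name, model, p0 in [("power-law", power_law, (1.0, 1.0)),
                        ("Gaussian", gaussian, (8.0, 5.0))]:
    popt, _ = curve_fit(model, lags, gamma, p0=p0)
    resid = gamma - model(lags, *popt)
    print(f"{name:9s}  SSE = {np.sum(resid**2):.3f}  AIC = {aic(resid, len(popt)):.2f}")
```

The same comparison run on a real experimental variogram, with the chosen distance metric used when binning point pairs into lags, is what drives the model choice reported in the abstract.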