Sample records for model calculation applied

  1. How much crosstalk can be allowed in a stereoscopic system at various grey levels?

    NASA Astrophysics Data System (ADS)

    Shestak, Sergey; Kim, Daesik; Kim, Yongie

    2012-03-01

    We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human vision sensitivity. Instead of the linear model of the just noticeable difference (JND) known as Weber's law, we applied Barten's nonlinear model. The predicted crosstalk threshold varies with the background luminance, and the calculated threshold values are in reasonable agreement with known experimental data. We also calculated the perceptual threshold of crosstalk for various combinations of applied grey levels; this result can be used for the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase of the displayable image contrast with a reduction of the maximum displayable luminance.
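
    As a point of reference for the comparison the abstract draws, the following minimal sketch computes the crosstalk visibility threshold under the linear Weber-law JND alone; it does not implement Barten's model, and the Weber fraction and luminance values are illustrative assumptions rather than numbers from the paper.

    ```python
    # Minimal sketch: crosstalk visibility threshold under a linear Weber-law JND,
    # the baseline the abstract contrasts with Barten's nonlinear model.
    # The Weber fraction and luminance levels below are illustrative assumptions.

    WEBER_FRACTION = 0.02   # assumed just-noticeable relative luminance change
    L_WHITE = 200.0         # assumed display white level, cd/m^2

    def crosstalk_threshold_percent(l_background: float,
                                    l_unintended: float = L_WHITE) -> float:
        """Crosstalk (% of the unintended-eye signal) that just reaches the JND
        over a uniform background of luminance l_background."""
        delta_l_jnd = WEBER_FRACTION * l_background   # Weber's law: JND scales with L
        return 100.0 * delta_l_jnd / l_unintended

    for l_bg in (1.0, 10.0, 50.0, 100.0):
        print(f"background {l_bg:6.1f} cd/m^2 -> "
              f"threshold {crosstalk_threshold_percent(l_bg):.2f} % crosstalk")
    ```

    Under this linear baseline the threshold scales strictly with background luminance; the abstract's point is that the nonlinear Barten model modifies this dependence.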

  2. Multipoint Green's functions in 1 + 1 dimensional integrable quantum field theories

    DOE PAGES

    Babujian, H. M.; Karowski, M.; Tsvelik, A. M.

    2017-02-14

    We calculate multipoint Green functions in 1+1 dimensional integrable quantum field theories. We use the crossing formula for general models and calculate the 3- and 4-point functions, taking into account only the lower nontrivial intermediate-state contributions. We then apply the general results to the examples of the scaling Z2 Ising model, the sinh-Gordon model and the Z3 scaling Potts model, and demonstrate these calculations explicitly. The results can be applied to physical phenomena such as Raman scattering.

  3. The sdg interacting-boson model applied to 168Er

    NASA Astrophysics Data System (ADS)

    Yoshinaga, N.; Akiyama, Y.; Arima, A.

    1986-03-01

    The sdg interacting-boson model is applied to 168Er. Energy levels and E2 transitions are calculated. This model is shown to solve the problem of anharmonicity regarding the excitation energy of the first Kπ=4+ band relative to that of the first Kπ=2+ one. The level scheme including the Kπ=3+ band is well reproduced and the calculated B(E2)'s are consistent with the experimental data.

  4. 76 FR 53497 - Florida Power and Light Company; St. Lucie Plant, Units 1 and 2; Environmental Assessment and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... Appendix G to the Code for calculating K IM factors, and instead applies FEM [finite element modeling..., Units 1 and 2 are calculated using the CE NSSS finite element modeling methods. The Need for the... Society of Mechanical Engineers (ASME) Code, Section XI, Appendix G) or determined by applying finite...

  5. On the computation of molecular surface correlations for protein docking using Fourier techniques.

    PubMed

    Sakk, Eric

    2007-08-01

    The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
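
    The circular-versus-linear distinction the abstract turns on can be reproduced in a few lines: a product of unpadded DFTs wraps around, while padding the grids to at least N + M - 1 points recovers the linear correlation. The 1-D arrays below are stand-ins for the much larger 3-D surface grids used in docking.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random(16)   # "receptor" surface function
    b = rng.random(8)    # "ligand" surface function

    # Circular correlation on the unpadded grid (what a naive DFT product gives).
    n = len(a)
    circular = np.fft.ifft(np.fft.fft(a, n) * np.conj(np.fft.fft(b, n))).real

    # Linear correlation obtained by zero-padding to n + len(b) - 1 before transforming.
    L = len(a) + len(b) - 1
    linear_fft = np.fft.ifft(np.fft.fft(a, L) * np.conj(np.fft.fft(b, L))).real
    # Reorder so lags run from -(len(b)-1) to len(a)-1, matching np.correlate.
    linear_fft = np.roll(linear_fft, len(b) - 1)

    # Direct (reference) linear correlation.
    linear_direct = np.correlate(a, b, mode="full")

    print("padded FFT == direct linear correlation:",
          np.allclose(linear_fft, linear_direct))
    print("unpadded (circular) matches linear at zero lag only:",
          np.allclose(circular[0], linear_direct[len(b) - 1]))
    ```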

  6. Predicting performance of polymer-bonded Terfenol-D composites under different magnetic fields

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2009-09-01

    Taking the demagnetization effect into account, a model for calculating the magnetostriction of a single particle under an applied field is first created. Based on the Eshelby equivalent-inclusion and Mori-Tanaka methods, an approach for calculating the average magnetostriction of the composites under any applied field, as well as at saturation, is then developed by treating the particle magnetostriction as an eigenstrain. The results calculated with this approach indicate that the saturation magnetostriction of magnetostrictive composites increases with an increase of particle aspect ratio and particle volume fraction and with a decrease of the Young's modulus of the matrix. The influence of an applied field on the magnetostriction of the composites becomes more significant at larger particle volume fractions or aspect ratios. Experiments were performed to verify the effectiveness of the model; the results indicate that the model can only provide approximate results.

  7. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

    Recently, isospin symmetry breaking in the mass 60-70 region has been investigated with large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). In the course of these investigations, we encountered a subtle numerical problem in large-scale shell-model calculations for odd-odd N = Z nuclei. Here we focus on how to solve this problem with the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  8. Paleotemperature reconstruction from mammalian phosphate δ18O records - an alternative view on data processing

    NASA Astrophysics Data System (ADS)

    Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej

    2017-04-01

    The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and tooth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least-squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, calculating temperatures in reverse, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than the causal models, with 12-65% lower mean absolute errors. Moreover, least-squares regression (LSM) is more appropriate than reduced major axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. Direct comparison of paleotemperature results from different laboratories and studies can only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016. Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
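
    The statistical point about the two regression directions can be illustrated on synthetic data: the inverted (causal) fit regresses δ18O on temperature and then inverts the equation, while the transposed (predictive) fit regresses temperature on δ18O directly. The slope, intercept and noise level below are assumptions for illustration, not the paper's calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic calibration data: delta18O responds linearly to air temperature
    # with measurement scatter (slope/intercept/noise are illustrative assumptions).
    t_true = rng.uniform(-5.0, 25.0, size=200)
    d18o = 0.55 * t_true - 13.0 + rng.normal(0.0, 1.0, size=t_true.size)

    # Causal ("inverted") fit: regress d18O on T, then algebraically invert.
    b1, b0 = np.polyfit(t_true, d18o, 1)
    t_inverted = (d18o - b0) / b1

    # Predictive ("transposed") fit: regress T directly on d18O.
    c1, c0 = np.polyfit(d18o, t_true, 1)
    t_transposed = c1 * d18o + c0

    def mae(estimate):
        return np.mean(np.abs(estimate - t_true))

    print(f"inverted fit   MAE = {mae(t_inverted):.2f} K")
    print(f"transposed fit MAE = {mae(t_transposed):.2f} K")
    ```

    In this setup the transposed fit minimizes the in-sample prediction error in temperature by construction, which mirrors the behaviour the study reports for the real data sets.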

  9. Application of an unsteady-state model for predicting vertical temperature distribution to an existing atrium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takemasa, Yuichi; Togari, Satoshi; Arai, Yoshinobu

    1996-11-01

    Vertical temperature differences tend to be great in a large indoor space such as an atrium, and it is important to predict variations of vertical temperature distribution in the early stage of the design. The authors previously developed and reported on a new simplified unsteady-state calculation model for predicting vertical temperature distribution in a large space. In this paper, this model is applied to predicting the vertical temperature distribution in an existing low-rise atrium that has a skylight and is affected by transmitted solar radiation. Detailed calculation procedures that use the model are presented with all the boundary conditions, and analytical simulations are carried out for the cooling condition. Calculated values are compared with measured results. The results of the comparison demonstrate that the calculation model can be applied to the design of a large space. The effects of occupied-zone cooling are also discussed and compared with those of all-zone cooling.

  10. Predicting relationship between magnetostriction and applied field of magnetostrictive composites

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2008-03-01

    Taking the demagnetization effect into consideration, a model for calculating the magnetostriction of a single particle under an applied field is first built. Then, treating the particle magnetostriction as an eigenstrain and based on the Eshelby equivalent-inclusion and Mori-Tanaka methods, an approach for calculating the average magnetostriction of the composites under any applied field, as well as at saturation, is studied. Results calculated with this approach indicate that the saturation magnetostriction of magnetostrictive composites increases with increasing particle aspect ratio and particle volume fraction and with decreasing Young's modulus of the matrix, and that the influence of the applied field on the magnetostriction of the composites becomes more significant at larger particle volume fractions or aspect ratios.

  11. DIFFRACTION FROM MODEL CRYSTALS

    USDA-ARS?s Scientific Manuscript database

    Although calculating X-ray diffraction patterns from atomic coordinates of a crystal structure is a widely available capability, calculation from non-periodic arrays of atoms has not been widely applied to cellulose. Non-periodic arrays result from modeling studies that, even though started with at...

  12. Analytical calculation of vibrations of electromagnetic origin in electrical machines

    NASA Astrophysics Data System (ADS)

    McCloskey, Alex; Arrasate, Xabier; Hernández, Xabier; Gómez, Iratxo; Almandoz, Gaizka

    2018-01-01

    Electrical motors are widely used and are often required to satisfy comfort specifications. Thus, vibration response estimations are necessary to reach optimum machine designs. This work presents an improved analytical model to calculate the vibration response of an electrical machine. The stator and windings are modelled as a double circular cylindrical shell. As the stator is a laminated structure, orthotropic properties are applied to it; the values of those material properties are calculated from the characteristics of the motor and the known material properties taken from previous works. The proposed model takes the axial direction into account, so that length is considered, as well as the contribution of the windings, which differs from one machine to another. These aspects make the model valuable for a wide range of electrical motor types. In order to validate the analytical calculation, natural frequencies are calculated and compared to those obtained by the finite element method (FEM), giving relative errors below 10% for several combinations of circumferential and axial mode order. The analytical vibration calculation is also validated against acceleration measurements on a real machine. The comparison shows good agreement for the proposed model, with the most important frequency components of the same order of magnitude. A simplified two-dimensional model is also applied, and its results are less satisfactory.

  13. Elaborate SMART MCNP Modelling Using ANSYS and Its Applications

    NASA Astrophysics Data System (ADS)

    Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng

    2017-09-01

    A 3-dimensional MCNP model can be widely used to evaluate various design parameters such as the core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into the geometry part of an MCNP input. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated 3-dimensional MCNP model can be generated. The MCNP model is applied to evaluate the assembly weighting factors at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and with assembly weighting factors calculated by DORT, a deterministic Sn code.

  14. An analytical model for calculating microdosimetric distributions from heavy ions in nanometer site targets.

    PubMed

    Czopyk, L; Olko, P

    2006-01-01

    The analytical model of Xapsos used for calculating microdosimetric spectra is based on the observation that straggling of energy loss can be approximated by a log-normal distribution of energy deposition. The model was applied to calculate microdosimetric spectra in spherical targets of nanometer dimensions from heavy ions at energies between 0.3 and 500 MeV amu^-1. We recalculated the originally assumed 1/E^2 initial delta electrons spectrum by applying the Continuous Slowing Down Approximation for secondary electrons. We also modified the energy deposition from electrons of energy below 100 keV, taking into account the effective path length of the scattered electrons. Results of our model calculations agree favourably with results of Monte Carlo track structure simulations using MOCA-14 for light ions (Z = 1-8) of energy ranging from E = 0.3 to 10.0 MeV amu^-1 as well as with results of Nikjoo for a wall-less proportional counter (Z = 18).
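
    A minimal sketch of the central assumption, assuming illustrative values for the site size, mean energy deposit and straggling width (none of which are taken from the Xapsos model or the paper): sample log-normal energy deposits in a nanometre sphere and form the lineal-energy spectrum with its frequency- and dose-mean values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    d_nm = 10.0                                  # assumed spherical site diameter, nm
    mean_chord_um = (2.0 / 3.0) * d_nm * 1e-3    # mean chord length of a sphere, um

    mean_eV = 120.0          # assumed mean energy deposited per event, eV
    rel_sigma = 0.6          # assumed relative straggling width

    # Log-normal parameters chosen so the samples have the requested mean/variance.
    sigma2 = np.log(1.0 + rel_sigma**2)
    mu = np.log(mean_eV) - 0.5 * sigma2
    deposits_eV = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=100_000)

    # Lineal energy y = energy imparted / mean chord length (keV/um).
    y = (deposits_eV * 1e-3) / mean_chord_um
    hist, edges = np.histogram(y, bins=100, density=True)   # single-event spectrum f(y)

    print(f"frequency-mean lineal energy yF = {y.mean():.1f} keV/um")
    print(f"dose-mean lineal energy      yD = {(y**2).mean() / y.mean():.1f} keV/um")
    print(f"f(y) peaks near {edges[np.argmax(hist)]:.1f} keV/um")
    ```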

  15. Calculation of short-wave signal amplitude on the basis of the waveguide approach and the method of characteristics

    NASA Astrophysics Data System (ADS)

    Mikhailov, S. Ia.; Tumatov, K. I.

    The paper compares the results obtained using two methods to calculate the amplitude of a short-wave signal field incident on or reflected from a perfectly conducting earth. A technique is presented for calculating the geometric characteristics of the field based on the waveguide approach. It is shown that applying an extended system of characteristic equations to calculate the field amplitude is inadmissible in models that include discontinuities in the second derivatives of the permittivity unless a suitable treatment of the discontinuity points is applied.

  16. Modeling the forces of cutting with scissors.

    PubMed

    Mahvash, Mohsen; Voo, Liming M; Kim, Diana; Jeung, Kristin; Wainer, Joshua; Okamura, Allison M

    2008-03-01

    Modeling forces applied to scissors during cutting of biological materials is useful for surgical simulation. Previous approaches to haptic display of scissor cutting are based on recording and replaying measured data. This paper presents an analytical model based on the concepts of contact mechanics and fracture mechanics to calculate forces applied to scissors during cutting of a slab of material. The model considers the process of cutting as a sequence of deformation and fracture phases. During deformation phases, forces applied to the scissors are calculated from a torque-angle response model synthesized from measurement data multiplied by a ratio that depends on the position of the cutting crack edge and the curve of the blades. Using the principle of conservation of energy, the forces of fracture are related to the fracture toughness of the material and the geometry of the blades of the scissors. The forces applied to scissors generally include high-frequency fluctuations. We show that the analytical model accurately predicts the average applied force. The cutting model is computationally efficient, so it can be used for real-time computations such as haptic rendering. Experimental results from cutting samples of paper, plastic, cloth, and chicken skin confirm the model, and the model is rendered in a haptic virtual environment.

  17. Study on Development of 1D-2D Coupled Real-time Urban Inundation Prediction model

    NASA Astrophysics Data System (ADS)

    Lee, Seungsoo

    2017-04-01

    In recent years, abnormal weather conditions due to climate change have been experienced around the world, so countermeasures for flood defense are an urgent task. In this research, a 1D-2D coupled real-time urban inundation prediction model is developed using predicted precipitation data based on remote sensing technology. The 1-dimensional (1D) sewerage system analysis model introduced by Lee et al. (2015) is used to simulate inlet and overflow phenomena by interacting with surface flow as well as flow in conduits. A 2-dimensional (2D) grid mesh refinement method is applied to depict road networks while keeping calculation times effective. The 2D surface model is coupled with the 1D sewerage analysis model in order to consider the bi-directional flow between the two. A parallel computing method, OpenMP, is also applied to reduce calculation time. The model is evaluated by applying it to the extreme rainfall event of 25 August 2014, which caused severe inundation damage in Busan, Korea. The Oncheoncheon basin is selected as the study basin, and observed radar data are used in place of predicted rainfall data. The model shows acceptable calculation speed and accuracy, so it is expected that the model can be used in a real-time urban inundation forecasting system to minimize damages.

  18. A STUDY OF PREDICTED BONE MARROW DISTRIBUTION ON CALCULATED MARROW DOSE FROM EXTERNAL RADIATION EXPOSURES USING TWO SETS OF IMAGE DATA FOR THE SAME INDIVIDUAL

    PubMed Central

    Caracappa, Peter F.; Chao, T. C. Ephraim; Xu, X. George

    2010-01-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body. PMID:19430219

  19. A study of predicted bone marrow distribution on calculated marrow dose from external radiation exposures using two sets of image data for the same individual.

    PubMed

    Caracappa, Peter F; Chao, T C Ephraim; Xu, X George

    2009-06-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogeneous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computational dose models of the human body.
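
    A toy numerical sketch of the two distribution methods compared in the two records above, using hypothetical marrow masses and cellularity factors (the actual values come from the CT segmentation and ICRP reference data, not from this example):

    ```python
    # Two ways of distributing red marrow over bone segments, as in the abstract:
    # (1) a single whole-body red fraction, (2) segment-specific cellularity factors.
    # All numbers below are hypothetical placeholders.

    total_marrow_g = {"skull": 150.0, "ribs": 280.0, "pelvis": 420.0, "femora": 330.0}
    cellularity = {"skull": 0.38, "ribs": 0.70, "pelvis": 0.48, "femora": 0.25}

    whole_body_red_fraction = 0.45   # assumed overall red-marrow fraction

    # Method 1: homogeneous red/yellow mixture throughout the skeleton.
    homogeneous = {seg: m * whole_body_red_fraction for seg, m in total_marrow_g.items()}

    # Method 2: segment-specific cellularity factors.
    by_cellularity = {seg: m * cellularity[seg] for seg, m in total_marrow_g.items()}

    for seg in total_marrow_g:
        print(f"{seg:7s}  homogeneous {homogeneous[seg]:6.1f} g   "
              f"cellularity {by_cellularity[seg]:6.1f} g")
    ```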

  20. Impact of multilayered compression bandages on sub-bandage interface pressure: a model.

    PubMed

    Al Khaburi, J; Nelson, E A; Hutchinson, J; Dehghani-Sanij, A A

    2011-03-01

    Multi-component medical compression bandages are widely used to treat venous leg ulcers. The sub-bandage interface pressures induced by the individual components of multi-component compression bandage systems are not always simply additive. Current models of compression bandage performance do not take account of the increase in leg circumference as each bandage is applied, and this may account for the difference between predicted and actual pressures. The objective of this work was to calculate the interface pressure when a multi-component compression bandage system is applied to a leg, using thick-wall cylinder theory to estimate the sub-bandage pressure over the leg. A mathematical model was developed based on thick-wall cylinder theory to include bandage thickness in the calculation of the interface pressure in multi-component compression systems. In multi-component compression systems, the interface pressure corresponds to the sum of the pressures applied by the individual bandage layers. However, the change in limb diameter caused by additional bandage layers should be considered in the calculation. Adding the interface pressures produced by the single components without considering the bandage thickness will result in an overestimate of the overall interface pressure produced by the multi-component compression system. At the ankle (circumference 25 cm) this error can be 19.2% or even more in the case of four-component bandaging systems. Bandage thickness should therefore be considered when calculating the pressure applied using multi-component compression systems.
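
    The arithmetic behind the overestimate can be sketched with the simpler thin-wall (Laplace) relation P = T/r per layer in place of the paper's thick-wall cylinder model: each additional component is applied over a radius enlarged by the layers beneath it. Tension and thickness values are illustrative assumptions, so the resulting percentage differs from the paper's 19.2% figure.

    ```python
    import math

    # Hedged sketch: per-layer pressure from a thin-wall Laplace relation,
    # with and without accounting for the growth of the limb radius as
    # successive bandage components are applied. All values are illustrative.

    circumference_cm = 25.0                              # ankle circumference (abstract)
    r0 = circumference_cm / (2.0 * math.pi) / 100.0      # radius in metres

    tension_N_per_m = 60.0     # assumed bandage tension per unit width
    thickness_m = 0.003        # assumed thickness of each component
    n_layers = 4

    # Naive estimate: every layer assumed to act on the bare-limb radius.
    naive = sum(tension_N_per_m / r0 for _ in range(n_layers))

    # Thickness-corrected estimate: each layer acts on the current outer radius.
    layered, r = 0.0, r0
    for _ in range(n_layers):
        layered += tension_N_per_m / r
        r += thickness_m

    overestimate = 100.0 * (naive - layered) / layered
    print(f"naive sum          : {naive:7.1f} Pa")
    print(f"thickness-corrected: {layered:7.1f} Pa")
    print(f"overestimate       : {overestimate:5.1f} %")
    ```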

  1. Calculation of AC loss in two-layer superconducting cable with equal currents in the layers

    NASA Astrophysics Data System (ADS)

    Erdogan, Muzaffer

    2016-12-01

    A new method for calculating the AC loss of two-layer superconducting (SC) power transmission cables using the commercial software Comsol Multiphysics, relying on the assumption of equal partition of current between the layers, is proposed. When the method is applied to a cable composed of two coaxial cylindrical SC tubes, the results are in good agreement with the analytical results of the duoblock model. The method is then applied to calculate the AC losses of a cable composed of a cylindrical copper former surrounded by two coaxial cylindrical layers of superconducting tapes embedded in an insulating medium, and the tape-on-tape and tape-on-gap configurations are compared. Good agreement between the duoblock model and the numerical results is observed for the tape-on-gap cable.

  2. A Novel Degradation Identification Method for Wind Turbine Pitch System

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Dong

    2018-04-01

    It is difficult for the traditional threshold-value method to identify the degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a condition-based maintenance strategy for wind turbines was proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and the monitoring parameters of the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and generated power as input parameters and wheel rotation speed, pitch angle and motor driving current for the three blades as output parameters. The difference between the on-line measurements and the values calculated from the MIMO regression model, built with the least-squares support vector machine (LSSVM) method, was defined as the Observed Vector of the system. A Gaussian mixture model (GMM) was applied to fit the distribution of the multi-dimensional Observed Vectors. Using the established model, the Degradation Index was calculated from the SCADA data of a wind turbine that damaged its pitch bearing retainer and rolling elements, which illustrated the feasibility of the proposed method.
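
    A hedged sketch of the GMM scoring step described above, with synthetic residual vectors standing in for the (measured minus LSSVM-predicted) pitch-system signals; the paper's exact Degradation Index definition may differ from the negative log-likelihood used here.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Fit a Gaussian mixture to residual ("Observed") vectors collected while the
    # pitch system is healthy, then score new residuals; a low likelihood under
    # the healthy model signals degradation. Residuals here are synthetic.

    rng = np.random.default_rng(7)
    healthy_residuals = rng.normal(0.0, 1.0, size=(2000, 5))

    gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    gmm.fit(healthy_residuals)

    def degradation_index(residuals: np.ndarray) -> np.ndarray:
        """Higher values = less likely under the healthy model = more degraded."""
        return -gmm.score_samples(residuals)

    new_healthy = rng.normal(0.0, 1.0, size=(200, 5))
    new_degraded = rng.normal(1.5, 2.0, size=(200, 5))   # drifted, noisier residuals
    print("healthy  index:", round(float(degradation_index(new_healthy).mean()), 2))
    print("degraded index:", round(float(degradation_index(new_degraded).mean()), 2))
    ```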

  3. Radiation risk predictions for Space Station Freedom orbits

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Atwell, William; Weyland, Mark; Hardy, Alva C.; Wilson, John W.; Townsend, Lawrence W.; Shinn, Judy L.; Katz, Robert

    1991-01-01

    Risk assessment calculations are presented for the preliminary proposed solar minimum and solar maximum orbits for Space Station Freedom (SSF). Integral linear energy transfer (LET) fluence spectra are calculated for the trapped proton and GCR environments. Organ dose calculations are discussed using the computerized anatomical man model. The cellular track model of Katz is applied to calculate cell survival, transformation, and mutation rates for various aluminum shields. Comparisons between relative biological effectiveness (RBE) and quality factor (QF) values for SSF orbits are made.

  4. A method for simulating the entire leaking process and calculating the liquid leakage volume of a damaged pressurized pipeline.

    PubMed

    He, Guoxi; Liang, Yongtu; Li, Yansong; Wu, Mengyu; Sun, Liying; Xie, Cheng; Li, Feng

    2017-06-15

    The accidental leakage of long-distance pressurized oil pipelines is a major source of risk, capable of causing extensive damage to human health and the environment. However, the complexity of the leaking process, with its complex boundary conditions, makes the leakage volume difficult to calculate. In this study, the leaking process is divided into four stages based on the strength of the transient pressure, and three models are established to calculate the leaking flowrate and volume. First, a negative-pressure-wave propagation and attenuation model is applied to calculate the sizes of the orifices. Second, a transient oil leaking model, consisting of continuity, momentum conservation, energy conservation and orifice flow equations, is built to calculate the leakage volume. Third, a steady-state oil leaking model is employed to calculate the leakage after the valves and pumps shut down. Moreover, sensitive factors that affect the leak coefficient of the orifices and the leakage volume are analyzed to determine the most influential one. To validate the numerical simulation, two types of leakage test with different sizes of leakage holes were conducted on Sinopec product pipelines. Further validation was carried out with commercial software to supplement the experiments. The leaking process under different leaking conditions is thus described and analyzed.
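
    The orifice-flow relation that typically closes leak models of this kind can be sketched as follows; the discharge coefficient, oil density, hole size and pressure difference are illustrative assumptions, and the paper's transient model adds continuity, momentum and energy equations on top of this.

    ```python
    import math

    def orifice_leak_rate(d_orifice_m: float, dp_pa: float,
                          rho: float = 850.0, cd: float = 0.62) -> float:
        """Volumetric leak rate (m^3/s) through a sharp-edged orifice,
        Q = Cd * A * sqrt(2 * dP / rho). Parameter values are illustrative."""
        area = math.pi * (d_orifice_m / 2.0) ** 2
        return cd * area * math.sqrt(2.0 * dp_pa / rho)

    # Example: 20 mm hole, 2 MPa pressure difference across the pipe wall.
    q = orifice_leak_rate(0.020, 2.0e6)
    print(f"leak rate: {q * 3600:.1f} m^3/h")
    ```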

  5. Evaluation of FSK models for radiative heat transfer under oxyfuel conditions

    NASA Astrophysics Data System (ADS)

    Clements, Alastair G.; Porter, Rachael; Pranzitelli, Alessandro; Pourkashanian, Mohamed

    2015-01-01

    Oxyfuel is a promising technology for carbon capture and storage (CCS) applied to combustion processes. It would be highly advantageous in the deployment of CCS to be able to model and optimise oxyfuel combustion; however, the increased concentrations of CO2 and H2O under oxyfuel conditions modify several fundamental processes of combustion, including radiative heat transfer. This study uses benchmark narrow-band radiation models to evaluate the influence of assumptions in global full-spectrum k-distribution (FSK) models, and whether they are suitable for modelling radiation in computational fluid dynamics (CFD) calculations of oxyfuel combustion. The statistical narrow band (SNB) and correlated-k (CK) models are used to calculate benchmark data for the radiative source term and heat flux, which are then compared to the results calculated from FSK models. Both the full-spectrum correlated-k (FSCK) and the full-spectrum scaled-k (FSSK) models are applied using up-to-date spectral data. The results show that the FSCK and FSSK methods achieve good agreement in the test cases. The FSCK method using a five-point Gauss quadrature scheme is recommended for CFD calculations under oxyfuel conditions; however, there are still potential inaccuracies in cases with very wide variations in the ratio between CO2 and H2O concentrations.

  6. Propagation of electromagnetic waves in a turbulent medium

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Hartke, G. J.

    1986-01-01

    Theoretical modeling of the wealth of experimental data on propagation of electromagnetic radiation through turbulent media has centered on the use of the Heisenberg-Kolmogorov (HK) model, which is, however, valid only for medium to small sized eddies. Ad hoc modifications of the HK model to encompass the large-scale region of the eddy spectrum have been widely used, but a sound physical basis has been lacking. A model for large-scale turbulence that was recently proposed is applied to the above problem. The spectral density of the temperature field is derived and used to calculate the structure function of the index of refraction N. The result is compared with available data, yielding a reasonably good fit. The variance of N is also in accord with the data. The model is also applied to propagation effects. The phase structure function, covariance of the log amplitude, and variance of the log intensity are calculated. The calculated phase structure function is in excellent agreement with available data.

  7. Atomic Calculations and Laboratory Measurements Relevant to X-ray Warm Absorbers

    NASA Technical Reports Server (NTRS)

    Kallman, Tim; Bautista, M.; Palmeri, P.

    2007-01-01

    This viewgraph document reviews the atomic calculations and the laboratory measurements that are relevant to our understanding of X-ray Warm Absorbers. Included is a brief discussion of the theoretical and experimental tools, along with a discussion of the challenges and of calculations relevant to dielectronic recombination, photoionization cross sections, and collisional ionization. A review of the models is included, along with the sequence in which they were applied.

  8. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on a minimum broken-nodes cost vector (MBNCV), was proposed to solve the problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) lead to more broken edges when untying the loops in the dependency relationships of the relays, and therefore to possibly larger iterative calculation workloads in the setting calculations. A model-driven approach based on behavior trees (BT) was presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT was derived and the dependency relationships in multi-loop networks were then modeled. The model was translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for the multi-loop network was finally computed by tools. A 5-node multi-loop network was used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were then calculated, with the results indicating that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.

  9. Transport coefficients in nonequilibrium gas-mixture flows with electronic excitation.

    PubMed

    Kustova, E V; Puzyreva, L A

    2009-10-01

    In the present paper, a one-temperature model of transport properties in chemically nonequilibrium neutral gas-mixture flows with electronic excitation is developed. The closed set of governing equations for the macroscopic parameters taking into account electronic degrees of freedom of both molecules and atoms is derived using the generalized Chapman-Enskog method. The transport algorithms for the calculation of the thermal-conductivity, diffusion, and viscosity coefficients are proposed. The developed theoretical model is applied for the calculation of the transport coefficients in the electronically excited N/N(2) mixture. The specific heats and transport coefficients are calculated in the temperature range 50-50,000 K. Two sets of data for the collision integrals are applied for the calculations. An important contribution of the excited electronic states to the heat transfer is shown. The Prandtl number of atomic species is found to be substantially nonconstant.

  10. Flood Hazard Mapping Assessment for El-Awali River Catchment-Lebanon

    NASA Astrophysics Data System (ADS)

    Hdeib, Rouya; Abdallah, Chadi; Moussa, Roger; Hijazi, Samar

    2016-04-01

    River flooding prediction and flood forecasting have become an essential stage in major flood mitigation plans worldwide. Delineation of the floodplains resulting from a river flooding event requires coupling between a hydrological rainfall-runoff model, to calculate the resulting outflows of the catchment, and a hydraulic model, to calculate the corresponding water surface profiles along the river main course. In this study several methods were applied to predict the flood discharge of El-Awali River using the available historical data and gauging records and by conducting several site visits. The HEC-HMS rainfall-runoff model was built and applied to calculate the flood hydrographs at several outlets on El-Awali River; it was calibrated using the storm that took place in January 2013 and caused flooding of the major Lebanese rivers, supported by additional site visits to survey proper river cross-sections and record witness accounts from locals. The hydraulic HEC-RAS model was then applied to calculate the corresponding water surface profiles along the El-Awali River main reach. Floodplain delineation and hazard mapping for the 10-, 50- and 100-year return periods were performed using the Watershed Modeling System (WMS). The results first show an underestimation of the flood discharge recorded by the operating gauge stations on El-Awali River: the discharge of the 100-year flood may reach up to 506 m3/s, compared with the lower values calculated using traditional discharge estimation methods. Second, any flooding of El-Awali River may be catastrophic, especially in the coastal part of the catchment, and can cause tragic losses of agricultural land and property. Last, a major floodplain was identified in Marj Bisri village; this floodplain can reach more than 200 meters in width. Overall, performance was good; the rainfall-runoff model can provide valuable information about flows, especially at ungauged points, and can be a great aid to floodplain delineation and flood prediction in poorly gauged basins, but further model updates and calibration are always required to compensate for the weaknesses of such models and attain better results.
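
    The return-period step behind the 10-, 50- and 100-year discharges can be sketched by fitting a GEV distribution to an annual-maximum series and reading off the return levels; the series below is synthetic, and the study's actual frequency analysis may use a different distribution or fitting method.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    # Fit a GEV to annual maximum discharges and compute T-year return levels.
    # The annual-maximum record below is a synthetic placeholder.

    rng = np.random.default_rng(3)
    annual_max_m3s = rng.gumbel(loc=120.0, scale=60.0, size=40)

    shape, loc, scale = genextreme.fit(annual_max_m3s)
    for T in (10, 50, 100):
        q_T = genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
        print(f"{T:3d}-year discharge: {q_T:6.1f} m^3/s")
    ```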

  11. Calculations of turbulent separated flows

    NASA Technical Reports Server (NTRS)

    Zhu, J.; Shih, T. H.

    1993-01-01

    A numerical study of incompressible turbulent separated flows is carried out by using two-equation turbulence models of the K-epsilon type. On the basis of realizability analysis, a new formulation of the eddy-viscosity is proposed which ensures the positiveness of turbulent normal stresses - a realizability condition that most existing two-equation turbulence models are unable to satisfy. The present model is applied to calculate two backward-facing step flows. Calculations with the standard K-epsilon model and a recently developed RNG-based K-epsilon model are also made for comparison. The calculations are performed with a finite-volume method. A second-order accurate differencing scheme and sufficiently fine grids are used to ensure the numerical accuracy of solutions. The calculated results are compared with the experimental data for both mean and turbulent quantities. The comparison shows that the present model performs quite well for separated flows.

  12. Projected shell model study on nuclei near the N = Z line

    NASA Astrophysics Data System (ADS)

    Sun, Y.

    2003-04-01

    Study of the N ≈ Z nuclei in the mass-80 region is not only interesting due to the existence of abundant nuclear-structure phenomena, but also important for understanding nucleosynthesis in the rp-process. It is difficult to apply a conventional shell model because the g9/2 sub-shell must be included. In this paper, the projected shell model is introduced for this study. Calculations are systematically performed for the collective levels as well as the quasi-particle excitations. It is demonstrated that calculations with this truncation scheme can achieve a quality comparable to large-scale shell-model diagonalizations for 48Cr, while the present method can be applied to much heavier mass regions. While the known experimental data for the yrast bands in the N ≈ Z nuclei (from Se to Ru) are reasonably described, the present calculations predict the existence of high-K states, some of which lie low in energy under certain structural conditions.

  13. Anharmonic effects in the quantum cluster equilibrium method

    NASA Astrophysics Data System (ADS)

    von Domaros, Michael; Perlt, Eva

    2017-03-01

    The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework to apply high-level ab initio calculations of finite cluster structures to macroscopic liquid phases using the partition function. So far, the harmonic approximation has been applied throughout the calculations. In this article, we apply an important correction in the evaluation of the one-particle partition function and account for anharmonicity. Therefore, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride and cluster distributions, and the molar volume, the volumetric thermal expansion coefficient, and the isobaric heat capacity have been calculated. An improved description for all properties is observed if anharmonic effects are considered.
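
    The anharmonic ingredient can be illustrated directly: a Morse oscillator has a finite set of bound vibrational levels, and summing Boltzmann factors over them gives the anharmonic partition function that replaces the harmonic one. The spectroscopic constants below are textbook values for the HCl stretch, used purely for illustration (the paper uses an analytical approximation to this sum and its temperature derivatives).

    ```python
    import numpy as np

    KB_CM = 0.6950348            # Boltzmann constant in cm^-1 / K
    WE = 2990.9                  # harmonic wavenumber of the HCl stretch, cm^-1
    WEXE = 52.8                  # anharmonicity constant, cm^-1

    def q_harmonic(T: float) -> float:
        """Harmonic vibrational partition function (zero-point energy included)."""
        x = WE / (KB_CM * T)
        return np.exp(-0.5 * x) / (1.0 - np.exp(-x))

    def q_morse(T: float) -> float:
        """Morse partition function: sum over the finite set of bound levels,
        E_n = WE*(n+1/2) - WEXE*(n+1/2)^2, n = 0..n_max."""
        n_max = int(WE / (2.0 * WEXE) - 0.5)
        n = np.arange(n_max + 1)
        e_n = WE * (n + 0.5) - WEXE * (n + 0.5) ** 2
        return float(np.sum(np.exp(-e_n / (KB_CM * T))))

    for T in (200.0, 300.0, 1000.0):
        print(f"T = {T:6.1f} K   q_harm = {q_harmonic(T):.4e}   "
              f"q_Morse = {q_morse(T):.4e}")
    ```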

  14. Application of a Three-Layer Photochemical Box Model in an Athens Street Canyon.

    PubMed

    Proyou, Athena G; Ziomas, Loannis C; Stathopoulos, Antony

    1998-05-01

    The aim of this paper is to show that a photochemical box model could describe the air pollution diurnal profiles within a typical street canyon in the city of Athens. As sophisticated three-dimensional dispersion models are computationally expensive and they cannot serve to simulate pollution levels in the scale of an urban street canyon, a suitably modified three-layer photochemical box model was applied. A street canyon of Athens with heavy traffic was chosen to apply the aforementioned model. The model was used to calculate pollutant concentrations during two days with meteorological conditions favoring pollutant accumulation. Road traffic emissions were calculated based on existing traffic load measurements. Meteorological data, as well as various pollutant concentrations, in order to compare with the model results, were provided by available measurements. The calculated concentrations were found to be in good agreement with measured concentration levels and show that, when traffic load and traffic composition data are available, this model can be used to predict pollution episodes. It is noteworthy that high concentrations persisted, even after additional traffic restriction measures were taken on the second day because of the high pollution levels.

  15. Agglomeration of Non-metallic Inclusions at Steel/Ar Interface: In- Situ Observation Experiments and Model Validation

    NASA Astrophysics Data System (ADS)

    Mu, Wangzhong; Dogan, Neslihan; Coley, Kenneth S.

    2017-10-01

    Better understanding of the agglomeration behavior of nonmetallic inclusions in the steelmaking process is important for controlling the cleanliness of the steel. In this work, a revision of the simplified Paunov model has been made based on the original Kralchevsky-Paunov model. This revised model has then been applied to quantitatively calculate the attractive capillary force on inclusions agglomerating at the liquid steel/gas interface. Moreover, the agglomeration behavior of Al2O3 inclusions at a low carbon steel/Ar interface has been observed in situ by high-temperature confocal laser scanning microscopy (CLSM). The velocity and acceleration of inclusions and attractive forces between Al2O3 inclusions of various sizes were calculated based on the CLSM video. The results calculated using the revised model offered a reasonable fit with the present experimental data for different inclusion sizes. Moreover, a quantitative comparison was made between calculations using the equivalent radius of a circle and those using the effective radius. It was found that the capillary force calculated using the equivalent radius offered a better fit with the present experimental data because of the inclusion characteristics. Comparing these results with other studies in the literature allowed the authors to conclude that when applied in capillary force calculations, the equivalent radius is more suitable for inclusions with large size and irregular shape, and the effective radius is more appropriate for inclusions with small size or a large shape factor. Using this model, the effect of inclusion size on attractive capillary force has been investigated, demonstrating that larger inclusions are more strongly attracted.

  16. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

    We developed a new method to calculate the atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within the linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated in all directions. The integration of orientation and QM linear response calculations together makes the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that QM calculation is only needed once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since ESP is directly fitted, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.
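
    For orientation, here is a sketch of the conventional ESP charge fit that the abstract says its polarizability fit parallels: a constrained least-squares problem for point charges reproducing a reference potential on a grid. The geometry, grid and reference ESP are synthetic placeholders; the paper's method fits atomic polarizabilities to linear-response ESPs instead.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    atoms = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1]])   # two-atom "molecule"
    grid = rng.normal(0.0, 3.0, size=(500, 3))             # candidate ESP points
    # Keep only grid points at least 1.5 units away from every atom.
    grid = grid[np.linalg.norm(grid[:, None] - atoms, axis=2).min(axis=1) > 1.5]

    true_q = np.array([0.4, -0.4])                         # charges to recover
    inv_r = 1.0 / np.linalg.norm(grid[:, None, :] - atoms[None, :, :], axis=2)
    esp_ref = inv_r @ true_q                               # synthetic "QM" ESP

    # Least squares with a Lagrange multiplier enforcing the total charge (sum q = 0).
    n = len(atoms)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = inv_r.T @ inv_r
    A[:n, n] = A[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = inv_r.T @ esp_ref
    b[n] = 0.0                                             # net molecular charge
    q_fit = np.linalg.solve(A, b)[:n]
    print("fitted charges:", q_fit.round(3))               # recovers [0.4, -0.4]
    ```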

  17. [Theoretical model study about the application risk of high risk medical equipment].

    PubMed

    Shang, Changhao; Yang, Fenghui

    2014-11-01

    The aim of this work is to establish a theoretical risk-monitoring model for high-risk medical equipment at the application site. The application site is regarded as a system that contains several sub-systems, and every sub-system consists of several risk-estimating indicators. After each indicator is quantized, the quantized values are multiplied by the corresponding weights and the products are accumulated, which yields the risk-estimating value of each sub-system. Following the same calculation method, the risk-estimating values of the sub-systems are multiplied by the corresponding weights and accumulated; the cumulative sum is the status indicator of the high-risk medical equipment at the application site, which reflects its application risk. A theoretical risk-monitoring model of high-risk medical equipment at the application site is thereby established. The model can monitor the application risk of high-risk medical equipment at the application site dynamically and specifically.
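
    The weighted aggregation described above amounts to two nested weighted sums, as in this toy example with hypothetical indicators and weights:

    ```python
    # Toy example of the two-level weighted aggregation: indicator scores are
    # combined into sub-system risk values, and those into a site-level status
    # indicator. All scores and weights below are hypothetical.

    subsystems = {
        "operators":   {"scores": [0.8, 0.6, 0.9], "weights": [0.5, 0.3, 0.2]},
        "environment": {"scores": [0.7, 0.4],      "weights": [0.6, 0.4]},
        "maintenance": {"scores": [0.9, 0.5, 0.6], "weights": [0.4, 0.4, 0.2]},
    }
    subsystem_weights = {"operators": 0.4, "environment": 0.25, "maintenance": 0.35}

    risk = {name: sum(s * w for s, w in zip(sub["scores"], sub["weights"]))
            for name, sub in subsystems.items()}
    status_indicator = sum(risk[name] * subsystem_weights[name] for name in risk)

    print({k: round(v, 3) for k, v in risk.items()})
    print("site status indicator:", round(status_indicator, 3))
    ```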

  18. The application of muscle wrapping to voxel-based finite element models of skeletal structures.

    PubMed

    Liu, Jia; Shi, Junfen; Fitton, Laura C; Phillips, Roger; O'Higgins, Paul; Fagan, Michael J

    2012-01-01

    Finite element analysis (FEA) is now used routinely to interpret skeletal form in terms of function in both medical and biological applications. To produce accurate predictions from FEA models, it is essential that the loading due to muscle action is applied in a physiologically reasonable manner. However, it is common for muscle forces to be represented as simple force vectors applied at a few nodes on the model's surface. It is certainly rare for any wrapping of the muscles to be considered, and yet wrapping not only alters the directions of muscle forces but also applies an additional compressive load from the muscle belly directly to the underlying bone surface. This paper presents a method of applying muscle wrapping to high-resolution voxel-based finite element (FE) models. Such voxel-based models have a number of advantages over standard (geometry-based) FE models, but the increased resolution with which the load can be distributed over a model's surface is particularly advantageous, reflecting more closely how muscle fibre attachments are distributed. In this paper, the development, application and validation of a muscle wrapping method is illustrated using a simple cylinder. The algorithm: (1) calculates the shortest path over the surface of a bone given the points of origin and ultimate attachment of the muscle fibres; (2) fits a Non-Uniform Rational B-Spline (NURBS) curve from the shortest path and calculates its tangent, normal vectors and curvatures so that normal and tangential components of the muscle force can be calculated and applied along the fibre; and (3) automatically distributes the loads between adjacent fibres to cover the bone surface with a fully distributed muscle force, as is observed in vivo. Finally, we present a practical application of this approach to the wrapping of the temporalis muscle around the cranium of a macaque skull.

  19. Data Science Innovations That Streamline Development, Documentation, Reproducibility, and Dissemination of Models in Computational Thermodynamics: An Application of Image Processing Techniques for Rapid Computation, Parameterization and Modeling of Phase Diagrams

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2014-12-01

    Computational thermodynamics (CT) represents a collection of numerical techniques that are used to calculate quantitative results from thermodynamic theory. In the Earth sciences, CT is most often applied to estimate the equilibrium properties of solutions, to calculate phase equilibria from models of the thermodynamic properties of materials, and to approximate irreversible reaction pathways by modeling these as a series of local equilibrium steps. The thermodynamic models that underlie CT calculations relate the energy of a phase to temperature, pressure and composition. These relationships are not intuitive and they are seldom well constrained by experimental data; often, intuition must be applied to generate a robust model that satisfies the expectations of use. As a consequence of this situation, the models and databases that support CT applications in geochemistry and petrology are tedious to maintain as new data and observations arise. What is required to make the process more streamlined and responsive is a computational framework that permits the rapid generation of observable outcomes from the underlying data/model collections, and importantly, the ability to update and re-parameterize the constitutive models through direct manipulation of those outcomes. CT procedures that take models/data to the experiential reference frame of phase equilibria involve function minimization, gradient evaluation, the calculation of implicit lines, curves and surfaces, contour extraction, and other related geometrical measures. All these procedures are the mainstay of image processing analysis. Since the commercial escalation of video game technology, open source image processing libraries have emerged (e.g., VTK) that permit real time manipulation and analysis of images. These tools find immediate application to CT calculations of phase equilibria by permitting rapid calculation and real time feedback between model outcome and the underlying model parameters.

  20. Nonequilibrium simulations of model ionomers in an oscillating electric field

    DOE PAGES

    Ting, Christina L.; Sorensen-Unruh, Karen E.; Stevens, Mark J.; ...

    2016-07-25

    Here, we perform molecular dynamics simulations of a coarse-grained model of ionomer melts in an applied oscillating electric field. The frequency-dependent conductivity and susceptibility are calculated directly from the current density and polarization density, respectively. At high frequencies, we find a peak in the real part of the conductivity due to plasma oscillations of the ions. At lower frequencies, the dynamic response of the ionomers depends on the ionic aggregate morphology in the system, which consists of either percolated or isolated aggregates. We show that the dynamic response of the model ionomers to the applied oscillating field can be understood by comparison with relevant time scales in the systems, obtained from independent calculations.

  1. Nonequilibrium simulations of model ionomers in an oscillating electric field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting, Christina L.; Sorensen-Unruh, Karen E.; Stevens, Mark J.

    Here, we perform molecular dynamics simulations of a coarse-grained model of ionomer melts in an applied oscillating electric field. The frequency-dependent conductivity and susceptibility are calculated directly from the current density and polarization density, respectively. At high frequencies, we find a peak in the real part of the conductivity due to plasma oscillations of the ions. At lower frequencies, the dynamic response of the ionomers depends on the ionic aggregate morphology in the system, which consists of either percolated or isolated aggregates. We show that the dynamic response of the model ionomers to the applied oscillating field can be understood by comparison with relevant time scales in the systems, obtained from independent calculations.

  2. Results of EAS characteristics calculations in the framework of the universal hadronic interaction model NEXUS

    NASA Astrophysics Data System (ADS)

    Kalmykov, N. N.; Ostapchenko, S. S.; Werner, K.

    An extensive air shower (EAS) calculation scheme based on cascade equations is presented, together with some EAS characteristics for energies of 10^14-10^17 eV. The universal hadronic interaction model NEXUS is employed to provide the necessary data concerning hadron-air collisions. The influence of model assumptions on the longitudinal EAS development is discussed in the framework of the NEXUS and QGSJET models. The prospects of combined Monte Carlo and numerical methods applied to EAS simulations are considered.

  3. Three-phase Power Flow Calculation of Low Voltage Distribution Network Considering Characteristics of Residents Load

    NASA Astrophysics Data System (ADS)

    Wang, Yaping; Lin, Shunjiang; Yang, Zhibin

    2017-05-01

    In the traditional three-phase power flow calculation for a low-voltage distribution network, the load model is described as constant power. Since this model cannot reflect the characteristics of actual loads, the result of the traditional calculation always differs from the actual situation. In this paper, a load model in which dynamic load, represented by air conditioners, is connected in parallel with static load, represented by lighting, is used to describe the characteristics of residential loads, and a corresponding three-phase power flow calculation model is proposed. The power flow model includes the power balance equations of the three phases (A, B, C), the current balance equations of the neutral (phase 0), and the torque balance equations of the induction motors in the air conditioners. An algorithm that alternates iterations of the induction-motor torque balance equations with the nodal balance equations is then proposed to solve the three-phase power flow model. The method is applied to an actual low-voltage distribution network of residential loads, and calculations for three different operating states of the air conditioners demonstrate the effectiveness of the proposed model and algorithm.

  4. Long-range Ising model for credit portfolios with heterogeneous credit exposures

    NASA Astrophysics Data System (ADS)

    Kato, Kensuke

    2016-11-01

    We propose the finite-size long-range Ising model as a model for heterogeneous credit portfolios held by a financial institution, from the viewpoint of econophysics. The model expresses the heterogeneity of the default probability and the default correlation by dividing a credit portfolio into multiple sectors characterized by credit rating and industry. The model also expresses the heterogeneity of the credit exposure, which is difficult to evaluate analytically, by applying the replica exchange Monte Carlo method to numerically calculate the loss distribution. To analyze the characteristics of the loss distribution for credit portfolios with heterogeneous credit exposures, we apply this model to various credit portfolios and evaluate credit risk. As a result, we show that the tail of the loss distribution calculated by this model has characteristics that are different from the tail of the loss distribution of the standard models used in credit risk modeling. We also show that there is a possibility of different evaluations of credit risk according to the pattern of heterogeneity.
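    The replica exchange step described above can be illustrated with a toy calculation. The sketch below is a minimal example, not the authors' implementation: it samples a fully connected (long-range) Ising model with replica exchange Metropolis moves, and maps spins to defaults with a purely hypothetical convention (spin -1 = default, kept rare by a positive field) and randomly drawn exposures, just to show how a loss distribution would be accumulated at the target temperature.

```python
# Minimal sketch (not the paper's code): replica-exchange Metropolis sampling of a
# fully connected (long-range) Ising model with a hypothetical default mapping.
import numpy as np

rng = np.random.default_rng(0)

N = 100                      # number of obligors (hypothetical)
J = 0.5                      # uniform long-range coupling, scaled by 1/N
h = 1.0                      # positive field keeps most obligors in the non-default state
exposure = rng.lognormal(mean=0.0, sigma=1.0, size=N)   # heterogeneous exposures (hypothetical)
betas = np.linspace(0.2, 1.0, 8)                        # inverse temperatures of the replicas

def energy(s):
    m = s.sum()
    # E = -(J/2N) * (sum_i s_i)^2 - h * sum_i s_i  (self-interaction term is a constant shift)
    return -J / (2 * N) * m * m - h * m

def metropolis_sweep(s, beta):
    m = s.sum()
    for _ in range(N):
        i = rng.integers(N)
        # energy change for flipping spin i in the mean-field Hamiltonian
        dE = 2 * s[i] * (J / N * (m - s[i]) + h)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            m -= 2 * s[i]
            s[i] = -s[i]
    return s

replicas = [rng.choice([-1, 1], size=N) for _ in betas]
losses = []
for sweep in range(1500):
    for r, beta in enumerate(betas):
        replicas[r] = metropolis_sweep(replicas[r], beta)
    # attempt exchanges between neighbouring temperatures
    for r in range(len(betas) - 1):
        d_beta = betas[r + 1] - betas[r]
        d_e = energy(replicas[r + 1]) - energy(replicas[r])
        if rng.random() < min(1.0, np.exp(d_beta * d_e)):
            replicas[r], replicas[r + 1] = replicas[r + 1], replicas[r]
    if sweep > 500:                       # record losses at the lowest temperature (largest beta)
        defaulted = replicas[-1] == -1    # hypothetical convention: spin -1 means default
        losses.append(exposure[defaulted].sum())

print("mean loss", np.mean(losses), "99% quantile", np.quantile(losses, 0.99))
```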

  5. Coastal flooding hazard assessment on potentially vulnerable coastal sectors at Varna regional coast

    NASA Astrophysics Data System (ADS)

    Eftimova, Petya; Valchev, Nikolay; Andreeva, Nataliya

    2017-04-01

    Storm-induced flooding is one of the most significant threats that coastal communities face, and in the light of climate change it is expected to gain even more importance. Therefore, an adequate assessment of this hazard could increase the capability to mitigate environmental, social, and economic impacts. The study was accomplished within the framework of the Coastal Risk Assessment Framework (CRAF) developed within the FP7 RISC-KIT project (Resilience-Increasing Strategies for Coasts - toolkit). The hazard assessment was applied to three potentially vulnerable coastal sectors located on the regional coast of Varna, Bulgarian Black Sea coast. The potential "hotspot" candidates were selected during the initial phase of CRAF, which evaluated the coastal risks at regional level. The area of interest comprises different coastal types - from natural beaches and rocky cliffs to man-modified environments represented by coastal and port defense structures such as the Varna Port breakwater, groynes, jetties and beaches formed by the presence of coastal structures. The assessment of coastal flooding was done using a combination of models - the XBeach model and the LISFLOOD inundation model - applied consecutively. The XBeach model was employed to calculate the hazard intensities at the coast up to the berm crest, while the LISFLOOD model was used to calculate the intensity and extent of flooding in the hinterland. At the first stage, 75 extreme storm events were simulated using the XBeach model run in "non-hydrostatic" mode to obtain series of flood depth, depth-velocity and overtopping discharges at predefined coastal cross-shore transects. Extreme value analysis was applied to the calculated hazard parameter series in order to determine their probability distribution functions. This is the so-called response approach, which focuses on the onshore impact rather than on the deep-water boundary conditions and allows calculation of the probability distribution of hazard extremes induced by a variety of combinations of waves and surges. The considered return periods were 20, 50 and 100 years. Subsequently, the overtopping volumes corresponding to the selected return periods were fed into the LISFLOOD model to calculate the intensity and extent of the hinterland flooding. For beaches with fast-rising slopes backed by cliffs, a combination of XBeach and LISFLOOD output was applied in order to properly map the flood depth and depth-velocity spatial distribution.

  6. The desorptivity model of bulk soil-water evaporation

    NASA Technical Reports Server (NTRS)

    Clapp, R. B.

    1983-01-01

    Available models of bulk evaporation from a bare-surfaced soil are difficult to apply to field conditions where evaporation is complicated by two main factors: rate-limiting climatic conditions and redistribution of soil moisture following infiltration. Both factors are included in the "desorptivity model", wherein the evaporation rate during the second stage (the soil-limiting stage) of evaporation is related to the desorptivity parameter, A. Analytical approximations for A are presented. The approximations are independent of the surface soil moisture. However, calculations using the approximations indicate that both soil texture and soil moisture content at depth significantly affect A. Because the moisture content at depth decreases in time during redistribution, it follows that the A parameter also changes with time. Consequently, a method to calculate a representative value of A was developed. When applied to field data, the desorptivity model estimated cumulative evaporation well. The model is easy to calculate, but its usefulness is limited because it requires an independent estimate of the time of transition between the first and second stages of evaporation. The model shows that bulk evaporation after the transition to the second stage is largely independent of climatic conditions.
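    For illustration, a minimal sketch of the desorptivity bookkeeping is given below, under the common assumption that the second-stage evaporation rate decays as A/(2*sqrt(t - t1)) so that cumulative second-stage evaporation grows as A*sqrt(t - t1); the values of A, the transition time t1 and the first-stage total E1 are hypothetical, and t1 must be supplied independently, as the abstract notes.

```python
# Minimal sketch under the desorptivity assumption e(t) = A / (2*sqrt(t - t1)) for t > t1,
# so that cumulative evaporation is E(t) = E1 + A*sqrt(t - t1).  The first stage is
# approximated here by a constant (climate-limited) rate; all values are hypothetical.
import numpy as np

A = 4.0e-3        # desorptivity, m day^-0.5 (hypothetical)
t1 = 2.0          # time of transition to the second stage, days (independent estimate)
E1 = 6.0e-3       # cumulative first-stage evaporation, m (hypothetical)

def cumulative_evaporation(t_days):
    t = np.asarray(t_days, dtype=float)
    return np.where(t <= t1, E1 * t / t1, E1 + A * np.sqrt(t - t1))

print(cumulative_evaporation([1.0, 5.0, 10.0, 20.0]))
```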

  7. Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard

    2013-06-01

    The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant CQ ∝ |Vzz| and the asymmetry parameter ηQ that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.
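    As an illustration of the quadrupolar parameters mentioned above, the sketch below extracts CQ and ηQ from a calculated EFG tensor using the usual convention |Vzz| ≥ |Vyy| ≥ |Vxx|; the example tensor and the 23Na quadrupole moment are placeholders, not values from the paper.

```python
# Minimal sketch (assumptions noted): quadrupolar parameters from an EFG tensor.
# Convention: |Vzz| >= |Vyy| >= |Vxx|, etaQ = (Vxx - Vyy)/Vzz, CQ = e*Q*Vzz/h.
import numpy as np

AU_TO_MHZ_PER_BARN = 234.9647   # e * (E_h / (e*a0^2)) * 1 barn / h, in MHz

def quadrupolar_parameters(efg_au, Q_barn):
    """EFG tensor in atomic units -> (CQ in MHz, etaQ)."""
    vals = np.linalg.eigvalsh(0.5 * (efg_au + efg_au.T))   # symmetrize, then diagonalize
    vxx, vyy, vzz = sorted(vals, key=abs)                  # |Vzz| >= |Vyy| >= |Vxx|
    cq = AU_TO_MHZ_PER_BARN * Q_barn * vzz
    eta = (vxx - vyy) / vzz
    return cq, eta

# Hypothetical traceless EFG tensor (a.u.) and an approximate 23Na quadrupole moment (barn)
efg = np.diag([0.05, 0.12, -0.17])
print(quadrupolar_parameters(efg, Q_barn=0.104))
```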

  8. Extended Czjzek model applied to NMR parameter distributions in sodium metaphosphate glass.

    PubMed

    Vasconcelos, Filipe; Cristol, Sylvain; Paul, Jean-François; Delevoye, Laurent; Mauri, Francesco; Charpentier, Thibault; Le Caër, Gérard

    2013-06-26

    The extended Czjzek model (ECM) is applied to the distribution of NMR parameters of a simple glass model (sodium metaphosphate, NaPO3) obtained by molecular dynamics (MD) simulations. Accurate NMR tensors, electric field gradient (EFG) and chemical shift anisotropy (CSA) are calculated from density functional theory (DFT) within the well-established PAW/GIPAW framework. The theoretical results are compared to experimental high-resolution solid-state NMR data and are used to validate the considered structural model. The distributions of the calculated coupling constant C(Q) is proportional to |V(zz)| and the asymmetry parameter η(Q) that characterize the quadrupolar interaction are discussed in terms of structural considerations with the help of a simple point charge model. Finally, the ECM analysis is shown to be relevant for studying the distribution of CSA tensor parameters and gives new insight into the structural characterization of disordered systems by solid-state NMR.

  9. Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ruf, Joe

    2007-01-01

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomena, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on a model-test based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.

  10. HP-25 PROGRAMMABLE POCKET CALCULATOR APPLIED TO AIR POLLUTION MEASUREMENT STUDIES: STATIONARY SOURCES

    EPA Science Inventory

    The report should be useful to persons concerned with Air Pollution Measurement Studies of Stationary Industrial Sources. It gives detailed descriptions of 22 separate programs, written specifically for the Hewlett Packard Model HP-25 manually programmable pocket calculator. Each...

  11. HP-65 PROGRAMMABLE POCKET CALCULATOR APPLIED TO AIR POLLUTION MEASUREMENT STUDIES: STATIONARY SOURCES

    EPA Science Inventory

    The handbook is intended for persons concerned with air pollution measurement studies of stationary industrial sources. It gives detailed descriptions of 22 different programs written specifically for the Hewlett Packard Model HP-65 card-programmable pocket calculator. For each p...

  12. Wheel life prediction model - an alternative to the FASTSIM algorithm for RCF

    NASA Astrophysics Data System (ADS)

    Hossein-Nia, Saeed; Sichani, Matin Sh.; Stichel, Sebastian; Casanueva, Carlos

    2018-07-01

    In this article, a wheel life prediction model considering wear and rolling contact fatigue (RCF) is developed and applied to a heavy-haul locomotive. For the wear calculations, a methodology based on Archard's wear theory is used, and the simulated wear depth is compared with profile measurements over 100,000 km. For RCF, a shakedown-based theory is applied locally, using the FaStrip algorithm to estimate the tangential stresses instead of FASTSIM; the effect of the differences between the two algorithms on the damage prediction models is studied. The running distance between two reprofilings due to RCF is estimated based on a Wöhler-like relationship developed from laboratory test results in the literature and the Palmgren-Miner rule. The simulated crack locations and their angles are compared with a five-year field study. Calculations are also carried out to study the effects of electro-dynamic braking, track gauge, harder wheel material and increased axle load on the wheel life.
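    A minimal sketch of the two damage ingredients named above (Archard-type wear and the Palmgren-Miner sum) is given below; the functions and all numbers are illustrative assumptions, not the simulation set-up of the article.

```python
# Minimal illustrative sketch: Archard-type wear depth per unit contact area and a
# Palmgren-Miner damage sum for RCF, with purely hypothetical numbers.
def archard_wear_depth(k, normal_pressure, sliding_distance, hardness):
    """Wear depth h = k * p * s / H (Archard's law expressed per unit contact area)."""
    return k * normal_pressure * sliding_distance / hardness

def miner_damage(cycle_counts, cycles_to_failure):
    """Palmgren-Miner rule: damage D = sum(n_i / N_i); failure is predicted when D >= 1."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

# hypothetical load blocks: cycles experienced vs. Woehler-type life at each stress level
print(archard_wear_depth(k=1e-4, normal_pressure=900e6, sliding_distance=5.0, hardness=3.0e9))
print(miner_damage([2e5, 5e4, 1e4], [2e6, 4e5, 5e4]))
```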

  13. Determining the ventilation and aerosol deposition rates from routine indoor-air measurements.

    PubMed

    Halios, Christos H; Helmis, Costas G; Deligianni, Katerina; Vratolis, Sterios; Eleftheriadis, Konstantinos

    2014-01-01

    Measurement of the air exchange rate provides critical information in energy and indoor-air quality studies. Continuous measurement of ventilation rates is a rather costly exercise and requires specific instrumentation. In this work, an alternative methodology is proposed and tested, in which the air exchange rate is calculated by utilizing indoor and outdoor routine measurements of a common pollutant such as SO2, and the uncertainties induced in the calculations are analytically determined. The application of this methodology is demonstrated for three residential microenvironments in Athens, Greece, and the results are compared against ventilation rates calculated from differential pressure measurements. The calculated time-resolved ventilation rates were applied to the mass balance equation to estimate the particle loss rate, which was found to agree with literature values at an average of 0.50 h^-1. The proposed method was further evaluated by applying a mass balance numerical model for the calculation of the indoor aerosol number concentrations, using the previously calculated ventilation rate, the outdoor measured number concentrations and the particle loss rates as input values. The model results for the indoor concentrations were found to compare well with the experimentally measured values.
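    A minimal sketch of the underlying single-zone mass balance is shown below, assuming a pollutant with no indoor sources so that the air exchange rate can be solved for at each time step; the concentration series and the deposition term k_dep are hypothetical.

```python
# Minimal sketch of a single-zone mass balance used to back out the air exchange rate
# from routine indoor/outdoor measurements of a pollutant with no indoor source:
#   dC_in/dt = lam * (C_out - C_in) - k_dep * C_in   ->   solve for lam at each step.
import numpy as np

def air_exchange_rate(c_in, c_out, dt_hours, k_dep=0.0):
    c_in = np.asarray(c_in, float)
    c_out = np.asarray(c_out, float)
    dcdt = np.gradient(c_in, dt_hours)        # finite-difference time derivative
    return (dcdt + k_dep * c_in) / (c_out - c_in)   # h^-1

c_in = [6.0, 6.5, 7.2, 8.0, 8.6, 8.9]         # indoor SO2, ug/m3 (hypothetical series)
c_out = [10.0, 12.0, 14.0, 15.0, 15.0, 14.0]  # outdoor SO2, ug/m3 (hypothetical series)
print(air_exchange_rate(c_in, c_out, dt_hours=1.0))
```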

  14. Development of a CSP plant energy yield calculation tool applying predictive models to analyze plant performance sensitivities

    NASA Astrophysics Data System (ADS)

    Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons

    2017-06-01

    At early project stages, the main CSP plant design parameters such as turbine capacity, solar field size, and thermal storage capacity are varied during the techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach of the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year with hourly time resolution. For better representation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. Therefore, when analyzing a large number of plant sensitivities, as required during the techno-economic optimization procedure, the detailed thermodynamic simulation approach becomes impracticable. Suntrace has developed an in-house CSP simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate the CSP plant performance for central receiver and parabolic trough technology. CSPsim increases the speed of energy yield calculations by a factor of 35 or more and automates the simulation runs of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiments methods are applied. The annual energy yield and derived LCOE calculated by the predictive model deviate by less than ±1.5% from the thermodynamic simulation in EBSILON and effectively identify the optimal range of the main design parameters for further, more specific analysis.

  15. Proceedings of a Workshop on V/STOL Aircraft Aerodynamics. Volume 2. Held At Naval Postgraduate School Monterey, California, May 16-18, 1979

    DTIC Science & Technology

    1979-05-18

    called " VAPE ." This program ’ias six modules, three of which are the jet models: The Wooler-Ziegler model, the Fearn-Weston model, and the Thames...rectangular jet model. The " VAPE " program has been applied to a NASA V/STOL model as discussed by Tom. The agreemernt between the calculations and the...properties of the jet are known, this model is intended to calculate surface pressures. It is in the VAPE program as I mentioned earlier. I would like to

  16. Finite geometry effects of field-aligned currents

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; Hoffman, R. A.

    1992-01-01

    Results are presented of model calculations of the magnetic field produced by finite current regions that would be measured by a spaceborne magnetometer. Conditions were examined under which the infinite current sheet approximation can be applied to the calculation of the field-aligned current (FAC) density, using satellite magnetometer data. The accuracy of the three methods used for calculating the current sheet normal direction with respect to the spacecraft trajectory was assessed. It is shown that the model can be used to obtain the position and the orientation of the spacecraft trajectory through the FAC region.

  17. Deconvolution of acoustic emissions for source localization using time reverse modeling

    NASA Astrophysics Data System (ADS)

    Kocur, Georg Karl

    2017-01-01

    Impact experiments on small-scale slabs made of concrete and aluminum were carried out. Wave motion radiated from the epicenter of the impact was recorded as voltage signals by resonant piezoelectric transducers. Numerical simulations of the elastic wave propagation are performed to simulate the physical experiments. The Hertz theory of contact is applied to estimate the force impulse, which is subsequently used for the numerical simulation. Displacements at the transducer positions are calculated numerically. A deconvolution function is obtained by comparing the physical (voltage signal) and the numerical (calculated displacement) experiments. Acoustic emission signals due to pencil-lead breaks are recorded, deconvolved and applied for localization using time reverse modeling.

  18. Calculation of thermodynamic functions of aluminum plasma for high-energy-density systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaev, V. V., E-mail: shumaev@student.bmstu.ru

    The results of calculating the degree of ionization, the pressure, and the specific internal energy of aluminum plasma in a wide temperature range are presented. The TERMAG computational code based on the Thomas–Fermi model was used at temperatures T > 10^5 K, and the ionization equilibrium model (Saha model) was applied at lower temperatures. Quantitatively similar results were obtained in the temperature range where both models are applicable. This suggests that the obtained data may be joined to produce a wide-range equation of state.
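    For the low-temperature branch, a minimal single-ionization Saha estimate is sketched below (not the TERMAG code or the full multi-stage Saha model); statistical weights are set to one and only aluminum's first ionization stage is considered, which are simplifying assumptions.

```python
# Minimal sketch: solve x^2/(1 - x) = S(T)/n_tot for the single-ionization fraction x.
import numpy as np

KB = 1.380649e-23      # J/K
ME = 9.1093837e-31     # kg
H = 6.62607015e-34     # J*s
EV = 1.602176634e-19   # J

def saha_ionization_fraction(T, n_tot, chi_eV=5.986, g_ratio=1.0):
    """Fraction of singly ionized atoms at temperature T [K], heavy-particle density n_tot [m^-3]."""
    S = 2.0 * g_ratio * (2.0 * np.pi * ME * KB * T / H**2) ** 1.5 * np.exp(-chi_eV * EV / (KB * T))
    a = S / n_tot
    # x^2 / (1 - x) = a  ->  x = (-a + sqrt(a^2 + 4a)) / 2
    return 0.5 * (-a + np.sqrt(a * a + 4.0 * a))

for T in (1.0e4, 3.0e4, 1.0e5):
    print(T, saha_ionization_fraction(T, n_tot=1.0e26))
```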

  19. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this computer code. Good agreement is achieved when the simulation results are compared with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.

  20. A simple cohesive zone model that generates a mode-mixity dependent toughness

    DOE PAGES

    Reedy, Jr., E. D.; Emery, J. M.

    2014-07-24

    A simple, mode-mixity dependent toughness cohesive zone model (MDGc CZM) is described. This phenomenological cohesive zone model has two elements. Mode I energy dissipation is defined by a traction–separation relationship that depends only on normal separation. Mode II (III) dissipation is generated by shear yielding and slip in the cohesive surface elements that lie in front of the region where mode I separation (softening) occurs. The nature of predictions made by analyses that use the MDGc CZM is illustrated by considering the classic problem of an elastic layer loaded by rigid grips. This geometry, which models a thin adhesive bond with a long interfacial edge crack, is similar to that which has been used to measure the dependence of interfacial toughness on crack-tip mode-mixity. The calculated effective toughness vs. applied mode-mixity relationships all display a strong dependence on applied mode-mixity with the effective toughness increasing rapidly with the magnitude of the mode-mixity. The calculated relationships also show a pronounced asymmetry with respect to the applied mode-mixity. As a result, this dependence is similar to that observed experimentally, and calculated results for a glass/epoxy interface are in good agreement with published data that was generated using a test specimen of the same type as analyzed here.

  1. Diffusion model of penetration of a chloride-containing environment in the volume of a constructive element

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, I. I.; Snezhkina, O. V.; Ovchinnikov, I. G.

    2018-06-01

    A generalized model of the diffusional penetration of a chloride-containing medium into the volume of a compressed reinforced concrete element is considered. Equations for the deformation of the reinforced concrete structure are presented, taking into account the degradation of concrete and corrosion of the reinforcement. At the initial stage, a strength calculation of the cross-section of the structural element under the applied load is carried out, with the mechanical properties of the material determined by the initial concentration field of the aggressive medium. Then, at each discrete moment of time, the following are determined: the distribution of the chloride concentration field corresponding to the parameters of the stress-strain state, the field of corrosion damage of the reinforcing elements, and the strength of the cross-section of the structural element under the applied load, with parameters corresponding to the concentration field and the corrosion damage field.
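    As a much-reduced illustration of chloride ingress, the sketch below uses the commonly cited error-function solution of Fick's second law with a constant surface concentration; the paper's generalized diffusion model is coupled to the stress state and is more elaborate, and the diffusion coefficient and surface concentration here are hypothetical.

```python
# Minimal sketch: C(x,t) = C_s * (1 - erf(x / (2*sqrt(D*t)))) for 1-D chloride ingress
# with constant surface concentration; D and C_s below are hypothetical values.
from math import erf, sqrt

def chloride_concentration(x_m, t_s, D=5.0e-12, C_s=0.6):
    """Chloride concentration at depth x [m] after time t [s]; D in m^2/s, C_s in % of binder mass."""
    return C_s * (1.0 - erf(x_m / (2.0 * sqrt(D * t_s))))

year = 365.25 * 24 * 3600.0
for depth_mm in (10, 25, 50):
    print(depth_mm, "mm:", round(chloride_concentration(depth_mm * 1e-3, 20 * year), 4))
```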

  2. Navier-Stokes turbine heat transfer predictions using two-equation turbulence closures

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Arnone, Andrea

    1992-01-01

    Navier-Stokes calculations were carried out in order to predict the heat-transfer rates on turbine blades. The calculations were performed using TRAF2D which is a k-epsilon, explicit, finite volume mass-averaged Navier-Stokes solver. Turbulence was modeled using Coakley's q-omega and Chien's k-epsilon two-equation models and the Baldwin-Lomax algebraic model. The model equations along with the flow equations were solved explicitly on a nonperiodic C grid. Implicit residual smoothing (IRS) or a combination of multigrid technique and IRS was applied to enhance convergence rates. Calculations were performed to predict the Stanton number distributions on the first stage vane and blade row as well as the second stage vane row of the SSME high-pressure fuel turbine. The comparison serves to highlight the weaknesses of the turbulence models for use in turbomachinery heat-transfer calculations.

  3. Calculations on the Back of an Envelope Model: Applying Seasonal Fecundity Models to Species’ Range Limits

    EPA Science Inventory

    Most predictions of the effect of climate change on species’ ranges are based on correlations between climate and current species’ distributions. These so-called envelope models may be a good first approximation, but we need demographically mechanistic models to incorporate the ...

  4. Development of a near-wall Reynolds-stress closure based on the SSG model for the pressure strain

    NASA Technical Reports Server (NTRS)

    So, R. M. C.; Aksoy, H.; Sommer, T. P.; Yuan, S. P.

    1994-01-01

    In this research, a near-wall second-order closure based on the Speziale et al. (1991) or SSG model for the pressure-strain term is proposed. Unlike the LRR model, the SSG model is quasi-nonlinear and yields better results when applied to calculate rotating homogeneous turbulent flows. An asymptotic analysis near the wall is applied to both the exact and modeled equations so that appropriate near-wall corrections to the SSG model and the modeled dissipation-rate equation can be derived to satisfy the physical wall boundary conditions as well as the asymptotic near-wall behavior of the exact equations. Two additional model constants are introduced, and they are determined by calibrating against one set of near-wall channel flow data. Once determined, their values are found to remain constant irrespective of the type of flow examined. The resultant model is used to calculate simple turbulent flows, near-separating turbulent flows, complex turbulent flows and compressible turbulent flows with a freestream Mach number as high as 10. In all the flow cases investigated, the calculated results are in good agreement with data. This new near-wall model is less ad hoc, physically and mathematically more sound, and eliminates the empiricism introduced by Zhang. Therefore, it is quite general, as demonstrated by the good agreement achieved with measurements covering a wide range of Reynolds numbers and Mach numbers.

  5. Applying Probabilistic Decision Models to Clinical Trial Design

    PubMed Central

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
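    The Markov calculation of quality-adjusted life expectancy can be sketched as a simple cohort simulation; the states, annual transition probabilities, utilities and horizon below are entirely hypothetical placeholders and not the authors' model.

```python
# Minimal sketch of a Markov cohort model for quality-adjusted life expectancy (QALE),
# with hypothetical states, transition probabilities, utilities and a 1-year cycle.
import numpy as np

states = ["disease-free", "recurrence", "dead"]
P = np.array([[0.90, 0.07, 0.03],      # annual transition probabilities (hypothetical)
              [0.00, 0.75, 0.25],
              [0.00, 0.00, 1.00]])
utility = np.array([0.85, 0.55, 0.0])  # quality-of-life weights per state (hypothetical)

def qale(P, utility, start_state=0, horizon_years=40, discount=0.0):
    occupancy = np.zeros(len(utility))
    occupancy[start_state] = 1.0
    total = 0.0
    for year in range(horizon_years):
        total += occupancy @ utility / (1.0 + discount) ** year
        occupancy = occupancy @ P     # advance the cohort by one cycle
    return total

print("QALE (years):", round(qale(P, utility), 2))
```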

  6. Diffraction Analysis of Antennas With Mesh Surfaces

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Yahya

    1987-01-01

    Strip-aperture model replaces wire-grid model. Far-field radiation pattern of antenna with mesh reflector calculated more accurately with new strip-aperture model than with wire-grid model of reflector surface. More adaptable than wire-grid model to variety of practical configurations and decidedly superior for reflectors in which mesh-cell width exceeds mesh thickness. Satisfies reciprocity theorem. Applied where mesh cells are no larger than tenth of wavelength. Small cell size permits use of simplifying approximation that reflector-surface current induced by electromagnetic field is present even in apertures. Approximation useful in calculating far field.

  7. Development of a model to compute the extension of life supporting zones for Earth-like exoplanets.

    PubMed

    Neubauer, David; Vrtala, Aron; Leitner, Johannes J; Firneis, Maria G; Hitzenberger, Regina

    2011-12-01

    A radiative convective model to calculate the width and the location of the life supporting zone (LSZ) for different, alternative solvents (i.e. other than water) is presented. This model can be applied to the atmospheres of the terrestrial planets in the solar system as well as (hypothetical, Earth-like) terrestrial exoplanets. Cloud droplet formation and growth are investigated using a cloud parcel model. Clouds can be incorporated into the radiative transfer calculations. Test runs for Earth, Mars and Titan show a good agreement of model results with observations.

  8. Coma dust scattering concepts applied to the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Fink, Uwe; Rinaldi, Giovanna

    2015-09-01

    This paper describes basic concepts, as well as providing a framework, for the interpretation of the light scattered by the dust in a cometary coma as observed by instruments on a spacecraft such as Rosetta. It is shown that the expected optical depths are small enough that single scattering can be applied. Each of the quantities that contribute to the scattered intensity is discussed in detail. Using optical constants of the likely coma dust constituents, olivine, pyroxene and carbon, the scattering properties of the dust are calculated. For the resulting observable scattering intensities several particle size distributions are considered, a simple power law, power laws with a small particle cut off and a log-normal distributions with various parameters. Within the context of a simple outflow model, the standard definition of Afρ for a circular observing aperture is expanded to an equivalent Afρ for an annulus and specific line-of-sight observation. The resulting equivalence between the observed intensity and Afρ is used to predict observable intensities for 67P/Churyumov-Gerasimenko at the spacecraft encounter near 3.3 AU and near perihelion at 1.3 AU. This is done by normalizing particle production rates of various size distributions to agree with observed ground based Afρ values. Various geometries for the column densities in a cometary coma are considered. The calculations for a simple outflow model are compared with more elaborate Direct Simulation Monte Carlo Calculation (DSMC) models to define the limits of applicability of the simpler analytical approach. Thus our analytical approach can be applied to the majority of the Rosetta coma observations, particularly beyond several nuclear radii where the dust is no longer in a collisional environment, without recourse to computer intensive DSMC calculations for specific cases. In addition to a spherically symmetric 1-dimensional approach we investigate column densities for the 2-dimensional DSMC model on the day and night side of the comet. Our calculations are also applied to estimates of the dust particle densities and flux which are useful for the in-situ experiments on Rosetta.

  9. Effective emissivities of isothermal blackbody cavities calculated by the Monte Carlo method using the three-component bidirectional reflectance distribution function model.

    PubMed

    Prokhorov, Alexander

    2012-05-01

    This paper proposes a three-component bidirectional reflectance distribution function (3C BRDF) model consisting of diffuse, quasi-specular, and glossy components for calculation of effective emissivities of blackbody cavities and then investigates the properties of the new reflection model. The particle swarm optimization method is applied for fitting a 3C BRDF model to measured BRDFs. The model is incorporated into the Monte Carlo ray-tracing algorithm for isothermal cavities. Finally, the paper compares the results obtained using the 3C model and the conventional specular-diffuse model of reflection.

  10. A Prototype Physical Database for Passive Microwave Retrievals of Precipitation over the US Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.

    2015-01-01

    An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measurement Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. Resulting modeled top of the atmosphere Tbs show correlations to observations of 0.9, biases of 1K or less, root-mean-square errors on the order of 5K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.

  11. New 2D diffraction model and its applications to terahertz parallel-plate waveguide power splitters

    PubMed Central

    Zhang, Fan; Song, Kaijun; Fan, Yong

    2017-01-01

    A two-dimensional (2D) diffraction model for the calculation of the diffraction field in 2D space and its applications to terahertz parallel-plate waveguide power splitters are proposed in this paper. Compared with the Huygens-Fresnel principle in three-dimensional (3D) space, the proposed model provides an approximate analytical expression to calculate the diffraction field in 2D space. The diffraction field is regarded as a superposition integral in 2D space. The calculated results obtained from the proposed diffraction model agree well with those from the software HFSS, which is based on the finite element method (FEM). Based on the proposed 2D diffraction model, two parallel-plate waveguide power splitters are presented. The splitters consist of a transmitting horn antenna, reflectors, and a receiving antenna array. The reflector is cylindrical parabolic with superimposed surface relief to efficiently couple the transmitted wave into the receiving antenna array. The reflector is applied as a computer-generated hologram to match the transformed field to the receiving antenna aperture field. The power splitters were optimized by a modified real-coded genetic algorithm. The computed results for the splitters agree well with those obtained by HFSS, verifying the novel design method for power splitters and showing the good application prospects of the proposed 2D diffraction model. PMID:28181514

  12. The WRF-CMAQ Integrated On-Line Modeling System: Development, Testing, and Initial Applications

    EPA Science Inventory

    Traditionally, atmospheric chemistry-transport and meteorology models have been applied in an off-line paradigm, in which archived output on the dynamical state of the atmosphere simulated using the meteorology model is used to drive transport and chemistry calculations of atmos...

  13. Integrated analyses in plastics forming

    NASA Astrophysics Data System (ADS)

    Bo, Wang

    This thesis describes progress made in the analysis, simulation and testing of plastics forming, which can be applied to injection and compression mould design. Three aspects of plastics forming have been investigated, namely filling analysis, cooling analysis and ejection analysis. The filling stage of plastics forming has been analysed and calculated using the MOLDFLOW and FILLCALC V software, and a comparison of high-speed compression moulding and injection moulding has been made. The cooling stage has been analysed using the MOLDFLOW software and a finite difference computer program; the latter program can be used as a sample program to calculate the feasibility of cooling different materials to required target temperatures under controlled cooling conditions. The application of thermal imaging has also been introduced to determine the actual process temperatures; thermal imaging can be used as a powerful tool to analyse mould surface temperatures and to verify the mathematical model. A buckling problem in the ejection stage has been modelled and calculated with the PATRAN/ABAQUS finite element analysis software and tested. These calculations and analyses are applied to a particular case but can be used as an example for general analysis and calculation in the ejection stage of plastics forming.

  14. Synthesis of Biofluidic Microsystems (SYNBIOSYS)

    DTIC Science & Technology

    2007-10-01

    Figure captions excerpted from the report: FIGURE 41 - The micro reactor is represented by a PFR network model; reaction and convection are calculated in one column of PFRs, and diffusional mixing is calculated between two columns of PFRs. FIGURE 42 - The numerical method of lines is applied to calculate diffusion in the channel width direction, using 10 discretized concentration points in the channel (ci1-ci10).

  15. Sensitivity of NTCP parameter values against a change of dose calculation algorithm.

    PubMed

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-01

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
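    The abstract does not name the three NTCP models used, but the widely used Lyman-Kutcher-Burman (LKB) formulation with a gEUD dose reduction illustrates the kind of calculation involved; the sketch below uses a hypothetical lung DVH, and the pneumonitis parameter values are only indicative.

```python
# Minimal sketch of the LKB NTCP model with a gEUD dose reduction (illustrative only).
import numpy as np
from math import erf, sqrt

def gEUD(doses_gy, volumes, n):
    """Generalized equivalent uniform dose for a (dose, relative-volume) differential DVH."""
    v = np.asarray(volumes, float) / np.sum(volumes)
    return float(np.sum(v * np.asarray(doses_gy, float) ** (1.0 / n)) ** n)

def ntcp_lkb(doses_gy, volumes, TD50, m, n):
    t = (gEUD(doses_gy, volumes, n) - TD50) / (m * TD50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))            # standard normal CDF

# hypothetical differential lung DVH and indicative radiation-pneumonitis parameters
doses = [5, 15, 25, 35, 45]
vols = [0.40, 0.25, 0.15, 0.12, 0.08]
print(ntcp_lkb(doses, vols, TD50=30.8, m=0.37, n=0.99))
```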

  16. Sensitivity of NTCP parameter values against a change of dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-15

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.

  17. Rapid Quasirelativistic and Relativistic Calculations of Atomic Data for Plasma Modeling

    DTIC Science & Technology

    1991-08-20

    A slightly more complicated formula applies when mixing is included outside a complex. It is also noted that similar expressions apply for photoionization ... simplification and the appropriate formula for ionization is even somewhat greater. Moreover, similar simple formulae apply for photoionization and for semi... like that given in the appendix of Ref. 18, one finds that Eq. (35) also applies for photoionization if the collisional ionization cross sections

  18. Electron paramagnetic resonance g-tensors from state interaction spin-orbit coupling density matrix renormalization group

    NASA Astrophysics Data System (ADS)

    Sayfutyarova, Elvira R.; Chan, Garnet Kin-Lic

    2018-05-01

    We present a state interaction spin-orbit coupling method to calculate electron paramagnetic resonance g-tensors from density matrix renormalization group wavefunctions. We apply the technique to compute g-tensors for the TiF3 and [CuCl4]2- complexes, a [2Fe-2S] model of the active center of ferredoxins, and a Mn4CaO5 model of the S2 state of the oxygen evolving complex. These calculations raise the prospects of determining g-tensors in multireference calculations with a large number of open shells.

  19. Direct Simulation of Reentry Flows with Ionization

    NASA Technical Reports Server (NTRS)

    Carlson, Ann B.; Hassan, H. A.

    1989-01-01

    The Direct Simulation Monte Carlo (DSMC) method is applied in this paper to the study of rarefied, hypersonic, reentry flows. The assumptions and simplifications involved with the treatment of ionization, free electrons and the electric field are investigated. A new method is presented for the calculation of the electric field and handling of charged particles with DSMC. In addition, a two-step model for electron impact ionization is implemented. The flow field representing a 10 km/sec shock at an altitude of 65 km is calculated. The effects of the new modeling techniques on the calculation results are presented and discussed.

  20. Implementation of the nudged elastic band method in a dislocation dynamics formalism: Application to dislocation nucleation

    NASA Astrophysics Data System (ADS)

    Geslin, Pierre-Antoine; Gatti, Riccardo; Devincre, Benoit; Rodney, David

    2017-11-01

    We propose a framework to study thermally-activated processes in dislocation glide. This approach is based on an implementation of the nudged elastic band method in a nodal mesoscale dislocation dynamics formalism. Special care is paid to develop a variational formulation to ensure convergence to well-defined minimum energy paths. We also propose a methodology to rigorously parametrize the model on atomistic data, including elastic, core and stacking fault contributions. To assess the validity of the model, we investigate the homogeneous nucleation of partial dislocation loops in aluminum, recovering the activation energies and loop shapes obtained with atomistic calculations and extending these calculations to lower applied stresses. The present method is also applied to heterogeneous nucleation on spherical inclusions.
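    A minimal sketch of the nudged elastic band idea (on a simple 2-D test potential, not the nodal dislocation dynamics formalism of the paper) is given below: interior images feel the perpendicular component of the true force plus a spring force along the band, and are relaxed by plain steepest descent. The potential, spring constant and step size are illustrative choices.

```python
# Minimal NEB sketch on a 2-D test potential with two minima and one saddle point.
import numpy as np

def potential(p):
    x, y = p
    return (x**2 - 1.0)**2 + 2.0 * (y - 0.5 * x**2)**2

def gradient(p):
    x, y = p
    dvdx = 4.0 * x * (x**2 - 1.0) - 4.0 * x * (y - 0.5 * x**2)
    dvdy = 4.0 * (y - 0.5 * x**2)
    return np.array([dvdx, dvdy])

def neb(n_images=11, k_spring=5.0, step=2e-3, n_steps=20000):
    a, b = np.array([-1.0, 0.5]), np.array([1.0, 0.5])        # the two minima
    images = np.array([a + (b - a) * i / (n_images - 1) for i in range(n_images)])
    for _ in range(n_steps):
        forces = np.zeros_like(images)
        for i in range(1, n_images - 1):
            tau = images[i + 1] - images[i - 1]
            tau /= np.linalg.norm(tau)                         # simple tangent estimate
            g = gradient(images[i])
            f_perp = -(g - np.dot(g, tau) * tau)               # perpendicular part of the true force
            f_spring = k_spring * (np.linalg.norm(images[i + 1] - images[i])
                                   - np.linalg.norm(images[i] - images[i - 1])) * tau
            forces[i] = f_perp + f_spring
        images[1:-1] += step * forces[1:-1]                    # endpoints stay fixed
    return images

band = neb()
energies = [potential(p) for p in band]
print("barrier estimate:", max(energies) - energies[0])
```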

  1. The Lα (λ = 121.6 nm) solar plage contrasts calculations.

    NASA Astrophysics Data System (ADS)

    Bruevich, E. A.

    1991-06-01

    The results of calculations of Lα plage contrasts based on experimental data are presented. A three-component model of the Lα solar flux, using "Prognoz-10" and SME daily smoothed values of the Lα solar flux, is applied. The values of the contrast are discussed and compared with experimental values based on "Skylab" data.

  2. GPU-based Green's function simulations of shear waves generated by an applied acoustic radiation force in elastic and viscoelastic models.

    PubMed

    Yang, Yiqun; Urban, Matthew W; McGough, Robert J

    2018-05-15

    Shear wave calculations induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high-performance nVidia GPU reduces the time required for these calculations by a factor of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar. For parametric evaluations and for comparisons with measured shear wave data, shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs.

  3. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws as applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks when the latter are considered as single hydraulic systems. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on the example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
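    The electrical-circuit analogy can be illustrated with a toy nodal solve: assuming a linearized (laminar-flow) pressure-flow relation for each pipe, Kirchhoff's laws reduce to a Laplacian-like system for the nodal heads. Real networks require nonlinear friction laws (e.g. Darcy-Weisbach) and iteration; the 4-node network and conductances below are hypothetical.

```python
# Minimal sketch: linearized hydraulic network, q = g * (h_i - h_j), solved as A G A^T h = Q.
import numpy as np

# edges as (from_node, to_node, linearized conductance g [m^3 s^-1 per m of head])
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 1.5), (2, 3, 2.5)]
n_nodes = 4

A = np.zeros((n_nodes, len(edges)))          # node-edge incidence matrix
G = np.diag([g for _, _, g in edges])
for k, (i, j, _) in enumerate(edges):
    A[i, k], A[j, k] = 1.0, -1.0

Q = np.array([0.5, 0.0, 0.0, -0.5])          # external inflow (+) / demand (-) at each node [m^3/s]

# fix the head at node 0 (reference reservoir) and solve for the remaining heads
L = A @ G @ A.T
h = np.zeros(n_nodes)
h[1:] = np.linalg.solve(L[1:, 1:], Q[1:] - L[1:, 0] * h[0])

flows = G @ A.T @ h                          # flow in each edge, positive from 'from' to 'to'
print("heads:", h.round(4), "flows:", flows.round(4))
```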

  4. Mathematical model of phase transformations and elastoplastic stress in the water spray quenching of steel bars

    NASA Astrophysics Data System (ADS)

    Nagasaka, Y.; Brimacombe, J. K.; Hawbolt, E. B.; Samarasekera, I. V.; Hernandez-Morales, B.; Chidiac, S. E.

    1993-04-01

    A mathematical model, based on the finite-element technique and incorporating thermo-elasto-plastic behavior during the water spray quenching of steel, has been developed. In the model, the kinetics of diffusion-dependent phase transformation and martensitic transformation have been coupled with the transient heat flow to predict the microstructural evolution of the steel. Furthermore, an elasto-plastic constitutive relation has been applied to calculate internal stresses resulting from phase changes as well as temperature variation. The computer code has been verified for internal consistency with previously published results for pure iron bars. The model has been applied to the water spray quenching of two grades of steel bars, 1035 carbon and nickel-chromium alloyed steel; the calculated temperature, hardness, distortion, and residual stresses in the bars agreed well with experimental measurements. The results show that the phase changes occurring during this process affect the internal stresses significantly and must be included in the thermomechanical model.

  5. New generation of universal modeling for centrifugal compressors calculation

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.

    2015-08-01

    The Universal Modeling method has been in constant use since the mid-1990s. The newest, 6th version of the method is presented below. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridian configuration including hub/shroud curvatures, axial length, leading edge position, etc. The new model of the vaned diffuser includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths, yet the modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This 6th version of the developed computer program is already being applied successfully in design practice.

  6. Criteria for representing circular arc and sine wave spar webs by non-curved elements

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.

    1979-01-01

    The basic problem of how to simply represent a curved web of a spar in a finite element structural model was addressed. The ratio of flat web to curved web axial deformations and longitudinal rotations were calculated using NASTRAN models. Multiplying factors were developed from these calculations for various web thicknesses. These multiplying factors can be applied directly to the area and moment of inertia inputs of the finite element model. This allows the thermal stress relieving configurations of sine wave and circular arc webs to be simply accounted for in finite element structural models.

  7. Feasibility of supersonic diode pumped alkali lasers: Model calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barmashenko, B. D.; Rosenwaks, S.

    The feasibility of supersonic operation of diode pumped alkali lasers (DPALs) is studied for Cs and K atoms applying model calculations, based on a semi-analytical model previously used for studying static and subsonic flow DPALs. The operation of supersonic lasers is compared with that measured and modeled in subsonic lasers. The maximum power of supersonic Cs and K lasers is found to be higher than that of subsonic lasers with the same resonator and alkali density at the laser inlet by 25% and 70%, respectively. These results indicate that for scaling-up the power of DPALs, supersonic expansion should be considered.

  8. Reproduction numbers for epidemic models with households and other social structures. I. Definition and calculation of R0

    PubMed Central

    Pellis, Lorenzo; Ball, Frank; Trapman, Pieter

    2012-01-01

    The basic reproduction number R0 is one of the most important quantities in epidemiology. However, for epidemic models with explicit social structure involving small mixing units such as households, its definition is not straightforward and a wealth of other threshold parameters has appeared in the literature. In this paper, we use branching processes to define R0, we apply this definition to models with households or other more complex social structures and we provide methods for calculating it. PMID:22085761

  9. A recursive method for calculating the total number of spanning trees and its applications in self-similar small-world scale-free network models

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Su, Jing; Yao, Bing

    2018-05-01

    The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge, and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics, chemistry and the like. In this paper, firstly, motivated by the many real-life systems and artificial networks built from all kinds of functions and combinations of simpler and smaller elements (components), we discuss some helpful network operations, including link operations and merge operations, for designing more realistic and complicated network models. Secondly, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to the spaces of trees and cycles, respectively, and our results suggest that it is indeed well suited to such models. To reflect wider practical applications and potential theoretical significance, we study the enumeration method in some existing scale-free network models. On the other hand, we set up a class of new models displaying the scale-free feature, that is to say, following P(k) ~ k^-γ, where γ is the degree exponent. Based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. In the rest of our discussion, we not only calculate analytically the average path length, which indicates that our models have the small-world property prevalent in many complex systems, but also derive the number of spanning trees by means of the recursive method described in this paper, which shows that our method is convenient for studying these models.
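    For small models, the spanning-tree count obtained from any recursive scheme can be cross-checked directly with Kirchhoff's matrix-tree theorem (any cofactor of the graph Laplacian equals the number of spanning trees); the sketch below is such an independent check, not the recursive method of the paper.

```python
# Minimal sketch: spanning-tree count via the matrix-tree theorem, checked on two standard graphs.
import numpy as np
from itertools import combinations

def spanning_tree_count(n_vertices, edges):
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return round(np.linalg.det(L[1:, 1:]))      # delete one row and column, take the determinant

# cycle C5 has 5 spanning trees; complete graph K5 has 5^(5-2) = 125 (Cayley's formula)
cycle5 = [(i, (i + 1) % 5) for i in range(5)]
k5 = list(combinations(range(5), 2))
print(spanning_tree_count(5, cycle5), spanning_tree_count(5, k5))
```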

  10. ADE-FDTD Scattered-Field Formulation for Dispersive Materials

    PubMed Central

    Kong, Soon-Cheol; Simpson, Jamesina J.; Backman, Vadim

    2009-01-01

    This Letter presents a scattered-field formulation for modeling dispersive media using the finite-difference time-domain (FDTD) method. Specifically, the auxiliary differential equation method is applied to Drude and Lorentz media for a scattered field FDTD model. The present technique can also be applied in a straightforward manner to Debye media. Excellent agreement is achieved between the FDTD-calculated and exact theoretical results for the reflection coefficient in half-space problems. PMID:19844602

  11. ADE-FDTD Scattered-Field Formulation for Dispersive Materials.

    PubMed

    Kong, Soon-Cheol; Simpson, Jamesina J; Backman, Vadim

    2008-01-01

    This Letter presents a scattered-field formulation for modeling dispersive media using the finite-difference time-domain (FDTD) method. Specifically, the auxiliary differential equation method is applied to Drude and Lorentz media for a scattered field FDTD model. The present technique can also be applied in a straightforward manner to Debye media. Excellent agreement is achieved between the FDTD-calculated and exact theoretical results for the reflection coefficient in half-space problems.

  12. Combining molecular dynamics and an electrodiffusion model to calculate ion channel conductance

    NASA Astrophysics Data System (ADS)

    Wilson, Michael A.; Nguyen, Thuy Hien; Pohorille, Andrew

    2014-12-01

    Establishing the relation between the structures and functions of protein ion channels, which are protein assemblies that facilitate transmembrane ion transport through water-filled pores, is at the forefront of biological and medical sciences. A reliable way to determine whether our understanding of this relation is satisfactory is to reproduce the measured ionic conductance over a broad range of applied voltages. This can be done in molecular dynamics simulations by way of applying an external electric field to the system and counting the number of ions that traverse the channel per unit time. Since this approach is computationally very expensive we develop a markedly more efficient alternative in which molecular dynamics is combined with an electrodiffusion equation. This alternative approach applies if steady-state ion transport through channels can be described with sufficient accuracy by the one-dimensional diffusion equation in the potential given by the free energy profile and applied voltage. The theory refers only to line densities of ions in the channel and, therefore, avoids ambiguities related to determining the surface area of the channel near its endpoints or other procedures connecting the line and bulk ion densities. We apply the theory to a simple, model system based on the trichotoxin channel. We test the assumptions of the electrodiffusion equation, and determine the precision and consistency of the calculated conductance. We demonstrate that it is possible to calculate current/voltage dependence and accurately reconstruct the underlying (equilibrium) free energy profile, all from molecular dynamics simulations at a single voltage. The approach developed here applies to other channels that satisfy the conditions of the electrodiffusion equation.
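    A minimal sketch of the steady-state electrodiffusion step is given below: for one ionic species with constant diffusivity, the 1-D Smoluchowski equation gives the flux in closed form from the free-energy profile plus an applied-voltage term. The linear voltage drop across the pore, the conversion of bulk concentration to a boundary line density, the sign convention, and all numerical values are assumptions for illustration only.

```python
# Minimal sketch: steady-state 1-D electrodiffusion current,
#   J = D * (rho0*exp(bW(0)) - rhoL*exp(bW(L))) / integral_0^L exp(bW(x)) dx,
# where W(x) is the free-energy profile plus a linear applied-voltage term.
import numpy as np

KT = 0.0259                 # eV at ~300 K
E_CHARGE = 1.602176634e-19  # C

def channel_current(G_eV, x_nm, D_nm2_per_ns, rho_bulk_per_nm, z, voltage_mV):
    """Steady-state ion current (pA) through a 1-D pore described by a free-energy profile."""
    x = np.asarray(x_nm, float)
    # sign convention (assumption): positive voltage raises a cation's energy toward x = x[-1]
    W = np.asarray(G_eV, float) + z * (voltage_mV * 1e-3) * (x - x[0]) / (x[-1] - x[0])
    b = 1.0 / KT
    rho0 = rhoL = rho_bulk_per_nm              # boundary line densities (ions/nm), assumed equal
    flux = D_nm2_per_ns * (rho0 * np.exp(b * W[0]) - rhoL * np.exp(b * W[-1])) \
           / np.trapz(np.exp(b * W), x)        # ions per ns
    return flux * z * E_CHARGE * 1e21          # ions/ns -> A -> pA

x = np.linspace(0.0, 3.0, 61)                  # 3 nm pore (hypothetical)
G = 0.08 * np.sin(np.pi * x / 3.0) ** 2        # ~0.08 eV central barrier (hypothetical)
for V in (-100.0, -50.0, 50.0, 100.0):
    i_pA = channel_current(G, x, D_nm2_per_ns=0.1, rho_bulk_per_nm=0.12, z=+1, voltage_mV=V)
    print(V, "mV ->", round(i_pA, 2), "pA")
```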

  13. sdg interacting-boson model in the SU(3) scheme and its application to 168Er

    NASA Astrophysics Data System (ADS)

    Yoshinaga, N.; Akiyama, Y.; Arima, A.

    1988-07-01

    The sdg interacting-boson model is presented in the SU(3) tensor formalism. The interactions are decomposed according to their SU(3) tensor character. The existence of the SU(3)-seniority preserving operator is found to be important. The model is applied to 168Er. Energy levels and electromagnetic transitions are calculated. This model is shown to solve the problem of anharmonicity regarding the excitation energy of the first Kπ=4+ band relative to that of the first Kπ=2+ one. E4 transitions are calculated to give different predictions from those by the quasiparticle-phonon nuclear model.

  14. [Application of three compartment model and response surface model to clinical anesthesia using Microsoft Excel].

    PubMed

    Abe, Eiji; Abe, Mari

    2011-08-01

    With the spread of total intravenous anesthesia, clinical pharmacology has become more important. We report a Microsoft Excel file that applies a three-compartment model and a response surface model to clinical anesthesia. On the Excel sheet, propofol, remifentanil and fentanyl effect-site concentrations are predicted (three-compartment model), and the probabilities of no response to prodding, shaking, surrogates of painful stimuli and laryngoscopy are calculated from the predicted effect-site drug concentrations. Time-dependent changes in these calculated values are shown graphically. Recent developments in anesthetic drug interaction studies are remarkable, and applying them to clinical anesthesia with this Excel file is simple and helpful.
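
    A minimal sketch of the underlying calculation, assuming a generic three-compartment mammillary model with an effect-site compartment; the rate constants and dosing schedule are hypothetical placeholders, and the published Excel sheet implements drug-specific parameter sets plus the response-surface interaction model on top of this.

      import numpy as np

      def effect_site_concentration(dose_rate, dt, V1, k10, k12, k21, k13, k31, ke0):
          """Forward-Euler integration of a three-compartment mammillary model
          plus an effect-site compartment.  dose_rate is an array of infusion
          rates (mg/min) sampled every dt minutes; rate constants are in 1/min.
          Returns plasma (C1) and effect-site (Ce) concentrations (mg/L)."""
          n = len(dose_rate)
          A1 = A2 = A3 = Ce = 0.0
          c1_out, ce_out = np.zeros(n), np.zeros(n)
          for i in range(n):
              C1 = A1 / V1
              dA1 = dose_rate[i] - (k10 + k12 + k13) * A1 + k21 * A2 + k31 * A3
              dA2 = k12 * A1 - k21 * A2
              dA3 = k13 * A1 - k31 * A3
              dCe = ke0 * (C1 - Ce)
              A1 += dA1 * dt; A2 += dA2 * dt; A3 += dA3 * dt; Ce += dCe * dt
              c1_out[i], ce_out[i] = A1 / V1, Ce
          return c1_out, ce_out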

  15. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
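
    The abstract does not reproduce the update rule, but a generic ALOPEX-style correlation update can be sketched as follows; ALOPEX IV as used in the paper adds refinements (e.g. an adaptive noise schedule) that are not shown, so this is only a structural illustration.

      import numpy as np

      def alopex_minimize(cost, p0, steps=5000, gamma=0.1, sigma=0.02, rng=None):
          """ALOPEX-style stochastic minimization: move each parameter in the
          direction correlated with a decrease of the cost, plus noise."""
          rng = np.random.default_rng() if rng is None else rng
          p = np.array(p0, dtype=float)
          c_prev = cost(p)
          dp = rng.normal(0.0, sigma, p.size)      # initial random move
          for _ in range(steps):
              p_new = p + dp
              c_new = cost(p_new)
              # keep going if the cost dropped, reverse if it rose
              dp = -gamma * dp * (c_new - c_prev) + rng.normal(0.0, sigma, p.size)
              p, c_prev = p_new, c_new
          return p, c_prev

      # e.g. fit the parameters of a heart-rate response model by passing a
      # cost function that measures the misfit to the raw time series.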

  16. The Reference Forward Model (RFM)

    NASA Astrophysics Data System (ADS)

    Dudhia, Anu

    2017-01-01

    The Reference Forward Model (RFM) is a general purpose line-by-line radiative transfer model, currently supported by the UK National Centre for Earth Observation. This paper outlines the algorithms used by the RFM, focusing on standard calculations of terrestrial atmospheric infrared spectra followed by a brief summary of some additional capabilities and extensions to microwave wavelengths and extraterrestrial atmospheres. At its most basic level - the 'line-by-line' component - it calculates molecular absorption cross-sections by applying the Voigt lineshape to all transitions up to ±25 cm-1 from line-centre. Alternatively, absorptions can be directly interpolated from various forms of tabulated data. These cross-sections are then used to construct infrared radiance or transmittance spectra for ray paths through homogeneous cells, plane-parallel or circular atmospheres. At a higher level, the RFM can apply instrumental convolutions to simulate measurements from Fourier transform spectrometers. It can also calculate Jacobian spectra and so act as a stand-alone forward model within a retrieval scheme. The RFM is designed for robustness, flexibility and ease-of-use (particularly by the non-expert), and no claims are made for superior accuracy, or indeed novelty, compared to other line-by-line codes. Its main limitations at present are a lack of scattering and simplified modelling of surface reflectance and line-mixing.
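
    As an illustration of the line-by-line step, a Voigt cross-section can be accumulated on a wavenumber grid with the ±25 cm⁻¹ cutoff mentioned above. The line parameters in this sketch are placeholders for real spectroscopic data, and the RFM's own implementation details may differ.

      import numpy as np
      from scipy.special import wofz

      def voigt(dnu, sigma_g, gamma_l):
          """Area-normalised Voigt profile at offset dnu from line centre."""
          z = (dnu + 1j * gamma_l) / (sigma_g * np.sqrt(2.0))
          return wofz(z).real / (sigma_g * np.sqrt(2.0 * np.pi))

      def absorption_xsec(grid, lines, cutoff=25.0):
          """Sum line-by-line Voigt contributions onto a wavenumber grid (cm^-1),
          applying the +/-25 cm^-1 cutoff.  `lines` is an iterable of
          (centre, strength, gaussian_sigma, lorentz_gamma) tuples whose values
          would come from a spectroscopic database."""
          k = np.zeros_like(grid)
          for nu0, S, sg, gl in lines:
              sel = np.abs(grid - nu0) <= cutoff
              k[sel] += S * voigt(grid[sel] - nu0, sg, gl)
          return k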

  17. Determination of Scaled Wind Turbine Rotor Characteristics from Three Dimensional RANS Calculations

    NASA Astrophysics Data System (ADS)

    Burmester, S.; Gueydon, S.; Make, M.

    2016-09-01

    Previous studies have shown the importance of 3D effects when calculating the performance characteristics of a scaled-down turbine rotor [1-4]. In this paper the results of 3D RANS (Reynolds-Averaged Navier-Stokes) computations by Make and Vaz [1] are taken to calculate 2D lift and drag coefficients. These coefficients are assigned to FAST (Blade Element Momentum Theory (BEMT) tool from NREL) as input parameters. Then, the rotor characteristics (power and thrust coefficients) are calculated using BEMT. This coupling of RANS and BEMT was previously applied by other parties and is termed here the RANS-BEMT coupled approach. Here the approach is compared to measurements carried out in a wave basin at MARIN using Froude-scaled wind, and to the direct 3D RANS computations. Data for both a model-scale and a full-scale wind turbine are used for validation and verification. The flow around a turbine blade at full scale has a more 2D character than the flow properties around a turbine blade at model scale (Make and Vaz [1]). Since BEMT assumes 2D flow behaviour, the results of the RANS-BEMT coupled approach agree better with the results of the CFD (Computational Fluid Dynamics) simulation at full- than at model-scale.
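
    For orientation, a textbook BEMT iteration for a single radial station is sketched below. FAST applies further corrections (e.g. Glauert's high-induction and hub-loss corrections) that are omitted here, and the 2-D polar function stands in for the RANS-derived lift and drag coefficient tables; this is a sketch of the generic method, not of the paper's exact setup.

      import numpy as np

      def bem_element(r, R, B, chord, twist, U, omega, polar, tol=1e-6, itmax=200):
          """Blade-element-momentum iteration for one radial station r.
          `polar(alpha)` must return (Cl, Cd) for the local angle of attack."""
          a, ap = 0.0, 0.0
          sigma = B * chord / (2.0 * np.pi * r)          # local solidity
          for _ in range(itmax):
              phi = np.arctan2((1.0 - a) * U, (1.0 + ap) * omega * r)
              Cl, Cd = polar(phi - twist)                # angle of attack = phi - twist
              Cn = Cl * np.cos(phi) + Cd * np.sin(phi)
              Ct = Cl * np.sin(phi) - Cd * np.cos(phi)
              # Prandtl tip-loss factor
              F = 2.0 / np.pi * np.arccos(np.exp(-B * (R - r) / (2.0 * r * np.sin(phi))))
              a_new = 1.0 / (4.0 * F * np.sin(phi) ** 2 / (sigma * Cn) + 1.0)
              ap_new = 1.0 / (4.0 * F * np.sin(phi) * np.cos(phi) / (sigma * Ct) - 1.0)
              if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
                  break
              a, ap = a_new, ap_new
          return a, ap, phi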

  18. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code and the versatility of a practical analytical multisource model, and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement to measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, specifically it was developed to provide a singular methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Akihiro; Maeda, Keiichi; Shigeyama, Toshikazu

    A two-dimensional special relativistic radiation-hydrodynamics code is developed and applied to numerical simulations of supernova shock breakout in bipolar explosions of a blue supergiant. Our calculations successfully simulate the dynamical evolution of a blast wave in the star and its emergence from the surface. Results of the model with spherical energy deposition show a good agreement with previous simulations. Furthermore, we calculate several models with bipolar energy deposition and compare their results with the spherically symmetric model. The bolometric light curves of the shock breakout emission are calculated by a ray-tracing method. Our radiation-hydrodynamic models indicate that the early part of the shock breakout emission can be used to probe the geometry of the blast wave produced as a result of the gravitational collapse of the iron core.

  20. An analytical approach to obtaining JWL parameters from cylinder tests

    NASA Astrophysics Data System (ADS)

    Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.

    2017-01-01

    An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
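
    The functional form being fitted is the standard JWL equation of state; a short sketch follows. The EDC37 parameter values themselves are not quoted in the abstract, so any numbers supplied to this function are placeholders.

      import numpy as np

      def jwl_pressure(V, E, A, B, R1, R2, omega):
          """Standard JWL equation of state: pressure as a function of relative
          volume V and internal energy E per unit initial volume (E, A and B in
          the same pressure units, e.g. GPa)."""
          return (A * (1.0 - omega / (R1 * V)) * np.exp(-R1 * V)
                  + B * (1.0 - omega / (R2 * V)) * np.exp(-R2 * V)
                  + omega * E / V)

    Integrating the resulting p-V curve over relative volume, as the paper does out to 7 relative volumes, gives the energy figures quoted for comparison with the hydro-code model.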

  1. Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions

    NASA Astrophysics Data System (ADS)

    Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong

    2018-06-01

    In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using the finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived based on a microsegregation model, and the water flux distribution was measured and applied to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified by the measured shell thickness and the slab surface temperature, and the 3D bulging model was verified by the calculated maximum bulging deflections using formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling conditions was then determined, and the effects of uneven solidification, casting speed, and roll misalignment were investigated.

  2. Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions

    NASA Astrophysics Data System (ADS)

    Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong

    2018-02-01

    In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using the finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived based on a microsegregation model, and the water flux distribution was measured and applied to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified by the measured shell thickness and the slab surface temperature, and the 3D bulging model was verified by the calculated maximum bulging deflections using formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling conditions was then determined, and the effects of uneven solidification, casting speed, and roll misalignment were investigated.

  3. Optimization Model for Reducing Emissions of Greenhouse Gases from Automobiles (OMEGA)

    EPA Science Inventory

    The EPA Vehicle Greenhouse Gas (VGHG) model is used to apply various technologies to a defined set of vehicles in order to meet a specified GHG emission target, and to then calculate the costs and benefits of doing so.

  4. Models for mean bonding length, melting point and lattice thermal expansion of nanoparticle materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omar, M.S., E-mail: dr_m_s_omar@yahoo.com

    2012-11-15

    Graphical abstract: Three models are derived to explain the nanoparticle-size dependence of the mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au; the figures shown for Sn nanoparticles indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the interior, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 ų for the bulk to 57 ų for 2 nm nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. The calculation gives a good approximation to the size-dependent melting point from the bulk state down to nanoparticles of about 2 nm diameter. Both the lattice volume and the melting point obtained for nanosized materials are used to calculate the lattice thermal expansion using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.

  5. Comparison between Radiation-Hydrodynamic Simulation of Supercritical Accretion Flows and a Steady Model with Outflows

    NASA Astrophysics Data System (ADS)

    Jiao, Cheng-Liang; Mineshige, Shin; Takeuchi, Shun; Ohsuga, Ken

    2015-06-01

    We apply our two-dimensional (2D), radially self-similar steady-state accretion flow model to the analysis of hydrodynamic simulation results of supercritical accretion flows. Self-similarity is checked and the input parameters for the model calculation, such as advective factor and heat capacity ratio, are obtained from time-averaged simulation data. Solutions of the model are then calculated and compared with the simulation results. We find that in the converged region of the simulation, excluding the part too close to the black hole, the radial distributions of azimuthal velocity v_φ, density ρ and pressure p basically follow the self-similar assumptions, i.e., they are roughly proportional to r^(-0.5), r^(-n), and r^(-(n+1)), respectively, where n ∼ 0.85 for the mass injection rate of 1000 L_E/c², and n ∼ 0.74 for 3000 L_E/c². The distribution of v_r and v_θ agrees less with self-similarity, possibly due to convective motions in the r-θ plane. The distribution of velocity, density, and pressure in the θ direction obtained by the steady model agrees well with the simulation results within the calculation boundary of the steady model. Outward mass flux in the simulations is overall directed toward a polar angle of 0.8382 rad (∼48.0°) for 1000 L_E/c² and 0.7852 rad (∼43.4°) for 3000 L_E/c², and ∼94% of the mass inflow is driven away as outflow, while outward momentum and energy fluxes are focused around the polar axis. Parts of these fluxes lie in the region that is not calculated by the steady model, and special attention should be paid when the model is applied.

  6. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    PubMed

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and laser sheet-rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
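
    Of the three estimates, the Kutta-Joukowski one is the simplest to write down; a hedged sketch is given below. The vortex-ring and actuator-disk estimates used in the paper require additional geometric parameters and are not reproduced here, and the circulation and vortex span would come from the PIV velocity field.

      def lift_kutta_joukowski(circulation, span, flight_speed, rho=1.2):
          """Quasi-steady Kutta-Joukowski estimate L = rho * U * Gamma * b.
          circulation  : Gamma (m^2/s), integrated from the measured velocity field
          span         : assumed vortex span b (m)
          flight_speed : forward speed U (m/s)
          rho          : air density (kg/m^3)"""
          return rho * flight_speed * circulation * span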

  7. Stability and mobility of Cu-vacancy clusters in Fe-Cu alloys: A computational study based on the use of artificial neural networks for energy barrier calculations

    NASA Astrophysics Data System (ADS)

    Pascuet, M. I.; Castin, N.; Becquart, C. S.; Malerba, L.

    2011-05-01

    An atomistic kinetic Monte Carlo (AKMC) method has been applied to study the stability and mobility of copper-vacancy clusters in Fe. This information, which cannot be obtained directly from experimental measurements, is needed to parameterise models describing the nanostructure evolution under irradiation of Fe alloys (e.g. model alloys for reactor pressure vessel steels). The physical reliability of the AKMC method has been improved by employing artificial intelligence techniques for the regression of the activation energies required by the model as input. These energies are calculated allowing for the effects of local chemistry and relaxation, using an interatomic potential fitted to reproduce them as accurately as possible and the nudged-elastic-band method. The model was validated by comparison with available ab initio calculations, to verify the cohesive model used, as well as with other models and theories.

  8. Forward modeling magnetic fields of induced and remanent magnetization in the lithosphere using tesseroids

    NASA Astrophysics Data System (ADS)

    Baykiev, Eldar; Ebbing, Jörg; Brönner, Marco; Fabian, Karl

    2016-11-01

    A newly developed software package to calculate the magnetic field in a spherical coordinate system near the Earth's surface and on satellite height is shown to produce reliable modeling results for global and regional applications. The discretization cells of the model are uniformly magnetized spherical prisms, so called tesseroids. The presented algorithm extends an existing code for gravity calculations by applying Poisson's relation to identify the magnetic potential with the sum over pseudogravity fields of tesseroids. By testing different lithosphere discretization grids it is possible to determine the optimal size of tesseroids for field calculations on satellite altitude within realistic measurement error bounds. Also the influence of the Earth's ellipticity upon the modeling result is estimated and global examples are studied. The new software calculates induced and remanent magnetic fields for models at global and regional scale. For regional models far-field effects are evaluated and discussed. This provides bounds for the minimal size of a regional model that is necessary to predict meaningful satellite total field anomalies over the corresponding area.

  9. Linking molecular models with ion mobility experiments. Illustration with a rigid nucleic acid structure

    PubMed Central

    D'Atri, Valentina; Porrini, Massimiliano; Rosu, Frédéric; Gabelica, Valérie

    2015-01-01

    Ion mobility spectrometry experiments allow the mass spectrometrist to determine an ion's rotationally averaged collision cross section ΩEXP. Molecular modelling is used to visualize which three-dimensional ion structure(s) is (are) compatible with the experiment. The collision cross sections of candidate molecular models have to be calculated, and the resulting ΩCALC are compared with the experimental data. Researchers who want to apply this strategy to a new type of molecule face many questions: (1) What experimental error is associated with ΩEXP determination, and how to estimate it (in particular when using a calibration for traveling wave ion guides)? (2) How to generate plausible 3D models in the gas phase? (3) Different collision cross section calculation models exist, which have been developed for other analytes than mine. Which one(s) can I apply to my systems? To apply ion mobility spectrometry to nucleic acid structural characterization, we explored each of these questions using a rigid structure which we know is preserved in the gas phase: the tetramolecular G-quadruplex [dTGGGGT]4, and we present this detailed investigation in this tutorial. © 2015 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons Ltd. PMID:26259654

  10. Dynamic global model of oxide Czochralski process with weighing control

    NASA Astrophysics Data System (ADS)

    Mamedov, V. M.; Vasiliev, M. G.; Yuferev, V. S.

    2011-03-01

    A dynamic model of oxide Czochralski growth with weighing control has been developed for the first time. A time-dependent approach is used for the calculation of temperature fields in different parts of a crystallization set-up and convection patterns in a melt, while internal radiation in crystal is considered in a quasi-steady approximation. A special algorithm is developed for the calculation of displacement of a triple point and simulation of a crystal surface formation. To calculate variations in the heat generation, a model of weighing control with a commonly used PID regulator is applied. As an example, simulation of the growth process of gallium-gadolinium garnet (GGG) crystals starting from the stage of seeding is performed.
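
    The weighing-control loop itself is conceptually a discrete PID regulator acting on the error between the measured and programmed crystal weight and returning a heater-power correction. A generic sketch; the gains and signal names are illustrative, not those of the paper.

      class PID:
          """Discrete PID regulator of the kind commonly used for Czochralski
          weighing control."""
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, setpoint, measurement):
              # error between programmed and measured crystal weight
              error = setpoint - measurement
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              # heater-power (heat generation) correction
              return self.kp * error + self.ki * self.integral + self.kd * derivative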

  11. Indoor Residence Times of Semivolatile Organic Compounds: Model Estimation and Field Evaluation

    EPA Science Inventory

    Indoor residence times of semivolatile organic compounds (SVOCs) are a major and mostly unavailable input for residential exposure assessment. We calculated residence times for a suite of SVOCs using a fugacity model applied to residential environments. Residence times depend on...

  12. Finite-Element Modeling of 3C-SiC Membranes

    NASA Technical Reports Server (NTRS)

    DeAnna, R. G.; Mitchell, J.; Zorman, C. A.; Mehregany, M.

    2000-01-01

    Finite-element modeling (FEM) of 3C-SiC thin-film membranes on Si substrates was used to determine the residual stress and center deflection with applied pressure. The anisotropic, three-dimensional model includes the entire 3C-SiC membrane and Si substrate with appropriate material properties and boundary conditions. Residual stress due to the thermal-expansion-coefficient mismatch between the 3C-SiC film and Si substrate was included in the model. Residual stresses were calculated both before and after etching. In-plane membrane stress and normal deflection with applied pressure were also calculated. FEM results predict a tensile residual stress of 259 MPa in the 3C-SiC membrane before etching. This decreases to 247 MPa after etching the substrate below the membrane. The residual stress experimentally measured on samples made at Case Western Reserve University was 280 MPa for post-etched membranes. This is in excellent agreement when an additional 30-40 MPa of residual stress, accounting for lattice mismatch, is added to the FEM results.

  13. A comparative analysis of simulated and observed landslide locations triggered by Hurricane Camille in Nelson County, Virginia

    USGS Publications Warehouse

    Morrissey, M.M.; Wieczorek, G.F.; Morgan, B.A.

    2008-01-01

    In 1969, Nelson County, Virginia received up to 71 cm of rain within 12 h starting at 7 p.m. on August 19. The total rainfall from the storm exceeded the 1000-year return period in the region. Several thousand landslides were induced by rainfall associated with Hurricane Camille, causing fatalities and destroying infrastructure. We apply a distributed transient response model for regional slope stability analysis to shallow landslides. Initiation points of over 3000 debris flows and effects of flooding from this storm are applied to the model. Geotechnical data used in the calculations are published data from samples of colluvium. Results from these calculations are compared with field observations such as landslide trigger location and timing of debris flows to assess how well the model predicts the spatial and temporal distribution of landslide initiation locations. The model predicts many of the initiation locations in areas where debris flows are observed. Copyright © 2007 John Wiley & Sons, Ltd.

  14. Empirical model with independent variable moments of inertia for triaxial nuclei applied to 76Ge and 192Os

    NASA Astrophysics Data System (ADS)

    Sugawara, M.

    2018-05-01

    An empirical model with independent variable moments of inertia for triaxial nuclei is devised and applied to 76Ge and 192Os. Three intrinsic moments of inertia, J1, J2, and J3, are varied independently as a particular function of spin I within a revised version of the triaxial rotor model so as to reproduce the energy levels of the ground-state, γ, and (in the case of 192Os) Kπ=4+ bands. The staggering in the γ band is well reproduced in both phase and amplitude. Effective γ values are extracted as a function of spin I from the ratios of the three moments of inertia. The eigenfunctions and the effective γ values are subsequently used to calculate the ratios of B(E2) values associated with these bands. Good agreement between the model calculation and the experimental data is obtained for both 76Ge and 192Os.

  15. Radiative transfer theory for active remote sensing of a forested canopy

    NASA Technical Reports Server (NTRS)

    Karam, M. A.; Fung, A. K.

    1989-01-01

    A canopy is modeled as a two-layer medium above a rough interface. The upper layer stands for the forest crown, with the leaves modeled as randomly oriented and distributed disks and needles and the branches modeled as randomly oriented finite dielectric cylinders. The lower layer contains the tree trunks, modeled as randomly positioned vertical cylinders above the rough soil. Radiative-transfer theory is applied to calculate EM scattering from such a canopy, which is expressed in terms of the scattering-amplitude tensors (SATs). For leaves, the generalized Rayleigh-Gans approximation is applied, whereas the branch and trunk SATs are obtained by approximating the inner field by the field inside a similar cylinder of infinite length. The Kirchhoff method is used to calculate the soil SAT. For a plane wave exciting the canopy, the radiative-transfer equations are solved by iteration to the first order in albedo of the leaves and the branches. Numerical results are illustrated as a function of the incidence angle.

  16. Calculating the surface tension of binary solutions of simple fluids of comparable size

    NASA Astrophysics Data System (ADS)

    Zaitseva, E. S.; Tovbin, Yu. K.

    2017-11-01

    A molecular theory based on the lattice gas model (LGM) is used to calculate the surface tension of one- and two-component planar vapor-liquid interfaces of simple fluids. Interaction between nearest neighbors is considered in the calculations. LGM is applied as a tool of interpolation: the parameters of the model are corrected using experimental surface tension data. It is found that the average accuracy of describing the surface tension of pure substances (Ar, N2, O2, CH4) and their mixtures (Ar-O2, Ar-N2, Ar-CH4, N2-CH4) does not exceed 2%.

  17. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed of the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. Also, the relative advantages of the various methods are described when the speed of computation is an important consideration.

  18. Analysis of space radiation exposure levels at different shielding configurations by ray-tracing dose estimation method

    NASA Astrophysics Data System (ADS)

    Kartashov, Dmitry; Shurshakov, Vyacheslav

    2018-03-01

    A ray-tracing method to calculate radiation exposure levels of astronauts for different spacecraft shielding configurations has been developed. The method uses simplified shielding geometry models of the spacecraft compartments together with depth-dose curves. The depth-dose curves can be obtained with different space radiation environment models and radiation transport codes. The spacecraft shielding configurations are described by a set of geometry objects. To calculate the shielding probability functions for each object, its surface is composed of a set of disjoint adjacent triangles that fully cover the surface. Such a description can be applied to objects of any complex shape. The method is applied to the conditions of the MATROSHKA-R space experiment. The experiment has been carried out onboard the ISS from 2004 to 2016. Dose measurements were performed in the ISS compartments with anthropomorphic and spherical phantoms, and with the protective curtain facility that provides additional shielding on the crew cabin wall. The space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms and for additional shielding installed in the compartment are calculated. The data obtained in the experiment and the calculated values agree to within about 15%. Thus the calculation method used has been successfully verified with the MATROSHKA-R experiment data. The ray-tracing radiation dose calculation method can be recommended for estimating the dose distribution in an astronaut's body in different space station compartments and for estimating the efficiency of additional shielding, especially when the exact compartment shielding geometry and the radiation environment for the planned mission are not known.
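
    Conceptually, the ray-tracing estimate averages a depth-dose curve over solid angle using the shielding thickness seen along each ray. A schematic sketch, with placeholder functions standing in for the geometry model and the transport-code-derived depth-dose curve described in the abstract:

      import numpy as np

      def ray_tracing_dose(directions, weights, shielding_thickness, depth_dose):
          """Dose estimate at a point: evaluate the shielding thickness along
          each ray direction, convert it to dose with a depth-dose curve, and
          average over solid angle.
          directions          : iterable of unit vectors sampling 4*pi
          weights             : solid-angle weights for those directions
          shielding_thickness : callable d -> areal density (g/cm^2) along ray d
          depth_dose          : callable t -> dose rate behind thickness t"""
          dose = 0.0
          for d, w in zip(directions, weights):
              t = shielding_thickness(d)
              dose += w * depth_dose(t)
          return dose / np.sum(weights)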

  19. Modeling brine-rock interactions in an enhanced geothermal system deep fractured reservoir at Soultz-Sous-Forets (France): a joint approach using two geochemical codes: FRACHEM and TOUGHREACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, Laurent; Spycher, Nicolas; Xu, Tianfu

    The modeling of coupled thermal, hydrological, and chemical (THC) processes in geothermal systems is complicated by reservoir conditions such as high temperatures, elevated pressures and sometimes the high salinity of the formation fluid. Coupled THC models have been developed and applied to the study of enhanced geothermal systems (EGS) to forecast the long-term evolution of reservoir properties and to determine how fluid circulation within a fractured reservoir can modify its rock properties. In this study, two simulators, FRACHEM and TOUGHREACT, specifically developed to investigate EGS, were applied to model the same geothermal reservoir and to forecast reservoir evolution using their respective thermodynamic and kinetic input data. First, we report the specifics of each of these two codes regarding the calculation of activity coefficients, equilibrium constants and mineral reaction rates. Comparisons of simulation results are then made for a Soultz-type geothermal fluid (ionic strength ≈1.8 molal), with a recent (unreleased) version of TOUGHREACT using either an extended Debye-Hueckel or Pitzer model for calculating activity coefficients, and FRACHEM using the Pitzer model as well. Despite somewhat different calculation approaches and methodologies, we observe a reasonably good agreement for most of the investigated factors. Differences in the calculation schemes typically produce less difference in model outputs than differences in input thermodynamic and kinetic data, with model results being particularly sensitive to differences in ion-interaction parameters for activity coefficient models. Differences in input thermodynamic equilibrium constants, activity coefficients, and kinetics data yield differences in calculated pH and in predicted mineral precipitation behavior and reservoir-porosity evolution. When numerically cooling a Soultz-type geothermal fluid from 200 °C (initially equilibrated with calcite at pH 4.9) to 20 °C and suppressing mineral precipitation, pH values calculated with FRACHEM and TOUGHREACT/Debye-Hueckel decrease by up to half a pH unit, whereas pH values calculated with TOUGHREACT/Pitzer increase by a similar amount. As a result of these differences, calcite solubilities computed using the Pitzer formalism (the more accurate approach) are up to about 1.5 orders of magnitude lower. Because of differences in Pitzer ion-interaction parameters, the calcite solubility computed with TOUGHREACT/Pitzer is also typically about 0.5 orders of magnitude lower than that computed with FRACHEM, with the latter expected to be most accurate. In a second part of this investigation, both models were applied to model the evolution of a Soultz-type geothermal reservoir under high pressure and temperature conditions. By specifying initial conditions reflecting a reservoir fluid saturated with respect to calcite (a reasonable assumption based on field data), we found that THC reservoir simulations with the three models yield similar results, including similar trends and amounts of reservoir porosity decrease over time, thus pointing to the importance of model conceptualization. This study also highlights the critical effect of input thermodynamic data on the results of reactive transport simulations, most particularly for systems involving brines.

  20. Does the anthropometric model influence whole-body center of mass calculations in gait?

    PubMed

    Catena, Robert D; Chen, Szu-Hua; Chou, Li-Shan

    2017-07-05

    Examining whole-body center of mass (COM) motion is one method used to quantify dynamic balance and energy during gait. One common method for estimating the COM position is to apply an anthropometric model to a marker set and calculate the weighted sum from known segmental COM positions. Several anthropometric models are available to perform such a calculation. However, to date there has been no study of how the anthropometric model affects whole-body COM calculations during gait. This information is pertinent to researchers because the choice of anthropometric model may influence gait research findings and currently the trend is to consistently use a single model. In this study we analyzed a single stride of gait data from 103 young adult participants. We compared the whole-body COM motion calculated from 4 different anthropometric models (Plagenhoef et al., 1983; Winter, 1990; de Leva, 1996; Pavol et al., 2002). We found that anterior-posterior motion calculations are relatively unaffected by the anthropometric model. However, medial-lateral and vertical motions are significantly affected by the use of different anthropometric models. Our findings suggest that researchers carefully choose an anthropometric model to fit their study population when interested in medial-lateral or vertical motion of the COM. Our data can provide researchers with a priori information on model selection, depending on the particular variable and how conservative they want to be with COM comparisons between groups. Copyright © 2017 Elsevier Ltd. All rights reserved.
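
    The calculation being compared is itself simple: the whole-body COM is the mass-fraction-weighted sum of the segmental COM positions, with the anthropometric model supplying the fractions. A minimal sketch:

      import numpy as np

      def whole_body_com(segment_positions, mass_fractions):
          """Whole-body centre of mass as the mass-fraction-weighted sum of
          segmental COM positions.
          segment_positions : (n_segments, 3) array of segmental COM coordinates
          mass_fractions    : (n_segments,) array from the chosen anthropometric
                              table (should sum to 1)."""
          p = np.asarray(segment_positions, float)
          w = np.asarray(mass_fractions, float)
          return (w[:, None] * p).sum(axis=0) / w.sum()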

  1. NOx formation in combustion of gaseous fuel in ejection burner

    NASA Astrophysics Data System (ADS)

    Rimár, Miroslav; Kulikov, Andrii

    2016-06-01

    The aim of this work is to prepare a model for studying NOx formation in the combustion of gaseous fuels. NOx formation is one of the main ecological problems today, as nitrogen oxides are a major cause of acid rain. The ANSYS model was designed, based on the calculation, to provide complete combustion and good mixing of the fuel and air. The current model is suitable for studying NOx formation and the influence of different NOx reduction methods. Applying the designed model should save calculation and research time, as well as cost, since the burner characteristics do not need to be measured.

  2. Seismic hazard, risk, and design for South America

    USGS Publications Warehouse

    Petersen, Mark D.; Harmsen, Stephen; Jaiswal, Kishor; Rukstales, Kenneth S.; Luco, Nicolas; Haller, Kathleen; Mueller, Charles; Shumway, Allison

    2018-01-01

    We calculate seismic hazard, risk, and design criteria across South America using the latest data, models, and methods to support public officials, scientists, and engineers in earthquake risk mitigation efforts. Updated continental scale seismic hazard models are based on a new seismicity catalog, seismicity rate models, evaluation of earthquake sizes, fault geometry and rate parameters, and ground‐motion models. Resulting probabilistic seismic hazard maps show peak ground acceleration, modified Mercalli intensity, and spectral accelerations at 0.2 and 1 s periods for 2%, 10%, and 50% probabilities of exceedance in 50 yrs. Ground shaking soil amplification at each site is calculated by considering uniform soil that is applied in modern building codes or by applying site‐specific factors based on VS30 shear‐wave velocities determined through a simple topographic proxy technique. We use these hazard models in conjunction with the Prompt Assessment of Global Earthquakes for Response (PAGER) model to calculate economic and casualty risk. Risk is computed by incorporating the new hazard values amplified by soil, PAGER fragility/vulnerability equations, and LandScan 2012 estimates of population exposure. We also calculate building design values using the guidelines established in the building code provisions. Resulting hazard and associated risk is high along the northern and western coasts of South America, reaching damaging levels of ground shaking in Chile, western Argentina, western Bolivia, Peru, Ecuador, Colombia, Venezuela, and in localized areas distributed across the rest of the continent where historical earthquakes have occurred. Constructing buildings and other structures to account for strong shaking in these regions of high hazard and risk should mitigate losses and reduce casualties from effects of future earthquake strong ground shaking. National models should be developed by scientists and engineers in each country using the best available science.

  3. Modeling the surface evapotranspiration over the southern Great Plains

    NASA Technical Reports Server (NTRS)

    Liljegren, J. C.; Doran, J. C.; Hubbe, J. M.; Shaw, W. J.; Zhong, S.; Collatz, G. J.; Cook, D. R.; Hart, R. L.

    1996-01-01

    We have developed a method to apply the Simple Biosphere Model of Sellers et al. to calculate the surface fluxes of sensible heat and water vapor at high spatial resolution over the domain of the US DOE's Cloud and Radiation Testbed (CART) in Kansas and Oklahoma. The CART, which is within the GCIP area of interest for the Mississippi River Basin, is an extensively instrumented facility operated as part of the DOE's Atmospheric Radiation Measurement (ARM) program. Flux values calculated with our method will be used to provide lower boundary conditions for numerical models to study the atmosphere over the CART domain.

  4. Using time-dependent density functional theory in real time for calculating electronic transport

    NASA Astrophysics Data System (ADS)

    Schaffhauser, Philipp; Kümmel, Stephan

    2016-01-01

    We present a scheme for calculating electronic transport within the propagation approach to time-dependent density functional theory. Our scheme is based on solving the time-dependent Kohn-Sham equations on grids in real space and real time for a finite system. We use absorbing and antiabsorbing boundaries for simulating the coupling to a source and a drain. The boundaries are designed to minimize the effects of quantum-mechanical reflections and electrical polarization build-up, which are the major obstacles when calculating transport by applying an external bias to a finite system. We show that the scheme can readily be applied to real molecules by calculating the current through a conjugated molecule as a function of time. By comparing to literature results for the conjugated molecule and to analytic results for a one-dimensional model system we demonstrate the reliability of the concept.

  5. Geometrical optics approach in liquid crystal films with three-dimensional director variations.

    PubMed

    Panasyuk, G; Kelly, J; Gartland, E C; Allender, D W

    2003-04-01

    A formal geometrical optics approach (GOA) to the optics of nematic liquid crystals whose optic axis (director) varies in more than one dimension is described. The GOA is applied to the propagation of light through liquid crystal films whose director varies in three spatial dimensions. As an example, the GOA is applied to the calculation of light transmittance for the case of a liquid crystal cell which exhibits the homeotropic to multidomainlike transition (HMD cell). Properties of the GOA solution are explored, and comparison with the Jones calculus solution is also made. For variations on a smaller scale, where the Jones calculus breaks down, the GOA provides a fast, accurate method for calculating light transmittance. The results of light transmittance calculations for the HMD cell based on the director patterns provided by two methods, direct computer calculation and a previously developed simplified model, are in good agreement.

  6. Impact cratering calculations. Part 1: Early time results

    NASA Technical Reports Server (NTRS)

    Thomsen, J. M.; Sauer, F. N.; Austin, M. G.; Ruhl, S. F.; Shultz, P. H.; Orphal, D. L.

    1979-01-01

    Early-time two-dimensional finite-difference calculations of laboratory-scale hypervelocity impacts of 0.3 g spherical 2024 aluminum projectiles into homogeneous plasticene clay targets were performed. Analysis of the resulting material motions showed that energy and momentum were coupled quickly from the aluminum projectile to the target material. In the process of coupling, some of the plasticene clay target was vaporized while the projectile became severely deformed. The velocity flow field developed within the target was shown to have features similar to those found in calculations of near-surface explosion cratering. Specific application of Maxwell's analytic Z-Model showed that this model can be used to describe the early-time flow fields resulting from the impact cratering calculations as well, provided the flow field centers are located beneath the target surface and most of the projectile momentum is dissipated before the model is applied.

  7. The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.

    ERIC Educational Resources Information Center

    Gilbert, R.

    1979-01-01

    Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)

  8. Radiant heat exchange calculations in radiantly heated and cooled enclosures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, K.S.; Zhang, P.

    1995-08-01

    This paper presents the development of a three-dimensional mathematical model to compute the radiant heat exchange between surfaces separated by a transparent and/or opaque medium. The model formulation accommodates arbitrary arrangements of the interior surfaces, as well as arbitrary placement of obstacles within the enclosure. The discrete ordinates radiation model is applied and has the capability to analyze the effect of irregular geometries and diverse surface temperatures and radiative properties. The model is verified by comparing calculated heat transfer rates to heat transfer rates determined from the exact radiosity method for four different enclosures. The four enclosures were selected to provide a wide range of verification. This three-dimensional model based on the discrete ordinates method can be applied to a building to assist the design engineer in sizing a radiant heating system. By coupling this model with a convective and conductive heat transfer model and a thermal comfort model, the comfort levels throughout the room can be easily and efficiently mapped for a given radiant heater location. In addition, objects such as airplanes, trucks, furniture, and partitions can be easily incorporated to determine their effect on the performance of the radiant heating system.

  9. An Analytical Approach to Obtaining JWL Parameters from Cylinder Tests

    NASA Astrophysics Data System (ADS)

    Sutton, Ben; Ferguson, James

    2015-06-01

    An analytical method for determining parameters for the JWL equation of state (EoS) from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated parameters and pressure-volume (p-V) curves agree with those produced by hydro-code modelling. The calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-V curve. The calculated and model energies are 8.64 GPa and 8.76 GPa respectively.

  10. Public biobanks: calculation and recovery of costs.

    PubMed

    Clément, Bruno; Yuille, Martin; Zaltoukal, Kurt; Wichmann, Heinz-Erich; Anton, Gabriele; Parodi, Barbara; Kozera, Lukasz; Bréchot, Christian; Hofman, Paul; Dagher, Georges

    2014-11-05

    A calculation grid developed by an international expert group was tested across biobanks in six countries to evaluate costs for collections of various types of biospecimens. The assessment yielded a tool for setting specimen-access prices that were transparently related to biobank costs, and the tool was applied across three models of collaborative partnership. Copyright © 2014, American Association for the Advancement of Science.

  11. 75 FR 3418 - Airworthiness Directives; British Aerospace Regional Aircraft Model HP.137 Jetstream Mk.1...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-21

    ... tightening torque are contained in APPH SB 32-76 Revision 1. As a result, pistons which were previously... formula for calculating the piston safe life. This calculation and a revised end fitting tightening torque... piston rod adjacent to the eye-end. This was caused by excessive torque which had been applied to the eye...

  12. GPU-based Green’s function simulations of shear waves generated by an applied acoustic radiation force in elastic and viscoelastic models

    NASA Astrophysics Data System (ADS)

    Yang, Yiqun; Urban, Matthew W.; McGough, Robert J.

    2018-05-01

    Calculations of shear waves induced by an acoustic radiation force are very time-consuming on desktop computers, and high-performance graphics processing units (GPUs) achieve dramatic reductions in the computation time for these simulations. The acoustic radiation force is calculated using the fast near field method and the angular spectrum approach, and then the shear waves are calculated in parallel with Green's functions on a GPU. This combination enables rapid evaluation of shear waves for push beams with different spatial samplings and for apertures with different f/#. Relative to shear wave simulations that evaluate the same algorithm on an Intel i7 desktop computer, a high-performance nVidia GPU reduces the time required for these calculations by factors of 45 and 700 when applied to elastic and viscoelastic shear wave simulation models, respectively. These GPU-accelerated simulations were also compared to measurements in different viscoelastic phantoms, and the results are similar.

  13. Numerical analysis of stress effects on Frank loop evolution during irradiation in austenitic Fe&z.sbnd;Cr&z.sbnd;Ni alloy

    NASA Astrophysics Data System (ADS)

    Tanigawa, Hiroyasu; Katoh, Yutai; Kohyama, Akira

    1995-08-01

    Effects of applied stress on the early stages of interstitial-type Frank loop evolution were investigated by both numerical calculation and irradiation experiments. The final objective of this research is to propose a comprehensive model of complex stress effects on microstructural evolution under various conditions. In the experimental part of this work, the microstructural analysis revealed that differences in resolved normal stress caused differences in the nucleation rates of Frank loops on {111} crystallographic family planes, and that with increasing external applied stress the total nucleation rate of Frank loops increased. A numerical calculation was carried out primarily to evaluate the validity of models of stress effects on the nucleation processes of Frank loop evolution. The calculation is based on rate equations that describe the evolution of point defects, small point defect clusters and Frank loops. The rate equations of Frank loop evolution were formulated for {111} planes, considering the effects of resolved normal stress on the clustering of small point defects and on the growth of Frank loops separately. The experimental results and the predictions of the numerical calculation were in good qualitative agreement.

  14. Non-Tidal Ocean Loading Correction for the Argentinean-German Geodetic Observatory Using an Empirical Model of Storm Surge for the Río de la Plata

    NASA Astrophysics Data System (ADS)

    Oreiro, F. A.; Wziontek, H.; Fiore, M. M. E.; D'Onofrio, E. E.; Brunini, C.

    2018-05-01

    The Argentinean-German Geodetic Observatory is located 13 km from the Río de la Plata, in an area that is frequently affected by storm surges that can change the level of the river by over ±3 m. Water-level information from seven tide gauge stations located in the Río de la Plata is used to calculate, every hour, an empirical model of water heights (tidal + non-tidal component) and an empirical model of storm surge (non-tidal component) for the period 01/2016-12/2016. Using the SPOTL software, the gravimetric response of the models and the tidal response are calculated, showing that for the observatory location the range of the tidal component (3.6 nm/s²) is only 12% of the range of the non-tidal component (29.4 nm/s²). The gravimetric response of the storm surge model is subtracted from the superconducting gravimeter observations, after applying the traditional corrections, and a reduction of 7% of the RMS is obtained. The wavelet transform is applied to the same series, before and after the non-tidal correction, and a clear decrease in the spectral energy in the periods between 2 and 12 days is identified between the series. Using the same software, East, North and Up displacements are calculated, and ranges of 3, 2, and 11 mm are obtained, respectively. The residuals obtained after applying the non-tidal correction clearly reveal the influence of rain events on the superconducting gravimeter observations, indicating the need to analyze this and other hydrological and geophysical effects.

  15. Verification of ARES transport code system with TAKEDA benchmarks

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with a difference of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with a deviation of less than 2% for region-averaged fluxes in all cases. These results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.

  16. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    PubMed

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate, in the laboratory, an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision in practical applications. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks in the laboratory samples were simulated by means of notches of variable depth. Using the ultrasound transmission time ratio data, curve fits of the analytical model were carried out, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from the laboratory datasets were subsequently applied to auscultate the evolution of crack depth after microwave exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].

  17. Prediction of Radical Scavenging Activities of Anthocyanins Applying Adaptive Neuro-Fuzzy Inference System (ANFIS) with Quantum Chemical Descriptors

    PubMed Central

    Jhin, Changho; Hwang, Keum Taek

    2014-01-01

    The radical scavenging activity of anthocyanins is well known, but only a few studies have approached it quantum chemically. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting the radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-based QSAR models were developed using quantum chemical descriptors of anthocyanins calculated by the semi-empirical PM6 and PM7 methods. The electron affinity (A) and electronegativity (χ) of the flavylium cation, and the ionization potential (I) of the quinoidal base, were significantly correlated with the radical scavenging activities of anthocyanins. These descriptors were used as independent variables for the QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized over 100 learning epochs. The models constructed using descriptors calculated by PM6 and PM7 both had good prediction efficiency, with Q-squared values of 0.82 and 0.86, respectively. PMID:25153627
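
    The Q-squared statistic used above is the leave-one-out cross-validated analogue of R-squared, Q² = 1 − PRESS/TSS. The sketch below computes it for a plain linear model on random descriptor data; the linear model stands in for the ANFIS stage, and the data are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def q_squared(model, X, y):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS / TSS."""
    y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    press = np.sum((y - y_pred) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / tss

# Random stand-ins for the three descriptors (A, chi, I) and measured activities
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(0.0, 0.05, size=20)
print(f"Q^2 = {q_squared(LinearRegression(), X, y):.2f}")
```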

  18. The electronegativity equalization method and the split charge equilibration applied to organic systems: parametrization, validation, and comparison.

    PubMed

    Verstraelen, Toon; Van Speybroeck, Veronique; Waroquier, Michel

    2009-07-28

    An extensive benchmark of the electronegativity equalization method (EEM) and the split charge equilibration (SQE) model on a very diverse set of organic molecules is presented. These models efficiently compute atomic partial charges and are used in the development of polarizable force fields. The predicted partial charges depend on empirical parameters, which are calibrated to reproduce results from quantum mechanical calculations. Recently, SQE was presented as an extension of the EEM that obtains the correct size dependence of the molecular polarizability. In this work, 12 parametrization protocols are applied to each model and the optimal parameters are benchmarked systematically. The training data for the empirical parameters comprise MP2/Aug-CC-pVDZ calculations on 500 organic molecules containing the elements H, C, N, O, F, S, Cl, and Br. These molecules were selected by an ingenious and autonomous protocol from an initial set of almost 500,000 small organic molecules. It is clear that the SQE model outperforms the EEM in all benchmark assessments. When Hirshfeld-I charges are used for the calibration, the SQE model optimally reproduces the molecular electrostatic potential from the ab initio calculations. Applications to chain molecules, i.e., alkanes, alkenes, and alpha alanine helices, confirm that the EEM gives rise to a divergent behavior of the polarizability, while the SQE model shows the correct trends. We conclude that the SQE model is an essential component of a polarizable force field, showing several advantages over the original EEM.
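
    For orientation, the sketch below solves the textbook EEM equations (electronegativity equalization with a charge-conservation constraint enforced via a Lagrange multiplier) for a toy three-atom system. The χ and η values are made up, atomic units are assumed for the off-diagonal Coulomb terms, and nothing here reproduces the paper's calibrated parameters or the SQE extension.

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Solve the EEM equations: minimize E(q) = sum(chi_i*q_i) + 0.5*sum(eta_i*q_i^2)
    + sum_{i<j} q_i*q_j/r_ij, subject to sum(q_i) = total_charge (atomic units)."""
    chi = np.asarray(chi, dtype=float)
    n = chi.size
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    hardness = 1.0 / np.where(r > 0.0, r, np.inf)   # off-diagonal Coulomb terms
    hardness[np.diag_indices(n)] = eta              # diagonal atomic hardness
    # Lagrange-multiplier linear system enforcing charge conservation
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = hardness
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.concatenate([-chi, [total_charge]])
    return np.linalg.solve(A, b)[:n]

# Toy three-atom example with made-up chi/eta values (a.u.)
chi = [0.30, 0.45, 0.45]
eta = [0.80, 1.00, 1.00]
coords = np.array([[0.0, 0.0, 0.0], [1.8, 0.0, 0.0], [-1.8, 0.0, 0.0]])
print(eem_charges(chi, eta, coords))
```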

  19. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Scott E., E-mail: sedavids@utmb.edu

    Purpose: A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today’s modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a single methodology to independently assess treatment plan dose distributions from the clinical institutions participating in National Cancer Institute trials.

  20. [The numerical Hatze-model: also qualified for calculations on children?].

    PubMed

    Holley, Stephanie; Adamec, Jiri; Praxl, Norbert; Schönpflug, Markus; Graw, Matthias

    2005-01-01

    The aim of this study was to find out whether the Hatze model, which was specifically designed for adults, is also suitable for calculations on children. By means of this program it is possible to calculate various parameters of the human body. After the collection of data and analysis of the results according to Hatze, it becomes evident that this model provides good results only for the calculation of the total body mass. As regards the body segments, there are significant under- and overestimations. The same applies to the calculation of mean body density. There is indeed a significant gender dimorphism indicating that girls have a higher fraction of body fat than boys; however, the values are far below those described in the literature. Due to the formula, the values of the centres of gravity are linear and congruent on both sides of the body. Interpretation of the results is difficult, as there are no valid reference values. Furthermore, the program is not able to take the characteristic shapes and proportions of children into account. For this reason 88% of the children are classified as either pregnant or obese. In summary, the study shows that the present model should not be used for calculations on children and that human models have to be designed specifically for children.

  1. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than is implied by empirical ground motion models. One potential origin of the inflated variability lies in the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and to other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best-fit value of 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global strike-slip earthquakes, and of its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
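
    The corner-frequency bias described above can be illustrated with a plain least-squares fit of a generalized Brune-type spectrum, S(f) = Ω₀ / (1 + (f/fc)ⁿ). This is only a sketch with synthetic data, not the Bayesian hierarchical method of the study; it simply compares a fit with a free falloff rate against one with n fixed at 2 when the true falloff is 1.6.

```python
import numpy as np
from scipy.optimize import curve_fit

def source_spectrum(f, omega0, fc, n):
    """Generalized Brune-type source model: S(f) = Omega0 / (1 + (f/fc)^n)."""
    return omega0 / (1.0 + (f / fc) ** n)

# Synthetic displacement spectrum (illustrative): fc = 0.2 Hz, n = 1.6
f = np.logspace(-2, 1, 200)
rng = np.random.default_rng(2)
obs = source_spectrum(f, 1.0, 0.2, 1.6) * rng.lognormal(0.0, 0.1, f.size)

# Free falloff rate n versus n fixed at 2: the fitted fc differs
popt_free, _ = curve_fit(source_spectrum, f, obs, p0=[1.0, 0.1, 2.0])
popt_fix2, _ = curve_fit(lambda f, omega0, fc: source_spectrum(f, omega0, fc, 2.0),
                         f, obs, p0=[1.0, 0.1])
print("free n  (omega0, fc, n):", popt_free)
print("fixed n (omega0, fc)   :", popt_fix2)
```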

  2. Analysis of different models for atmospheric correction of meteosat infrared images. A new approach

    NASA Astrophysics Data System (ADS)

    Pérez, A. M.; Illera, P.; Casanova, J. L.

    A comparative study of several atmospheric correction models has been carried out. As primary data, atmospheric profiles of temperature and humidity obtained from radiosoundings on cloud-free days have been used. Special attention has been paid to the model used operationally at the European Space Operations Centre (ESOC) for sea temperature calculations. The atmospheric correction results are expressed in terms of the increase in the brightness temperature and the surface temperature. Differences of up to 1.4 degrees between the corrections obtained from the studied models have been observed. The radiances calculated by the models are also compared with those obtained directly from the satellite; the temperature corrections from the latter are greater than from the former in practically every case. As a result, the operational calibration coefficients should first be recalculated if an atmospheric correction model is to be applied to the satellite data. Finally, a new simplified calculation scheme which may be introduced into any model is proposed.

  3. Limits of Predictability in Commuting Flows in the Absence of Data for Calibration

    PubMed Central

    Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.

    2014-01-01

    The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes, and their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips, with a quantitative functional form to estimate the model parameter when empirical trip data are not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α into the recently proposed parameter-free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and of the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data are available, we show that the proposed model's estimation accuracy is as good as that of other existing models. We validated the model in different regions of the U.S. and then successfully applied it in three different countries. PMID:25012599
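
    For reference, the parameter-free radiation model that the abstract builds on predicts flows from populations alone; the paper's scaling parameter α modifies this form and is not reproduced here. A minimal sketch with made-up numbers:

```python
def radiation_flow(T_i, m_i, n_j, s_ij):
    """Parameter-free radiation model (Simini et al.): expected commuters from
    origin i to destination j, given total outgoing trips T_i, origin population
    m_i, destination population n_j, and population s_ij within the circle of
    radius r_ij centred on i (excluding origin and destination)."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# Toy example: 10,000 commuters leaving an origin of 50,000 people
print(radiation_flow(T_i=10_000, m_i=50_000, n_j=20_000, s_ij=100_000))
```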

  4. A one-dimensional peridynamic model of defect propagation and its relation to certain other continuum models

    NASA Astrophysics Data System (ADS)

    Wang, Linjuan; Abeyaratne, Rohan

    2018-07-01

    The peridynamic model of a solid does not involve spatial gradients of the displacement field and is therefore well suited for studying defect propagation. Here, bond-based peridynamic theory is used to study the equilibrium and steady propagation of a lattice defect - a kink - in one dimension. The material transforms locally, from one state to another, as the kink passes through. The kink is in equilibrium if the applied force is less than a certain critical value that is calculated, and propagates if it exceeds that value. The kinetic relation giving the propagation speed as a function of the applied force is also derived. In addition, it is shown that the dynamical solutions of certain differential-equation-based models of a continuum are the same as those of the peridynamic model provided the micromodulus function is chosen suitably. A formula for calculating the micromodulus function of the equivalent peridynamic model is derived and illustrated. This ability to replace a differential-equation-based model with a peridynamic one may prove useful when numerically studying more complicated problems such as those involving multiple and interacting defects.

  5. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    NASA Astrophysics Data System (ADS)

    Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel

    In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. This numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to deal efficiently with a wide range of differential systems. The generality underlying that electrical equivalence allows circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit simulation package. Overall, this numerical approach allows Josephson differential models to be solved quickly. An empirical application to the study of the Josephson model completes the paper.
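
    The underlying junction dynamics can also be sketched directly in Python rather than in PSpice: the dimensionless RCSJ equation β_c φ'' + φ' + sin φ = i_dc + i_ac sin(Ω t) is integrated and the FFT of the voltage (proportional to φ') is inspected, a broadband spectrum being the usual hint of chaos. The parameter values are illustrative, and this is not the Network Simulation Method itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless RCSJ junction: beta_c*phi'' + phi' + sin(phi) = i_dc + i_ac*sin(Omega*t)
beta_c, i_dc, i_ac, Omega = 25.0, 0.3, 0.7, 0.6   # illustrative parameter values

def rcsj(t, y):
    phi, v = y                      # v = dphi/dt is proportional to the junction voltage
    dv = (i_dc + i_ac * np.sin(Omega * t) - v - np.sin(phi)) / beta_c
    return [v, dv]

t = np.linspace(0.0, 1000.0, 100_000)
sol = solve_ivp(rcsj, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)

# FFT of the voltage after discarding the transient; a broadband spectrum hints at chaos
v = sol.y[1][t.size // 2:]
spec = np.abs(np.fft.rfft(v - v.mean()))
freq = np.fft.rfftfreq(v.size, d=t[1] - t[0])
print("dominant frequency:", freq[np.argmax(spec)])
```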

  6. Reconstruction of palaeoatmospheric carbon dioxide using stomatal densities of various beech plants (Fagaceae): testing and application of a mechanistic model

    NASA Astrophysics Data System (ADS)

    Grein, M.; Roth-Nebelsick, A.; Konrad, W.

    2006-12-01

    A mechanistic model (Konrad & Roth-Nebelsick a, in prep.) was applied for the reconstruction of atmospheric carbon dioxide using stomatal densities and photosynthesis parameters of extant and fossil Fagaceae. The model is based on an approach which couples diffusion and the biochemical process of photosynthesis. Atmospheric CO2 is calculated on the basis of stomatal diffusion and photosynthesis parameters of the considered taxa. The considered species include the castanoid Castanea sativa, two quercoids, Quercus petraea and Quercus rhenana, and an intermediate species, Eotrigonobalanus furcinervis. In the case of Quercus petraea, literature data were used. Stomatal data of Eotrigonobalanus furcinervis, Quercus rhenana and Castanea sativa were determined by the authors. Data for the extant Castanea sativa were collected by applying a peeling method and by counting stomatal densities on the digitized images of the peels. Additionally, isotope data of leaf samples of Castanea sativa were determined to estimate the ratio of intercellular to ambient carbon dioxide. The CO2 values calculated by the model (on the basis of stomatal data and measured or estimated biochemical parameters) are in good agreement with literature data, with the exception of the Late Eocene. The results thus demonstrate that the applied approach is principally suitable for reconstructing palaeoatmospheric CO2.

  7. On bifurcation in dynamics of hemispherical resonator gyroscope

    NASA Astrophysics Data System (ADS)

    Volkov, D. Yu.; Galunova, K. V.

    2018-05-01

    A mathematical model of a wave solid-state gyro (HRG) is constructed. The wave pattern of resonant oscillations was studied by applying the normal form method. We calculate the Birkhoff-Gustavson normal form of the unperturbed system.

  8. Gamow-Teller response in the configuration space of a density-functional-theory-rooted no-core configuration-interaction model

    NASA Astrophysics Data System (ADS)

    Konieczka, M.; Kortelainen, M.; Satuła, W.

    2018-03-01

    Background: The atomic nucleus is a unique laboratory in which to study fundamental aspects of the electroweak interaction. This includes the question of the in-medium renormalization of the axial-vector current, which still lacks a satisfactory explanation. Study of the spin-isospin or Gamow-Teller (GT) response may provide valuable information on the quenching of the axial-vector coupling constant as well as on nuclear structure and nuclear astrophysics. Purpose: We have performed a seminal calculation of the GT response by using the no-core configuration-interaction approach rooted in multireference density functional theory (DFT-NCCI). The model properly treats isospin and rotational symmetries and can be applied to calculate both nuclear spectra and transition rates in atomic nuclei, irrespective of their mass and particle-number parity. Methods: The DFT-NCCI calculation proceeds as follows: First, one builds a configuration space by computing the (multi)particle-(multi)hole Slater determinants relevant for a given physical problem. Next, one applies the isospin and angular-momentum projections and performs the isospin and K mixing in order to construct a model space composed of linearly dependent states of good angular momentum. Eventually, one mixes the projected states by solving the Hill-Wheeler-Griffin equation. Results: The method is applied to compute the GT strength distribution in selected N ≈ Z nuclei, including the p-shell 8Li and 8Be nuclei and the well-deformed sd-shell nucleus 24Mg. To demonstrate the flexibility of the approach, we also present a calculation of the superallowed GT β decay in the doubly-magic spherical nucleus 100Sn and of the low-spin spectrum in 100In. Conclusions: It is demonstrated that the DFT-NCCI model is capable of capturing the GT response satisfactorily using a relatively small configuration space, while simultaneously exhausting the GT sum rule. The model, due to its flexibility and broad range of applicability, may serve either as a complement or even as an alternative to other theoretical approaches, including the conventional nuclear shell model.

  9. Agatha: Disentangling period signals from correlated noise in a periodogram framework

    NASA Astrophysics Data System (ADS)

    Feng, F.; Tuomi, M.; Jones, H. R. A.

    2018-04-01

    Agatha is a framework of periodograms to disentangle periodic signals from correlated noise and to solve the two-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. Agatha can be used to select the optimal noise model and to test the consistency of signals in time and can be applied to time series analyses in other astronomical and scientific disciplines. An interactive web implementation of the software is also available at http://agatha.herts.ac.uk/.
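
    Agatha itself is an interactive framework; as a rough stand-in for the periodogram step, the sketch below runs a standard Lomb-Scargle periodogram on a synthetic, unevenly sampled radial-velocity series using astropy. It does not implement Agatha's likelihood-based periodograms or its correlated-noise models.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic unevenly sampled radial-velocity series with a 12.3-day signal
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 500.0, 180))                     # days
rv = 3.0 * np.sin(2 * np.pi * t / 12.3) + rng.normal(0.0, 1.0, t.size)

frequency, power = LombScargle(t, rv).autopower(maximum_frequency=1.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period: {best_period:.2f} d")
```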

  10. A thermochemical model of radiation damage and annealing applied to GaAs solar cells

    NASA Technical Reports Server (NTRS)

    Conway, E. J.; Walker, G. H.; Heinbockel, J. H.

    1981-01-01

    Calculations of the equilibrium conditions for continuous radiation damage and thermal annealing are reported. The calculations are based on a thermochemical model developed to analyze the incorporation of point imperfections in GaAs, modified so that the radiation, rather than high temperature and arsenic atmospheric pressure, produces the native lattice defects. The concentrations of a set of defects, including vacancies, divacancies, and impurity-vacancy complexes, are calculated as a function of temperature. Minority carrier lifetimes, short circuit current, and efficiency are deduced for a range of equilibrium temperatures. The results indicate that GaAs solar cells could have a mission life which is not greatly limited by radiation damage.

  11. Multiphase-field model of small strain elasto-plasticity according to the mechanical jump conditions

    NASA Astrophysics Data System (ADS)

    Herrmann, Christoph; Schoof, Ephraim; Schneider, Daniel; Schwab, Felix; Reiter, Andreas; Selzer, Michael; Nestler, Britta

    2018-04-01

    We introduce a small strain elasto-plastic multiphase-field model based on the mechanical jump conditions. A rate-independent J_2 plasticity model with linear isotropic hardening and without kinematic hardening is applied as an example. In general, any physically nonlinear mechanical model is compatible with the procedure presented here. In contrast to models with interpolated material parameters, the proposed model is able to apply a different nonlinear mechanical constitutive equation to each phase separately. The Hadamard compatibility condition and the static force balance are employed as homogenization approaches to calculate the phase-inherent stresses and strains. Several verification cases are discussed. The applicability of the proposed model is demonstrated by simulations of the martensitic transformation with quantitative parameters.
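
    The J_2 model with linear isotropic hardening mentioned above is commonly integrated with a radial-return (return-mapping) update. The sketch below is a textbook single-phase version of that update (after Simo and Hughes), not the multiphase-field homogenization of the paper; the material parameters in the example call are made up.

```python
import numpy as np

def j2_radial_return(eps, eps_p_n, alpha_n, G, K, sigma_y, H):
    """One radial-return update for small-strain J2 plasticity with linear
    isotropic hardening. eps: total strain (3x3), eps_p_n: plastic strain from
    the previous step, alpha_n: accumulated equivalent plastic strain."""
    I = np.eye(3)
    eps_e_trial = eps - eps_p_n
    vol = np.trace(eps_e_trial)
    s_trial = 2.0 * G * (eps_e_trial - vol / 3.0 * I)      # deviatoric trial stress
    norm_s = np.linalg.norm(s_trial)
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * alpha_n)
    if f_trial <= 0.0:                                     # elastic step
        return K * vol * I + s_trial, eps_p_n, alpha_n
    n = s_trial / norm_s                                   # return direction
    dgamma = f_trial / (2.0 * G + 2.0 / 3.0 * H)           # consistency parameter
    s = s_trial - 2.0 * G * dgamma * n
    return (K * vol * I + s,
            eps_p_n + dgamma * n,
            alpha_n + np.sqrt(2.0 / 3.0) * dgamma)

# Illustrative call: uniaxial strain step with made-up steel-like parameters (MPa)
eps = np.diag([2.0e-3, -0.6e-3, -0.6e-3])
stress, eps_p, alpha = j2_radial_return(eps, np.zeros((3, 3)), 0.0,
                                        G=80e3, K=160e3, sigma_y=250.0, H=10e3)
print(np.round(stress, 1))
```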

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzhioev, Alan A., E-mail: dzhioev@theor.jinr.ru; Vdovin, A. I., E-mail: vdovin@theor.jinr.ru; Stoyanov, Ch., E-mail: stoyanov@inrne.bas.bg

    We combine the thermal QRPA approach with the Skyrme energy density functional theory (Skyrme–TQRPA) for modelling the process of electron capture on nuclei in the supernova environment. For a sample nucleus, 56Fe, the Skyrme–TQRPA approach is applied to analyze thermal effects on the strength function of the GT+ transitions which dominate electron capture at E_e ≤ 30 MeV. Several Skyrme interactions are used in order to verify the sensitivity of the obtained results to the Skyrme force parameters. Finite-temperature cross sections are calculated and the results are compared with those of other model calculations.

  13. YORP: Influence on Rotation Rate

    NASA Astrophysics Data System (ADS)

    Golubov, A. A.; Krugly, Yu. N.

    2010-06-01

    We have developed a semi-analytical model for calculating the angular acceleration of asteroids due to the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect. The calculation of the YORP effect has been generalized to the case of elliptic orbits. It has been shown that the acceleration does not depend on the thermal inertia of the asteroid's surface. The model was applied to the asteroid 1620 Geographos and yielded an acceleration of 2×10^{-18} s^{-2}. This value is close to the acceleration obtained from photometric observations of Geographos by Durech et al. [1].

  14. Pitfalls in Prediction Modeling for Normal Tissue Toxicity in Radiation Therapy: An Illustration With the Individual Radiation Sensitivity and Mammary Carcinoma Risk Factor Investigation Cohorts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbah, Chamberlain, E-mail: chamberlain.mbah@ugent.be; Department of Mathematical Modeling, Statistics, and Bioinformatics, Faculty of Bioscience Engineering, Ghent University, Ghent; Thierens, Hubert

    Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts of breast cancer patients with similar characteristics and radiation therapy treatments, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409). The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation, owing to overfitting of the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with the largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the two cohorts, illustrating that overcoming overfitting does not completely solve the problem of replication failure of prediction models. Conclusions: Overfitting and cohort heterogeneity are the two main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.
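
    The gap between a naïve inAUC and a cross-validated inAUC is easy to reproduce on synthetic data. The sketch below fits an L1-penalized (LASSO-type) logistic regression with scikit-learn and compares the resubstitution AUC with a 5-fold cross-validated AUC; the data and the penalty strength are invented, and this is not the cohorts' actual predictor set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for one cohort: 400 patients, 10 candidate predictors
rng = np.random.default_rng(4)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, size=400) > 0).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Naive inAUC: the training data are reused for evaluation (optimistically biased)
naive_auc = roc_auc_score(y, lasso.fit(X, y).decision_function(X))

# Cross-validated inAUC within the same (synthetic) cohort
cv_auc = cross_val_score(lasso, X, y, scoring="roc_auc", cv=5).mean()
print(f"naive inAUC = {naive_auc:.2f}, cross-validated inAUC = {cv_auc:.2f}")
```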

  15. Optical modeling of fiber organic photovoltaic structures using a transmission line method.

    PubMed

    Moshonas, N; Stathopoulos, N A; O'Connor, B T; Bedeloglu, A Celik; Savaidis, S P; Vasiliadis, S

    2017-12-01

    An optical model has been developed and evaluated for the calculation of the external quantum efficiency of cylindrical fiber photovoltaic structures. The model is based on transmission line theory and has been applied to single and bulk heterojunction fiber photovoltaic cells. Using this model, optimum design characteristics have been proposed for both configurations, and a comparison with experimental results has been carried out.

  16. Electromagnetic absorption in a multilayered slab model of tissue under near-field exposure conditions.

    PubMed

    Chatterjee, I; Hagmann, M J; Gandhi, O P

    1980-01-01

    The electromagnetic energy deposited in a semi-infinite slab model consisting of skin, fat, and muscle layers is calculated for both plane-wave and near-field exposures. The plane-wave spectrum (PWS) approach is used to calculate the energy deposited in the model by fields present due to leakage from equipment using electromagnetic energy. This analysis applies to near-field exposures where coupling of the target to the leakage source can be neglected. Calculations were made for 2,450 MHz, at which frequency the layered slab adequately models flat regions of the human body. Resonant absorption due to layering is examined as a function of the skin and fat thicknesses for plane-wave exposure and as a function of the physical extent of the near-field distribution. Calculations show that for fields that are nearly constant over at least a free-space wavelength, the energy deposition (for the skin, fat, and muscle combination that gives resonant absorption) is equal to or less than that resulting from plane-wave exposure, but is appreciably greater than that obtained for a homogeneous muscle slab model.

  17. Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices

    NASA Astrophysics Data System (ADS)

    Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The

    2018-01-01

    ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.

  18. The induced electric field due to a current transient

    NASA Astrophysics Data System (ADS)

    Beck, Y.; Braunstein, A.; Frankental, S.

    2007-05-01

    Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. This approach is based on a known current wave-pair model representing the lightning current wave. The model presented describes the lightning current wave either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach, described in this paper, is based on simple expressions (applying Coulomb's law) compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges moving at constant velocity only. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the problem discussed.

  19. Cohen's Kappa and classification table metrics 2.0: An ArcView 3.x extension for accuracy assessment of spatially explicit models

    Treesearch

    Jeff Jenness; J. Judson Wynne

    2005-01-01

    In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
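
    Cohen's kappa itself is a one-line computation once the confusion matrix is tabulated: κ = (p_o − p_e) / (1 − p_e), with p_o the observed agreement and p_e the agreement expected by chance. A minimal sketch with an invented 2×2 presence/absence table:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: model, columns: reference)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_o = np.trace(confusion) / total                              # observed agreement
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Illustrative 2x2 presence/absence confusion matrix
print(round(cohens_kappa([[40, 10], [5, 45]]), 3))   # -> 0.7
```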

  20. A system for 3D representation of burns and calculation of burnt skin area.

    PubMed

    Prieto, María Felicidad; Acha, Begoña; Gómez-Cía, Tomás; Fondón, Irene; Serrano, Carmen

    2011-11-01

    In this paper a computer-based system for burnt surface area estimation (BAI) is presented. First, a 3D model of a patient, adapted to age, weight, gender and constitution, is created. On this 3D model, physicians represent both the burns and the burn depth, allowing the burnt surface area to be automatically calculated by the system. Each patient's model, as well as photographs and burn area estimations, can be stored, so these data can be included in the patient's clinical records for further review. Validation of this system was performed. In a first experiment, artificial paper patches of known size were attached to different parts of the body of 37 volunteers. A panel of 5 experts estimated the extent of the patches using the Rule of Nines, while our system estimated the area of the "artificial burn". To test the null hypothesis, Student's t-test was applied to the collected data. In addition, the intraclass correlation coefficient (ICC) was calculated and a value of 0.9918 was obtained, demonstrating that the reliability of the program in calculating the area is 99%. In a second experiment, the burnt skin areas of 80 patients were calculated using the BAI system and the Rule of Nines. A comparison between these two measuring methods was performed via Student's t-test and the ICC. The hypothesis of null difference between both measures holds only for deep dermal burns, and the ICC is significantly different, indicating that area estimation by classical techniques can result in a wrong diagnosis of the burnt surface.

  1. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulties with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables the calculation of the neutron flux with a detailed geometry model, but requires very long sampling times to provide a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using this flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.

  2. Ab initio method for calculating total cross sections

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Schneider, B. I.; Temkin, A.

    1993-01-01

    A method for calculating total cross sections without formally including nonelastic channels is presented. The idea is to use a one-channel T-matrix variational principle with a complex correlation function. The derived T matrix is therefore not unitary. Elastic scattering is calculated from |T|^2, while total scattering is derived from the imaginary part of T using the optical theorem. The method is applied to the spherically symmetric model of electron-hydrogen scattering. No spurious structure arises; results for sigma(el) and sigma(total) are in excellent agreement with the calculations of Callaway and Oza (1984). The method has wide potential applicability.
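
    The optical theorem used above relates the total cross section to the imaginary part of the forward scattering amplitude, sigma_total = (4*pi/k) * Im f(0). A minimal sketch with illustrative numbers in atomic units:

```python
import numpy as np

def total_cross_section(k, f0_imag):
    """Optical theorem: sigma_total = (4*pi/k) * Im f(theta=0),
    with k the wavenumber and f the forward scattering amplitude."""
    return 4.0 * np.pi * f0_imag / k

# Illustrative values in atomic units (not from the cited calculation)
print(total_cross_section(k=1.0, f0_imag=0.35))
```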

  3. Nonlinear Simulation of the Tooth Enamel Spectrum for EPR Dosimetry

    NASA Astrophysics Data System (ADS)

    Kirillov, V. A.; Dubovsky, S. V.

    2016-07-01

    Software was developed where initial EPR spectra of tooth enamel were deconvoluted based on nonlinear simulation, line shapes and signal amplitudes in the model initial spectrum were calculated, the regression coefficient was evaluated, and individual spectra were summed. Software validation demonstrated that doses calculated using it agreed excellently with the applied radiation doses and the doses reconstructed by the method of additive doses.

  4. Ecological Footprint and Ecosystem Services Models: A Comparative Analysis of Environmental Carrying Capacity Calculation Approach in Indonesia

    NASA Astrophysics Data System (ADS)

    Subekti, R. M.; Suroso, D. S. A.

    2018-05-01

    Calculation of environmental carrying capacity can be done by various approaches. The selection of an appropriate approach determines the success of determining and applying environmental carrying capacity. This study aimed to compare the ecological footprint approach and the ecosystem services approach for calculating environmental carrying capacity. It attempts to describe two relatively new models that require further explanation if they are used to calculate environmental carrying capacity. In their application, attention needs to be paid to their respective advantages and weaknesses. Conceptually, the ecological footprint model is more complete than the ecosystem services model, because it describes the supply and demand of resources, including supportive and assimilative capacity of the environment, and measurable output through a resource consumption threshold. However, this model also has weaknesses, such as not considering technological change and resources beneath the earth’s surface, as well as the requirement to provide trade data between regions for calculating at provincial and district level. The ecosystem services model also has advantages, such as being in line with strategic environmental assessment (SEA) of ecosystem services, using spatial analysis based on ecoregions, and a draft regulation on calculation guidelines formulated by the government. Meanwhile, weaknesses are that it only describes the supply of resources, that the assessment of the different types of ecosystem services by experts tends to be subjective, and that the output of the calculation lacks a resource consumption threshold.

  5. Modelling crystal plasticity by 3D dislocation dynamics and the finite element method: The Discrete-Continuous Model revisited

    NASA Astrophysics Data System (ADS)

    Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.

    2014-02-01

    A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.

  6. Molecular Modeling for Calculation of Mechanical Properties of Epoxies with Moisture Ingress

    NASA Technical Reports Server (NTRS)

    Clancy, Thomas C.; Frankland, Sarah J.; Hinkley, J. A.; Gates, T. S.

    2009-01-01

    Atomistic models of epoxy structures were built in order to assess the effect of crosslink degree, moisture content and temperature on the calculated properties of a typical representative generic epoxy. Each atomistic model had approximately 7000 atoms and was contained within a periodic boundary condition cell with edge lengths of about 4 nm. Four atomistic models were built spanning a range of crosslink degree and moisture content. Each of these structures was simulated at three temperatures: 300 K, 350 K, and 400 K. Elastic constants were calculated for these structures by monitoring the stress tensor as a function of strain deformations applied to the periodic boundary conditions. The mechanical properties showed reasonably consistent behavior with respect to these parameters. The moduli decreased with decreasing crosslink degree and with increasing temperature. The moduli also generally decreased with increasing moisture content, although this effect was not as consistent as that seen for temperature and crosslink degree.
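
    Extracting an elastic constant from the simulated stress-strain response amounts to a linear fit in the small-strain regime. A minimal sketch with invented stress values; in practice the full set of constants comes from the complete stress tensor under several deformation modes.

```python
import numpy as np

def elastic_modulus(strain, stress):
    """Modulus as the slope of a linear fit to small-strain stress data."""
    slope, _intercept = np.polyfit(strain, stress, 1)
    return slope

# Illustrative data: applied strains and corresponding averaged stresses (GPa)
strain = np.array([-0.010, -0.005, 0.000, 0.005, 0.010])
stress = np.array([-0.031, -0.016, 0.000, 0.015, 0.029])
print(f"E ~ {elastic_modulus(strain, stress):.2f} GPa")
```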

  7. CE-QUAL-RIV1: A Dynamic, One-Dimensional (Longitudinal) Water Quality Model for Streams. User’s Manual

    DTIC Science & Technology

    1990-11-01

    Applying the chain rule (Eq. 180), it remains to calculate dB_i/dH_i and dB_{i+1}/dH_{i+1}. Now as the calculation proceeds from node to node... simulation models can be powerful tools for studying these issues. However, to be useful, the water quality model must be properly suited for the problem... G_{N-1}(Q_N, A_N, Q_{N-1}, A_{N-1}) = 0, G_N(Q_N, A_N) = 0 (Eq. 47). The solution of these nonlinear equations can proceed in two ways. First the nonlinear terms

  8. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network which works depending on exact spike timings. Fundamentally, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in this model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  9. Development of a model for on-line control of crystal growth by the AHP method

    NASA Astrophysics Data System (ADS)

    Gonik, M. A.; Lomokhova, A. V.; Gonik, M. M.; Kuliev, A. T.; Smirnov, A. D.

    2007-05-01

    The possibility of applying a simplified 2D model for heat transfer calculations in crystal growth by the axial heat flux close to the phase interface (AHP) method is discussed in this paper. A comparison with global heat transfer calculations using the CGSim software was performed to confirm the accuracy of this model. The simplified model was shown to provide adequate results for the shape of the melt-crystal interface and the temperature field in an opaque crystal (Ge) and a transparent crystal (CsI:Tl). The proposed model is used for identification of the growth setup as a control object, for synthesis of a digital controller (a PID controller at the present stage) and, finally, in on-line simulations of crystal growth control.
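
    The digital controller stage can be illustrated with a minimal positional-form PID update; the gains, sampling interval and temperatures below are invented and unrelated to the controller actually synthesized in the paper.

```python
class DigitalPID:
    """Minimal discrete PID controller (positional form) for a sampled control loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative heater loop: hold the control temperature at 950 K
pid = DigitalPID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
power_correction = pid.update(setpoint=950.0, measurement=947.5)
print(power_correction)
```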

  10. A frost formation model and its validation under various experimental conditions

    NASA Technical Reports Server (NTRS)

    Dietenberger, M. A.

    1982-01-01

    A numerical model used to calculate the frost properties for all regimes of frost growth is described. In the first regime of frost growth, the initial frost density and thickness were modeled from theories of crystal growth. The 'frost point' temperature was modeled as a linear interpolation between the dew point temperature and the fog point temperature, based upon the nucleating capability of the particular condensing surface. For the second regime of frost growth, the diffusion model was adopted with the following enhancements: the generalized correlation for the water frost thermal conductivity was applied to practically all water frost layers, with care taken to ensure that the calculated heat and mass transfer coefficients agreed with experimental measurements of the same coefficients.

  11. Comparing The Effectiveness of a90/95 Calculations (Preprint)

    DTIC Science & Technology

    2006-09-01

    Nachtsheim, John Neter, William Li, Applied Linear Statistical Models, 5th ed., McGraw-Hill/Irwin, 2005. 5. Mood, Graybill and Boes, Introduction... curves is based on methods that are only valid for ordinary linear regression. Requirements for a valid ordinary least-squares regression model: 1. The model must be linear; for example, ... is a linear model; ... is not. 2. Uniform variance (homoscedasticity

  12. Field-dependent critical state of high-Tc superconducting strip simultaneously exposed to transport current and perpendicular magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Cun; He, An; Yong, Huadong

    We present an exact analytical approach for the arbitrary field-dependent critical state of a high-Tc superconducting strip with transport current. The sheet current and flux-density profiles are derived by solving the integral equations, and they agree quite well with experiments. For small transport currents, approximate explicit expressions for the sheet current, flux density and penetration depth for the Kim model are derived based on the mean value theorem for integration. We also extend the results to the field-dependent critical state of a superconducting strip in the simultaneous presence of an applied field and a transport current. The sheet current distributions calculated with the Kim model agree with experiments better than those calculated with the Bean model. Moreover, the lines in the I_a-B_a plane for the Kim model are not monotonic, which is quite different from the Bean model. The results reveal that the maximum transport current in a thin superconducting strip decreases with increasing applied field, an effect that vanishes for the Bean model. The results of this paper are useful for calculating the ac susceptibility and ac loss.

  13. Time-Domain Receiver Function Deconvolution using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.

    2017-12-01

    Receiver Functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces, applying a deconvolution calculation to the radial and vertical seismogram components. A popular method uses a spectral division of both components, which requires human intervention to apply the water-level procedure to avoid instabilities from division by small numbers. One of the most widely used methods is an iterative procedure that estimates the RF peaks and applies a convolution with the vertical component seismogram, comparing the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak estimation failure. In this work a deconvolution algorithm is proposed that uses a Genetic Algorithm (GA) to estimate the RF peaks. This method is processed entirely in the time domain, avoiding the time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in seismogram format for visualization. The RF trace quality is similar for high magnitude events, although there are fewer failures in the RF calculation for smaller events, increasing the overall performance for a high number of events per station.

  14. Calculation of the radial electric field with RF sheath boundary conditions in divertor geometry

    NASA Astrophysics Data System (ADS)

    Gui, B.; Xia, T. Y.; Xu, X. Q.; Myra, J. R.; Xiao, X. T.

    2018-02-01

    The equilibrium electric field that results from an imposed DC bias potential, such as that driven by a radio frequency (RF) sheath, is calculated using a new minimal two-field model in the BOUT++ framework. Biasing, using an RF-modified sheath boundary condition, is applied to an axisymmetric limiter, and a thermal sheath boundary is applied to the divertor plates. The penetration of the bias potential into the plasma is studied with a minimal self-consistent model that includes the physics of vorticity (charge balance), ion polarization currents, force balance with E×B, ion diamagnetic flow (ion pressure gradient) and parallel electron charge loss to the thermal and biased sheaths. It is found that a positive radial electric field forms in the scrape-off layer and it smoothly connects across the separatrix to the force-balanced radial electric field in the closed flux surface region. The results are in qualitative agreement with the experiments. Plasma convection related to the E×B net flow in front of the limiter is also obtained from the calculation.

  15. Economic impact of providing workplace influenza vaccination. A model and case study application at a Brazilian pharma-chemical company.

    PubMed

    Burckel, E; Ashraf, T; de Sousa Filho, J P; Forleo Neto, E; Guarino, H; Yauti, C; Barreto F de, B; Champion, L

    1999-11-01

    To develop and apply a model to assess the economic value of a workplace influenza programme from the perspective of the employer. The model calculated the avoided costs of influenza, including treatment costs, lost productivity, lost worker added value and the cost of replacing workers. Subtracted from this benefit were the costs associated with a vaccination programme, including administrative costs, the time needed to give the vaccine, and lost productivity due to adverse reactions. The framework of the model can be applied to any company to estimate the cost-benefit of an influenza immunisation programme. The model developed was applied to 4030 workers in the core divisions of a Brazilian pharma-chemical company. The model determined a net benefit of $US121,441 [129,335 Brazilian reals ($Brz)], or $US35.45 ($Brz37.75) per vaccinated employee (1997 values). The cost-benefit ratio was 1:2.47. The calculations were subjected to a battery of 1-way and 2-way sensitivity analyses, which determined that a net benefit would be retained as long as the vaccine cost remained below $US45.40 ($Brz48.40) or the vaccine was at least 32.5% effective. Other alterations would retain a net benefit as well, including several combinations of incidence rate and vaccine effectiveness. The analysis suggests that providing an influenza vaccination programme can yield a substantial net benefit for an employer, although the size of the benefit will depend upon who normally absorbs the costs of treating influenza and compensating workers for lost work time due to illness, as well as on the type of company in which the immunisation programme is applied.
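
    The core bookkeeping of the model is a subtraction of programme costs from avoided costs. The sketch below uses aggregate inputs back-calculated to be roughly consistent with the reported net benefit and 1:2.47 ratio; they are illustrative totals, not the paper's itemized inputs.

```python
def vaccination_net_benefit(avoided_costs, programme_costs):
    """Net benefit and benefit-to-cost ratio of a workplace influenza programme."""
    net = avoided_costs - programme_costs
    ratio = avoided_costs / programme_costs
    return net, ratio

# Illustrative aggregate totals in US dollars (not the paper's itemized inputs)
net, ratio = vaccination_net_benefit(avoided_costs=204_000.0, programme_costs=82_600.0)
print(f"net benefit ~ ${net:,.0f}, cost-benefit ratio ~ 1:{ratio:.2f}")
```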

  16. Development and application of a geospatial wildfire exposure and risk calculation tool

    Treesearch

    Matthew P. Thompson; Jessica R. Haas; Julie W. Gilbertson-Day; Joe H. Scott; Paul Langowski; Elise Bowne; David E. Calkin

    2015-01-01

    Applying wildfire risk assessment models can inform investments in loss mitigation and landscape restoration, and can be used to monitor spatiotemporal trends in risk. Assessing wildfire risk entails the integration of fire modeling outputs, maps of highly valued resources and assets (HVRAs), characterization of fire effects, and articulation of relative importance...

  17. Forward calculation of gravity and its gradient using polyhedral representation of density interfaces: an application of spherical or ellipsoidal topographic gravity effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chen, Chao

    2018-02-01

    A density interface modeling method using polyhedral representation is proposed to construct 3-D models of spherical or ellipsoidal interfaces such as the terrain surface of the Earth and applied to forward calculating gravity effect of topography and bathymetry for regional or global applications. The method utilizes triangular facets to fit undulation of the target interface. The model maintains almost equal accuracy and resolution at different locations of the globe. Meanwhile, the exterior gravitational field of the model, including its gravity and gravity gradients, is obtained simultaneously using analytic solutions. Additionally, considering the effect of distant relief, an adaptive computation process is introduced to reduce the computational burden. Then features and errors of the method are analyzed. Subsequently, the method is applied to an area for the ellipsoidal Bouguer shell correction as an example and the result is compared to existing methods, which shows our method provides high accuracy and great computational efficiency. Suggestions for further developments and conclusions are drawn at last.

  18. An asymmetric mesoscopic model for single bulges in RNA

    NASA Astrophysics Data System (ADS)

    de Oliveira Martins, Erik; Weber, Gerald

    2017-10-01

    Simple one-dimensional DNA or RNA mesoscopic models are of interest for their computational efficiency while retaining the key elements of the molecular interactions. However, they only deal with perfectly formed DNA or RNA double helices and consider the intra-strand interactions to be the same on both strands. This makes it difficult to describe highly asymmetric structures such as bulges and loops and, for instance, prevents the application of mesoscopic models to determine RNA secondary structures. Here we derived the conditions for the Peyrard-Bishop mesoscopic model to overcome these limitations and applied it to the calculation of single bulges, the smallest and simplest of these asymmetric structures. We found that these theoretical conditions can indeed be applied to any situation where stacking asymmetry needs to be considered. The full set of parameters for group I RNA bulges was determined from experimental melting temperatures using an optimization procedure, and we also calculated average opening profiles for several RNA sequences. We found that guanosine bulges show the strongest perturbation on their neighboring base pairs, considerably reducing the on-site interactions of their neighboring base pairs.

  19. The Mixing of Regolith on the Moon and Beyond; A Model Refreshed

    NASA Astrophysics Data System (ADS)

    Costello, E.; Ghent, R. R.; Lucey, P. G.

    2017-12-01

    Meteoritic impactors constantly mix the lunar regolith, affecting stratigraphy, the lifetime of rays and other anomalous surface features, and the burial, exposure, and break down of volatiles and rocks. In this work we revisit the pioneering regolith mixing model presented by Gault et al. (1974), with updated assumptions and input parameters. Our updates significantly widen the parameter space and allow us to explore mixing as it is driven by different impactors in different materials (e.g. radar-dark halos and melt ponds). The updated treatment of micrometeorites suggests a very high rate of processing at the immediate lunar surface, with implications for rock breakdown and regolith production on melt ponds. We find that the inclusion of secondary impacts has a very strong effect on the rate and magnitude of mixing at all depths and timescales. Our calculations are in good agreement with the timescale of reworking in the top 2-3 cm of regolith that was predicted by observations of LROC temporal pairs and by the depth profile of 26Al abundance in Apollo drill cores. Further, our calculations with secondaries included are consistent with the depth profile of in situ exposure age calculated from Is/FeO and cosmic track abundance in Apollo deep drill cores down to 50cm. The mixing we predict is also consistent with the erasure of density anomalies, or `cold spots', observed in the top decimeters of regolith by LRO Diviner, and the 1Gyr lifetime of 1-10m thick Copernican rays. This exploration of Moon's surface evolution has profound implications for our understanding of other planetary bodies. We take advantage of this computationally inexpensive analytic model and apply it to describe mixing on a variety of bodies across the solar system; including asteroids, Mercury, and Europa. We use the results of ongoing studies that describe porosity calculations and cratering laws in porous asteroid-like material to explore the reworking rate experienced by an asteroid. On Mercury, we apply this model to describe the rate at which reworking depletes water ice and calculate the maximum age of Mercury's polar ice deposits. We apply the model to Europa to understand the impact portion of its regolith evolution and provide insight into the sampling zone intended for a future Europa lander.

  20. New functions for estimating AOT40 from ozone passive sampling

    NASA Astrophysics Data System (ADS)

    De Marco, Alessandra; Vitale, Marcello; Kilic, Umit; Serengil, Yusuf; Paoletti, Elena

    2014-10-01

    AOT40 is the present European standard to assess whether ozone (O3) pollution is a risk for vegetation, and is calculated by using hourly O3 concentrations from automatic devices, i.e. by active monitoring. Passive O3 monitoring is widespread in remote environments. The Loibl function estimates the mean daily O3 profile and thus hourly O3 concentrations, and has been proposed to calculate AOT40 from passive samplers. We investigated whether this function performs well over inhomogeneous terrain such as that of Italy. Data from 75 active monitoring stations (28 rural and 47 suburban) were analysed over two years. AOT40 was calculated from hourly O3 data either measured by active measurements or estimated by the Loibl function applied to biweekly averages of active-measurement hourly data. The latter approach simulated the data obtained from passive monitoring, as two weeks is the usual exposure window of passive samplers. Residuals between AOT40 estimated by applying the Loibl function and AOT40 calculated from active monitoring ranged from +241% to -107%, suggesting that the Loibl function is inadequate to accurately predict AOT40 in Italy. New statistical models were built for both rural and suburban areas by using non-linear models and including predictors that can be easily measured at forest sites. The modelled AOT40 values strongly depended on physical predictors (latitude and longitude), alone or in combination with other predictors, such as seasonal cumulated ozone and elevation. These results suggest that multivariate, non-linear regression models work better than the Loibl-based approach in estimating AOT40.
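
    For reference, AOT40 accumulates the excess of hourly ozone concentrations over a 40 ppb threshold during daylight hours of the growing season. The minimal sketch below computes it from an hourly series; the 08:00-20:00 daylight window and the synthetic example data are assumptions for illustration, not values taken from the study.

```python
import numpy as np
import pandas as pd

def aot40(hourly_o3_ppb: pd.Series, start_hour: int = 8, end_hour: int = 20) -> float:
    """Accumulated ozone exposure over a 40 ppb threshold (AOT40, in ppb h).

    hourly_o3_ppb: hourly O3 concentrations in ppb, indexed by timestamp and
    already restricted to the relevant growing season."""
    hours = hourly_o3_ppb.index.hour
    daylight = hourly_o3_ppb[(hours >= start_hour) & (hours < end_hour)]
    return float((daylight - 40.0).clip(lower=0.0).sum())

# Example with one synthetic week of hourly data.
idx = pd.date_range("2014-06-01", periods=7 * 24, freq="h")
rng = np.random.default_rng(0)
o3 = pd.Series(40 + 20 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
               + rng.normal(0, 5, len(idx)), index=idx)
print(f"AOT40 = {aot40(o3):.0f} ppb h")
```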

  1. The Swedish radiological environmental protection regulations applied in a review of a license application for a geological repository for spent nuclear fuel.

    PubMed

    Andersson, Pål; Stark, Karolina; Xu, Shulan; Nordén, Maria; Dverstorp, Björn

    2017-11-01

    For the first time, a system for specific consideration of radiological environmental protection has been applied in a major license application in Sweden. In 2011 the Swedish Nuclear Fuel & Waste Management Co. (SKB) submitted a license application for construction of a geological repository for spent nuclear fuel at the Forsmark site. The license application is supported by a post-closure safety assessment, which in accordance with regulatory requirements includes an assessment of environmental consequences. SKB's environmental risk assessment uses the freely available ERICA Tool. Environmental media activity concentrations needed as input to the tool are calculated by means of complex biosphere modelling based on site-specific information gathered from site investigations, as well as from supporting modelling studies and projections of future biosphere conditions in response to climate change and land rise due to glacial rebound. SKB's application is currently being reviewed by the Swedish Radiation Safety Authority (SSM). In addition to a traditional document review, with the aim of determining whether SKB's models are relevant, correctly implemented and adequately parametrized, SSM has performed independent modelling in order to gain confidence in the robustness of SKB's assessment. First, SSM has used alternative stylized reference biosphere models to calculate environmental activity concentrations for use in subsequent exposure calculations. Second, an alternative dose model (RESRAD-BIOTA) has been used to calculate doses to biota that are compared with SKB's calculations with the ERICA Tool. SSM's experience from this review is that existing tools for environmental dose assessment can be used to show compliance with Swedish legislation. However, care is needed when site-representative species are assessed with the aim of contrasting them with generic reference organisms. The alternative modelling of environmental concentrations resulted in much lower concentrations compared to SKB's results; however, SSM judges that SKB's approach, which is conservative in this respect, is relevant for a screening assessment. SSM also concludes that there are large differences in the dose rates calculated for different organisms depending on which tool is used, although neither tool gives systematically higher values. Finally, independent regulatory modelling has proven valuable for SSM's review in gaining understanding and confidence in SKB's assessment presented in the license application. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Calculating phase equilibrium properties of plasma pseudopotential model using hybrid Gibbs statistical ensemble Monte-Carlo technique

    NASA Astrophysics Data System (ADS)

    Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.

    2015-11-01

    Earlier, a two-component pseudopotential plasma model, which we call the "shelf Coulomb" model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behaviour in the critical-point region for both methods. The Gibbs ensemble technique lets us estimate the melting curve position and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.

  3. Mathematical Modeling of Loop Heat Pipes

    NASA Technical Reports Server (NTRS)

    Kaya, Tarik; Ku, Jentung; Hoang, Triem T.; Cheung, Mark L.

    1998-01-01

    The primary focus of this study is to model steady-state performance of a Loop Heat Pipe (LHP). The mathematical model is based on the steady-state energy balance equations at each component of the LHP. The heat exchange between each LHP component and the surroundings is taken into account. Both convection and radiation environments are modeled. The loop operating temperature is calculated as a function of the applied power at a given loop condition. Experimental validation of the model is attempted by using two different LHP designs. The mathematical model is tested at different sink temperatures and at different elevations of the loop. The comparison of the calculations and experimental results showed very good agreement (within 3%). This method proved to be a useful tool in studying steady-state LHP performance characteristics.
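
    As a rough illustration of a steady-state balance of this kind, the sketch below solves a single lumped energy balance for the loop operating temperature as a function of applied power, with heat rejected to the sink and exchanged with the ambient. It is not the component-level model of the paper, and all conductances and temperatures are placeholder values.

```python
def loop_operating_temperature(q_applied, t_sink, t_ambient,
                               g_condenser, g_ambient):
    """Lumped steady-state energy balance for the loop operating temperature:
       q_applied + g_ambient*(t_ambient - t_op) = g_condenser*(t_op - t_sink)
    Conductances in W/K, temperatures in deg C, power in W."""
    return ((q_applied + g_ambient * t_ambient + g_condenser * t_sink)
            / (g_ambient + g_condenser))

# Illustrative values only (not the LHP designs of the study):
for q in (25, 50, 100, 200, 400):
    t_op = loop_operating_temperature(q_applied=q, t_sink=-10.0,
                                      t_ambient=22.0,
                                      g_condenser=8.0, g_ambient=0.5)
    print(f"Q = {q:4d} W  ->  loop temperature ~ {t_op:5.1f} deg C")
```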

  4. Controls on sediment entrainment shear stress determined from X-Ray CT scans and a 3D moment-balance model

    NASA Astrophysics Data System (ADS)

    Hodge, R. A.; Voepel, H.; Leyland, J.; Sear, D. A.; Ahmed, S. I.

    2017-12-01

    The shear stress at which a grain is entrained is determined by the balance between the applied fluid forces, and the resisting forces of the grain. Recent research has tended to focus on the applied fluid forces; calculating the resisting forces requires measurement of the geometry of in-situ sediment grains, which has previously been very difficult to obtain. We have used CT scanning to measure the grain geometry of in-situ water-worked grains, and from these data have calculated metrics that are relevant to grain entrainment. We use these metrics to parameterise a new, fully 3D, moment-balance model of grain entrainment. Inputs to the model are grain dimensions, exposed area, elevation relative to the velocity profile, the location of grain-grain contact points, and contact area with fine matrix sediment. The new CT data and model mean that assumptions of previous grain-entrainment models (e.g. spherical grains, 1D measurements of protrusion, entrainment in the downstream direction) are no longer necessary. The model calculates the critical shear stress for each possible set of contact points, and outputs the lowest value. Consequently, metrics including pivot angle and the direction of grain entrainment are now model outputs, rather than having to be pre-determined. We use the CT data and model to calculate the critical shear stress of 1092 in-situ grains from baskets that were buried and water-worked in a flume prior to scanning. We find that there is no consistent relationship between relative grain size (D/D50) and pivot angle, whereas there is a negative relationship between D/D50 and protrusion. Out of all measured metrics, critical shear stress is most strongly controlled by protrusion. This finding suggests that grain-scale topographic data could be used to estimate grain protrusion and hence improve estimates of critical shear stress.

  5. A novel methodology for interpreting air quality measurements from urban streets using CFD modelling

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming

    2011-09-01

    In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, by applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentrations for all wind scenarios were compared with hourly observed NO x data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and directions data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentration at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements, and help assess the representativeness of monitoring locations for population exposure studies.
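
    Step (iii) of the methodology, combining the CFD-calculated concentration fields according to the observed frequency of each dispersion scenario, reduces to a weighted average. A minimal sketch with placeholder field values and scenario frequencies:

```python
import numpy as np

def combine_scenarios(conc_fields, frequencies):
    """Frequency-weighted average of CFD-calculated concentration fields.

    conc_fields: array of shape (n_scenarios, n_receptors) holding the
                 normalised concentration at each receptor for each
                 wind-speed/direction scenario.
    frequencies: number of hours (or fraction of time) each scenario was
                 observed during the averaging period."""
    conc_fields = np.asarray(conc_fields, dtype=float)
    w = np.asarray(frequencies, dtype=float)
    w = w / w.sum()                 # normalise to weights summing to 1
    return w @ conc_fields          # shape (n_receptors,)

# Example: 3 receptors in the street, 4 dispersion scenarios (placeholder data).
fields = [[1.2, 0.8, 0.5],
          [0.9, 1.1, 0.7],
          [0.4, 0.6, 1.3],
          [0.7, 0.9, 1.0]]
hours = [200, 120, 300, 100]        # occurrence of each scenario in the period
print(combine_scenarios(fields, hours))
```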

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andronov, V.A.; Zhidov, I.G.; Meskov, E.E.

    The report presents the basic results of computational, theoretical, and experimental efforts in the study of Rayleigh-Taylor, Kelvin-Helmholtz, and Richtmyer-Meshkov instabilities and the turbulent mixing caused by their evolution. The VNIIEF has been conducting these investigations since the late forties. This report is based on data published at different times in Russian and foreign journals. The first part of the report deals with the computational and theoretical techniques currently applied to the description of hydrodynamic instabilities, as well as with the results of several individual problems and their comparison with experiment. These methods can be divided into two types: direct numerical simulation methods and phenomenological methods. The first type includes the regular 2D and 3D gasdynamical techniques as well as techniques based on the small-perturbation approximation and on the incompressible-liquid approximation. The second type comprises techniques based on various phenomenological turbulence models. The second part of the report describes the experimental methods and cites the experimental results of Rayleigh-Taylor and Richtmyer-Meshkov instability studies as well as of turbulent mixing. The applied methods were based on thin-film gaseous models, on jelly models and on liquid layer models. The research was done for plane and cylindrical geometries. As drivers, shock tubes of different designs were used, as well as gaseous explosive mixtures, compressed air and electric wire explosions. The experimental results were used to calibrate the computational and theoretical techniques. The authors did not aim at covering all VNIIEF research done in this field of science; to a great extent the choice of the material depended on the personal contributions of the authors to these studies.

  7. Feasibility of Obtaining Quantitative 3-Dimensional Information Using Conventional Endoscope: A Pilot Study

    PubMed Central

    Hyun, Jong Jin; Keum, Bora; Seo, Yeon Seok; Kim, Yong Sik; Jeen, Yoon Tae; Lee, Hong Sik; Um, Soon Ho; Kim, Chang Duck; Ryu, Ho Sang; Lim, Jong-Wook; Woo, Dong-Gi; Kim, Young-Joong; Lim, Myo-Taeg

    2012-01-01

    Background/Aims Three-dimensional (3D) imaging is gaining popularity and has been partly adopted in laparoscopic surgery or robotic surgery but has not been applied to gastrointestinal endoscopy. As a first step, we conducted an experiment to evaluate whether images obtained by conventional gastrointestinal endoscopy could be used to acquire quantitative 3D information. Methods Two endoscopes (GIF-H260) were used in a Borrmann type I tumor model made of clay. The endoscopes were calibrated by correcting the barrel distortion and perspective distortion. The obtained images were converted to gray-level images, and the characteristics of the images were obtained by edge detection. Finally, 3D parameters were measured by using epipolar geometry, two-view geometry, and the pinhole camera model. Results The focal length (f) of the endoscope at 30 mm was 258.49 pixels. The two endoscopes were fixed at a predetermined distance of 12 mm (d12). After matching and calculating the disparity (v2-v1), which was 106 pixels, the calculated length between the camera and the object (L) was 29.26 mm. The height of the object projected onto the image (h) was then applied to the pinhole camera model, and the resulting object height and width (H) were 38.21 mm and 41.72 mm, respectively. Measurements were conducted from 2 different locations. The measurement errors ranged from 2.98% to 7.00% with the current Borrmann type I tumor model. Conclusions It was feasible to obtain the parameters necessary for 3D analysis and to apply the data to epipolar geometry with a conventional gastrointestinal endoscope to calculate the size of an object. PMID:22977798
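
    The reported numbers follow from standard stereo triangulation and the pinhole camera model. The sketch below reproduces the quoted camera-object distance of 29.26 mm from f = 258.49 pixels, the 12 mm baseline, and the 106-pixel disparity; the projected pixel sizes used for the object dimensions are not given in the abstract and are back-calculated placeholders chosen so the outputs approximately match the reported 38.21 mm and 41.72 mm.

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Camera-object distance from two parallel views (standard triangulation)."""
    return f_px * baseline_mm / disparity_px

def object_size(size_px, distance_mm, f_px):
    """Back-project an image dimension to object space with the pinhole model."""
    return size_px * distance_mm / f_px

f = 258.49          # focal length in pixels (calibration at 30 mm)
baseline = 12.0     # mm between the two endoscope positions (d12)
disparity = 106.0   # pixels, v2 - v1 after matching

L = depth_from_disparity(f, baseline, disparity)
print(f"camera-object distance L = {L:.2f} mm")   # ~29.26 mm, as reported

# Illustrative projected sizes in pixels (placeholders, not from the paper).
for label, px in (("height", 337.5), ("width", 368.5)):
    print(f"object {label}: {object_size(px, L, f):.2f} mm")
```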

  8. Nuclear deformation in the laboratory frame

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.; Bertsch, G. F.

    2018-01-01

    We develop a formalism for calculating the distribution of the axial quadrupole operator in the laboratory frame within the rotationally invariant framework of the configuration-interaction shell model. The calculation is carried out using a finite-temperature auxiliary-field quantum Monte Carlo method. We apply this formalism to isotope chains of even-mass samarium and neodymium nuclei and show that the quadrupole distribution provides a model-independent signature of nuclear deformation. Two technical advances are described that greatly facilitate the calculations. The first is to exploit the rotational invariance of the underlying Hamiltonian to reduce the statistical fluctuations in the Monte Carlo calculations. The second is to determine quadrupole invariants from the distribution of the axial quadrupole operator in the laboratory frame. This allows us to extract effective values of the intrinsic quadrupole shape parameters without invoking an intrinsic frame or a mean-field approximation.

  9. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear autoregressive moving average (ARMA) model, into a physical model, the nonlinear damped oscillator. The oscillator parameters, the growth and decay rates, the oscillation frequencies and the coupling strength to the input, are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
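
    As an illustration of the filter-to-oscillator mapping in the linear case, the sketch below converts AR(2) coefficients into the decay rate and frequency of an equivalent damped oscillator via the poles of the characteristic polynomial. The coefficients and time step are illustrative; in the paper's nonlinear ARMA treatment such parameters additionally depend on the state and input.

```python
import numpy as np

def ar2_to_oscillator(a1, a2, dt):
    """Map AR(2) coefficients, x_t = a1*x_{t-1} + a2*x_{t-2} + input terms,
    to the decay rate and angular frequency of an equivalent damped oscillator."""
    poles = np.roots([1.0, -a1, -a2]).astype(complex)  # roots of z^2 - a1*z - a2
    s = np.log(poles) / dt          # continuous-time poles, s = ln(z)/dt
    return -s.real, np.abs(s.imag)  # decay rates (1/time), frequencies (rad/time)

# Illustrative coefficients for an hourly-sampled Dst-like series.
decay, omega = ar2_to_oscillator(a1=1.6, a2=-0.7, dt=1.0)   # dt in hours
print("decay rates  [1/h]  :", np.round(decay, 3))
print("frequencies [rad/h] :", np.round(omega, 3))
```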

  10. Development of quantitative atomic modeling for tungsten transport study using LHD plasma with tungsten pellet injection

    NASA Astrophysics Data System (ADS)

    Murakami, I.; Sakaue, H. A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2015-09-01

    Quantitative tungsten study with reliable atomic modeling is important for successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding the tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from plasmas of the large helical device (LHD) with tungsten pellet injection. We found that extreme ultraviolet (EUV) emission of W24+ to W33+ ions at 1.5-3.5 nm is sensitive to electron temperature and useful for examining the tungsten behavior in edge plasmas. We can reproduce measured EUV spectra at 1.5-3.5 nm with spectra calculated from the tungsten atomic model and obtain charge state distributions of tungsten ions in LHD plasmas at different temperatures around 1 keV. Our model is applied to calculate the unresolved transition array (UTA) seen in tungsten spectra at 4.5-7 nm. We analyze the effect of configuration interaction on the population kinetics related to the UTA structure in detail and find the importance of two-electron-one-photon transitions between the 4p^5 4d^(n+1) and 4p^6 4d^(n-1) 4f configurations. The radiation power rate of tungsten due to line emissions is also estimated with the model and is consistent with other models within a factor of 2.

  11. Validation of a national hydrological model

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Booker, D. J.; Cattoën, C.

    2016-10-01

    Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
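
    The two performance scores used in this assessment are standard and easy to reproduce. A minimal sketch with a synthetic daily flow series (the sign convention chosen here for percent bias, positive for model overestimation, is an assumption):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Percent bias; positive values indicate overestimation by the model."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

# Example with a short synthetic daily flow series (m^3/s).
obs = np.array([12.0, 9.5, 8.0, 20.0, 35.0, 18.0, 11.0])
sim = np.array([10.5, 9.0, 8.5, 17.0, 40.0, 16.0, 12.0])
print(f"NSE   = {nash_sutcliffe(obs, sim):.2f}")
print(f"PBIAS = {percent_bias(obs, sim):.1f}%")
```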

  12. A methodology for the assessment of inhalation exposure to aluminium from antiperspirant sprays.

    PubMed

    Schwarz, Katharina; Pappa, Gerlinde; Miertsch, Heike; Scheel, Julia; Koch, Wolfgang

    2018-04-01

    Inhalation exposure can occur accidentally when using cosmetic spray products. Usually, a tiered approach is applied for exposure assessment, starting with rather conservative, simplistic calculation models that may be improved with measured data and more refined modelling. Here we report on an advanced methodology to mimic in-use conditions for antiperspirant spray products to provide a more accurate estimate of the amount of aluminium possibly inhaled and taken up systemically, thus contributing to the overall body burden. Four typical products were sprayed onto a skin surrogate in defined rooms. For aluminium, size-related aerosol release fractions, i.e. inhalable, thoracic and respirable, were determined by a mass balance method taking droplet maturation into account. These data were included in a simple two-box exposure model, allowing calculation of the inhaled aluminium dose over 12 min. Systemic exposure doses were calculated for exposure of the deep lung and the upper respiratory tract using the Multiple Path Particle Deposition (MPPD) model. The total systemically available dose of aluminium was in all cases found to be less than 0.5 µg per application. With this study it could be demonstrated that refining the input data of the two-box exposure model with measured data on released airborne aluminium is a valuable approach for analysing the contribution of antiperspirant spray inhalation to total aluminium exposure as part of the overall risk assessment. We suggest that the methodology can also be applied to other exposure modelling approaches for spray products and can be adapted to other, similar use scenarios.
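
    The two-box calculation can be illustrated with a generic well-mixed near-field/far-field model integrated over the 12-minute exposure window. The box volumes, air-exchange and ventilation rates, breathing rate, and emission rate below are placeholders for illustration, not the measured release fractions or room conditions of the study.

```python
def two_box_inhaled_dose(emission_ug_s, t_spray_s, t_total_s,
                         v_near=1.0, v_far=20.0,     # box volumes, m^3
                         beta=0.10, q=0.02,          # exchange and ventilation, m^3/s
                         breathing=3.5e-4,           # inhalation rate, m^3/s (~21 L/min)
                         dt=0.1):
    """Inhaled mass (ug) for a person in the near-field box of a generic
    near-field/far-field model: the spray emits into the near field for
    t_spray_s seconds and exposure is integrated over t_total_s seconds."""
    c_near = c_far = 0.0            # concentrations, ug/m^3
    dose = 0.0
    steps = int(t_total_s / dt)
    for i in range(steps):
        g = emission_ug_s if i * dt < t_spray_s else 0.0
        dc_near = (g + beta * (c_far - c_near)) / v_near
        dc_far = (beta * (c_near - c_far) - q * c_far) / v_far
        c_near += dc_near * dt
        c_far += dc_far * dt
        dose += breathing * c_near * dt
    return dose

# Illustrative only: a 10 s spray releasing 50 ug/s of respirable aluminium,
# with exposure integrated over a 12 min window as in the study design.
print(f"inhaled dose ~ {two_box_inhaled_dose(50.0, 10.0, 12 * 60):.1f} ug")
```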

  13. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    NASA Astrophysics Data System (ADS)

    Fekete, Tamás

    2018-05-01

    Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently, a new scientific-engineering paradigm, structural integrity, has been developing that is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has contributed greatly to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is due to the fact that existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, a research project has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well-grounded foundation for a new modeling framework of structural integrity. This paper presents the first findings of the research project.

  14. An analysis of the feasibility of carbon management policies as a mechanism to influence water conservation using optimization methods.

    PubMed

    Wright, Andrew; Hudson, Darren

    2014-10-01

    Studies of how carbon reduction policies would affect agricultural production have found that there is a connection between carbon emissions and irrigation. Using county-level data, we develop an optimization model that accounts for the gross carbon emitted during production to evaluate how carbon-reducing policies applied to agriculture would affect producers' choices of what to plant and how much to irrigate on the Texas High Plains. Carbon emissions were calculated using carbon equivalent (CE) calculations developed by researchers at the University of Arkansas. Carbon reduction was achieved in the model through a constraint, a tax, or a subsidy. Reducing carbon emissions by 15% resulted in a significant reduction in the amount of water applied to a crop; however, planted acreage changed very little due to a lack of feasible alternative crops. The results show that applying carbon restrictions to agriculture may have important implications for production choices in areas that depend on groundwater resources for agricultural production. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Evaluation of atmospheric nitrogen deposition model performance in the context of U.S. critical load assessments

    NASA Astrophysics Data System (ADS)

    Williams, Jason J.; Chung, Serena H.; Johansen, Anne M.; Lamb, Brian K.; Vaughan, Joseph K.; Beutel, Marc

    2017-02-01

    Air quality models are widely used to estimate pollutant deposition rates and thereby calculate critical loads and critical load exceedances (model deposition > critical load). However, model operational performance is not always quantified specifically to inform these applications. We developed a performance assessment approach designed to inform critical load and exceedance calculations, and applied it to the Pacific Northwest region of the U.S. We quantified wet inorganic N deposition performance of several widely used air quality models, including five different Community Multiscale Air Quality Model (CMAQ) simulations, the Tdep model, and the 'PRISM x NTN' model. Modeled wet inorganic N deposition estimates were compared to wet inorganic N deposition measurements at 16 National Trends Network (NTN) monitoring sites, and to annual bulk inorganic N deposition measurements at Mount Rainier National Park. Model bias (model - observed) and error (|model - observed|) were expressed as a percentage of regional critical load values for diatoms and lichens. This novel approach demonstrated that wet inorganic N deposition bias in the Pacific Northwest approached or exceeded 100% of regional diatom and lichen critical load values at several individual monitoring sites, and approached or exceeded 50% of critical loads when averaged regionally. Even models that adjusted deposition estimates based on deposition measurements to reduce bias, or that spatially interpolated measurement data, had bias that approached or exceeded critical loads at some locations. While wet inorganic N deposition model bias is only one source of uncertainty that can affect critical load and exceedance calculations, the results demonstrate that expressing bias as a percentage of critical loads, at a spatial scale consistent with the calculations, may be a useful exercise for those performing them. It may help decide if model performance is adequate for a particular calculation, help assess confidence in calculation results, and highlight cases where a non-deterministic approach may be needed.
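
    The headline metric, model bias and error expressed as a percentage of a critical load, is a simple per-site calculation. A sketch with hypothetical deposition values and a hypothetical lichen critical load:

```python
import numpy as np

def bias_as_percent_of_critical_load(modelled, observed, critical_load):
    """Express model bias and absolute error in deposition (kg N/ha/yr)
    as a percentage of a critical load, per monitoring site."""
    modelled = np.asarray(modelled, float)
    observed = np.asarray(observed, float)
    bias = modelled - observed
    return 100 * bias / critical_load, 100 * np.abs(bias) / critical_load

# Hypothetical wet inorganic N deposition values (kg N ha^-1 yr^-1) at 4 sites
# and a hypothetical lichen critical load of 2.5 kg N ha^-1 yr^-1.
bias_pct, err_pct = bias_as_percent_of_critical_load(
    modelled=[1.8, 3.1, 2.2, 4.0],
    observed=[1.2, 2.0, 2.5, 2.3],
    critical_load=2.5)
print("bias  (% of CL):", np.round(bias_pct, 1))
print("error (% of CL):", np.round(err_pct, 1))
```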

  16. Grain boundaries in bcc-Fe: a density-functional theory and tight-binding study

    NASA Astrophysics Data System (ADS)

    Wang, Jingliang; Madsen, Georg K. H.; Drautz, Ralf

    2018-02-01

    Grain boundaries (GBs) have a significant influence on material properties. In the present paper, we calculate the energies of eleven low-Σ (Σ ≤ 13) symmetrical tilt GBs and two twist GBs in ferromagnetic bcc iron using first-principles density functional theory (DFT) calculations. The results demonstrate the importance of a sufficient sampling of initial rigid body translations in all three directions. We show that the relative GB energies can be explained by the miscoordination of atoms at the GB region. While the main features of the studied GB structures were captured by previous empirical interatomic potential calculations, it is shown that the absolute values of GB energies calculated were substantially underestimated. Based on DFT-calculated GB structures and energies, we construct a new d-band orthogonal tight-binding (TB) model for bcc iron. The TB model is validated by its predictive power on all the studied GBs. We apply the TB model to block boundaries in lath martensite and demonstrate that the experimentally observed GB character distribution can be explained from the viewpoint of interface energy.

  17. A nonlinear fracture mechanics approach to the growth of small cracks

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.

    1983-01-01

    An analytical model of crack closure is used to study the crack growth and closure behavior of small cracks in plates and at notches. The calculated crack opening stresses for small and large cracks, together with elastic and elastic plastic fracture mechanics analyses, are used to correlate crack growth rate data. At equivalent elastic stress intensity factor levels, calculations predict that small cracks in plates and at notches should grow faster than large cracks because the applied stress needed to open a small crack is less than that needed to open a large crack. These predictions agree with observed trends in test data. The calculations from the model also imply that many of the stress intensity factor thresholds that are developed in tests with large cracks and with load reduction schemes do not apply to the growth of small cracks. The current calculations are based upon continuum mechanics principles and, thus, some crack size and grain structure exist where the underlying fracture mechanics assumptions become invalid because of material inhomogeneity (grains, inclusions, etc.). Admittedly, much more effort is needed to develop the mechanics of a noncontinuum. Nevertheless, these results indicate the importance of crack closure in predicting the growth of small cracks from large crack data.

  18. Use of equivalent spheres to model the relation between radar reflectivity and optical extinction of ice cloud particles.

    PubMed

    Donovan, David Patrick; Quante, Markus; Schlimme, Ingo; Macke, Andreas

    2004-09-01

    The effect of ice crystal size and shape on the relation between radar reflectivity and optical extinction is examined. Discrete-dipole approximation calculations of 95-GHz radar reflectivity and ray-tracing calculations are applied to ice crystals of various habits and sizes. Ray tracing was used primarily to calculate optical extinction and to provide approximate information on the lidar backscatter cross section. The results of the combined calculations are compared with Mie calculations applied to collections of different types of equivalent spheres. Various equivalent sphere formulations are considered, including equivalent radar-lidar spheres, equivalent maximum dimension spheres, equivalent area spheres, and equivalent volume and equivalent effective radius spheres. Marked differences are found with respect to the accuracy of the different formulations, and certain types of equivalent spheres can be used for useful prediction of both the radar reflectivity at 95 GHz and the optical extinction (but not the lidar backscatter cross section) over a wide range of particle sizes. The implications of these results for combined lidar-radar ice cloud remote sensing are discussed.

  19. Thermally activated switching at long time scales in exchange-coupled magnetic grains

    NASA Astrophysics Data System (ADS)

    Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.; Fal, T. J.

    2015-10-01

    Rate coefficients of the Arrhenius-Néel form are calculated for thermally activated magnetic moment reversal for dual layer exchange-coupled composite (ECC) media based on the Langer formalism and are applied to study the sweep rate dependence of M-H hysteresis loops as a function of the exchange coupling I between the layers. The individual grains are modeled as two exchange-coupled Stoner-Wohlfarth particles from which the minimum energy paths connecting the minimum energy states are calculated using a variant of the string method and the energy barriers and attempt frequencies calculated as a function of the applied field. The resultant rate equations describing the evolution of an ensemble of noninteracting ECC grains are then integrated numerically in an applied field with constant sweep rate R = -dH/dt and the magnetization calculated as a function of the applied field H. M-H hysteresis loops are presented for a range of values I for sweep rates 10^5 Oe/s ≤ R ≤ 10^10 Oe/s and a figure of merit that quantifies the advantages of ECC media is proposed. M-H hysteresis loops are also calculated based on the stochastic Landau-Lifshitz-Gilbert equations for 10^8 Oe/s ≤ R ≤ 10^10 Oe/s and are shown to be in good agreement with those obtained from the direct integration of rate equations. The results are also used to examine the accuracy of certain approximate models that reduce the complexity associated with the Langer-based formalism and which provide some useful insight into the reversal process and its dependence on the coupling strength and sweep rate. Of particular interest is the clustering of minimum energy states that are separated by relatively low-energy barriers into "metastates." It is shown that while approximating the reversal process in terms of "metastates" results in little loss of accuracy, it can reduce the run time of a kinetic Monte Carlo (KMC) simulation of the magnetic decay of an ensemble of dual layer ECC media by 2-3 orders of magnitude. The essentially exact results presented in this work for two coupled grains are analogous to the Stoner-Wohlfarth model of a single grain and serve as an important precursor to KMC-based simulation studies on systems of interacting dual layer ECC media.

  20. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    DOE PAGES

    MacDonald, M. J.; Vorberger, J.; Gamboa, E. J.; ...

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. Furthermore, the effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.

  1. Many-Body Spectral Functions from Steady State Density Functional Theory.

    PubMed

    Jacob, David; Kurth, Stefan

    2018-03-14

    We propose a scheme to extract the many-body spectral function of an interacting many-electron system from an equilibrium density functional theory (DFT) calculation. To this end we devise an ideal scanning tunneling microscope (STM) setup and employ the recently proposed steady-state DFT formalism (i-DFT) which allows one to calculate the steady current through a nanoscopic region coupled to two biased electrodes. In our setup, one of the electrodes serves as a probe ("STM tip"), which is weakly coupled to the system we want to measure. In the ideal STM limit of vanishing coupling to the tip, the system is restored to quasi-equilibrium and the normalized differential conductance yields the exact equilibrium many-body spectral function. Calculating this quantity from i-DFT, we derive an exact relation expressing the interacting spectral function in terms of the Kohn-Sham one. As illustrative examples, we apply our scheme to calculate the spectral functions of two nontrivial model systems, namely the single Anderson impurity model and the Constant Interaction Model.

  2. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
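
    TEA's own solver follows the iterative Lagrangian scheme of White et al.; purely to illustrate the underlying idea of Gibbs free-energy minimization subject to elemental abundance constraints, the sketch below solves a toy H-O system with a generic constrained optimizer. The dimensionless chemical potentials are placeholders (real applications take them from thermodynamic tables), and the pressure dependence is folded into those placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Toy H-O system: species H2, O2, H2O with placeholder standard-state
# chemical potentials mu0/RT at one fixed (T, P).
species = ["H2", "O2", "H2O"]
mu0_RT = np.array([-20.0, -25.0, -60.0])       # dimensionless placeholders
# Element stoichiometry matrix: rows = species, columns = (H, O).
A = np.array([[2, 0],
              [0, 2],
              [2, 1]], dtype=float)
b = np.array([2.0, 1.0])                       # elemental abundances of H and O

def gibbs(n):
    """Dimensionless Gibbs free energy of the mixture for mole numbers n."""
    n = np.clip(n, 1e-12, None)                # keep logarithms finite
    return float(np.sum(n * (mu0_RT + np.log(n / n.sum()))))

cons = {"type": "eq", "fun": lambda n: A.T @ n - b}   # element conservation
res = minimize(gibbs, x0=np.full(3, 0.3), bounds=[(1e-10, None)] * 3,
               constraints=cons, method="SLSQP")
for name, ni in zip(species, res.x):
    print(f"{name}: {ni:.4f} mol")
```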

  3. TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver, E-mail: jasmina@physics.ucf.edu

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  4. Substructure Versus Property-Level Dispersed Modes Calculation

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.; Peck, Jeff A.; Bush, T. Jason; Fulcher, Clay W.

    2016-01-01

    This paper calculates the effect of perturbed finite element mass and stiffness values on the eigenvectors and eigenvalues of the finite element model. The structure is perturbed in two ways: at the "subelement" level and at the material property level. In the subelement eigenvalue uncertainty analysis, the mass and stiffness of each subelement are perturbed by a factor before being assembled into the global matrices. In the property-level eigenvalue uncertainty analysis, all material density and stiffness parameters of the structure are perturbed prior to the eigenvalue analysis. The eigenvalue and eigenvector dispersions of each analysis (subelement and property-level) are also calculated using an analytical sensitivity approximation. Two structural models are used to compare these methods: a cantilevered beam model and a model of the Space Launch System. For each structural model it is shown how well the analytical sensitivity modes approximate the exact modes when the uncertainties are applied at the subelement level and at the property level.

  5. Empirical Estimation of Local Dielectric Constants: Toward Atomistic Design of Collagen Mimetic Peptides

    PubMed Central

    Pike, Douglas H.; Nanda, Vikas

    2017-01-01

    One of the key challenges in modeling protein energetics is the treatment of solvent interactions. This is particularly important in the case of peptides, where much of the molecule is highly exposed to solvent due to its small size. In this study, we develop an empirical method for estimating the local dielectric constant based on an additive model of atomic polarizabilities. Calculated values match reported apparent dielectric constants for a series of Staphylococcus aureus nuclease mutants. Calculated constants are used to determine screening effects on Coulombic interactions and to determine solvation contributions based on a modified Generalized Born model. These terms are incorporated into the protein modeling platform protCAD, and benchmarked on a data set of collagen mimetic peptides for which experimentally determined stabilities are available. Computing local dielectric constants using atomistic protein models and the assumption of additive atomic polarizabilities is a rapid and potentially useful method for improving electrostatics and solvation calculations that can be applied in the computational design of peptides. PMID:25784456

  6. Ray-tracing in three dimensions for calculation of radiation-dose calculations. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, D.R.

    1986-05-27

    This thesis addresses several methods of calculating the radiation-dose distribution for use by technicians or clinicians in radiation-therapy treatment planning. It specifically covers the calculation of the effective pathlength of the radiation beam for use in beam models representing the dose distribution. A two-dimensional method by Bentley and Milan is compared to the method of strip trees developed by Duda and Hart, and a three-dimensional algorithm is then built to perform the calculations in three dimensions. The use of prisms conforms easily to the obtained CT scans and provides a means of doing only two-dimensional ray-tracing while performing three-dimensional dose calculations. This method is already being applied and used in actual calculations.

  7. Positron scattering from pyridine

    NASA Astrophysics Data System (ADS)

    Stevens, D.; Babij, T. J.; Machacek, J. R.; Buckman, S. J.; Brunger, M. J.; White, R. D.; García, G.; Blanco, F.; Ellis-Gibbings, L.; Sullivan, J. P.

    2018-04-01

    We present a range of cross-section measurements for the low-energy scattering of positrons from pyridine, for incident positron energies below 20 eV, together with calculations of positron scattering from pyridine using the independent atom model with the screening-corrected additivity rule including interference effects, with dipole rotational excitations accounted for using the Born approximation. Comparisons are made between the experimental measurements and theoretical calculations. For the positronium formation cross section, we also compare with results from a recent empirical model. In general, quite good agreement is seen between the calculations and measurements, although some discrepancies remain which may require further investigation. It is hoped that the present study will stimulate the development of ab initio theoretical methods to be applied to this important scattering system.

  8. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes). In fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.

  9. A simple mathematical model of society collapse applied to Easter Island

    NASA Astrophysics Data System (ADS)

    Bologna, M.; Flores, J. C.

    2008-02-01

    In this paper we consider a mathematical model for the evolution and collapse of the Easter Island society. Based on historical reports, the available primary resources consisted almost exclusively of trees, so we describe the inhabitants and the resources as an isolated dynamical system. A mathematical and numerical analysis of the Easter Island community collapse is performed. In particular, we analyze the critical values of the fundamental parameters and a demographic curve is presented. The technological parameter, quantifying the exploitation of the resources, is calculated and applied to the case of another extinguished civilization (Copán Maya), confirming the consistency of the adopted model.
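
    The abstract does not reproduce the paper's equations; as a generic illustration of an isolated population-resource system of this kind, the sketch below integrates a logistic population whose carrying capacity is tied to a slowly regrowing, harvested resource. All rate constants are illustrative and chosen only to produce an overshoot-and-collapse trajectory, not the fitted values of the study.

```python
import numpy as np

def simulate(p0=100.0, r0=1.0, years=1200, dt=0.1,
             growth=0.03, cap_per_res=12_000.0,
             regrow=0.001, k=1.0, harvest=4e-6):
    """Euler integration of a generic population-resource system:
       dP/dt = growth * P * (1 - P / (cap_per_res * R))
       dR/dt = regrow * R * (1 - R / k) - harvest * P
    R is the resource (forest) fraction, P the population."""
    n = int(years / dt)
    p, r = np.empty(n), np.empty(n)
    p[0], r[0] = p0, r0
    for i in range(1, n):
        cap = max(cap_per_res * r[i - 1], 1e-6)      # carrying capacity tied to R
        dp = growth * p[i - 1] * (1.0 - p[i - 1] / cap)
        dr = regrow * r[i - 1] * (1.0 - r[i - 1] / k) - harvest * p[i - 1]
        p[i] = max(p[i - 1] + dp * dt, 0.0)
        r[i] = max(r[i - 1] + dr * dt, 0.0)
    return p, r

pop, res = simulate()
print(f"peak population ~{pop.max():.0f}, final population ~{pop[-1]:.0f}")
```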

  10. Longitudinal Control for Mengshi Autonomous Vehicle via Cloud Model

    NASA Astrophysics Data System (ADS)

    Gao, H. B.; Zhang, X. Y.; Li, D. Y.; Liu, Y. C.

    2018-03-01

    Dynamic robustness and stability control are requirements for the self-driving of autonomous vehicles. Longitudinal control of autonomous vehicles is a key technique that has drawn the attention of both industry and academia. In this paper, we present a longitudinal control algorithm based on the cloud model for the Mengshi autonomous vehicle to ensure its dynamic stability and tracking performance. An experiment is conducted to test the implementation of the longitudinal control algorithm. Empirical results show that when the longitudinal control algorithm based on the Gauss cloud model is applied to calculate the acceleration, a stable longitudinal control effect is achieved at different driving speeds.

  11. Vibration, performance, flutter and forced response characteristics of a large-scale propfan and its aeroelastic model

    NASA Technical Reports Server (NTRS)

    August, Richard; Kaza, Krishna Rao V.

    1988-01-01

    An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results show that the SR7A propfan simulates the SR7L characteristics.

  12. Steady-state balance model to calculate the indoor climate of livestock buildings, demonstrated for finishing pigs

    NASA Astrophysics Data System (ADS)

    Schauberger, G.; Piringer, M.; Petz, E.

    The indoor climate of livestock buildings is of importance for the well-being and health of animals and their production performance (daily weight gain, milk yield etc). By using a steady-state model for the sensible and latent heat fluxes and the CO2 and odour mass flows, the indoor climate of mechanically ventilated livestock buildings can be calculated. These equations depend on the livestock (number of animals and how they are kept), the insulation of the building and the characteristics of the ventilation system (ventilation rate). Since the model can only be applied to animal houses where the ventilation systems are mechanically controlled (this is the case for a majority of finishing pig units), the calculations were done for an example of a finishing pig unit with 1000 animal places. The model presented used 30 min values of the outdoor parameters temperature and humidity, collected over a 2-year period, as input. The projected environment inside the livestock building was compared with recommended values. The duration of condensation on the inside surfaces was also calculated.
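
    The sensible-heat part of such a balance can be illustrated in a few lines: the steady-state indoor temperature follows from equating the animals' sensible heat output to transmission and ventilation losses. The heat output per pig, envelope conductance, and ventilation rates below are placeholders, not the values of the modelled 1000-place unit.

```python
RHO_CP = 1200.0   # J m^-3 K^-1, approximate volumetric heat capacity of air

def indoor_temperature(t_out, q_sensible, ua_building, vent_rate):
    """Steady-state sensible heat balance of a mechanically ventilated barn:
       q_sensible = ua_building*(t_in - t_out) + vent_rate*RHO_CP*(t_in - t_out)
    solved for the indoor air temperature t_in (deg C)."""
    return t_out + q_sensible / (ua_building + vent_rate * RHO_CP)

# Illustrative 1000-place finishing pig unit (all numbers are placeholders):
q_sens = 1000 * 120.0          # W, ~120 W sensible heat per pig place
ua = 800.0                     # W/K, envelope transmission conductance
for t_out in (-10.0, 0.0, 10.0, 20.0):
    for v in (2.0, 10.0, 30.0):            # m^3/s, minimum to maximum ventilation
        t_in = indoor_temperature(t_out, q_sens, ua, v)
        print(f"t_out={t_out:6.1f} C  vent={v:5.1f} m^3/s  t_in={t_in:5.1f} C")
```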

  13. Model-based coefficient method for calculation of N leaching from agricultural fields applied to small catchments and the effects of leaching reducing measures

    NASA Astrophysics Data System (ADS)

    Kyllmar, K.; Mårtensson, K.; Johnsson, H.

    2005-03-01

    A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in e.g. small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha^-1 yr^-1, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.

  14. Application of Raytracing Through the High Resolution Numerical Weather Model HIRLAM for the Analysis of European VLBI

    NASA Technical Reports Server (NTRS)

    Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco

    2010-01-01

    An important limitation on the precision of the results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delay caused by the neutral atmosphere, see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions which are used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial (0.2 deg. x 0.2 deg.) and high temporal resolution (three hours in prediction mode).

  15. The dispersion releaser technology is an effective method for testing drug release from nanosized drug carriers.

    PubMed

    Janas, Christine; Mast, Marc-Phillip; Kirsamer, Li; Angioni, Carlo; Gao, Fiona; Mäntele, Werner; Dressman, Jennifer; Wacker, Matthias G

    2017-06-01

    The dispersion releaser (DR) is a dialysis-based setup for the analysis of drug release from nanosized drug carriers. It is mounted into dissolution apparatus 2 of the United States Pharmacopoeia. The present study evaluated the DR technique by investigating the release of the model compound flurbiprofen from drug solution and from nanoformulations composed of the drug and the polymer materials poly(lactic acid), poly(lactic-co-glycolic acid) or Eudragit® RSPO. The drug-loaded nanocarriers ranged in size between 185.9 and 273.6 nm and were characterized by a monomodal size distribution (PDI < 0.1). The membrane permeability constants of flurbiprofen were calculated and mathematical modeling was applied to obtain the normalized drug release profiles. To compare the sensitivities of the DR and the dialysis bag technique, the differences in the membrane permeation rates were calculated. Finally, different formulation designs of flurbiprofen were sensitively discriminated using the DR technology. The mechanism of drug release from the nanosized carriers was analyzed by applying two mathematical models described previously, the reciprocal powered time model and the three-parameter model. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Regression modeling of ground-water flow

    USGS Publications Warehouse

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises with answers are included to give the student practice in nearly all of the methods presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  17. Atomic temporal interval relations in branching time: calculation and application

    NASA Astrophysics Data System (ADS)

    Anger, Frank D.; Ladkin, Peter B.; Rodriguez, Rita V.

    1991-03-01

    A practical method of reasoning about intervals in a branching-time model which is dense, unbounded, future-branching, and without rejoining branches is presented. The discussion is based on heuristic constraint-propagation techniques using the relation algebra of binary temporal relations among the intervals over the branching-time model. This technique has been applied with success to models of intervals over linear time by Allen and others, and is of cubic-time complexity. To extend it to branching-time models, it is necessary to calculate compositions of the relations; thus, the table of compositions for the 'atomic' relations is computed, enabling the rapid determination of the composition of arbitrary relations, expressed as disjunctions or unions of the atomic relations.
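
    The way a composition table supports arbitrary (disjunctive) relations can be sketched in a few lines: the composition of two unions of atomic relations is the union of the pairwise atomic compositions. The fragment below uses a tiny, made-up portion of an Allen-style table purely for illustration; the actual branching-time table is far larger.

        # Minimal sketch of composing disjunctive interval relations from a table
        # of atomic compositions (only a tiny, illustrative fragment is shown).
        ATOMIC_COMPOSITION = {
            ("before", "before"): {"before"},
            ("before", "meets"):  {"before"},
            ("meets",  "before"): {"before"},
            ("meets",  "meets"):  {"before"},
        }

        def compose(rel1, rel2):
            """Compose two disjunctive relations (sets of atomic relations) by
            taking the union of the compositions of all atomic pairs."""
            result = set()
            for a in rel1:
                for b in rel2:
                    result |= ATOMIC_COMPOSITION.get((a, b), set())
            return result

        print(compose({"before", "meets"}, {"before"}))   # -> {'before'}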

  18. A coupled CFD and wake model simulation of helicopter rotor in hover

    NASA Astrophysics Data System (ADS)

    Zhao, Qinghe; Li, Xiaodong

    2018-03-01

    The helicopter rotor wake plays a dominant role because it shapes the flow-field structure, and the flow field is very difficult to predict accurately: excessive numerical dissipation eliminates the vortex structure. A hybrid method of CFD and a prescribed wake model was therefore constructed, applying the prescribed wake model as much as possible. In this study the wake vortices were described as a single blade-tip vortex. The coupled model is used to simulate the flow field. Both non-lifting and lifting cases have been calculated with subcritical and supercritical tip Mach numbers. Surface pressure distributions are presented and compared with experimental data, and the calculated results agree well with the experimental data.

  19. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    PubMed

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.

  20. Electrical and fluid transport in consolidated sphere packs

    NASA Astrophysics Data System (ADS)

    Zhan, Xin; Schwartz, Lawrence M.; Toksöz, M. Nafi

    2015-05-01

    We calculate geometrical and transport properties (electrical conductivity, permeability, specific surface area, and surface conductivity) of a family of model granular porous media from an image based representation of its microstructure. The models are based on the packing described by Finney and cover a wide range of porosities. Finite difference methods are applied to solve for electrical conductivity and hydraulic permeability. Two image processing methods are used to identify the pore-grain interface and to test correlations linking permeability to electrical conductivity. A three phase conductivity model is developed to compute surface conductivity associated with the grain-pore interface. Our results compare well against empirical models over the entire porosity range studied. We conclude by examining the influence of image resolution on our calculations.

  1. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    NASA Astrophysics Data System (ADS)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting, based on the bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) information criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Keywords: nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
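
    The error indices named above are simple to compute; the sketch below shows one plausible set of definitions (scaling the relative values by the observed mean is an assumption, since the abstract does not spell the formulas out).

        import numpy as np

        def error_indices(obs, pred):
            """Forecast error indices used to compare models: RMSE, relative RMSE,
            bias, and relative bias (relative values scaled by the observed mean)."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            rmse = np.sqrt(np.mean((pred - obs) ** 2))
            bias = np.mean(pred - obs)
            return {"RMSE": rmse, "rRMSE": rmse / obs.mean(),
                    "BIAS": bias, "rBIAS": bias / obs.mean()}

        print(error_indices([20.1, 21.4, 19.8], [19.6, 22.0, 20.3]))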

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blijderveen, Maarten van; University of Twente, Department of Thermal Engineering, Drienerlolaan 5, 7522 NB Enschede; Bramer, Eddy A.

    Highlights: • We model piloted ignition times of wood and plastics. • The model is applied on a packed bed. • When the air flow is above a critical level, no ignition can take place. - Abstract: To gain insight into the startup of an incinerator, this article deals with piloted ignition. A newly developed model is described to predict the piloted ignition times of wood, PMMA and PVC. The model is based on the lower flammability limit and the adiabatic flame temperature at this limit. The incoming radiative heat flux, sample thickness and moisture content are some of the variables used. Not only the ignition time can be calculated with the model, but also the mass flux and surface temperature at ignition. The ignition times for softwoods and PMMA are mainly under-predicted. For hardwoods and PVC the predicted ignition times agree well with experimental results. Due to significant scatter in the experimental data, the mass flux and surface temperature calculated with the model are hard to validate. The model is applied to the startup of a municipal waste incineration plant. For this process a maximum allowable primary air flow is derived. When the primary air flow is above this maximum air flow, no ignition can be obtained.

  3. Scale-invariant curvature fluctuations from an extended semiclassical gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinamonti, Nicola, E-mail: pinamont@dima.unige.it, E-mail: siemssen@dima.unige.it; INFN Sezione di Genova, Via Dodecaneso 33, 16146 Genova; Siemssen, Daniel, E-mail: pinamont@dima.unige.it, E-mail: siemssen@dima.unige.it

    2015-02-15

    We present an extension of the semiclassical Einstein equations which couple n-point correlation functions of a stochastic Einstein tensor to the n-point functions of the quantum stress-energy tensor. We apply this extension to calculate the quantum fluctuations during an inflationary period, where we take as a model a massive conformally coupled scalar field on a perturbed de Sitter space and describe how a renormalization independent, almost-scale-invariant power spectrum of the scalar metric perturbation is produced. Furthermore, we discuss how this model yields a natural basis for the calculation of non-Gaussianities of the considered metric fluctuations.

  4. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  5. Parameters modelling of amaranth grain processing technology

    NASA Astrophysics Data System (ADS)

    Derkanosova, N. M.; Shelamova, S. A.; Ponomareva, I. N.; Shurshikova, G. V.; Vasilenko, O. A.

    2018-03-01

    The article presents a technique that allows calculating the structure of a multicomponent bakery mixture for the production of enriched products, taking into account the instability of nutrient content, ensuring the fulfilment of technological requirements and, at the same time, considering consumer preferences. The results of modelling and analysis of optimal solutions are given by the example of calculating the structure of a three-component mixture of wheat and rye flour with an enriching component, that is, whole-hulled amaranth flour, applied to the technology of bread from a mixture of rye and wheat flour on a liquid leaven.

  6. Ionization potential depression and optical spectra in a Debye plasma model

    NASA Astrophysics Data System (ADS)

    Lin, Chengliang; Röpke, Gerd; Reinholz, Heidi; Kraeft, Wolf-Dietrich

    2017-11-01

    We show how optical spectra in dense plasmas are determined by the shift of energy levels as well as by the broadening owing to collisions with the plasma particles. In the lowest approximation, the interaction with the plasma particles is described by the RPA dielectric function, leading to the Debye shift of the continuum edge. The bound states remain nearly unshifted; their broadening is calculated in the Born approximation. The roles of ionization potential depression and of the Inglis-Teller effect are shown. The model calculations have to be improved by going beyond the lowest (RPA) approximation when applied to WDM spectra.

  7. Tight-binding model for borophene and borophane

    NASA Astrophysics Data System (ADS)

    Nakhaee, M.; Ketabi, S. A.; Peeters, F. M.

    2018-03-01

    Starting from the simplified linear combination of atomic orbitals method in combination with first-principles calculations, we construct a tight-binding (TB) model in the two-centre approximation for borophene and hydrogenated borophene (borophane). The Slater and Koster approach is applied to calculate the TB Hamiltonian of these systems. We obtain expressions for the Hamiltonian and overlap matrix elements between different orbitals for the different atoms and present the SK coefficients in a nonorthogonal basis set. An anisotropic Dirac cone is found in the band structure of borophane. We derive a Dirac low-energy Hamiltonian and compare the Fermi velocities with that of graphene.
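
    To make the two-centre Slater-Koster construction concrete, the sketch below assembles the p-p hopping block between two atoms from the standard SK relation H = d dᵀ·Vppσ + (I − d dᵀ)·Vppπ, where d is the unit bond vector; the function name and numerical parameters are illustrative placeholders, not the borophene/borophane parametrization reported in the paper.

        import numpy as np

        def sk_pp_block(direction, Vpp_sigma, Vpp_pi):
            """Two-centre Slater-Koster block between p orbitals (px, py, pz) on
            two atoms separated along the unit vector `direction` = (l, m, n)."""
            d = np.asarray(direction, dtype=float)
            # H_ij = d_i d_j * Vpp_sigma + (delta_ij - d_i d_j) * Vpp_pi
            return np.outer(d, d) * Vpp_sigma + (np.eye(3) - np.outer(d, d)) * Vpp_pi

        # Example: nearest neighbour along x with illustrative (not fitted) parameters in eV.
        print(sk_pp_block((1.0, 0.0, 0.0), Vpp_sigma=-2.0, Vpp_pi=0.5))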

  8. Modelling of electronic excitation and radiation in the Direct Simulation Monte Carlo Macroscopic Chemistry Method

    NASA Astrophysics Data System (ADS)

    Goldsworthy, M. J.

    2012-10-01

    One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.

  9. Modeling growth and dissolution of inclusions during fusion welding of steels

    NASA Astrophysics Data System (ADS)

    Hong, Tao

    The characteristics of inclusions in weld metals are critical factors in determining the structure, properties and performance of weldments. The research in the present thesis applied computational modeling to study inclusion behavior, considering the thermodynamics and kinetics of nucleation, growth and dissolution of inclusions along their trajectories calculated from the heat transfer and fluid flow model of the weld pool. The objective of this research is to predict the characteristics of inclusions, such as composition, size distribution, and number density in the weld metal, for different welding parameters and steel compositions. To synthesize the knowledge of thermodynamics and kinetics of nucleation, growth and dissolution of inclusions in the liquid metal, a set of time-temperature-transformation (TTT) diagrams is constructed to represent the effects of time and temperature on the isothermal growth and dissolution behavior of fourteen types of individual inclusions. The non-isothermal growth and dissolution behavior of inclusions is predicted from their isothermal behavior by constructing continuous-cooling-transformation (CCT) diagrams using the Scheil additive rule. A well-verified fluid flow and heat transfer model developed at Penn State is used to calculate the temperature and velocity fields in the weld pool for different welding processes. A turbulence model considering enhanced viscosity and thermal conductivity (k-ε model) is applied. The calculations show that there is vigorous circulation of metal in the weld pool. The heat transfer and fluid flow model helps to understand not only the fundamentals of the physical phenomena during welding, but also provides the basis for studying the growth and dissolution of inclusions. Particle-tracking calculations for thousands of inclusions show that most inclusions undergo complex gyrations and thermal cycles in the weld pool, and that the inclusions experience both growth and dissolution during their lifetime. Thermal cycles of thousands of inclusions nucleated in the liquid region are tracked and their growth and dissolution are calculated to estimate the final size distribution and number density of inclusions statistically. The calculations show that welding conditions and weld metal compositions affect the inclusion characteristics significantly. Good agreement between the computed and the experimentally observed inclusion size distributions indicates that the inclusion behavior in the weld pool can be understood from the fundamentals of transport phenomena and transformation kinetics.
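
    The Scheil additive rule mentioned above converts isothermal TTT information into non-isothermal (CCT-type) behavior by accumulating fractional reaction times along the thermal cycle until the sum reaches one; the sketch below shows that bookkeeping with a made-up cooling curve and TTT expression (both are illustrative assumptions, not the thesis data).

        import numpy as np

        # Scheil additive rule: under a non-isothermal cycle T(t), the transformation
        # is taken to be complete when the accumulated fraction  sum of dt / t_iso(T)
        # reaches 1, where t_iso(T) is the isothermal time read from a TTT diagram.
        t = np.linspace(0.0, 5.0, 50001)                 # s
        T = 1800.0 - 100.0 * t                           # K, hypothetical linear cooling
        t_iso = 0.05 * np.exp(4000.0 / T)                # s, hypothetical TTT curve
        fraction = np.cumsum(np.gradient(t) / t_iso)     # Scheil sum
        idx = np.searchsorted(fraction, 1.0)
        print(f"transformation predicted at t = {t[idx]:.2f} s, T = {T[idx]:.0f} K")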

  10. New Approach for Nuclear Reaction Model in the Combination of Intra-nuclear Cascade and DWBA

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Iwamoto, O.; Iwamoto, Y.; Sato, T.; Niita, K.

    2014-04-01

    We applied a new nuclear reaction model that is a combination of the intra nuclear cascade model and the distorted wave Born approximation (DWBA) calculation to estimate neutron spectra in reactions induced by protons incident on 7Li and 9Be targets at incident energies below 50 MeV, using the particle and heavy ion transport code system (PHITS). The results obtained by PHITS with the new model reproduce the sharp peaks observed in the experimental double-differential cross sections as a result of taking into account transitions between discrete nuclear states in the DWBA. An excellent agreement was observed between the calculated results obtained using the combination model and experimental data on neutron yields from thick targets in the inclusive (p, xn) reaction.

  11. Comparison of internal wave properties calculated by Boussinesq equations with/without rigid-lid assumption

    NASA Astrophysics Data System (ADS)

    Liu, C. M.

    2017-12-01

    Wave properties predicted by the rigid-lid and the free-surface Boussinesq equations for a two-fluid system are theoretically calculated and compared in this study. Boussinesq models are generally applied to numerically simulate surface waves in coastal regions to provide credible information for disaster prevention and breakwater design. As for internal waves, Liu et al. (2008) and Liu (2016) respectively derived a free-surface and a rigid-lid Boussinesq model for a two-fluid system. The former and the latter models contain four and three key variables, respectively, which may lead to different results and different computational efficiency in simulations. The present study therefore compares the wave properties predicted by these two models to provide more detailed observations and useful information on the motion of internal waves.

  12. A polygon-based modeling approach to assess exposure of resources and assets to wildfire

    Treesearch

    Matthew P. Thompson; Joe Scott; Jeffrey D. Kaiden; Julie W. Gilbertson-Day

    2013-01-01

    Spatially explicit burn probability modeling is increasingly applied to assess wildfire risk and inform mitigation strategy development. Burn probabilities are typically expressed on a per-pixel basis, calculated as the number of times a pixel burns divided by the number of simulation iterations. Spatial intersection of highly valued resources and assets (HVRAs) with...
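
    The per-pixel burn probability described above is just a count of burned occurrences divided by the number of simulation iterations; a minimal sketch (with randomly generated stand-in fire perimeters rather than real simulation output) is shown below. Exposure of an HVRA polygon could then be summarized from the pixels it overlaps.

        import numpy as np

        # burned[i, :, :] is a boolean grid for simulation iteration i (stand-in data).
        rng = np.random.default_rng(0)
        burned = rng.random((500, 100, 100)) < 0.02      # 500 iterations, 100x100 grid
        burn_probability = burned.sum(axis=0) / burned.shape[0]
        print(burn_probability.max())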

  13. Application of a number-conserving boson expansion theory to Ginocchio's SO(8) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.h.; Pedrocchi, V.G.; Tamura, T.

    1986-05-01

    A boson expansion theory based on a number-conserving quasiparticle approach is applied to Ginocchio's SO(8) fermion model. Energy spectra and E2 transition rates calculated by using this new boson mapping are presented and compared against the exact fermion values. A comparison with other boson approaches is also given.

  14. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of § 86.1801-12(j), CO2 fleet average exhaust emission standards apply to: (i) 2012 and later model... businesses meeting certain criteria may be exempted from the greenhouse gas emission standards in § 86.1818... standards applicable in a given model year are calculated separately for passenger automobiles and light...

  15. Optimization Model for Reducing Emissions of Greenhouse ...

    EPA Pesticide Factsheets

    The EPA Vehicle Greenhouse Gas (VGHG) model is used to apply various technologies to a defined set of vehicles in order to meet a specified GHG emission target, and to then calculate the costs and benefits of doing so, facilitating the analysis of the costs and benefits of controlling GHG emissions from cars and trucks.

  16. Modeling damaged wings: Element selection and constraint specification

    NASA Technical Reports Server (NTRS)

    Stronge, W. J.

    1975-01-01

    The NASTRAN analytical program was used for structural design, and no problems were anticipated in applying this program to a damaged structure as long as the deformations were small and the strains remained within the elastic range. In this context, NASTRAN was used to test three-dimensional analytical models of a damaged aircraft wing under static loads. A comparison was made of calculated and experimentally measured strains on primary structural components of an RF-84F wing. This comparison brought out two sensitive areas in modeling semimonocoque structures. The calculated strains were strongly affected by the type of elements used adjacent to the damaged region and by the choice of multipoint constraints sets on the damaged boundary.

  17. Chemical element transport in stellar evolution models

    PubMed Central

    Cassisi, Santi

    2017-01-01

    Stellar evolution computations provide the foundation of several methods applied to study the evolutionary properties of stars and stellar populations, both Galactic and extragalactic. The accuracy of the results obtained with these techniques is linked to the accuracy of the stellar models, and in this context the correct treatment of the transport of chemical elements is crucial. Unfortunately, in many respects calculations of the evolution of the chemical abundance profiles in stars are still affected by sometimes sizable uncertainties. Here, we review the various mechanisms of element transport included in the current generation of stellar evolution calculations, how they are implemented, the free parameters and uncertainties involved, the impact on the models and the observational constraints. PMID:28878972

  18. Chemical element transport in stellar evolution models.

    PubMed

    Salaris, Maurizio; Cassisi, Santi

    2017-08-01

    Stellar evolution computations provide the foundation of several methods applied to study the evolutionary properties of stars and stellar populations, both Galactic and extragalactic. The accuracy of the results obtained with these techniques is linked to the accuracy of the stellar models, and in this context the correct treatment of the transport of chemical elements is crucial. Unfortunately, in many respects calculations of the evolution of the chemical abundance profiles in stars are still affected by sometimes sizable uncertainties. Here, we review the various mechanisms of element transport included in the current generation of stellar evolution calculations, how they are implemented, the free parameters and uncertainties involved, the impact on the models and the observational constraints.

  19. Calculations of wall shear stress in harmonically oscillated turbulent pipe flow using a low-Reynolds-number κ-ε model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismael, J.O.; Cotton, M.A.

    1996-03-01

    The low-Reynolds-number κ-ε turbulence model of Launder and Sharma is applied to the calculation of wall shear stress in spatially fully-developed turbulent pipe flow oscillated at small amplitudes. It is believed that the present study represents the first systematic evaluation of the turbulence closure under consideration over a wide range of frequency. Model results are well correlated in terms of the parameter ω⁺ = ων/ū_τ² at high frequencies, whereas at low frequencies there is an additional Reynolds number dependence. Comparison is made with the experimental data of Finnicum and Hanratty.
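
    For reference, the correlating parameter is simply the imposed angular frequency made dimensionless with wall units; a trivial sketch (the kinematic viscosity and friction velocity values are made up) is:

        import math

        def omega_plus(omega, nu, u_tau):
            """Dimensionless frequency omega+ = omega * nu / u_tau**2 (wall units)."""
            return omega * nu / u_tau**2

        # Example: 1 Hz oscillation in water (nu ~ 1e-6 m^2/s), friction velocity 0.05 m/s.
        print(omega_plus(omega=2.0 * math.pi * 1.0, nu=1.0e-6, u_tau=0.05))  # ~0.0025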

  20. A numerical program for steady-state flow of magma-gas mixtures through vertical eruptive conduits

    USGS Publications Warehouse

    Mastin, Larry G.; Ghiorso, Mark S.

    2000-01-01

    This report presents a model that calculates flow properties (pressure, vesicularity, and some 35 other parameters) as a function of vertical position within a volcanic conduit during a steady-state eruption. The model idealizes the magma-gas mixture as a single homogeneous fluid and calculates gas exsolution under the assumption of equilibrium conditions. These are the same assumptions on which classic conduit models (e.g. Wilson and Head, 1981) have been based. They are most appropriate when applied to eruptions of rapidly ascending magma (basaltic lava-fountain eruptions, and Plinian or sub-Plinian eruptions of intermediate or silicic magmas) that contain abundant nucleation sites (microlites, for example) for bubble growth.

  1. Systematics of first 2+ state g factors around mass 80

    NASA Astrophysics Data System (ADS)

    Mertzimekis, T. J.; Stuchbery, A. E.; Benczer-Koller, N.; Taylor, M. J.

    2003-11-01

    The systematics of the first 2+ state g factors in the mass 80 region are investigated in terms of an IBM-II analysis, a pairing-corrected geometrical model, and a shell-model approach. Subshell closure effects at N=38 and overall trends were examined using IBM-II. A large-space shell-model calculation was successful in describing the behavior for N=48 and N=50 nuclei, where single-particle features are prominent. A schematic truncated-space calculation was applied to the lighter isotopes. The variations of the effective boson g factors are discussed in connection with the role of F-spin breaking, and comparisons are made between the mass 80 and mass 180 regions.

  2. Competing quantum orderings in cuprate superconductors: A minimal model

    NASA Astrophysics Data System (ADS)

    Martin, I.; Ortiz, G.; Balatsky, A. V.; Bishop, A. R.

    2001-02-01

    We present a minimal model for cuprate superconductors. At the unrestricted mean-field level, the model produces homogeneous superconductivity at large doping, striped superconductivity in the underdoped regime and various antiferromagnetic phases at low doping and for high temperatures. On the underdoped side, the superconductor is intrinsically inhomogeneous and global phase coherence is achieved through Josephson-like coupling of the superconducting stripes. The model is applied to calculate experimentally measurable ARPES spectra.

  3. Application of a new K-tau model to near wall turbulent flows

    NASA Technical Reports Server (NTRS)

    Thangam, S.; Abid, R.; Speziale, Charles G.

    1991-01-01

    A recently developed K-tau model for near-wall turbulent flows is applied to two severe test cases. The turbulent flows considered include the incompressible flat-plate boundary layer with adverse pressure gradients and incompressible flow past a backward-facing step. Calculations are performed for this two-equation model using an anisotropic as well as an isotropic eddy viscosity. The model predictions are shown to compare quite favorably with experimental data.

  4. The COsmic-ray Soil Moisture Interaction Code (COSMIC) for use in data assimilation

    NASA Astrophysics Data System (ADS)

    Shuttleworth, J.; Rosolem, R.; Zreda, M.; Franz, T.

    2013-08-01

    Soil moisture status in land surface models (LSMs) can be updated by assimilating cosmic-ray neutron intensity measured in air above the surface. This requires a fast and accurate model to calculate the neutron intensity from the profiles of soil moisture modeled by the LSM. The existing Monte Carlo N-Particle eXtended (MCNPX) model is sufficiently accurate but too slow to be practical in the context of data assimilation. Consequently an alternative and efficient model is needed which can be calibrated accurately to reproduce the calculations made by MCNPX and used to substitute for MCNPX during data assimilation. This paper describes the construction and calibration of such a model, COsmic-ray Soil Moisture Interaction Code (COSMIC), which is simple, physically based and analytic, and which, because it runs at least 50 000 times faster than MCNPX, is appropriate in data assimilation applications. The model includes simple descriptions of (a) degradation of the incoming high-energy neutron flux with soil depth, (b) creation of fast neutrons at each depth in the soil, and (c) scattering of the resulting fast neutrons before they reach the soil surface, all of which processes may have parameterized dependency on the chemistry and moisture content of the soil. The site-to-site variability in the parameters used in COSMIC is explored for 42 sample sites in the COsmic-ray Soil Moisture Observing System (COSMOS), and the comparative performance of COSMIC relative to MCNPX when applied to represent interactions between cosmic-ray neutrons and moist soil is explored. At an example site in Arizona, fast-neutron counts calculated by COSMIC from the average soil moisture profile given by an independent network of point measurements in the COSMOS probe footprint are similar to the fast-neutron intensity measured by the COSMOS probe. It was demonstrated that, when used within a data assimilation framework to assimilate COSMOS probe counts into the Noah land surface model at the Santa Rita Experimental Range field site, the calibrated COSMIC model provided an effective mechanism for translating model-calculated soil moisture profiles into aboveground fast-neutron count when applied with two radically different approaches used to remove the bias between data and model.
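
    COSMIC itself maps a full soil moisture profile to a fast-neutron count through depth-dependent production and scattering terms; as a much simpler illustration of the count-moisture relationship that such assimilation relies on, the sketch below inverts the widely used single-layer COSMOS calibration function of Desilets et al. (2010). N0 (the count rate over dry soil) is a site-specific calibration parameter, and the numbers shown are invented; this is not the COSMIC operator described in the paper.

        def moisture_from_counts(N, N0):
            """Invert the Desilets et al. (2010) calibration function
                N/N0 = 0.0808 / (theta + 0.115) + 0.372
            to estimate gravimetric soil moisture theta (g/g) from a corrected
            fast-neutron count rate N and the dry-soil count rate N0."""
            return 0.0808 / (N / N0 - 0.372) - 0.115

        print(moisture_from_counts(N=2200.0, N0=3000.0))   # -> ~0.11 g/g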

  5. Modeling of the metallic port in breast tissue expanders for photon radiotherapy.

    PubMed

    Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui

    2018-03-30

    The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations and ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models by using a simplified anthropomorphic phantom and thermoluminescent dosimeter (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to 7.5 g/cm³, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found the TPS-calculated point doses based on the new model were in agreement with TLD measurements within 5.0% and were more accurate than doses calculated based on the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on the chest wall, while the port introduced a significant dose shadow in the skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after the treatment.

  6. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    NASA Astrophysics Data System (ADS)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).

  7. Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings

    NASA Astrophysics Data System (ADS)

    Hunt, H. E. M.

    1996-05-01

    A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated and it has application to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The track and building are infinite and parallel to each other, and the forces applied are statistically stationary in space, so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of application of random process theory, but fully three-dimensional for calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.

  8. Diffuse sorption modeling.

    PubMed

    Pivovarov, Sergey

    2009-04-01

    This work presents a simple solution for the diffuse double layer model, applicable to the calculation of surface speciation as well as to the simulation of ionic adsorption within the diffuse layer of solution in arbitrary salt media. Based on the Poisson-Boltzmann equation, the Gaines-Thomas selectivity coefficient for uni-bivalent exchange on clay, K_GT(Me²⁺/M⁺) = (Q_Me^0.5 / Q_M) · {M⁺}/{Me²⁺}^0.5 (Q is the equivalent fraction of the cation in the exchange capacity, and {M⁺} and {Me²⁺} are the ionic activities in solution), may be calculated as [surface charge, μeq/m²]/0.61. The obtained solution of the Poisson-Boltzmann equation was applied to the calculation of ionic exchange on clays and to the simulation of the surface charge of ferrihydrite in 0.01-6 M NaCl solutions. In addition, a new model of acid-base properties was developed. This model is based on the assumption that the net proton charge is not located on the mathematical surface plane but is diffusely distributed within the subsurface layer of the lattice. It is shown that the obtained solution of the Poisson-Boltzmann equation makes such calculations possible, and that this approach is more efficient than the original diffuse double layer model.
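
    Taken at face value, the relation quoted above makes the selectivity coefficient a one-line calculation from the surface charge density; a minimal sketch (the charge value in the example is invented) is:

        def gaines_thomas_selectivity(surface_charge_ueq_per_m2):
            """Uni-bivalent Gaines-Thomas selectivity coefficient estimated from the
            relation given in the abstract: K_GT = (surface charge, ueq/m^2) / 0.61."""
            return surface_charge_ueq_per_m2 / 0.61

        # Example: a clay with 1.2 ueq/m^2 of exchange charge (illustrative value).
        print(gaines_thomas_selectivity(1.2))   # -> ~1.97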

  9. Mathematical modeling of swirled flows in industrial applications

    NASA Astrophysics Data System (ADS)

    Dekterev, A. A.; Gavrilov, A. A.; Sentyabov, A. V.

    2018-03-01

    Swirling flows are widely used in technological devices and are characterized by a wide range of flow regimes. 3D mathematical modeling of such flows is widely applied in research and design. For correct mathematical modeling of such a flow, it is necessary to use turbulence models that take into account the important features of the flow. Based on experience in the computational modeling of a wide class of problems with swirling flows, recommendations on the use of turbulence models for calculating applied problems are proposed.

  10. Guidelines for Calculating and Routing a Dam-Break Flood.

    DTIC Science & Technology

    1977-01-01

    This report describes the procedures necessary to calculate...and route a dam-break flood using an existing generalized unsteady open channel flow model. The recent Teton Dam event was reconstituted to test the...methodology may be obtained from The Hydrologic Engineering Center. The computer program was applied to the Teton Dam data set to demonstrate the level of

  11. Scalable free energy calculation of proteins via multiscale essential sampling

    NASA Astrophysics Data System (ADS)

    Moritsugu, Kei; Terada, Tohru; Kidera, Akinori

    2010-12-01

    A multiscale simulation method, "multiscale essential sampling (MSES)," is proposed for calculating free energy surface of proteins in a sizable dimensional space with good scalability. In MSES, the configurational sampling of a full-dimensional model is enhanced by coupling with the accelerated dynamics of the essential degrees of freedom. Applying the Hamiltonian exchange method to MSES can remove the biasing potential from the coupling term, deriving the free energy surface of the essential degrees of freedom. The form of the coupling term ensures good scalability in the Hamiltonian exchange. As a test application, the free energy surface of the folding process of a miniprotein, chignolin, was calculated in the continuum solvent model. Results agreed with the free energy surface derived from the multicanonical simulation. Significantly improved scalability with the MSES method was clearly shown in the free energy calculation of chignolin in explicit solvent, which was achieved without increasing the number of replicas in the Hamiltonian exchange.

  12. Comparison of different methods used in integral codes to model coagulation of aerosols

    NASA Astrophysics Data System (ADS)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
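
    For readers unfamiliar with the underlying equation, the compared schemes are all discretizations of the Smoluchowski coagulation equation; a deliberately naive fixed-bin, explicit-Euler sketch of it is shown below (constant kernel and all values are illustrative, with none of the accuracy or conservation properties of the production codes).

        import numpy as np

        def smoluchowski_step(n, K, dt):
            """One explicit Euler step of the discrete Smoluchowski coagulation
            equation. n[k] is the number concentration of clusters of k+1 monomers;
            K[i, j] is the coagulation kernel between sizes i+1 and j+1."""
            nbins = len(n)
            dndt = np.zeros_like(n)
            for k in range(nbins):
                # Gain: collisions of sizes (i+1) and (j+1) with (i+1)+(j+1) = k+1.
                gain = 0.5 * sum(K[i, k - 1 - i] * n[i] * n[k - 1 - i]
                                 for i in range(k))
                loss = n[k] * np.dot(K[k], n)
                dndt[k] = gain - loss
            return n + dt * dndt

        # Constant kernel, monodisperse initial condition (illustrative only).
        nbins = 20
        n = np.zeros(nbins)
        n[0] = 1.0
        K = np.ones((nbins, nbins)) * 1e-3
        for _ in range(1000):
            n = smoluchowski_step(n, K, dt=1.0)
        print(n[:5])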

  13. RECENT PROGRESS OF CRACK BRIDGING MODELING OF DUCTILE-PHASE-TOUGHENED W-CU COMPOSITES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setyawan, Wahyu; Henager, Charles H.; Wagner, Karla B.

    2015-04-16

    A crack bridging model using calculated Cu stress-strain curves has been developed to study the toughening of W-Cu composites. A strengthening factor and necking parameters have been added to the model for the ductile-phase bridges to incorporate constraint effects at small bridge sizes. Parametric studies are performed to investigate the effect of these parameters. The calculated maximum applied stress intensity, aKmax, to induce a 1-mm stable crack is compared to the experimental stress intensity at peak load, Kpeak. Without bridge necking, increasing the strengthening factor improves the agreement between aKmax and Kpeak when plotted vs. the logarithm of the displacement rate. Improvement can also be achieved by allowing necking with a larger failure strain. While the slope is better matched with this latter approach, the calculated value of aKmax is significantly larger than Kpeak.

  14. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; Steffen, J. H.; Weltman, A.

    2010-01-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here, we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss the GammeV-CHameleon Afterglow SEarch, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHameleon Afterglow SEarch. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  15. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; /Chicago U., EFI /KICP, Chicago; Steffen, J.H.

    2009-11-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss GammeV-CHASE, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHASE. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  16. Photonic band gap structure simulator

    DOEpatents

    Chen, Chiping; Shapiro, Michael A.; Smirnova, Evgenya I.; Temkin, Richard J.; Sirigiri, Jagadishwar R.

    2006-10-03

    A system and method for designing photonic band gap structures. The system and method provide a user with the capability to produce a model of a two-dimensional array of conductors corresponding to a unit cell. The model involves a linear equation. Boundary conditions representative of conditions at the boundary of the unit cell are applied to a solution of the Helmholtz equation defined for the unit cell. The linear equation can be approximated by a Hermitian matrix. An eigenvalue of the Helmholtz equation is calculated. One computation approach involves calculating finite differences. The model can include a symmetry element, such as a center of inversion, a rotation axis, and a mirror plane. A graphical user interface is provided for the user's convenience. A display is provided to display to a user the calculated eigenvalue, corresponding to a photonic energy level in the Brillouin zone of the unit cell.
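
    As a toy version of the finite-difference eigenvalue computation mentioned in the patent, the sketch below finds the lowest Helmholtz eigenvalues (-∇²u = k²u) on an empty square cell with perfectly conducting (Dirichlet) walls; it omits the conductor array, symmetry elements and Bloch boundary conditions of the actual simulator, and the grid size and cell size are arbitrary choices.

        import numpy as np
        from scipy.sparse import kron, identity, diags
        from scipy.sparse.linalg import eigsh

        N, a = 60, 1.0                                   # interior grid points, cell size
        h = a / (N + 1)
        main = 2.0 * np.ones(N)
        off = -1.0 * np.ones(N - 1)
        D2 = diags([off, main, off], [-1, 0, 1]) / h**2  # 1D -d2/dx2 with Dirichlet walls
        L = (kron(D2, identity(N)) + kron(identity(N), D2)).tocsc()   # 2D -laplacian
        k2, _ = eigsh(L, k=4, sigma=0, which="LM")       # eigenvalues k^2 nearest zero
        print(np.sqrt(np.sort(k2)) * a / np.pi)          # ~ [1.414, 2.236, 2.236, 2.828]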

  17. An extended model of the Barkhausen effect based on the ABBM model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clatterbuck, D. M.; Garcia, V. J.; Johnson, M. J.

    2000-05-01

    The Barkhausen model of Alessandro et al. [J. Appl. Phys. 68, 2901 (1990)] has been extended to nonstationary domain wall dynamics. The assumptions of the original model limit its use to situations where the differential permeability and the time derivative of the applied field are constant. The later model of Jiles et al. assumes that the Barkhausen activity in a given time interval is proportional to the rate of change of irreversible magnetization, which can be calculated from hysteresis models. The extended model presented here incorporates ideas from both of these. It assumes that the pinning field and domain wall velocity behave according to the Alessandro model, but allows the rate of change of the magnetic flux to vary around a moving average which is determined by the shape of the hysteresis curve and the applied magnetic field waveform. As a result, the new model allows for changes in permeability with applied field and can also reproduce the frequency response of experimental Barkhausen signals.

  18. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines-of-evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes, however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  19. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  20. Modeling material interfaces with hybrid adhesion method

    DOE PAGES

    Brown, Nicholas Taylor; Qu, Jianmin; Martinez, Enrique

    2017-01-27

    A molecular dynamics simulation approach is presented to approximate layered material structures using discrete interatomic potentials through classical mechanics and the underlying principles of quantum mechanics. This method isolates the energetic contributions of the system into two pure material layers and an interfacial region used to simulate the adhesive properties of the diffused interface. The strength relationship of the adhesion contribution is calculated through small-scale separation calculations and applied to the molecular surfaces through an inter-layer bond criterion. By segregating the contributions into three regions and accounting for the interfacial excess energies through the adhesive surface bonds, it is possible to model each material with an independent potential while maintaining an acceptable level of accuracy in the calculation of mechanical properties. This method is intended for the atomistic study of delamination mechanics, typically observed in thin-film applications. Therefore, the work presented in this paper focuses on mechanical tensile behaviors, with observations of the elastic modulus and the delamination failure mode. To introduce the hybrid adhesion method, we apply the approach to an ideal bulk copper sample, where an interface is created by disassociating the force potential in the middle of the structure. Various mechanical behaviors are compared to a standard EAM control model to demonstrate the adequacy of this approach in a simple setting. In addition, we demonstrate the robustness of this approach by applying it to (1) a Cu-Cu2O interface with interactions between two atom types, and (2) an Al-Cu interface with two dissimilar FCC lattices. These additional examples are verified against EAM and COMB control models to demonstrate the accurate simulation of failure through delamination, and the formation and propagation of dislocations under loads. Finally, the results show that by modeling the energy contributions of an interface using hybrid adhesion bonds, we can provide an accurate approximation method for studies of large-scale mechanical properties, as well as the representation of various delamination phenomena at the atomic scale.

  1. 76 FR 77563 - Florida Power & Light Company; St. Lucie Plant, Unit No. 1; Exemption

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ....2, because the P-T limits developed for St. Lucie, Unit 1, use a finite element method to determine... Code for calculating K Im factors, and instead applies FEM [finite element modeling] methods for...

  2. Competing Quantum Orderings in Cuprate Superconductors:

    NASA Astrophysics Data System (ADS)

    Martin, I.; Ortiz, G.; Balatsky, A. V.; Bishop, A. R.

    We present a minimal model for cuprate superconductors. At the unrestricted mean-field level, the model produces homogeneous superconductivity at large doping, striped superconductivity in the underdoped regime and various antiferromagnetic phases at low doping and for high temperatures. On the underdoped side, the superconductor is intrinsically inhomogeneous and global phase coherence is achieved through Josephson-like coupling of the superconducting stripes. The model is applied to calculate experimentally measurable ARPES spectra.

  3. 2D and 3D Models of Convective Turbulence and Oscillations in Intermediate-Mass Main-Sequence Stars

    NASA Astrophysics Data System (ADS)

    Guzik, Joyce Ann; Morgan, Taylor H.; Nelson, Nicholas J.; Lovekin, Catherine; Kitiashvili, Irina N.; Mansour, Nagi N.; Kosovichev, Alexander

    2015-08-01

    We present multidimensional modeling of convection and oscillations in main-sequence stars somewhat more massive than the sun, using three separate approaches: 1) Applying the spherical 3D MHD ASH (Anelastic Spherical Harmonics) code to simulate the core convection and radiative zone. Our goal is to determine whether core convection can excite low-frequency gravity modes, and thereby explain the presence of low frequencies for some hybrid gamma Dor/delta Sct variables for which the envelope convection zone is too shallow for the convective blocking mechanism to drive g modes; 2) Using the 3D planar ‘StellarBox’ radiation hydrodynamics code to model the envelope convection zone and part of the radiative zone. Our goals are to examine the interaction of stellar pulsations with turbulent convection in the envelope, excitation of acoustic modes, and the role of convective overshooting; 3) Applying the ROTORC 2D stellar evolution and dynamics code to calculate evolution with a variety of initial rotation rates and extents of core convective overshooting. The nonradial adiabatic pulsation frequencies of these nonspherical models will be calculated using the 2D pulsation code NRO of Clement. We will present new insights into gamma Dor and delta Sct pulsations gained by multidimensional modeling compared to 1D model expectations.

  4. Calculation and Analysis of Magnetic Gradient Tensor Components of Global Magnetic Models

    NASA Astrophysics Data System (ADS)

    Schiffler, M.; Queitsch, M.; Schneider, M.; Goepel, A.; Stolz, R.; Krech, W.; Meyer, H. G.; Kukowski, N.

    2014-12-01

    Global Earth's magnetic field models like the International Geomagnetic Reference Field (IGRF), the World Magnetic Model (WMM) or the High Definition Geomagnetic Model (HDGM) are harmonic analysis regressions to available magnetic observations stored as spherical harmonic coefficients. Input data combine recordings from magnetic observatories, airborne magnetic surveys and satellite data. The advance of recent magnetic satellite missions like SWARM and its predecessors like CHAMP offers high resolution measurements while providing full global coverage. This motivates expanding the theoretical framework of harmonic synthesis to magnetic gradient tensor components. Measurement setups for Full Tensor Magnetic Gradiometry equipped with highly sensitive gradiometers, like the JeSSY STAR system, can directly measure the gradient tensor components, which requires precise knowledge of the background regional gradients that can be calculated with this extension. In this study we develop the theoretical framework for the calculation of the magnetic gradient tensor components from the harmonic series expansion and apply our approach to the IGRF and HDGM. The gradient tensor component maps for the entire Earth's surface produced for the IGRF show low gradients reflecting the variation from the dipolar character, whereas maps for the HDGM (up to degree N=729) reveal new information about crustal structure, especially across the oceans, and deeply situated ore bodies. From the gradient tensor components, the rotational invariants, the eigenvalues, and the normalized source strength (NSS) are calculated. The NSS focuses on shallower and stronger anomalies. Euler deconvolution using either the tensor components or the NSS applied to the HDGM yields an estimate of the average source depth for the entire magnetic crust as well as for individual plutons and ore bodies. The NSS reveals the boundaries between the anomalies of major continental provinces like southern Africa or the Eastern European Craton.
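
    The NSS mentioned above is commonly defined from the eigenvalues of the symmetric, traceless gradient tensor as NSS = sqrt(-λ2² - λ1·λ3) with λ1 ≥ λ2 ≥ λ3; the sketch below evaluates that common definition for an invented tensor and is not tied to the IGRF/HDGM maps of the study.

        import numpy as np

        def normalized_source_strength(G):
            """Normalized source strength from a symmetric, traceless magnetic
            gradient tensor G (3x3), using the common definition
            NSS = sqrt(-lambda2**2 - lambda1*lambda3), eigenvalues sorted
            lambda1 >= lambda2 >= lambda3."""
            lam = np.sort(np.linalg.eigvalsh(G))[::-1]
            return np.sqrt(max(-lam[1]**2 - lam[0] * lam[2], 0.0))

        # Illustrative traceless tensor (nT/m), not from any real model.
        G = np.array([[ 2.0,  0.3,  0.1],
                      [ 0.3, -0.5,  0.2],
                      [ 0.1,  0.2, -1.5]])
        print(normalized_source_strength(G))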

  5. Embedding Fragment ab Initio Model Potentials in CASSCF/CASPT2 Calculations of Doped Solids: Implementation and Applications.

    PubMed

    Swerts, Ben; Chibotaru, Liviu F; Lindh, Roland; Seijo, Luis; Barandiaran, Zoila; Clima, Sergiu; Pierloot, Kristin; Hendrickx, Marc F A

    2008-04-01

    In this article, we present a fragment model potential approach for the description of the crystalline environment as an extension of the use of embedding ab initio model potentials (AIMPs). The biggest limitation of the embedding AIMP method is the spherical nature of its model potentials. This poses problems as soon as the method is applied to crystals containing strongly covalently bonded structures with highly nonspherical electron densities. The newly proposed method addresses this problem by keeping the full electron density as its model potential, thus allowing one to group sets of covalently bonded atoms into fragments. The implementation in the MOLCAS 7.0 quantum chemistry package of the new method, which we call the embedding fragment ab initio model potential method (embedding FAIMP), is reported here, together with results of CASSCF/CASPT2 calculations. The developed methodology is applied to two test problems: (i) the investigation of the lowest ligand field states (2)A1 and (2)B1 of the Cr(V) defect in the YVO4 crystal and (ii) the investigation of the lowest ligand field and ligand-metal charge transfer (LMCT) states at the Mn(II) substitutional impurity doped into CaCO3. Comparison with similar calculations involving AIMPs for all environmental atoms, including those from covalently bonded units, shows that the FAIMP treatment of the YVO4 units surrounding the CrO4(3-) cluster increases the excitation energy (2)B1 → (2)A1 by ca. 1000 cm(-1) at the CASSCF level of calculation. In the case of the Mn(CO3)6(10-) cluster, the FAIMP treatment of the CO3(2-) units of the environment gives smaller corrections, of ca. 100 cm(-1), for the ligand-field excitation energies, which is explained by the larger ligands of this cluster. However, the correction for the energy of the lowest LMCT transition is found to be ca. 600 cm(-1) for the CASSCF and ca. 1300 cm(-1) for the CASPT2 calculation.

  6. A fuzzy logic approach to modeling the underground economy in Taiwan

    NASA Astrophysics Data System (ADS)

    Yu, Tiffany Hui-Kuang; Wang, David Han-Min; Chen, Su-Jane

    2006-04-01

    The size of the ‘underground economy’ (UE) is valuable information in the formulation of macroeconomic and fiscal policy. This study applies fuzzy set theory and fuzzy logic to model Taiwan's UE over the period from 1960 to 2003. Two major factors affecting the size of the UE, the effective tax rate and the degree of government regulation, are used. The size of Taiwan's UE is scaled and compared with the estimates of other models. Although our approach yields different estimates, similar patterns and leading trends are exhibited throughout the period. The advantage of applying fuzzy logic is twofold. First, it avoids the complex calculations of conventional econometric models. Second, fuzzy rules with linguistic terms are easy for humans to understand.
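    The abstract does not give the membership functions or the rule base; the sketch below is only a minimal illustration of the kind of Mamdani-style inference involved, with hypothetical triangular membership functions for the two inputs (effective tax rate and degree of regulation, both scaled to 0-1) and a made-up output scale for UE size as a percentage of GDP.

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function with corners a, b, c."""
          return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                       (c - x) / (c - b + 1e-12)), 0.0)

      def ue_estimate(tax_rate, regulation):
          """Mamdani-style inference with hypothetical membership functions."""
          # Fuzzify the two crisp inputs (low / high)
          tax_low,  tax_high = tri(tax_rate,   -0.5, 0.0, 0.6), tri(tax_rate,   0.3, 1.0, 1.5)
          reg_low,  reg_high = tri(regulation, -0.5, 0.0, 0.6), tri(regulation, 0.3, 1.0, 1.5)

          # Output universe: UE size in % of GDP, with three fuzzy sets
          y = np.linspace(0.0, 40.0, 401)
          small, medium, large = tri(y, 0, 5, 15), tri(y, 10, 20, 30), tri(y, 25, 35, 45)

          # Hypothetical rule base: higher taxes / heavier regulation -> larger UE
          agg = np.maximum.reduce([
              np.minimum(min(tax_low, reg_low), small),
              np.minimum(max(min(tax_low, reg_high), min(tax_high, reg_low)), medium),
              np.minimum(min(tax_high, reg_high), large),
          ])
          return np.sum(agg * y) / np.sum(agg)      # centroid defuzzification

      print(ue_estimate(tax_rate=0.7, regulation=0.5))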

  7. A geometric model for evaluating the effects of inter-fraction rectal motion during prostate radiotherapy

    NASA Astrophysics Data System (ADS)

    Pavel-Mititean, Luciana M.; Rowbottom, Carl G.; Hector, Charlotte L.; Partridge, Mike; Bortfeld, Thomas; Schlegel, Wolfgang

    2004-06-01

    A geometric model is presented which allows calculation of the dosimetric consequences of rectal motion in prostate radiotherapy. Variations in the position of the rectum are measured by repeat CT scanning during the course of treatment of five patients. Dose distributions are calculated by applying the same conformal treatment plan to each imaged fraction, and rectal dose-surface histograms are produced. The 2D model allows isotropic expansion and contraction in the plane of each CT slice. By summing the dose to specific volume elements tracked by the model, composite dose distributions are produced that explicitly include measured inter-fraction motion for each patient. These are then used to estimate effective dose-surface histograms (DSHs) for the entire treatment. Results are presented showing the magnitudes of the measured target and rectal motion and the effects of this motion on the integral dose to the rectum. The possibility of using such information to calculate normal tissue complication probabilities (NTCP) is demonstrated and discussed.
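    As a minimal illustration of the dose-surface histogram idea (not the authors' implementation), the sketch below assumes the rectal wall has been discretized into surface elements with known areas and accumulated doses, and computes the fraction of the surface receiving at least each dose level.

      import numpy as np

      def cumulative_dsh(dose_per_element, area_per_element, dose_levels):
          """Cumulative dose-surface histogram.

          dose_per_element : total dose accumulated by each wall surface element (Gy)
          area_per_element : area of each surface element (cm^2)
          dose_levels      : dose values at which to evaluate the DSH (Gy)
          Returns the fraction of total surface area receiving at least each level.
          """
          total_area = area_per_element.sum()
          return np.array([area_per_element[dose_per_element >= d].sum() / total_area
                           for d in dose_levels])

      # Hypothetical example: doses summed over tracked fractions for 200 elements
      rng = np.random.default_rng(0)
      dose = rng.normal(50.0, 15.0, size=200).clip(min=0.0)
      area = np.full(200, 0.25)                    # cm^2 per element
      levels = np.arange(0.0, 80.0, 5.0)
      print(cumulative_dsh(dose, area, levels))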

  8. Intrinsic frame transport for a model of nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Cozzini, S.; Rull, L. F.; Ciccotti, G.; Paolini, G. V.

    1997-02-01

    We present a computer simulation study of the dynamical properties of a nematic liquid crystal model. The diffusional motion of the nematic director is taken into account in our calculations in order to give a proper estimate of the transport coefficients. Unlike other groups, we do not attempt to stabilize the director through rigid constraints or applied external fields. We instead define an intrinsic frame which moves along with the director at each step of the simulation. The transport coefficients computed in the intrinsic frame are then compared against the ones calculated in the fixed laboratory frame, to show the inadequacy of the latter for systems with fewer than 500 molecules. Using this general scheme on the Gay-Berne liquid crystal model, we demonstrate the natural motion of the director and attempt to quantify its intrinsic time scale and size dependence. Through extended simulations of systems of different size we calculate the diffusion and viscosity coefficients of this model and compare our results with values previously obtained with a fixed director.

  9. Elastic scattering and breakup reactions of the exotic nucleus 8B on nuclear targets

    NASA Astrophysics Data System (ADS)

    Lukyanov, V. K.; Kadrev, D. N.; Antonov, A. N.; Zemlyanaya, E. V.; Lukyanov, K. V.; Gaidarov, M. K.; Spasova, K.

    2018-05-01

    Microscopic calculations of the optical potentials (OPs) and elastic scattering cross sections of the proton-rich nucleus 8B on 12C, 58Ni and 208Pb targets are presented. The density distributions of 8B obtained within the variational Monte Carlo (VMC) model and the three-cluster model (3CM) are used to construct the OPs. The real part of the hybrid OP (ReOP) is calculated using the folding model with the direct and exchange terms included, while the imaginary part (ImOP) is obtained on the basis of the high-energy approximation (HEA). In addition, the cluster model, in which 8B consists of a proton halo and a 7Be core, is applied to calculate the breakup cross sections of 8B on 9Be, 12C and 197Au targets, as well as the momentum distributions of 7Be fragments. A comparison with the available experimental data is made and a good agreement is obtained.

  10. TH-AB-BRA-07: PENELOPE-Based GPU-Accelerated Dose Calculation System Applied to MRI-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y; Mazur, T; Green, O

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: We first translated PENELOPE from FORTRAN to C++ and validated that the translation produced equivalent results. Then we adapted the C++ code to CUDA in a workflow optimized for GPU architecture. We expanded upon the original code to include voxelized transport boosted by Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, we incorporated the vendor-provided MRIdian head model into the code. We performed a set of experimental measurements on MRIdian to examine the accuracy of both the head model and gPENELOPE, and then applied gPENELOPE toward independent validation of patient doses calculated by MRIdian’s KMC. Results: We achieve an average acceleration factor of 152 compared to the original single-thread FORTRAN implementation with the original accuracy preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1) and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: We developed a Monte Carlo simulation platform based on a GPU-accelerated version of PENELOPE. We validated that both the vendor-provided head model and the fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
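    The 2%/2 mm figures refer to a gamma analysis; the sketch below is a simplified 1D global gamma calculation in the spirit of Low et al. (it is not the code used in the work above), applied to hypothetical dose profiles.

      import numpy as np

      def gamma_pass_rate_1d(ref, evl, spacing_mm, dose_tol=0.02, dist_mm=2.0):
          """Global 1D gamma analysis, simplified sketch.

          ref, evl   : reference and evaluated dose profiles on the same uniform grid
          spacing_mm : grid spacing in mm
          dose_tol   : dose criterion as a fraction of the maximum reference dose
          dist_mm    : distance-to-agreement criterion in mm
          Returns the fraction of reference points with gamma <= 1.
          """
          x = np.arange(len(ref)) * spacing_mm
          dd = dose_tol * ref.max()                 # global dose normalization
          gammas = np.empty(len(ref))
          for i, (xi, di) in enumerate(zip(x, ref)):
              dist2 = ((x - xi) / dist_mm) ** 2
              dose2 = ((evl - di) / dd) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dose2))
          return np.mean(gammas <= 1.0)

      # Hypothetical profiles: evaluated dose slightly shifted and scaled
      xs = np.linspace(-30, 30, 121)
      reference = np.exp(-(xs / 15.0) ** 2)
      evaluated = 1.01 * np.exp(-((xs - 0.5) / 15.0) ** 2)
      print(gamma_pass_rate_1d(reference, evaluated, spacing_mm=0.5))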

  11. How is the weather? Forecasting inpatient glycemic control

    PubMed Central

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M

    2017-01-01

    Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
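    The abstract names damped trend forecasting and mean absolute percent error but gives no equations; the sketch below implements the standard additive damped-trend exponential smoothing recursion with hypothetical smoothing parameters and synthetic weekly glucose data, purely to illustrate the method.

      import numpy as np

      def damped_trend_forecast(y, alpha=0.4, beta=0.1, phi=0.9, horizon=24):
          """Additive damped-trend exponential smoothing (Gardner-McKenzie form).

          y       : observed weekly mean glucose values
          alpha   : level smoothing parameter
          beta    : trend smoothing parameter
          phi     : damping parameter (0 < phi < 1)
          horizon : number of future periods to forecast
          """
          level, trend = y[0], y[1] - y[0]
          for obs in y[1:]:
              prev_level = level
              level = alpha * obs + (1 - alpha) * (prev_level + phi * trend)
              trend = beta * (level - prev_level) + (1 - beta) * phi * trend
          steps = np.arange(1, horizon + 1)
          damp = np.cumsum(phi ** steps)            # phi + phi^2 + ... + phi^h
          return level + damp * trend

      def mape(actual, forecast):
          """Mean absolute percent error."""
          return 100.0 * np.mean(np.abs((actual - forecast) / actual))

      # Hypothetical weekly mean point-of-care glucose values (mg/dL)
      observed = 160 - 0.3 * np.arange(62) + np.random.default_rng(1).normal(0, 4, 62)
      print(damped_trend_forecast(observed, horizon=24)[:5])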

  12. Application of the MERIT survey in the multi-criteria quality assessment of occupational health and safety management.

    PubMed

    Korban, Zygmunt

    2015-01-01

    Occupational health and safety management systems apply audit examinations as an integral element. The examinations are used to verify whether the actions undertaken are in compliance with the accepted regulations, whether they are implemented in a suitable way and whether they are effective. One of the earliest solutions of this type applied in the Polish mining industry involved audit research based on the MERIT survey (Management Evaluation Regarding Itemized Tendencies). A mathematical model applied in the survey facilitates the determination of assessment indexes WOPi for each of the assessed problem areas, which, among other things, can be used to set up problem area rankings and to determine an aggregate (synthetic) assessment. In the paper presented here, the assessment indexes WOPi were used to calculate a development measure, and the calculation process itself was supplemented with sensitivity analysis.

  13. No-core configuration-interaction model for the isospin- and angular-momentum-projected states

    NASA Astrophysics Data System (ADS)

    Satuła, W.; Båczyk, P.; Dobaczewski, J.; Konieczka, M.

    2016-08-01

    Background: Single-reference density functional theory is very successful in reproducing bulk nuclear properties like binding energies, radii, or quadrupole moments throughout the entire periodic table. Its extension to the multireference level allows for restoring symmetries and, in turn, for calculating transition rates. Purpose: We propose a new variant of the no-core-configuration-interaction (NCCI) model properly treating isospin and rotational symmetries. The model is applicable to any nucleus irrespective of its mass and neutron- and proton-number parity. It properly includes polarization effects caused by an interplay between the long- and short-range forces acting in the atomic nucleus. Methods: The method is based on solving the Hill-Wheeler-Griffin equation within a model space built of linearly dependent states having good angular momentum and properly treated isobaric spin. The states are generated by means of the isospin and angular-momentum projection applied to a set of low-lying (multi)particle-(multi)hole deformed Slater determinants calculated using the self-consistent Skyrme-Hartree-Fock approach. Results: The theory is applied to calculate energy spectra in N ≈ Z nuclei that are relevant from the point of view of a study of superallowed Fermi β decays. In particular, a new set of the isospin-symmetry-breaking corrections to these decays is given. Conclusions: It is demonstrated that the NCCI model is capable of capturing the main features of low-lying energy spectra in light and medium-mass nuclei using a relatively small model space and without any local readjustment of its low-energy coupling constants. Its flexibility and range of applicability make it an interesting alternative to the conventional nuclear shell model.

  14. Modeling The Shock Initiation of PBX-9501 in ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, L; Springer, H K; Mace, J

    The SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has determined the 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate the code predictions. The SMIS tests use a powder gun to shoot scaled NATO standard fragments at a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. The SMIS real-world shot scenario creates a unique test-bed because many of the fragments arrive at the impact plate off-center and at an angle of impact. The goal of these model validation experiments is to demonstrate the predictive capability of the Tarver-Lee Ignition and Growth (I&G) reactive flow model [2] in this fully 3-dimensional regime of Shock to Detonation Transition (SDT). The 3-dimensional Arbitrary Lagrange Eulerian hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations accurately reproduce the 'Go/No-Go' threshold of the Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied in a predictive fashion for the response of heterogeneous high explosives in the SDT regime.

  15. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  16. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
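    The two records above do not reproduce the panel model equations; a common choice for datasheet-based modeling is a single-diode description, so the sketch below uses an idealized single-diode I-V curve (series and shunt resistances neglected, all parameters hypothetical) together with a basic perturb-and-observe tracker, only to illustrate how an MPPT method can be exercised against such a model.

      import numpy as np

      K_B, Q = 1.380649e-23, 1.602176634e-19

      def panel_current(v, i_ph=8.2, i_0=1e-8, n=1.2, n_cells=60, t_cell=298.15):
          """Idealized single-diode I-V curve (series/shunt resistances neglected)."""
          v_t = n_cells * n * K_B * t_cell / Q      # modified thermal voltage
          return max(i_ph - i_0 * (np.exp(v / v_t) - 1.0), 0.0)

      def perturb_and_observe(v0=20.0, dv=0.2, steps=300):
          """Basic P&O MPPT: step the voltage in the direction that raises power."""
          v, p_prev, direction = v0, 0.0, 1.0
          for _ in range(steps):
              p = v * panel_current(v)
              if p < p_prev:
                  direction = -direction            # power dropped: reverse the perturbation
              p_prev = p
              v += direction * dv
          return v, v * panel_current(v)

      v_mpp, p_mpp = perturb_and_observe()
      print(f"V_mpp ~ {v_mpp:.1f} V, P_mpp ~ {p_mpp:.0f} W")

    In a full comparison along the lines described above, the irradiance and cell-temperature dependence of the model parameters would be derived from the manufacturer's datasheet before running the different MPPT methods against the same curve.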

  17. Application of classical models of chirality to optical rectification

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Ou; Gong, Li-Jing; Li, Chun-Fei

    2008-08-01

    Classical models of chirality are used to investigate the optical rectification effect in chiral molecular media. Calculation of the zero-frequency first hyperpolarizabilities of chiral molecules with different structures is performed and applied to the derivation of a dc electric-dipole polarization. The expression for the second-order nonlinear static-electric-dipole susceptibilities is obtained by theoretical derivation for isotropic chiral thin films. The microscopic mechanism producing optical rectification is analyzed in light of this calculation. Our calculation shows that optical rectification arises from the interaction between the electric field gradient (spatial dispersion) and chiral molecules in optically active liquids and solutions, which is consistent with the result given by Woźniak and Wagnière [Opt. Commun. 114, 131 (1995)]: the optical rectification depends on the fourth-order electric-dipole susceptibilities.

  18. Method of experimental and calculation determination of dissipative properties of carbon

    NASA Astrophysics Data System (ADS)

    Kazakova, Olga I.; Smolin, Igor Yu.; Bezmozgiy, Iosif M.

    2017-12-01

    This paper describes the process of defining relations between the damping ratio and strain/state levels in a material. For this purpose, an experimental-calculation approach was applied. The experimental research was performed on plane composite specimens. The tests were accompanied by finite element modeling using the ANSYS software. Optimization was used as a tool for setting the FEM properties and for finding the above-mentioned relations. The difference between the calculated and experimental results was used as the objective function of this optimization. The optimization cycle was implemented using the pSeven DATADVANCE software platform. The developed approach makes it possible to determine the relations between the damping ratio and strain/state levels in the material, which can be used for computer modeling of the structural response under dynamic loading.

  19. Calculations vs. measurements of remnant dose rates for SNS spent structures

    NASA Astrophysics Data System (ADS)

    Popova, I. I.; Gallmeier, F. X.; Trotter, S.; Dayton, M.

    2018-06-01

    Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify the calculation methods for radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, a complex and rigorous geometry model of the structures and their surroundings is applied. The neutronics analyses were carried out using the Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculated results were analysed and compared to the measured dose rates, showing overall good agreement within 40% on average.

  20. Calculations vs. measurements of remnant dose rates for SNS spent structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Irina I.; Gallmeier, Franz X.; Trotter, Steven M.

    Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify the calculation methods for radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, a complex and rigorous geometry model of the structures and their surroundings is applied. The neutronics analyses were carried out using the Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculated results were analysed and compared to the measured dose rates, showing overall good agreement within 40% on average.

  1. A Comparison Between Modeled and Measured Clear-Sky Radiative Shortwave Fluxes in Arctic Environments, with Special Emphasis on Diffuse Radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnard, James C.; Flynn, Donna M.

    2002-10-08

    The ability of the SBDART radiative transfer model to predict clear-sky diffuse and direct normal broadband shortwave irradiances is investigated. Model calculations of these quantities are compared with data from the Atmospheric Radiation Measurement (ARM) program’s Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites. The model tends to consistently underestimate the direct normal irradiances at both sites by about 1%. In regards to clear-sky diffuse irradiance, the model overestimates this quantity at the SGP site in a manner similar to what has been observed in other studies (Halthore and Schwartz, 2000). The difference between the diffuse SBDART calculations and Halthore and Schwartz’s MODTRAN calculations is very small, thus demonstrating that SBDART performs similarly to MODTRAN. SBDART is then applied to the NSA site, and here it is found that the discrepancy between the model calculations and corrected diffuse measurements (corrected for daytime offsets, Dutton et al., 2001) is 0.4 W/m2 when averaged over the 12 cases considered here. Two cases of diffuse measurements from a shaded “black and white” pyranometer are also compared with the calculations and the discrepancy is again minimal. Thus, it appears as if the “diffuse discrepancy” that exists at the SGP site does not exist at the NSA sites. We cannot yet explain why the model predicts diffuse radiation well at one site but not at the other.

  2. Applying the relaxation model of interfacial heat transfer to calculate the liquid outflow with supercritical initial parameters

    NASA Astrophysics Data System (ADS)

    Alekseev, M. V.; Vozhakov, I. S.; Lezhnin, S. I.; Pribaturin, N. A.

    2017-09-01

    A comparative numerical simulation of supercritical fluid outflow has been performed using thermodynamic-equilibrium and non-equilibrium relaxation models of the phase transition for different relaxation times. The model with a fixed relaxation time, based on the experimentally determined radius of liquid droplets, was compared with a model in which the relaxation time changes dynamically, calculated by formula (7) as a function of local parameters. It is shown that the relaxation time varies significantly depending on the thermodynamic conditions of the two-phase medium during outflow. The application of the proposed model with dynamic relaxation time leads to qualitatively correct results. The model can be used for both vaporization and condensation processes. It is shown that the model can be improved on the basis of processing experimental data on the distribution of the droplet sizes formed during the break-up of the liquid jet.

  3. KIDS Nuclear Energy Density Functional: 1st Application in Nuclei

    NASA Astrophysics Data System (ADS)

    Gil, Hana; Papakonstantinou, Panagiota; Hyun, Chang Ho; Oh, Yongseok

    We apply the KIDS (Korea: IBS-Daegu-Sungkyunkwan) nuclear energy density functional model, which is based on the Fermi momentum expansion, to the study of the properties of lj-closed nuclei. The parameters of the model are determined by the nuclear properties at the saturation density and by theoretical calculations on pure neutron matter. For applying the model to the study of nuclei, we rely on the Skyrme force model, where the Skyrme force parameters are determined through the KIDS energy density functional. Solving the Hartree-Fock equations, we obtain the energies per particle and charge radii of the closed magic nuclei 16O, 28O, 40Ca, 48Ca, 60Ca, 90Zr, 132Sn, and 208Pb. The results are compared with the observed data, and further improvement of the model is briefly discussed.

  4. Numerical Tests for the Problem of U-Pu Fuel Burnup in Fuel Rod and Polycell Models Using the MCNP Code

    NASA Astrophysics Data System (ADS)

    Muratov, V. G.; Lopatkin, A. V.

    An important aspect of verifying the engineering techniques used in the safety analysis of MOX-fuelled reactors is the preparation of test calculations to determine nuclide composition variations under irradiation and to analyse burnup problem errors resulting from various factors, such as the effect of nuclear data uncertainties on nuclide concentration calculations. So far, no universally recognized tests have been devised. A calculation technique has been developed for solving the problem using up-to-date calculation tools and the latest versions of nuclear libraries. Initially, in 1997, a code was developed under ISTC Project No. 116 to calculate the burnup in one VVER-1000 fuel rod using the MCNP code. Later, the authors developed a computation technique that allows fuel burnup to be calculated in models of a fuel rod, a fuel assembly, or the whole reactor, making it applicable to fuel burnup in all types of nuclear reactors and subcritical blankets.

  5. Development of Subspace-based Hybrid Monte Carlo-Deterministric Algorithms for Reactor Physics Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 - 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  6. Sao Paulo potential as a tool for calculating S factors of fusion reactions in dense stellar matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasques, L. R.; Beard, M.; Wiescher, M.

    2007-10-15

    The goal of this paper is to test and justify the use of the Sao Paulo potential model for calculating astrophysical S factors for reactions involving stable and neutron-rich nuclei. In particular, we focus on the theoretical description of S factors at low energies. This is important for evaluating the reaction rates in dense stellar matter. We calculate the S factors for a number of reactions (16O+16O, 20O+20O, 20O+26Ne, 20O+32Mg, 26Ne+26Ne, 26Ne+32Mg, 32Mg+32Mg, 22O+22O, 24O+24O) with the Sao Paulo potential in the framework of a one-dimensional barrier penetration model. This approach can be easily applied for many other reactions involving different isotopes. To test the consistency of the model predictions, we compare our calculations with those performed within the coupled-channels and fermionic molecular dynamics models. Calculated S factors are parametrized by a simple analytic formula. The main properties and uncertainties of reaction rates (appropriate to dense matter in cores of massive white dwarfs and crusts of accreting neutron stars) are outlined.
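    For readers unfamiliar with the quantity, the astrophysical S factor in the usual textbook convention (not specific to this paper) is related to the fusion cross section by

      \sigma(E) = \frac{S(E)}{E}\,\exp(-2\pi\eta),
      \qquad
      \eta = \frac{Z_1 Z_2 e^2}{\hbar v},

    where \eta is the Sommerfeld parameter and v the relative velocity; factoring out the steep Coulomb-barrier penetration leaves S(E) smooth enough to be parametrized by the simple analytic formula mentioned above.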

  7. SS-HORSE method for studying resonances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blokhintsev, L. D.; Mazur, A. I.; Mazur, I. A., E-mail: 008043@pnu.edu.ru

    A new method for analyzing resonance states based on the Harmonic-Oscillator Representation of Scattering Equations (HORSE) formalism and analytic properties of partial-wave scattering amplitudes is proposed. The method is tested by applying it to the model problem of neutral-particle scattering and can be used to study resonance states on the basis of microscopic calculations performed within various versions of the shell model.

  8. Cross Sections From Scalar Field Theory

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Dick, Frank; Norman, Ryan B.; Nasto, Rachel

    2008-01-01

    A one pion exchange scalar model is used to calculate differential and total cross sections for pion production through nucleon-nucleon collisions. The collisions involve intermediate delta particle production and decay to nucleons and a pion. The model provides the basic theoretical framework for scalar field theory and can be applied to particle production processes where the effects of spin can be neglected.

  9. Large-scale configuration interaction description of the structure of nuclei around 100Sn and 208Pb

    NASA Astrophysics Data System (ADS)

    Qi, Chong

    2016-08-01

    In this contribution I would like to briefly discuss the recent developments of the nuclear configuration interaction shell model approach. As examples, we apply the model to calculate the structure and decay properties of low-lying states in neutron-deficient nuclei around 100Sn and 208Pb that are of great experimental and theoretical interest.

  10. Constraining f(R) gravity in solar system, cosmology and binary pulsar systems

    NASA Astrophysics Data System (ADS)

    Liu, Tan; Zhang, Xing; Zhao, Wen

    2018-02-01

    The f (R) gravity can be cast into the form of a scalar-tensor theory, and the scalar degree of freedom can be suppressed in high-density regions by the chameleon mechanism. In this article, for general f (R) gravity, using a scalar-tensor representation with the chameleon mechanism, we calculate the parametrized post-Newtonian parameters γ and β, the effective gravitational constant Geff, and the effective cosmological constant Λeff. In addition, for general f (R) gravity, we also calculate the rate of orbital period decay of binary systems due to gravitational radiation. We then apply these results to specific f (R) models (the Hu-Sawicki, Tsujikawa and Starobinsky models) and derive constraints on the model parameters by combining observations in the solar system, on cosmological scales and in binary systems.

  11. Numerical modeling of reverse recovery characteristic in silicon pin diodes

    NASA Astrophysics Data System (ADS)

    Yamashita, Yusuke; Tadano, Hiroshi

    2018-07-01

    A new numerical reverse recovery model of silicon pin diodes is proposed, based on approximating the reverse recovery waveform by a simple shape. This is the first model to calculate the reverse recovery characteristics using numerical equations without adjustment by fitting equations and fitting parameters. To verify the validity and accuracy of the numerical model, the calculation results from the model are verified against device simulation results.

  12. Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code

    NASA Astrophysics Data System (ADS)

    Sabotinov, Luben; Chevrier, Patrick

    The best-estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model has been developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), four separate loops, the pressurizer, horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, loop seal clearance, etc. The experimental and calculated results are very sensitive with respect to the fuel cladding temperature, which shows periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up have been reasonably predicted.

  13. Calculation of the surface tension of liquid Ga-based alloys

    NASA Astrophysics Data System (ADS)

    Dogan, Ali; Arslan, Hüseyin

    2018-05-01

    As is known, Eyring and his collaborators applied the structure theory to the properties of binary liquid mixtures. In this work, the Eyring model has been extended to calculate the surface tension of liquid Ga-Bi, Ga-Sn and Ga-In binary alloys. It was found that the addition of Sn, In and Bi to Ga leads to a significant decrease in the surface tension of the three Ga-based alloy systems, especially for the Ga-Bi alloys. The calculated surface tension values of these alloys exhibit negative deviation from the corresponding ideal mixing isotherms. Moreover, a comparison between the calculated results and the corresponding literature data indicates a good agreement.

  14. Adsorption of organic molecules on mineral surfaces studied by first-principle calculations: A review.

    PubMed

    Zhao, Hongxia; Yang, Yong; Shu, Xin; Wang, Yanwei; Ran, Qianping

    2018-04-09

    First-principle calculations, especially by density functional theory (DFT) methods, are becoming a powerful technique to study the molecular structure and properties of organic/inorganic interfaces. This review introduces some recent examples of the study of adsorption models of organic molecules or oligomers on mineral surfaces and of interfacial properties obtained from first-principles calculations. The aim of this contribution is to inspire scientists to benefit from first-principle calculations and to apply similar strategies when studying and tailoring interfacial properties at the atomistic scale, especially for those interested in the design and development of new molecules and new products. Copyright © 2017. Published by Elsevier B.V.

  15. Contact mechanics for layered materials with randomly rough surfaces.

    PubMed

    Persson, B N J

    2012-03-07

    The contact mechanics model of Persson is applied to layered materials. We calculate the M function, which relates the surface stress to the surface displacement, for a layered material, where the top layer (thickness d) has different elastic properties than the semi-infinite solid below. Numerical results for the contact area as a function of the magnification are presented for several cases. As an application, we calculate the fluid leak rate for laminated rubber seals.

  16. Numerical renormalization group calculation of impurity internal energy and specific heat of quantum impurity models

    NASA Astrophysics Data System (ADS)

    Merker, L.; Costi, T. A.

    2012-08-01

    We introduce a method to obtain the specific heat of quantum impurity models via a direct calculation of the impurity internal energy requiring only the evaluation of local quantities within a single numerical renormalization group (NRG) calculation for the total system. For the Anderson impurity model we show that the impurity internal energy can be expressed as a sum of purely local static correlation functions and a term that involves also the impurity Green function. The temperature dependence of the latter can be neglected in many cases, thereby allowing the impurity specific heat Cimp to be calculated accurately from local static correlation functions; specifically via Cimp = ∂Eionic/∂T + (1/2) ∂Ehyb/∂T, where Eionic and Ehyb are the energies of the (embedded) impurity and the hybridization energy, respectively. The term involving the Green function can also be evaluated in cases where its temperature dependence is non-negligible, adding an extra term to Cimp. For the nondegenerate Anderson impurity model, we show by comparison with exact Bethe ansatz calculations that the results recover accurately both the Kondo induced peak in the specific heat at low temperatures as well as the high-temperature peak due to the resonant level. The approach applies to multiorbital and multichannel Anderson impurity models with arbitrary local Coulomb interactions. An application to the Ohmic two-state system and the anisotropic Kondo model is also given, with comparisons to Bethe ansatz calculations. The approach could also be of interest within other impurity solvers, for example, within quantum Monte Carlo techniques.

  17. SU-F-T-43: Prediction of Dose Increments by Brain Metastases Resection Cavity Shrinkage Model with I-125 and Cs-131 LDR Seed Implantations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, D; Braunstein, S; Sneed, P

    Purpose: This work aims to determine dose variability via a brain metastases resection cavity shrinkage model (RC-SM) with I-125 or Cs-131 LDR seed implantations. Methods: The RC-SM was developed to represent sequential volume changes in 95 consecutive brain metastases patients. All patients underwent serial surveillance MR and the change in cavity volume was recorded for each patient. For the initial resection cavity, a prolate-ellipsoid cavity model was suggested, and volume shrinkage rates were applied corresponding to 1.7, 3.6, 5.9, 11.7, and 20.5 months after craniotomy. An extra ring structure (6 mm) was added on the surface of the resection volume and the same shrinkage rates were applied. A total of 31 LDR seeds were evenly distributed on the surface of the resection cavity. The Amersham 6711 I-125 seed model (Oncura, Arlington Heights, IL) and the Model Cs-1 Rev2 Cs-131 seed model (IsoRay, Richland, WA) were used for TG-43U1 dose calculation, and an in-house-programmed 3D volumetric dose calculation system was used for the resection cavity rigid model (RC-RM) and the RC-SM dose calculations. Results: The initial resection cavity volume shrunk to 25±6%, 35±6.8%, 42±7.7%, 47±9.5%, and 60±11.6%, with respect to the sequential MR images post craniotomy, and the shrinkage rate (SR) was calculated as SR = 56.41·exp(−0.2024·t) + 33.99 with an R-square value of 0.98. The normal brain dose, assessed via the dose to the ring structure, was 29.34% and 27.95% higher with the RC-SM than with the RC-RM for I-125 and Cs-131, respectively. Comparing I-125 and Cs-131 seeds within the same model, the I-125 doses were 9.17% and 10.35% higher than the Cs-131 doses for the RC-RM and the RC-SM, respectively. Conclusion: A realistic RC-SM should be considered during LDR brain seed implantation and post-implant planning to prevent potential overdose. The RC-SM calculation shows that Cs-131 is more advantageous in sparing normal brain as the resection cavity volume changes after LDR seed implantation.

  18. A Maneuvering Flight Noise Model for Helicopter Mission Planning

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric; Rau, Robert; May, Benjamin; Hobbs, Christopher

    2015-01-01

    A new model for estimating the noise radiation during maneuvering flight is developed in this paper. The model applies the Quasi-Static Acoustic Mapping (Q-SAM) method to a database of acoustic spheres generated using the Fundamental Rotorcraft Acoustics Modeling from Experiments (FRAME) technique. A method is developed to generate a realistic flight trajectory from a limited set of waypoints and is used to calculate the quasi-static operating condition and corresponding acoustic sphere for the vehicle throughout the maneuver. By using a previously computed database of acoustic spheres, the acoustic impact of proposed helicopter operations can be rapidly predicted for use in mission-planning. The resulting FRAME-QS model is applied to near-horizon noise measurements collected for the Bell 430 helicopter undergoing transient pitch up and roll maneuvers, with good agreement between the measured data and the FRAME-QS model.

  19. Low rank approach to computing first and higher order derivatives using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-07-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large-scale computational models. By using the principles of the Efficient Subspace Method (ESM), low-rank approximations of the first- and higher-order derivatives can be calculated using minimal computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
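    As an illustration of the rank-finding idea (with finite-difference directional derivatives standing in for the AD-computed ones, and a hypothetical low-rank model), the sketch below probes a model along random directions, stacks the resulting directional derivatives, and reads the effective rank off the singular values.

      import numpy as np

      def effective_rank(model, x0, n_probe=30, eps=1e-6, tol=1e-8):
          """Estimate the numerical rank of a model Jacobian by random probing.

          Directional derivatives J @ w are approximated here by finite
          differences; in the paper they come from automatic differentiation.
          """
          rng = np.random.default_rng(0)
          f0 = model(x0)
          cols = []
          for _ in range(n_probe):
              w = rng.standard_normal(x0.size)
              w /= np.linalg.norm(w)
              cols.append((model(x0 + eps * w) - f0) / eps)   # ~ J @ w
          sing = np.linalg.svd(np.column_stack(cols), compute_uv=False)
          return int(np.sum(sing > tol * sing[0])), sing

      # Hypothetical model with 200 inputs but an intrinsically low-rank response
      A = (np.random.default_rng(1).standard_normal((50, 5))
           @ np.random.default_rng(2).standard_normal((5, 200)))
      model = lambda x: np.tanh(A @ x)
      rank, _ = effective_rank(model, x0=np.zeros(200))
      print("effective rank ~", rank)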

  20. A microwave backscattering model for precipitation

    NASA Astrophysics Data System (ADS)

    Ermis, Seda

    A geophysical microwave backscattering model for space-borne and ground-based remote sensing of precipitation is developed and used to analyze backscattering measurements from rain and snow type precipitation. Vector Radiative Transfer (VRT) equations for a multilayered inhomogeneous medium are applied to the precipitation region for calculation of the backscattered intensity. Numerical solution of the VRT equation for multiple layers is provided by the matrix doubling method to take into account close-range interactions between particles. In previous studies, the VRT model was used to calculate backscattering from a rain column on a sea surface. In the model, Mie scattering theory for closely spaced scatterers was used to determine the phase matrix for each sublayer characterized by a set of parameters. The scatterers, i.e. raindrops within the sublayers, were modelled as spheres with complex permittivities. The rain layer was bounded by rough boundaries; the interface between the cloud and the rain column as well as the interface between the sea surface and the rain were both analyzed using the integral equation model (IEM). Therefore, the phase matrix for the entire rain column was generated by the combination of surface and volume scattering. Besides Mie scattering, in this study we use the T-matrix approach to examine the effect of shape on the backscattered intensities, since larger raindrops are most likely oblique in shape. Analyses show that the effect of raindrop obliquity on the backscattered wave is related to the size of the scatterers and the operating frequency. For the ground-based measurement system, the VRT model is applied to simulate the precipitation column in the horizontal direction. Therefore, the backscattered reflectivities for each unit range volume are calculated from the backscattering radar cross sections by considering the radar range and the effective illuminated area of the radar beam. The volume scattering phase matrices for each range interval are calculated by Mie scattering theory. The VRT equations are solved by the matrix doubling method to compute the phase matrix for the entire radar beam. Model results are validated against data measured by the X-band dual-polarization Phase Tilt Weather Radar (PTWR) for snow, rain, and wet hail type precipitation. The geophysical parameters giving the best fit to the measured reflectivities are used in previous models, i.e. the Rayleigh approximation and Mie scattering, and compared with the VRT model. Results show that reflectivities calculated by the VRT model differ by up to 10 dB from the Rayleigh approximation model and by up to 5 dB from Mie scattering theory, due to both multiple scattering and attenuation losses, for rain rates as high as 80 mm/h.
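    The matrix doubling step itself is compact; the sketch below shows the adding/doubling recursion for the idealized case of identical, symmetrically scattering sublayers described by small hypothetical reflection and transmission matrices (the radar model's full polarimetric phase matrices are far larger).

      import numpy as np

      def add_layers(r1, t1, r2, t2):
          """Combine two plane-parallel layers (adding method, symmetric layers).

          r*, t* are discrete-ordinate reflection/transmission matrices for
          illumination from above; each layer is assumed to scatter symmetrically.
          """
          eye = np.eye(r1.shape[0])
          r_tot = r1 + t1 @ np.linalg.inv(eye - r2 @ r1) @ r2 @ t1
          t_tot = t2 @ np.linalg.inv(eye - r1 @ r2) @ t1
          return r_tot, t_tot

      def double_layer(r, t, n_doublings):
          """Matrix doubling: repeatedly add a layer to itself to double its thickness."""
          for _ in range(n_doublings):
              r, t = add_layers(r, t, r, t)
          return r, t

      # Hypothetical 2-stream matrices for a thin sublayer
      r0 = np.array([[0.02, 0.01], [0.01, 0.02]])
      t0 = np.array([[0.95, 0.02], [0.02, 0.95]])
      R, T = double_layer(r0, t0, n_doublings=6)    # 2**6 = 64 sublayers
      print(R, T, sep="\n")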

  1. SIGMA: A Knowledge-Based Simulation Tool Applied to Ecosystem Modeling

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Keller, Richard; Lawless, James G. (Technical Monitor)

    1994-01-01

    The need for better technology to facilitate building, sharing and reusing models is generally recognized within the ecosystem modeling community. The Scientists' Intelligent Graphical Modelling Assistant (SIGMA) creates an environment for model building, sharing and reuse which provides an alternative to more conventional approaches which too often yield poorly documented, awkwardly structured model code. The SIGMA interface presents the user with a list of model quantities which can be selected for computation. Equations to calculate the model quantities may be chosen from an existing library of ecosystem modeling equations, or built using a specialized equation editor. Inputs for the equations may be supplied by data or by calculation from other equations. Each variable and equation is expressed using ecological terminology and scientific units, and is documented with explanatory descriptions and optional literature citations. Automatic scientific unit conversion is supported and only physically-consistent equations are accepted by the system. The system uses knowledge-based semantic conditions to decide which equations in its library make sense to apply in a given situation, and supplies these to the user for selection. The equations and variables are graphically represented as a flow diagram which provides a complete summary of the model. Forest-BGC, a stand-level model that simulates photosynthesis and evapo-transpiration for conifer canopies, was originally implemented in Fortran and subsequently re-implemented using SIGMA. The SIGMA version reproduces daily results and also provides a knowledge base which greatly facilitates inspection, modification and extension of Forest-BGC.

  2. Hybrid MD-Nernst Planck Model of Alpha-hemolysin Conductance Properties

    NASA Technical Reports Server (NTRS)

    Cozmuta, Ioana; O'Keefer, James T.; Bose, Deepak; Stolc, Viktor

    2006-01-01

    Motivated by experiments in which an applied electric field translocates polynucleotides through an alpha-hemolysin protein channel causing ionic current transient blockade, a hybrid simulation model is proposed to predict the conductance properties of the open channel. Time scales corresponding to ion permeation processes are reached using the Poisson-Nernst-Planck (PNP) electro-diffusion model in which both the solvent and local ion concentrations are represented as a continuum. The diffusion coefficients of the ions (K(+) and Cl(-)) input in the PNP model are, however, calculated from all-atom molecular dynamics (MD). In the MD simulations, a reduced representation of the channel is used. The channel is solvated in a 1 M KCl solution, and an external electric field is applied. The pore-specific diffusion coefficients for both ionic species are reduced 5-7 times in comparison to bulk values. Significant statistical variations (17-45%) of the pore-ion diffusivities are observed. Within the statistics, the ionic diffusivities remain invariable for a range of external applied voltages between 30 and 240 mV. In the 2D-PNP calculations, the pore stem is approximated by a smooth cylinder of radius approx. 9 A with two constriction blocks where the radius is reduced to approx. 6 A. The electrostatic potential includes the contribution from the atomistic charges. The MD-PNP model shows that the atomic charges are responsible for the rectifying behaviour and for the slight anion selectivity of the alpha-hemolysin pore. Independent of the hierarchy between the anion and cation diffusivities, the anionic contribution to the total ionic current will dominate. The predictions of the MD-PNP model are in good agreement with experimental data and give confidence in the present approach of bridging time scales by combining a microscopic and a macroscopic model.

  3. First Human Brain Imaging by the jPET-D4 Prototype With a Pre-Computed System Matrix

    NASA Astrophysics Data System (ADS)

    Yamaya, Taiga; Yoshida, Eiji; Obi, Takashi; Ito, Hiroshi; Yoshikawa, Kyosan; Murayama, Hideo

    2008-10-01

    The jPET-D4 is a novel brain PET scanner which aims to achieve not only high spatial resolution but also high scanner sensitivity by using 4-layer depth-of-interaction (DOI) information. The dimensions of the system matrix for the jPET-D4 are 3.3 billion (lines-of-response) times 5 million (image elements) when a standard field-of-view (FOV) of 25 cm diameter is sampled with (1.5 mm)^3 voxels. The size of the system matrix is estimated as 117 petabytes (PB) with an accuracy of 8 bytes per element. An on-the-fly calculation is usually used to deal with such a huge system matrix. However, we cannot avoid an extension of the calculation time when we improve the accuracy of the system modeling. In this work, we implemented an alternative approach based on pre-calculation of the system matrix. A histogram-based 3D OS-EM algorithm was implemented on a desktop workstation with 32 GB of memory installed. The 117 PB system matrix was compressed under the limited amount of computer memory by (1) eliminating zero elements, (2) applying the DOI compression (DOIC) method and (3) applying rotational symmetry and an axial shift property of the crystal arrangement. Spanning, which degrades axial resolution, was not applied. The system modeling and the DOIC method, which had been validated in 2D image reconstruction, were extended to a 3D implementation. In particular, a new system model including the DOIC transformation was introduced to suppress the resolution loss caused by the DOIC method. Experimental results showed that the jPET-D4 has almost uniform spatial resolution of better than 3 mm over the FOV. Finally, the first human brain images were obtained with the jPET-D4.
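    The quoted matrix size follows directly from the stated dimensions; a quick check (assuming dense storage at 8 bytes per element, and reading the 117 PB figure as binary petabytes):

      n_lor, n_voxels, bytes_per_element = 3.3e9, 5e6, 8
      total_bytes = n_lor * n_voxels * bytes_per_element
      print(f"{total_bytes:.2e} bytes")                 # ~1.32e+17 bytes
      print(f"{total_bytes / 1e15:.0f} PB (decimal)")   # ~132 PB
      print(f"{total_bytes / 2**50:.0f} PiB (binary)")  # ~117 PiB, matching the abstract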

  4. CL-20/DNB co-crystal based PBX with PEG: molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiang; Gao, Pei; Xiao, Ji Jun; Zhao, Feng; Xiao, He Ming

    2016-12-01

    Molecular dynamics simulation was carried out for CL-20/DNB co-crystal based PBX (polymer-bonded explosive) blended with polymer PEG (polyethylene glycol). In this paper, the miscibility of the PBX models is investigated through the calculated binding energy. Pair correlation function (PCF) analysis is applied to study the interaction of the interface structures in the PBX models. The mechanical properties of PBXs are also discussed to understand the change of the mechanical properties after adding the polymer. Moreover, the calculated diffusion coefficients of the interfacial explosive molecules are used to discuss the dispersal ability of CL-20 and DNB molecules in the interface layer.

  5. An analysis method for multi-component airfoils in separated flow

    NASA Technical Reports Server (NTRS)

    Rao, B. M.; Duorak, F. A.; Maskew, B.

    1980-01-01

    The multi-component airfoil program (Langley-MCARF) for attached flow is modified to accept the free vortex sheet separation-flow model program (Analytical Methods, Inc.-CLMAX). The viscous effects are incorporated into the calculation by representing the boundary layer displacement thickness with an appropriate source distribution. The separation flow model incorporated into MCARF was applied to single component airfoils. Calculated pressure distributions for angles of attack up to the stall are in close agreement with experimental measurements. Even at higher angles of attack beyond the stall, correct trends of separation, decrease in lift coefficients, and increase in pitching moment coefficients are predicted.

  6. Finite element modeling as a tool for predicting the fracture behavior of robocast scaffolds.

    PubMed

    Miranda, Pedro; Pajares, Antonia; Guiberteau, Fernando

    2008-11-01

    The use of finite element modeling to calculate the stress fields in complex scaffold structures and thus predict their mechanical behavior during service (e.g., as load-bearing bone implants) is evaluated. The method is applied to identifying the fracture modes and estimating the strength of robocast hydroxyapatite and beta-tricalcium phosphate scaffolds, consisting of a three-dimensional lattice of interpenetrating rods. The calculations are performed for three testing configurations: compression, tension and shear. Different testing orientations relative to the calcium phosphate rods are considered for each configuration. The predictions for the compressive configurations are compared to experimental data from uniaxial compression tests.

  7. [Mathematical modeling of the kinematics of a pilot's head while catapulting into an air stream].

    PubMed

    Kharchenko, V I; Golovleva, N V; Konakhevich, Iu G; Liapin, V A; Mar'in, A V

    1987-01-01

    The trajectories of head movements in the helmet and the velocities of impact contact with the seat and the anterior part of the cockpit were calculated for every stage of the catapulting process, taking the mass-inertia parameters of the helmets into account. Kinematic models were used to describe the biomechanical parameters of the head-neck system. Special attention was given to the case of catapulting into the air stream. The effect on the nodding motion of the head of aerodynamic forces acting on the human body and the catapult ejection seat at airflow velocities of 700-800 and 1300 km/hr was calculated.

  8. Cluster structure and Coulomb shift in two-center mirror systems

    NASA Astrophysics Data System (ADS)

    Nakao, M.; Umehara, H.; Sonoda, S.; Ebata, S.; Ito, M.

    2017-11-01

    The α + 14C elastic scattering and the nuclear structure of its compound system, 18O = α + 14C, are analyzed on the basis of a semi-microscopic model. The α + 14C interaction potential is constructed from the double folding (DF) model with the effective nucleon-nucleon interaction of the density-dependent Michigan 3-range Yukawa. The DF potential is applied to the α + 14C elastic scattering in the energy range of Eα/Aα = 5.5-8.8 MeV, and the observed differential cross sections are reasonably reproduced. The energy spectra of 18O are calculated by employing the orthogonality condition model (OCM) plus the absorbing boundary condition (ABC). The OCM + ABC calculation predicts the formation of a 0+ resonance around E = 3 MeV with respect to the α threshold, which seems to correspond to the resonance identified in the recent experiment. We also apply the OCM + ABC calculation to the mirror system, 18Ne = α + 14O, and the Coulomb shift of 18O - 18Ne is evaluated. We find that the Coulomb shift is clearly reduced in the excited 0+ state due to the development of the α cluster structure. This result strongly supports the Coulomb shift as a candidate for a new probe to identify clustering phenomena.

  9. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  10. Finite Element Based HWB Centerbody Structural Optimization and Weight Prediction

    NASA Technical Reports Server (NTRS)

    Gern, Frank H.

    2012-01-01

    This paper describes a scalable structural model suitable for Hybrid Wing Body (HWB) centerbody analysis and optimization. The geometry of the centerbody and primary wing structure is based on a Vehicle Sketch Pad (VSP) surface model of the aircraft and a FLOPS compatible parameterization of the centerbody. Structural analysis, optimization, and weight calculation are based on a Nastran finite element model of the primary HWB structural components, featuring centerbody, mid section, and outboard wing. Different centerbody designs like single bay or multi-bay options are analyzed and weight calculations are compared to current FLOPS results. For proper structural sizing and weight estimation, internal pressure and maneuver flight loads are applied. Results are presented for aerodynamic loads, deformations, and centerbody weight.

  11. Efficient approach to obtain free energy gradient using QM/MM MD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices

    2015-12-31

    An efficient computational approach, the charge and atom dipole response kernel (CDRK) model, is described that accounts for polarization effects of the quantum mechanical (QM) region using the charge response and atom dipole response kernels in free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies and energy gradients of QM and MM atoms obtained from expensive QM/MM calculations at drastically reduced computational cost. The model is applied to the acylation reaction in the hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.

  12. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under simultaneous chemical and phase equilibrium in which the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The calculation of a reactive azeotrope is modeled as a nonlinear algebraic system comprising phase equilibrium, chemical equilibrium, and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of calculating reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system of industrial interest with more than one azeotrope, isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a two-dimensional subdomain, the identification of reactive azeotropes. A strategy for calculating multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
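
    To make the random-search idea above concrete, the following Python sketch applies a Luus-Jaakola-style adaptive random search to the squared residual norm of a generic nonlinear algebraic system; the residual function, box bounds, and shrink factor are hypothetical placeholders and not the MTBE equilibrium equations of the paper:

      import numpy as np

      def residuals(x):
          # Hypothetical stand-in for the azeotropy/equilibrium equations:
          # any smooth nonlinear system F(x) = 0 can be plugged in here.
          x1, x2 = x
          return np.array([x1**2 + x2**2 - 1.0,   # e.g. a constraint surface
                           x1 - x2**3])           # e.g. an equilibrium relation

      def luus_jaakola(f, lower, upper, n_outer=200, n_inner=50, shrink=0.95, seed=0):
          """Minimize ||f(x)||^2 by adaptive random search in a shrinking box."""
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          best = rng.uniform(lower, upper)
          best_val = np.sum(f(best)**2)
          radius = (upper - lower) / 2.0
          for _ in range(n_outer):
              for _ in range(n_inner):
                  trial = np.clip(best + rng.uniform(-radius, radius), lower, upper)
                  val = np.sum(f(trial)**2)
                  if val < best_val:
                      best, best_val = trial, val
              radius *= shrink          # contract the search region each outer pass
          return best, best_val

      root, obj = luus_jaakola(residuals, lower=[-2, -2], upper=[2, 2])
      print(root, obj)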

  13. Development of DPD coarse-grained models: From bulk to interfacial properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solano Canchaya, José G.; Dequidt, Alain, E-mail: alain.dequidt@univ-bpclermont.fr; Goujon, Florent

    2016-08-07

    A new Bayesian method was recently introduced for developing coarse-grain (CG) force fields for molecular dynamics. The CG models designed for dissipative particle dynamics (DPD) are optimized based on trajectory matching. Here we extend this method to improve transferability across thermodynamic conditions. We demonstrate the capability of the method by developing a CG model of n-pentane from constant-NPT atomistic simulations of bulk liquid phases and we apply the CG-DPD model to the calculation of the surface tension of the liquid-vapor interface over a large range of temperatures. The coexisting densities, vapor pressures, and surface tensions calculated with different CG and atomistic models are compared to experiments. Depending on the database used for the development of the potentials, it is possible to build a CG model which performs very well in the reproduction of the surface tension on the orthobaric curve.

  14. Application of the three-component bidirectional reflectance distribution function model to Monte Carlo calculation of spectral effective emissivities of nonisothermal blackbody cavities.

    PubMed

    Prokhorov, Alexander; Prokhorova, Nina I

    2012-11-20

    We applied the bidirectional reflectance distribution function (BRDF) model consisting of diffuse, quasi-specular, and glossy components to the Monte Carlo modeling of spectral effective emissivities for nonisothermal cavities. A method for extending a monochromatic three-component (3C) BRDF model to a continuous spectral range is proposed. The initial data for this method are the BRDFs measured in the plane of incidence at a single wavelength and several incidence angles and the directional-hemispherical reflectance measured at one incidence angle within a finite spectral range. We propose a Monte Carlo algorithm for calculating the spectral effective emissivities of nonisothermal cavities whose internal surface is described by the wavelength-dependent 3C BRDF model. The results obtained for a cylindroconical nonisothermal cavity are discussed and compared with results obtained using the conventional specular-diffuse model.

  15. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, M. J., E-mail: macdonm@umich.edu; SLAC National Accelerator Laboratory, Menlo Park, California 94025; Vorberger, J.

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit [Higginbotham, J. Appl. Phys. 115, 174906 (2014)]. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. The effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.

  16. Power flows and Mechanical Intensities in structural finite element analysis

    NASA Technical Reports Server (NTRS)

    Hambric, Stephen A.

    1989-01-01

    The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations are derived for beam, plate, and solid element types. NASTRAN is used to calculate the required velocity, force, and stress results of an analysis, which a post-processor then uses to calculate the power flow quantities. The SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view the analysis results easily and efficiently, applying shape optimization methods to the problem with power flow variables as design constraints, and adding the power flow capability to NASTRAN.

  17. Dynamical discrete/continuum linear response shells theory of solvation: convergence test for NH4+ and OH- ions in water solution using DFT and DFTB methods.

    PubMed

    de Lima, Guilherme Ferreira; Duarte, Hélio Anderson; Pliego, Josefredo R

    2010-12-09

    A new dynamical discrete/continuum solvation model was tested for NH(4)(+) and OH(-) ions in water solvent. The method is similar to continuum solvation models in the sense that the linear response approximation is used. However, unlike pure continuum models, explicit solvent molecules are included in the inner shell, which allows adequate treatment of the specific solute-solvent interactions present in the first solvation shell, the main drawback of continuum models. Molecular dynamics calculations coupled with the SCC-DFTB method are used to generate the configurations of the solute in a box with 64 water molecules, while the interaction energies are calculated at the DFT level. We tested the convergence of the method using a variable number of explicit water molecules and found that even a small number of waters (as low as 14) is able to produce converged values. Our results also point out that the Born model, often used for long-range correction, is not reliable, and our method should be applied for more accurate calculations.

  18. Nonlinear refraction and reflection travel time tomography

    USGS Publications Warehouse

    Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.

    1998-01-01

    We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
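
    The regularized, linearized update described above can be sketched generically as a damped least-squares step solved by conjugate gradients; the 1D Laplacian, toy sensitivity matrix, and dimensions below are illustrative assumptions, not the ray-based travel-time kernels of the actual method:

      import numpy as np
      from scipy.sparse.linalg import cg

      def laplacian_1d(n):
          """Second-difference operator used to smooth the slowness model."""
          L = np.zeros((n, n))
          for i in range(1, n - 1):
              L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
          return L

      def regularized_step(G, d_resid, m0, lam=1.0):
          """One linearized step: minimize ||G dm - d_resid||^2 + lam*||L (m0 + dm)||^2."""
          n = G.shape[1]
          L = laplacian_1d(n)
          A = G.T @ G + lam * (L.T @ L)          # normal equations with Laplacian smoothing
          b = G.T @ d_resid - lam * (L.T @ L) @ m0
          dm, info = cg(A, b)                    # conjugate-gradient solve
          assert info == 0, "CG did not converge"
          return m0 + dm

      # Toy example: 20 slowness cells, 30 synthetic ray-path rows.
      rng = np.random.default_rng(1)
      G = rng.random((30, 20))
      m_true = 0.5 + 0.05 * np.sin(np.linspace(0, np.pi, 20))
      d = G @ m_true
      m0 = np.full(20, 0.5)
      m1 = regularized_step(G, d - G @ m0, m0, lam=0.1)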

  19. Ab Initio Modeling of the Electronic Absorption Spectrum of Previtamin D in Solution

    NASA Astrophysics Data System (ADS)

    Zhu, Tianyang

    To study the solvent effects of water on the previtamin D absorption spectrum, we use the quantum mechanics (QM)/molecular mechanics (MM) method combined with replica-exchange molecular dynamics (REMD). The QM method is applied to the previtamin D molecule and the MM method is used for the water molecules. To enhance conformational sampling of the flexible previtamin D molecule we apply REMD. Based on the REMD structures, we calculate the ensemble-averaged absorption spectrum in solution by time-dependent density functional theory (TDDFT). Comparison between the calculated spectra in the gas phase and in solution reveals only a minor influence of the solvent on the absorption spectrum. In a conventional molecular dynamics simulation at room temperature, the previtamin D molecule can be trapped in a local minimum and cannot overcome energetic barriers. In REMD, the higher-temperature replicas allow the molecule to overcome energetic barriers and change its structure to other rotational isomers before switching back to the lower temperature, giving more complete coverage of the configuration space at the lower temperature.

  20. An investigation on the modelling of kinetics of thermal decomposition of hazardous mercury wastes.

    PubMed

    Busto, Yailen; M G Tack, Filip; Peralta, Luis M; Cabrera, Xiomara; Arteaga-Pérez, Luis E

    2013-09-15

    The kinetics of mercury removal from solid wastes generated by chlor-alkali plants were studied. The reaction order and a model-free method with an isoconversional approach were used to estimate the kinetic parameters and reaction mechanism that apply to the thermal decomposition of hazardous mercury wastes. As a first approach to understanding thermal decomposition for this type of system (poly-disperse and multi-component), a novel scheme of six reactions was proposed to represent the behaviour of mercury compounds in the solid matrix during the treatment. An integration-optimization algorithm was used in the screening of nine mechanistic models to develop kinetic expressions that best describe the process. The kinetic parameters were calculated by fitting each of these models to the experimental data. It was demonstrated that the D₁-diffusion mechanism appeared to govern the process at 250°C and high residence times, whereas at 450°C a combination of the diffusion mechanism (D₁) and the third-order reaction mechanism (F3) fitted the kinetics of the conversions. The developed models can be applied in engineering calculations to dimension the installations and determine the optimal conditions to treat a mercury-containing sludge. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) calculates the probability of a word sequence and thus provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the large number of parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularizing the RNN-LM and applies it to continuous speech recognition. We penalize overly complex RNN-LMs by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function of the Bayesian classification network is the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximization of the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance when applying the rapid BRNN-LM under different conditions.
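
    The core of a Gaussian-prior (maximum a posteriori) regularized cross-entropy objective can be illustrated, in highly simplified form, by adding an L2 penalty to the loss of a toy recurrent language model; the architecture, hyperparameters, and random data below are placeholders and do not reproduce the paper's marginal-likelihood estimation or rapid Hessian approximation:

      import torch
      import torch.nn as nn

      # Toy RNN language model; vocabulary size and dimensions are arbitrary placeholders.
      vocab, embed, hidden = 1000, 64, 128

      class TinyRNNLM(nn.Module):
          def __init__(self):
              super().__init__()
              self.emb = nn.Embedding(vocab, embed)
              self.rnn = nn.RNN(embed, hidden, batch_first=True)
              self.out = nn.Linear(hidden, vocab)
          def forward(self, tokens):
              h, _ = self.rnn(self.emb(tokens))
              return self.out(h)                      # logits for next-word prediction

      model = TinyRNNLM()
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      loss_fn = nn.CrossEntropyLoss()
      alpha = 1e-4                                    # plays the role of the Gaussian-prior precision

      tokens = torch.randint(0, vocab, (8, 20))       # fake batch: 8 sequences of 20 tokens
      inputs, targets = tokens[:, :-1], tokens[:, 1:]

      logits = model(inputs)
      ce = loss_fn(logits.reshape(-1, vocab), targets.reshape(-1))
      l2 = sum((p ** 2).sum() for p in model.parameters())
      loss = ce + 0.5 * alpha * l2                    # regularized cross-entropy (MAP objective)
      opt.zero_grad()
      loss.backward()
      opt.step()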

  2. Measurement and Analysis of the Temperature Gradient of Blackbody Cavities, for Use in Radiation Thermometry

    NASA Astrophysics Data System (ADS)

    De Lucas, Javier; Segovia, José Juan

    2018-05-01

    Blackbody cavities are the standard radiation sources widely used in the fields of radiometry and radiation thermometry. Their effective emissivity and its uncertainty depend to a large extent on the temperature gradient. An experimental procedure based on the radiometric method for measuring the gradient is followed. Results are applied to particular blackbody configurations where gradients can be thermometrically estimated by contact thermometers and where the relationship between both basic methods can be established. The proposed procedure may be applied to commercial blackbodies if they are modified to allow a secondary contact temperature measurement. In addition, the established systematic procedure may be incorporated as part of the actions for quality assurance in routine calibrations of radiation thermometers, by using the secondary contact temperature measurement to detect departures from the real radiometrically obtained gradient and the effect on the uncertainty. On the other hand, a theoretical model is proposed to evaluate the effect of temperature variations on the effective emissivity and associated uncertainty. This model is based on a gradient sample chosen following plausible criteria. The model is consistent with the Monte Carlo method for calculating the uncertainty of effective emissivity and complements others published in the literature where uncertainty is calculated taking into account only geometrical variables and intrinsic emissivity. The mathematical model and experimental procedure are applied and validated using a commercial three-zone furnace, with a blackbody cavity modified to enable a secondary contact temperature measurement, in the range between 400 °C and 1000 °C.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramírez-Morales, A.; Martínez-Orozco, J. C.; Rodríguez-Vargas, I.

    The main characteristics of the quantum confined Stark effect (QCSE) are studied theoretically in quantum wells of Gaussian profile. The semi-empirical tight-binding model and the Green function formalism are applied in the numerical calculations. A comparison of the QCSE in quantum wells with different kinds of confining potential is presented.

  4. Operator evolution for ab initio electric dipole transitions of 4He

    DOE PAGES

    Schuster, Micah D.; Quaglioni, Sofia; Johnson, Calvin W.; ...

    2015-07-24

    A goal of nuclear theory is to make quantitative predictions of low-energy nuclear observables starting from accurate microscopic internucleon forces. A major element of such an effort is applying unitary transformations to soften the nuclear Hamiltonian and hence accelerate the convergence of ab initio calculations as a function of the model space size. The consistent simultaneous transformation of external operators, however, has been overlooked in applications of the theory, particularly for nonscalar transitions. We study the evolution of the electric dipole operator in the framework of the similarity renormalization group method and apply the renormalized matrix elements to the calculation of the 4He total photoabsorption cross section and electric dipole polarizability. All observables are calculated within the ab initio no-core shell model. Furthermore, we find that, although seemingly small, the effects of evolved operators on the photoabsorption cross section are comparable in magnitude to the correction produced by including the chiral three-nucleon force and cannot be neglected.

  5. Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan

    The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. Here, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We also compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.

  6. Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis

    DOE PAGES

    Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan; ...

    2017-08-31

    The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. Here, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We also compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.

  7. Scale invariance in chaotic time series: Classical and quantum examples

    NASA Astrophysics Data System (ADS)

    Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro

    Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map mod 1. It is a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in the chaotic time series associated with classical and quantum examples: the one-dimensional map mod 1 and the spectrum of 48Ca. A detrended fluctuation analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map mod 1. We compare the scale-invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
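
    Since the analysis rests on detrended fluctuation analysis, a compact generic DFA-1 sketch is given below; it operates on synthetic white noise rather than the 48Ca level fluctuations, and the window sizes are arbitrary:

      import numpy as np

      def dfa(signal, scales):
          """Detrended fluctuation analysis: returns F(n) for each window size n."""
          y = np.cumsum(signal - np.mean(signal))      # integrated (profile) series
          fluct = []
          for n in scales:
              n_win = len(y) // n
              rms = []
              for w in range(n_win):
                  seg = y[w * n:(w + 1) * n]
                  t = np.arange(n)
                  coef = np.polyfit(t, seg, 1)          # local linear trend
                  rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              fluct.append(np.sqrt(np.mean(rms)))
          return np.array(fluct)

      # Example: white noise should give a scaling exponent alpha close to 0.5.
      rng = np.random.default_rng(0)
      x = rng.standard_normal(4096)
      scales = np.array([16, 32, 64, 128, 256])
      F = dfa(x, scales)
      alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
      print(alpha)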

  8. A Novel Two-Compartment Model for Calculating Bone Volume Fractions and Bone Mineral Densities From Computed Tomography Images.

    PubMed

    Lin, Hsin-Hon; Peng, Shin-Lei; Wu, Jay; Shih, Tian-Yu; Chuang, Keh-Shih; Shih, Cheng-Ting

    2017-05-01

    Osteoporosis is a disease characterized by a degradation of bone structures. Various methods have been developed to diagnose osteoporosis by measuring the bone mineral density (BMD) of patients. However, BMDs from these methods are not equivalent and are not directly comparable. In addition, the partial volume effect introduces errors when estimating bone volume from computed tomography (CT) images using image segmentation. In this study, a two-compartment model (TCM) was proposed to calculate bone volume fraction (BV/TV) and BMD from CT images. The TCM considers bones to be composed of two sub-materials. Various equivalent BV/TV and BMD values can be calculated by applying corresponding sub-material pairs in the TCM. In contrast to image segmentation, the TCM avoids the influence of the partial volume effect by calculating the volume percentage of each sub-material in every image voxel. Validations of the TCM were performed using bone-equivalent uniform phantoms, a 3D-printed trabecular-structural phantom, a temporal bone flap, and abdominal CT images. Using the TCM, the calculated BV/TVs of the uniform phantoms were within percent errors of ±2%; the percent errors of the structural volumes with various CT slice thicknesses were below 9%; and the volume of the temporal bone flap was close to that from micro-CT images, with a percent error of 4.1%. No significant difference (p > 0.01) was found between the areal BMD of lumbar vertebrae calculated using the TCM and that measured using dual-energy X-ray absorptiometry. In conclusion, the proposed TCM could be applied to diagnose osteoporosis, while providing a basis for comparing various measurement methods.
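
    The idea of resolving each voxel into two sub-materials can be sketched as a linear mixture of two reference CT numbers; the reference values and densities below are invented placeholders, not the calibrated sub-material pairs used in the study:

      import numpy as np

      # Hypothetical reference CT numbers (HU) and mineral densities (mg/cm^3)
      # for the two sub-materials; the paper's calibrated pairs would replace these.
      HU_BONE, HU_MARROW = 1200.0, 0.0
      RHO_BONE, RHO_MARROW = 900.0, 0.0

      def two_compartment(hu_voxels):
          """Per-voxel volume fraction of the bone-like sub-material and equivalent BMD."""
          frac = (hu_voxels - HU_MARROW) / (HU_BONE - HU_MARROW)
          frac = np.clip(frac, 0.0, 1.0)              # partial-volume voxels fall between 0 and 1
          bvtv = frac.mean()                          # bone volume fraction over the region
          bmd = (frac * RHO_BONE + (1 - frac) * RHO_MARROW).mean()
          return bvtv, bmd

      hu = np.array([1150.0, 600.0, 30.0, 980.0])     # toy region of interest
      print(two_compartment(hu))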

  9. A data assimilation technique to account for the nonlinear dependence of scattering microwave observations of precipitation

    NASA Astrophysics Data System (ADS)

    Haddad, Z. S.; Steward, J. L.; Tseng, H.-C.; Vukicevic, T.; Chen, S.-H.; Hristova-Veleva, S.

    2015-06-01

    Satellite microwave observations of rain, whether from radar or passive radiometers, depend in a very crucial way on the vertical distribution of the condensed water mass and on the types and sizes of the hydrometeors in the volume resolved by the instrument. This crucial dependence is nonlinear, with different types and orders of nonlinearity that are due to differences in the absorption/emission and scattering signatures at the different instrument frequencies. Because it is not monotone as a function of the underlying condensed water mass, the nonlinearity requires great care in its representation in the observation operator, as the inevitable uncertainties in the numerous precipitation variables are not directly convertible into an additive white uncertainty in the forward calculated observations. In particular, when attempting to assimilate such data into a cloud-permitting model, special care needs to be applied to describe and quantify the expected uncertainty in the observations operator in order not to turn the implicit white additive uncertainty on the input values into complicated biases in the calculated radiances. One approach would be to calculate the means and covariances of the nonlinearly calculated radiances given an a priori joint distribution for the input variables. This would be a very resource-intensive proposal if performed in real time. We propose a representation of the observation operator based on performing this moment calculation off line, with a dimensionality reduction step to allow for the effective calculation of the observation operator and the associated covariance in real time during the assimilation. The approach is applicable to other remotely sensed observations that depend nonlinearly on model variables, including wind vector fields. The approach has been successfully applied to the case of tropical cyclones, where the organization of the system helps in identifying the dimensionality-reducing variables.

  10. Circuit models applied to the design of a novel uncooled infrared focal plane array structure

    NASA Astrophysics Data System (ADS)

    Shi, Shali; Chen, Dapeng; Li, Chaobo; Jiao, Binbin; Ou, Yi; Jing, Yupeng; Ye, Tianchun; Guo, Zheying; Zhang, Qingchuan; Wu, Xiaoping

    2007-05-01

    This paper describes a circuit model applied to simulating the thermal response frequency of a novel substrate-free, single-layer, bi-material cantilever microstructure used as the focal plane array (FPA) in an uncooled opto-mechanical infrared imaging system. In order to obtain high detection sensitivity to the IR object, gold (Au) is coated alternately on the silicon nitride (SiNx) cantilevers of the pixels (Shi S et al Sensors and Actuators A at press), whereas the thermal response frequency decreases (Zhao Y 2002 Dissertation University of California, Berkeley). A circuit model for such a cantilever microstructure is proposed to evaluate the thermal response performance. The pixel's thermal frequency (1/τth) is calculated to be 10 Hz under the optimized design parameters, which is compatible with the response of optical readout systems and human eyes.
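
    The thermal response quoted above follows from the usual thermal-electrical analogy, where the pixel's heat capacity and thermal conductance set a time constant; the numbers in this sketch are illustrative placeholders chosen only to give the quoted order of magnitude:

      # Thermal-electrical analogue for a bi-material cantilever pixel: heat capacity
      # acts like a capacitance and thermal conductance like 1/R, so the response
      # rolls off with time constant tau_th = C_th / G_th. The values below are
      # assumed for illustration, not the pixel parameters from the paper.
      C_th = 1.0e-8   # thermal capacitance of the pixel, J/K (assumed)
      G_th = 1.0e-7   # thermal conductance to the frame, W/K (assumed)

      tau_th = C_th / G_th          # seconds
      f_response = 1.0 / tau_th     # the "thermal frequency" 1/tau_th used in the abstract

      print(f"tau_th = {tau_th:.3f} s, 1/tau_th = {f_response:.1f} Hz")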

  11. Digital model analysis of the principal artesian aquifer, Savannah, Georgia area

    USGS Publications Warehouse

    Counts, H.B.; Krause, R.E.

    1977-01-01

    A digital model of the principal artesian aquifer has been developed for the Savannah, Georgia, area. The model simulates the response of the aquifer system to various hydrologic stresses. Model results for water levels and water-level changes are shown on maps. Computations may be extended in time: indicated changes in pumpage were applied to the system and the probable results calculated. Drawdowns or water-level differences were computed to compare different water-management alternatives. (Woodard-USGS)

  12. Competing Quantum Orderings in Cuprate Superconductors: A Minimal Model

    NASA Astrophysics Data System (ADS)

    Martin, Ivar; Ortiz, Gerardo; Balatsky, A. V.; Bishop, A. R.

    2001-03-01

    We present a minimal model for cuprate superconductors. At the unrestricted mean-field level, the model produces homogeneous superconductivity at large doping, striped superconductivity in the underdoped regime and various antiferromagnetic phases at low doping and for high temperatures. On the underdoped side, the superconductor is intrinsically inhomogeneous and global phase coherence is achieved through Josephson-like coupling of the superconducting stripes. The model is applied to calculate experimentally measurable ARPES spectra, and local density of states measurable by STM.

  13. A new algorithm for modeling friction in dynamic mechanical systems

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1988-01-01

    A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods, in which the friction effect is assumed to be a constant force, or torque, in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
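
    A minimal stick/slip sketch in the spirit of the predictive calculation described above is shown below; the interface, parameter names, and explicit-Euler step assumption are illustrative and are not the paper's FORTRAN/C function block:

      def friction_force(v, f_applied, mass, dt, mu_s_N, mu_k_N):
          """Predictive stick/slip friction for one integration step.

          v         : current velocity
          f_applied : net externally applied force (excluding friction)
          mu_s_N    : static friction limit (static coefficient times normal force)
          mu_k_N    : kinetic (Coulomb) friction magnitude
          Returns the friction force to add for this step.
          """
          v_pred = v + (f_applied / mass) * dt        # velocity predicted without friction
          crosses_zero = (v == 0.0) or (v * v_pred <= 0.0)
          if crosses_zero and abs(f_applied) <= mu_s_N:
              # Sticking: friction cancels the applied force and arrests the motion.
              return -f_applied - mass * v / dt
          # Slipping: Coulomb friction opposes the direction of (predicted) motion.
          direction = v if v != 0.0 else v_pred
          return -mu_k_N * (1.0 if direction > 0.0 else -1.0)

      # Example step: a block barely pushed stays stuck; a hard push slides it.
      print(friction_force(v=0.0, f_applied=2.0, mass=1.0, dt=0.01, mu_s_N=5.0, mu_k_N=4.0))
      print(friction_force(v=0.0, f_applied=8.0, mass=1.0, dt=0.01, mu_s_N=5.0, mu_k_N=4.0))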

  14. Modeling of Permeability Structure Using Pore Pressure and Borehole Strain Monitoring

    NASA Astrophysics Data System (ADS)

    Kano, Y.; Ito, H.

    2011-12-01

    The hydraulic and transport properties of rock, especially permeability, affect the behavior of a fault during earthquake rupture and during the interseismic period. Permeability at depth is usually determined either by hydraulic tests using a borehole and packer or by core measurements in the laboratory. Another way to estimate the permeability around a borehole is to examine the response of pore pressure to natural loading such as barometric pressure changes at the surface or Earth tides. Using the response to natural deformation is a conventional method in water-resource research. The scale of measurement differs among in-situ hydraulic tests, response methods, and core measurements, and the relationship between the permeability values obtained from each method is not clear for an inhomogeneous medium such as a fault zone. Assuming measurement of the response to natural loading, we made a model calculation of the permeability structure around a fault zone. The model is two dimensional and consists of a vertical high-permeability layer in a uniform low-permeability zone. We assume drained and no-flow conditions at the upper and lower boundaries. We calculated the flow and deformation of the model for step and cyclic loading by numerically solving a two-dimensional diffusion equation. The model calculation shows that the width of the high-permeability zone and the permeability contrast between the high- and low-permeability zones control the contribution of the low-permeability zone. We made calculations with combinations of permeability and fault width to evaluate the sensitivity of in-situ permeability measurements to these parameters. We applied the model calculation to field results from an in-situ packer test and from natural-response monitoring of water level and strain carried out in the Kamioka mine. The model calculation shows that knowledge of the host-rock permeability is also important for obtaining the permeability of the fault zone itself. The model calculations help in designing long-term pore pressure monitoring, in-situ hydraulic tests, and core measurements using drill holes to better understand fault-zone hydraulic properties.
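
    A bare-bones version of such a model calculation is an explicit finite-difference solve of the pore-pressure diffusion equation on a 2D grid containing a high-diffusivity strip; the grid size, diffusivities, and loading below are assumptions for illustration, not the values used in the study:

      import numpy as np

      # Explicit finite-difference sketch of pore-pressure diffusion on a 2D grid with a
      # vertical high-diffusivity strip embedded in a low-diffusivity host.
      nx, ny = 60, 60
      dx = 1.0                      # m
      D = np.full((ny, nx), 1e-2)   # hydraulic diffusivity of host rock, m^2/s (assumed)
      D[:, 28:32] = 1.0             # high-diffusivity "fault" strip (assumed)

      dt = 0.2 * dx**2 / D.max()    # below the explicit stability limit dx^2 / (4 D)
      p = np.zeros((ny, nx))        # excess pore pressure

      for step in range(2000):
          p[0, :] = 1.0             # step loading held at the top (drained-type boundary)
          lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                 np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p) / dx**2
          p += dt * D * lap
          p[-1, :] = p[-2, :]       # no-flow condition at the bottom
          p[:, 0] = p[:, 1]         # no-flow sides
          p[:, -1] = p[:, -2]

      print(p[30, 30])              # pressure at a point inside the fault strip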

  15. Porous Media Approach for Modeling Closed Cell Foam

    NASA Technical Reports Server (NTRS)

    Ghosn, Louis J.; Sullivan, Roy M.

    2006-01-01

    In order to minimize boil off of the liquid oxygen and liquid hydrogen and to prevent the formation of ice on its exterior surface, the Space Shuttle External Tank (ET) is insulated using various low-density, closed-cell polymeric foams. Improved analysis methods for these foam materials are needed to predict the foam structural response and to help identify the foam fracture behavior in order to help minimize foam shedding occurrences. This presentation describes a continuum-based approach to modeling the foam thermo-mechanical behavior that accounts for the cellular nature of the material and explicitly addresses the effect of the internal cell gas pressure. A porous media approach is implemented in a finite element framework to model the mechanical behavior of the closed-cell foam. The ABAQUS general purpose finite element program is used to simulate the continuum behavior of the foam. The soil mechanics element is implemented to account for the cell internal pressure and its effect on the stress and strain fields. The pressure variation inside the closed cells is calculated using the ideal gas law. The soil mechanics element is compatible with an orthotropic materials model to capture the different behavior between the rise and in-plane directions of the foam. The porous media approach is applied to model the foam thermal strain and calculate the foam effective coefficient of thermal expansion. The calculated foam coefficients of thermal expansion were able to simulate the measured thermal strain during heat up from cryogenic temperature to room temperature in vacuum. The porous media approach was applied to an insulated substrate with one inch of foam and compared to a simple elastic solution without pore pressure. The porous media approach is also applied to model the foam mechanical behavior during subscale laboratory experiments. In this test, a foam layer sprayed on a metal substrate is subjected to a temperature variation while the metal substrate is stretched to simulate the structural response of the tank during operation. The thermal expansion mismatch between the foam and the metal substrate and the thermal gradient in the foam layer cause high tensile stresses near the metal/foam interface that can lead to delamination.

  16. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. T. Clark; M. J. Russell; R. E. Spears

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.

  17. An Automated Pipeline for Engineering Many-Enzyme Pathways: Computational Sequence Design, Pathway Expression-Flux Mapping, and Scalable Pathway Optimization.

    PubMed

    Halper, Sean M; Cetnar, Daniel P; Salis, Howard M

    2018-01-01

    Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionary robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific evolutionary robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline on a desired multi-enzyme pathway in a bacterial host.

  18. Free Wake Analysis of Helicopter Rotor Blades in Hover Using a Finite Volume Technique

    DTIC Science & Technology

    1988-10-01

    inboard, and root) which were replaced by a far wake model after four revolutions. Murman and Stremel [12] calculated two-dimensional unsteady wake ... distributed to a fixed mesh, on which the velocities were calculated by a finite difference solution of Laplace's equation. Stremel [13] applied this two ... Analysis of a Hovering Rotor," Vertica, Vol. 6, No. 2, 1982. 12. Murman, E.M., and Stremel, P.M., "A Vortex Wake Capturing Method ... Potential Flow

  19. Trial densities for the extended Thomas-Fermi model

    NASA Astrophysics Data System (ADS)

    Yu, An; Jimin, Hu

    1996-02-01

    A new and simplified form of nuclear densities is proposed for the extended Thomas-Fermi method (ETF) and applied to calculate the ground-state properties of several spherical nuclei, with results comparable to or even better than those of other conventional density profiles. With the expectation value method (EVM) for microscopic corrections, we checked our new densities for spherical nuclei; the ground-state binding energies almost exactly reproduce the Hartree-Fock (HF) calculations. Further applications to nuclei far away from the β-stability line are discussed.

  20. Calculation of Protein Heat Capacity from Replica-Exchange Molecular Dynamics Simulations with Different Implicit Solvent Models

    DTIC Science & Technology

    2008-10-30

    rigorous Poisson-based methods generally apply a Lee-Richards molecular surface [9]. This surface is considered the de facto description for continuum ... definition and calculation of the Born radii. To evaluate the Born radii, two approximations are invoked. The first is the Coulomb field approximation (CFA ... energy term, and depending on the particular GB formulation, higher-order non-Coulomb correction terms may be added to the Born radii to account for the

  1. Planar dielectric waveguides in rotation are optical fibers: comparison with the classical model.

    PubMed

    Peña García, Antonio; Pérez-Ocón, Francisco; Jiménez, José Ramón

    2008-01-21

    A novel and simpler method to calculate the main parameters in fiber optics is presented. The method is based on a planar dielectric waveguide in rotation and, as an example, it is applied to calculate the turning points and the inner caustic in an optical fiber with a parabolic refractive index. It is shown that the solution found using this method agrees with the standard (and more complex) method, whose solutions for these points are also summarized in this paper.

  2. A transverse Kelvin-Helmholtz instability in a magnetized plasma

    NASA Technical Reports Server (NTRS)

    Kintner, P.; Dangelo, N.

    1977-01-01

    An analysis is conducted of the transverse Kelvin-Helmholtz instability in a magnetized plasma for unstable flute modes. The analysis makes use of a two-fluid model. Details regarding the instability calculation are discussed, taking into account the ion continuity and momentum equations, the solution of a zero-order and a first-order component, and the properties of the solution. It is expected that the linear calculation conducted will apply to situations in which the plasma has experienced no more than a few growth periods.

  3. Impact of interpatient variability on organ dose estimates according to MIRD schema: Uncertainty and variance-based sensitivity analysis.

    PubMed

    Zvereva, Alexandra; Kamp, Florian; Schlattl, Helmut; Zankl, Maria; Parodi, Katia

    2018-05-17

    Variance-based sensitivity analysis (SA) is described and applied to the radiation dosimetry model proposed by the Committee on Medical Internal Radiation Dose (MIRD) for the organ-level absorbed dose calculations in nuclear medicine. The uncertainties in the dose coefficients thus calculated are also evaluated. A Monte Carlo approach was used to compute first-order and total-effect SA indices, which rank the input factors according to their influence on the uncertainty in the output organ doses. These methods were applied to the radiopharmaceutical (S)-4-(3-18F-fluoropropyl)-L-glutamic acid (18F-FSPG) as an example. Since 18F-FSPG has 11 notable source regions, a 22-dimensional model was considered here, where 11 input factors are the time-integrated activity coefficients (TIACs) in the source regions and 11 input factors correspond to the sets of the specific absorbed fractions (SAFs) employed in the dose calculation. The SA was restricted to the foregoing 22 input factors. The distributions of the input factors were built based on TIACs of five individuals to whom the radiopharmaceutical 18F-FSPG was administered and six anatomical models, representing two reference, two overweight, and two slim individuals. The self-absorption SAFs were mass-scaled to correspond to the reference organ masses. The estimated relative uncertainties were in the range 10%-30%, with a minimum and a maximum for absorbed dose coefficients for urinary bladder wall and heart wall, respectively. The applied global variance-based SA enabled us to identify the input factors that have the highest influence on the uncertainty in the organ doses. With the applied mass-scaling of the self-absorption SAFs, these factors included the TIACs for absorbed dose coefficients in the source regions and the SAFs from blood as source region for absorbed dose coefficients in highly vascularized target regions. For some combinations of proximal target and source regions, the corresponding cross-fire SAFs were found to have an impact. Global variance-based SA has been for the first time applied to the MIRD schema for internal dose calculation. Our findings suggest that uncertainties in computed organ doses can be substantially reduced by performing an accurate determination of TIACs in the source regions, accompanied by the estimation of individual source region masses along with the usage of an appropriate blood distribution in a patient's body and, in a few cases, the cross-fire SAFs from proximal source regions. © 2018 American Association of Physicists in Medicine.
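
    For readers unfamiliar with variance-based SA, the first-order and total-effect indices can be estimated with standard Monte Carlo (Saltelli/Jansen) estimators, sketched generically below; the sampler, dimensionality, and toy response function are assumptions and not the 22-factor dose model of the paper:

      import numpy as np

      def sobol_indices(model, sampler, d, n=4096, seed=0):
          """Monte Carlo first-order and total-effect Sobol indices (Jansen estimators)."""
          rng = np.random.default_rng(seed)
          A = sampler(rng, n, d)                 # two independent input matrices
          B = sampler(rng, n, d)
          fA, fB = model(A), model(B)
          var = np.var(np.concatenate([fA, fB]), ddof=1)
          S1, ST = np.empty(d), np.empty(d)
          for i in range(d):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                # replace column i of A by that of B
              fABi = model(ABi)
              S1[i] = np.mean(fB * (fABi - fA)) / var
              ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var
          return S1, ST

      # Toy 3-factor response standing in for the dose calculation.
      def toy_dose(X):
          return X[:, 0] * 1.0 + X[:, 1] * 0.5 + X[:, 0] * X[:, 2] * 0.2

      uniform = lambda rng, n, d: rng.uniform(0.5, 1.5, size=(n, d))
      S1, ST = sobol_indices(toy_dose, uniform, d=3)
      print(S1, ST)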

  4. Evaluating the multimedia fate of organic chemicals: A level III fugacity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackay, D.; Paterson, S.

    A multimedia model is developed and applied to selected organic chemicals in evaluative and real regional environments. The model employs the fugacity concept and treats four bulk compartments: air, water, soil, and bottom sediment, which consist of subcompartments of varying proportions of air, water, and mineral and organic matter. Chemical equilibrium is assumed to apply within (but not between) each bulk compartment. Expressions are included for emissions, advective flows, degrading reactions, and interphase transport by diffusive and non-diffusive processes. Input to the model consists of a description of the environment, the physical-chemical and reaction properties of the chemical, and emission rates. For steady-state conditions the solution is a simple algebraic expression. The model is applied to six chemicals in the region of southern Ontario and the calculated fate and concentrations are compared with observations. The results suggest that the model may be used to determine the processes that control the environmental fate of chemicals in a region and provide approximate estimates of relative media concentrations.
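
    The steady-state level III solution mentioned above amounts to a small linear system in the compartment fugacities once the intermedia and loss D values are specified; the D values and emission rates in this sketch are made-up placeholders, not those of the chemicals studied:

      import numpy as np

      # Steady-state balance for each compartment i:
      #   emission_i + sum_j D[j, i] * f_j = f_i * (D_loss[i] + sum_j D[i, j])
      # which is linear in the fugacities f.
      compartments = ["air", "water", "soil", "sediment"]
      D = np.array([                      # D[i, j]: intermedia transfer from i to j, mol/(Pa*h)
          [0.0, 50.0, 30.0, 0.0],
          [20.0, 0.0, 0.0, 10.0],
          [5.0, 8.0, 0.0, 0.0],
          [0.0, 4.0, 0.0, 0.0],
      ])
      D_loss = np.array([200.0, 80.0, 40.0, 15.0])   # reaction + advection losses per compartment
      E = np.array([100.0, 10.0, 0.0, 0.0])          # emission rates, mol/h

      A = np.diag(D_loss + D.sum(axis=1)) - D.T      # gains from other compartments enter with a minus sign
      f = np.linalg.solve(A, E)                      # fugacities, Pa
      for name, fi in zip(compartments, f):
          print(f"{name:8s} f = {fi:.3e} Pa")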

  5. A finite element model of the human head for auditory bone conduction simulation.

    PubMed

    Taschke, Henning; Hudde, Herbert

    2006-01-01

    In order to investigate the mechanisms of bone conduction, a finite element model of the human head was developed. The most important steps of the modelling process are described. The model was excited by means of percutaneously applied forces in order to get a deeper insight into the way the parts of the peripheral hearing organ and the surrounding tissue vibrate. The analysis is done based on the division of the bone conduction mechanisms into components. The frequency-dependent patterns of vibration of the components are analyzed. Furthermore, the model allows for the calculation of the contribution of each component to the overall bone-conducted sound. The components interact in a complicated way, which strongly depends on the nature of the excitation and the spatial region to which it is applied.

  6. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    PubMed

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  7. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    NASA Astrophysics Data System (ADS)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  8. Quantum corrections of the truncated Wigner approximation applied to an exciton transport model.

    PubMed

    Ivanov, Anton; Breuer, Heinz-Peter

    2017-04-01

    We modify the path integral representation of exciton transport in open quantum systems such that an exact description of the quantum fluctuations around the classical evolution of the system is possible. As a consequence, the time evolution of the system observables is obtained by calculating the average of a stochastic difference equation which is weighted with a product of pseudoprobability density functions. From the exact equation of motion one can clearly identify the terms that are also present if we apply the truncated Wigner approximation. This description of the problem is used as a basis for the derivation of a new approximation, whose validity goes beyond the truncated Wigner approximation. To demonstrate this we apply the formalism to a donor-acceptor transport model.

  9. Kalman filter control of a model of spatiotemporal cortical dynamics

    PubMed Central

    Schiff, Steven J; Sauer, Tim

    2007-01-01

    Recent advances in Kalman filtering to estimate system state and parameters in nonlinear systems have offered the potential to apply such approaches to spatiotemporal nonlinear systems. We here adapt the nonlinear method of unscented Kalman filtering to observe the state and estimate parameters in a computational spatiotemporal excitable system that serves as a model for cerebral cortex. We demonstrate the ability to track spiral wave dynamics, and to use an observer system to calculate control signals delivered through applied electrical fields. We demonstrate how this strategy can control the frequency of such a system, or quench the wave patterns, while minimizing the energy required for such results. These findings are readily testable in experimental applications, and have the potential to be applied to the treatment of human disease. PMID:18310806

  10. Biospheric effects of a large extraterrestrial impact: Case study of the Cretaceous/Tertiary boundary crater

    NASA Technical Reports Server (NTRS)

    Pope, Kevin O.

    1994-01-01

    The Chicxulub Crater in Yucatan, Mexico, is the primary candidate for the impact that caused mass extinctions at the Cretaceous/Tertiary boundary. The target rocks at Chicxulub contain 750 to 1500 m of anhydrite (CaSO4), which was vaporized upon impact, creating a large sulfuric acid aerosol cloud. In this study we apply a hydrocode model of asteroid impact to calculate the amount of sulfuric acid produced. We then apply a radiative transfer model to determine the atmospheric effects. The results indicate a 6 to 9 month period of darkness followed by 12 to 26 years of cooling.

  11. Acidity in DMSO from the embedded cluster integral equation quantum solvation model.

    PubMed

    Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M

    2014-04-01

    The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pK(a) shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.

  12. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
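
    The Modal Assurance Criterion used for these comparisons is a normalized scalar between two mode-shape vectors. A minimal sketch of the calculation (assuming real-valued mode shapes sampled at the same degrees of freedom; the vectors below are invented) is:

    ```python
    import numpy as np

    def mac(phi_a, phi_b):
        """Modal Assurance Criterion between two real mode-shape vectors."""
        num = np.dot(phi_a, phi_b) ** 2
        den = np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b)
        return num / den

    # Hypothetical experimental vs. finite element mode shapes at six sensor locations.
    phi_test = np.array([0.00, 0.31, 0.59, 0.81, 0.95, 1.00])
    phi_fem  = np.array([0.00, 0.29, 0.57, 0.80, 0.96, 1.00])
    print(f"MAC = {mac(phi_test, phi_fem):.3f}")  # values near 1 indicate well-correlated modes
    ```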

  13. A numerical study of tropospheric ozone in the springtime in East Asia

    NASA Astrophysics Data System (ADS)

    Zhang, Meigen; Xu, Yongfu; Itsushi, Uno; Hajime, Akimoto

    2004-04-01

    The Models-3 Community Multi-scale Air Quality modeling system (CMAQ) coupled with the Regional Atmospheric Modeling System (RAMS) is applied to East Asia to study the transport and photochemical transformation of tropospheric ozone in March 1998. The calculated mixing ratios of ozone and carbon monoxide are compared with ground-level observations at three remote sites in Japan, and it is found that the model reproduces the observed features very well. Examination of several episodes of elevated ozone and carbon monoxide indicates that these elevated levels are found in association with continental outflow, demonstrating the critical role of the rapid transport of carbon monoxide and other ozone precursors from the continental boundary layer. In comparison with available ozonesonde data, it is found that the model-calculated ozone concentrations are generally in good agreement with the measurements, and the stratospheric contribution to surface ozone mixing ratios is quite limited.

  14. Estimation of design space for an extrusion-spheronization process using response surface methodology and artificial neural network modelling.

    PubMed

    Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza

    2016-09-01

    The application of Quality by Design principles is one of the key issues in recent pharmaceutical development. In the past decade much knowledge has been collected about the practical realization of the concept, but many questions remain unanswered. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined using design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in over- or underestimation of the real trends, making the calculations uncertain, especially in the edge regions of the PDS. Complementing RSM with artificial neural network (ANN) based models is therefore a commonly used approach to reduce these uncertainties. Nevertheless, since individual studies tend to focus on a single DoE, there is a lack of comparative studies on different experimental layouts. The aim of the present study was therefore to investigate the effect of different DoE layouts (2 level full factorial, Central Composite, Box-Behnken, 3 level fractional and 3 level full factorial design) on model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space calculated with different polynomial models could differ by more than 40%, which was associated with a considerable shift in its position when higher level layouts were applied. The shift was more considerable when the calculation was based on RSM. Model predictability was also better with ANN based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and design layouts in which the extreme values of the factors are better represented are recommended. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distribution of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.
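
    The correction factor above is a ratio of dose-averaged LET values at a given depth. A minimal sketch of that ratio (with purely hypothetical per-particle dose and LET samples; not the GATE/GEANT4 workflow itself) is:

    ```python
    import numpy as np

    def dose_averaged_let(dose, let):
        """Dose-averaged LET: sum(d_i * L_i) / sum(d_i)."""
        dose, let = np.asarray(dose), np.asarray(let)
        return np.sum(dose * let) / np.sum(dose)

    # Hypothetical energy-deposition samples at one depth (dose in arbitrary units, LET in keV/um).
    primary_dose, primary_let = [1.0, 0.9, 1.1], [0.45, 0.50, 0.48]
    secondary_dose, secondary_let = [0.2, 0.15], [1.8, 2.4]   # secondary protons and deuterons

    let_primary = dose_averaged_let(primary_dose, primary_let)
    let_all = dose_averaged_let(primary_dose + secondary_dose, primary_let + secondary_let)
    L_sec = let_all / let_primary
    print(f"L_sec = {L_sec:.2f}")  # > 1 when secondaries raise the dose-averaged LET
    ```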

  16. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    NASA Astrophysics Data System (ADS)

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows the estimation of cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities; it has an impact on tool wear, so its estimation is important. This study deals with a new modelling strategy, based on two calculation steps, for the analysis of heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D Finite Element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical modelling of the chip formation process, which allows the estimation of cutting forces, chip morphology and chip flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool using an adequate thermal loading (an applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first calculation step, such as contact pressure, sliding velocity distributions and contact area. Comparisons, on the one hand between experimental data and the first calculation and on the other hand between temperatures measured with embedded thermocouples and the second calculation, show good agreement in terms of cutting forces, chip morphology and cutting temperature.

  17. Calculation of NaCl, KCl and LiCl Salts Activity Coefficients in Polyethylene Glycol (PEG4000)-Water System Using Modified PHSC Equation of State, Extended Debye-Hückel Model and Pitzer Model

    NASA Astrophysics Data System (ADS)

    Marjani, Azam

    2016-07-01

    For the purification and separation of biomolecules and cell particles in biological engineering, aqueous two-phase systems (ATPS) are, besides chromatography as the most widely applied process, among the most favorable separation processes and are worth investigating theoretically from a thermodynamic standpoint. In recent years, thermodynamic calculation of ATPS properties has attracted much attention owing to their wide application in chemical industries such as separation processes. These phase calculations of ATPS are inherently complex because of the presence of ions and polymers in aqueous solution. In this work, for the target ternary systems of polyethylene glycol (PEG4000)-salt-water, a thermodynamic investigation of the constituent systems with three salts (NaCl, KCl and LiCl) has been carried out, as PEG is the most favorable polymer in ATPS. The modified perturbed hard sphere chain (PHSC) equation of state (EOS), the extended Debye-Hückel model and the Pitzer model were employed to calculate activity coefficients for the considered systems. Four additional statistical parameters were considered to ensure the consistency of the correlations and were introduced as objective functions in the particle swarm optimization algorithm. The results showed desirable agreement with the available experimental data, and the order of recommendation of the studied models is PHSC EOS > extended Debye-Hückel > Pitzer. The concluding remark is that all the employed models are reliable in such calculations and can be used for thermodynamic correlation/prediction; however, by using an ion-based parameter calculation method, the PHSC EOS exhibits both reliability and universality of application.
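
    As a point of reference for the simplest of the three models, the extended Debye-Hückel expression for the mean ionic activity coefficient of a 1:1 salt in water at 25 °C can be evaluated directly; the constants A ≈ 0.509 and B ≈ 0.328 below are standard aqueous values, and the ion-size parameter is illustrative rather than a fitted value from this study:

    ```python
    import math

    def log10_gamma_pm(I, z_plus=1, z_minus=-1, a_angstrom=4.0, A=0.509, B=0.328):
        """Extended Debye-Hueckel: log10(gamma+-) = -A|z+ z-| sqrt(I) / (1 + B a sqrt(I))."""
        sqrt_I = math.sqrt(I)
        return -A * abs(z_plus * z_minus) * sqrt_I / (1.0 + B * a_angstrom * sqrt_I)

    # Illustrative: a NaCl-like 1:1 electrolyte at a few ionic strengths (mol/kg).
    for I in (0.01, 0.1, 0.5):
        print(f"I = {I:>4} mol/kg  gamma+- = {10 ** log10_gamma_pm(I):.3f}")
    ```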

  18. Suits reflectance models for wheat and cotton - Theoretical and experimental tests

    NASA Technical Reports Server (NTRS)

    Chance, J. E.; Lemaster, E. W.

    1977-01-01

    Plant canopy reflectance models developed by Suits are tested for cotton and Penjamo winter wheat. Properties of the models are discussed, and the concept of model depth is developed. The models' predicted exchange symmetry for specular irradiance with respect to sun polar angle and observer polar angle agreed with field data for cotton and wheat. Model calculations and experimental data for wheat reflectance vs sun angle disagreed. Specular reflectance from 0.50 to 1.10 micron shows fair agreement between the model and wheat measurements. An Appendix includes the physical and optical parameters for wheat necessary to apply Suits' models.

  19. An ice-cream cone model for coronal mass ejections

    NASA Astrophysics Data System (ADS)

    Xue, X. H.; Wang, C. B.; Dou, X. K.

    2005-08-01

    In this study, we use an ice-cream cone model to analyze the geometrical and kinematical properties of coronal mass ejections (CMEs). Assuming that in the early phase CMEs propagate with near-constant speed and angular width, some useful properties of CMEs, namely the radial speed (v), the angular width (α), and the location in the heliosphere, can be obtained by treating the geometrical shape of a CME as an ice-cream cone. This model is improved by (1) using an ice-cream cone to represent the near-real configuration of a CME, (2) determining the radial speed by fitting the projected speeds calculated from the height-time relation at different azimuthal angles, and (3) applying the model not only to halo CMEs but also to non-halo CMEs.

  20. Subsonic Wing Optimization for Handling Qualities Using ACSYNT

    NASA Technical Reports Server (NTRS)

    Soban, Danielle Suzanne

    1996-01-01

    The capability to accurately and rapidly predict aircraft stability derivatives using one comprehensive analysis tool has been created. The PREDAVOR tool has the following capabilities: rapid estimation of stability derivatives using a vortex lattice method, calculation of a longitudinal handling qualities metric, and an inherent methodology to optimize a given aircraft configuration for longitudinal handling qualities, together with an intuitive graphical interface. The PREDAVOR tool may be applied to both subsonic and supersonic designs, as well as conventional and unconventional, symmetric and asymmetric configurations. The workstation-based tool uses a three-dimensional model of the configuration generated with a computer-aided design (CAD) package. The PREDAVOR tool was applied to a Lear Jet Model 23 and the North American XB-70 Valkyrie.

  1. [Cost of therapy for neurodegenerative diseases. Applying an activity-based costing system].

    PubMed

    Sánchez-Rebull, María-Victoria; Terceño Gómez, Antonio; Travé Bautista, Angeles

    2013-01-01

    To apply the activity based costing (ABC) model to calculate the cost of therapy for neurodegenerative disorders in order to improve hospital management and allocate resources more efficiently. We used the case study method in the Francolí long-term care day center. We applied all phases of an ABC system to quantify the cost of the activities developed in the center. We identified 60 activities; the information was collected in June 2009. The ABC system allowed us to calculate the average cost per patient with respect to the therapies received. The most costly and commonly applied technique was psycho-stimulation therapy. Focusing on this therapy and on others related to the admissions process could lead to significant cost savings. ABC costing is a viable method for costing activities and therapies in long-term day care centers because it can be adapted to their structure and standard practice. This type of costing allows the costs of each activity and therapy, or combination of therapies, to be determined and aids measures to improve management. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.
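
    The allocation step of an ABC system can be illustrated with a toy calculation: resource costs are first assigned to activities, then traced to therapies in proportion to the activity consumption each therapy drives. All activities, consumption fractions and euro figures below are hypothetical, not the study's data:

    ```python
    # Hypothetical monthly resource costs assigned to activities via cost drivers,
    # then traced to therapy programmes according to how much of each activity they consume.
    activity_cost = {"psycho-stimulation": 5200.0, "physiotherapy": 3100.0, "admissions": 1400.0}

    # Fraction of each activity consumed by each therapy programme.
    consumption = {
        "therapy_A": {"psycho-stimulation": 0.6, "physiotherapy": 0.3, "admissions": 0.5},
        "therapy_B": {"psycho-stimulation": 0.4, "physiotherapy": 0.7, "admissions": 0.5},
    }
    patients = {"therapy_A": 18, "therapy_B": 12}

    for therapy, use in consumption.items():
        total = sum(activity_cost[a] * f for a, f in use.items())
        print(f"{therapy}: total = {total:.2f} EUR, per patient = {total / patients[therapy]:.2f} EUR")
    ```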

  2. Quantum Mechanics/Molecular Mechanics Method Combined with Hybrid All-Atom and Coarse-Grained Model: Theory and Application on Redox Potential Calculations.

    PubMed

    Shen, Lin; Yang, Weitao

    2016-04-12

    We developed a new multiresolution method that spans three levels of resolution with quantum mechanical, atomistic molecular mechanical, and coarse-grained models. The resolution-adapted all-atom and coarse-grained water model, in which an all-atom structural description of the entire system is maintained during the simulations, is combined with the ab initio quantum mechanics and molecular mechanics method. We apply this model to calculate the redox potentials of the aqueous ruthenium and iron complexes by using the fractional number of electrons approach and thermodynamic integration simulations. The redox potentials are recovered in excellent agreement with the experimental data. The speed-up of the hybrid all-atom and coarse-grained water model renders it computationally more attractive. The accuracy depends on the hybrid all-atom and coarse-grained water model used in the combined quantum mechanical and molecular mechanical method. We have used another multiresolution model, in which an atomic-level layer of water molecules around the redox center is solvated in supramolecular coarse-grained waters, for the redox potential calculations. Compared with the experimental data, this alternative multilayer model leads to less accurate results when used with the coarse-grained polarizable MARTINI water or big multipole water model for the coarse-grained layer.
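
    The thermodynamic-integration step mentioned here reduces to a one-dimensional quadrature over the coupling parameter, ΔA = ∫₀¹ ⟨∂H/∂λ⟩ dλ. A schematic numeric sketch (the window averages and the absolute electrode offset below are invented placeholders) is:

    ```python
    import numpy as np

    # Hypothetical window averages of <dH/dlambda> (kcal/mol) from sampling at each lambda.
    lam = np.linspace(0.0, 1.0, 6)
    dH_dlam = np.array([-85.2, -82.4, -79.9, -77.8, -76.1, -75.0])

    delta_A = np.trapz(dH_dlam, lam)     # free-energy change of the one-electron reduction
    F = 23.061                            # Faraday constant in kcal/(mol*V)
    E_abs = -delta_A / F                  # absolute potential for the one-electron process
    E_vs_SHE = E_abs - 4.28               # shift by an assumed absolute SHE potential (V)
    print(f"dA = {delta_A:.1f} kcal/mol, E = {E_vs_SHE:.2f} V vs SHE")
    ```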

  3. Relating Ab Initio Mechanical Behavior of Intergranular Glassy Films in Γ-Si3N4 to Continuum Scales

    NASA Astrophysics Data System (ADS)

    Ouyang, L.; Chen, J.; Ching, W.; Misra, A.

    2006-05-01

    Nanometer thin intergranular glassy films (IGFs) form in polycrystalline ceramics during sintering at high temperatures. The structure and properties of these IGFs are significantly changed by doping with rare earth elements. We have performed highly accurate large-scale ab initio calculations of the mechanical properties of both undoped and yttria-doped (Y-IGF) models by theoretical uniaxial tensile experiments. Uniaxial strain was applied by incrementally stretching the supercell in one direction, while the other two dimensions were kept constant. At each strain, all atoms in the model were fully relaxed using the Vienna Ab initio Simulation Package (VASP). The relaxed model at a given strain serves as the starting position for the next increment of strain. This process is carried out until the total energy (TE) and stress data show that the "sample" is fully fractured. Interesting differences are seen between the stress-strain response of the undoped and Y-doped models. For the undoped model, the stress-strain behavior indicates that the initial atomic structure of the IGF is such that there is negligible coupling between the x- and the y-z directions. However, once the behavior becomes non-linear the lateral stresses increase, indicating that the atomic structure evolves with loading [1]. To relate the ab initio calculations to the continuum scales we analyze the atomic-scale deformation field under this uniaxial loading [1]. The applied strain in the x-direction is mostly accommodated by the IGF part of the model and the crystalline part experiences almost negligible strain. As the overall strain on the sample is incrementally increased, the local strain field evolves such that locations proximal to the softer spots attract higher strains. As the load progresses, the strain concentration spots coalesce and eventually form a persistent strain localization zone across the IGF. The deformation pattern obtained through ab initio calculations indicates that it is possible to construct discrete grain-scale models that may be used to bridge these calculations to the continuum scale for finite element analysis. Reference: 1. J. Chen, L. Ouyang, P. Rulis, A. Misra, W. Y. Ching, Phys. Rev. Lett. 95, 256103 (2005)

  4. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.

  5. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    PubMed Central

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold

    2016-01-01

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on penelope and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: penelope was first translated from fortran to c++ and the result was confirmed to produce equivalent results to the original code. The c++ code was then adapted to cuda in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gpenelope highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gpenelope as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gpenelope. Ultimately, gpenelope was applied toward independent validation of patient doses calculated by MRIdian’s kmc. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread fortran implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gpenelope with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of penelope. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems. PMID:27370123

  6. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model.

    PubMed

    Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold

    2016-07-01

    The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on penelope and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. penelope was first translated from fortran to c++ and the result was confirmed to produce equivalent results to the original code. The c++ code was then adapted to cuda in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gpenelope highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gpenelope as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gpenelope. Ultimately, gpenelope was applied toward independent validation of patient doses calculated by MRIdian's kmc. An acceleration factor of 152 was achieved in comparison to the original single-thread fortran implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gpenelope with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU-accelerated version of penelope. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
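
    The 2%/2 mm gamma criterion quoted in these records can be illustrated with a simple one-dimensional gamma-index calculation (global dose normalization; both profiles below are invented):

    ```python
    import numpy as np

    def gamma_1d(x_eval, d_eval, x_ref, d_ref, dd=0.02, dta=2.0):
        """1D gamma index: for each evaluated point, minimize the combined dose/distance
        metric over the reference profile. dd is a fraction of the reference maximum."""
        d_norm = dd * d_ref.max()
        gammas = []
        for xe, de in zip(x_eval, d_eval):
            g2 = ((x_ref - xe) / dta) ** 2 + ((d_ref - de) / d_norm) ** 2
            gammas.append(np.sqrt(g2.min()))
        return np.array(gammas)

    # Invented reference (e.g., measured) and evaluated (e.g., calculated) dose profiles.
    x = np.linspace(0.0, 50.0, 251)                       # mm
    d_ref = np.exp(-((x - 25.0) / 12.0) ** 2)
    d_eval = np.exp(-((x - 25.4) / 12.0) ** 2) * 1.01      # small shift and scaling error
    g = gamma_1d(x, d_eval, x, d_ref)
    print(f"gamma passing rate = {100.0 * np.mean(g <= 1.0):.1f}%")
    ```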

  7. Dosimetric calculations for uranium miners for epidemiological studies.

    PubMed

    Marsh, J W; Blanchardon, E; Gregoratto, D; Hofmann, W; Karcher, K; Nosske, D; Tomásek, L

    2012-05-01

    Epidemiological studies on uranium miners are being carried out to quantify the risk of cancer based on organ dose calculations. Mathematical models have been applied to calculate the annual absorbed doses to regions of the lung, red bone marrow, liver, kidney and stomach for each individual miner arising from exposure to radon gas, radon progeny and long-lived radionuclides (LLR) present in the uranium ore dust and to external gamma radiation. The methodology and dosimetric models used to calculate these organ doses are described and the resulting doses for unit exposure to each source (radon gas, radon progeny and LLR) are presented. The results of dosimetric calculations for a typical German miner are also given. For this miner, the absorbed dose to the central regions of the lung is dominated by the dose arising from exposure to radon progeny, whereas the absorbed dose to the red bone marrow is dominated by the external gamma dose. The uncertainties in the absorbed dose to regions of the lung arising from unit exposure to radon progeny are also discussed. These dose estimates are being used in epidemiological studies of cancer in uranium miners.

  8. Analysis and calculation of macrosegregation in a casting ingot. MPS solidification model. Volume 1: Formulation and analysis

    NASA Technical Reports Server (NTRS)

    Maples, A. L.; Poirier, D. R.

    1980-01-01

    The physical and numerical formulation of a model for the horizontal solidification of a binary alloy in an ingot is described. The major purpose of the model is to calculate the macrosegregation in a casting ingot that results from the flow of interdendritic liquid during solidification. The flow, driven by solidification contractions and by gravity acting on density gradients in the interdendritic liquid, was modeled as flow through a porous medium. The symbols used are defined. The physical formulation of the problem, leading to a set of equations which can be used to obtain (1) the pressure field, (2) the velocity field, (3) the mass flow, and (4) the solute flow in the solid-plus-liquid zone during solidification, is presented. With these established, the model calculates macrosegregation after solidification is complete. The numerical techniques used to obtain the solution on a computational grid are presented. Results, evaluation of the results, and recommendations for future development of the model are given. Macrosegregation and flow field predictions for tin-lead, aluminum-copper, and tin-bismuth alloys are included, as well as comparisons of some of the predictions with published predictions or with empirical data.

  9. A method of calculation on the airloading of vertical axis wind turbine

    NASA Astrophysics Data System (ADS)

    Azuma, A.; Kimura, S.

    A new method of analyzing the aerodynamic characteristics of the Darrieus Vertical-Axis Wind Turbine (VAWT) by applying the local circulation method is described. The validity of this method is confirmed by analyzing the air load acting on a curved blade. The azimuthwise variation of spanwise airloading, torque, and longitudinal forces are accurately calculated for a variety of operational conditions. The results are found to be in good agreement with experimental ones obtained elsewhere. It is concluded that the present approach can calculate the aerodynamic characteristics of the VAWT with much less computational time than that used by the free vortex model.

  10. Three-dimensional stress intensity factor analysis of a surface crack in a high-speed bearing

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Hsu, Yingchun

    1990-01-01

    The boundary element method is applied to calculate the stress intensity factors of a surface crack in the rotating inner raceway of a high-speed roller bearing. The three-dimensional model consists of an axially stressed surface cracked plate subjected to a moving Hertzian contact loading. A multidomain formulation and singular crack-tip elements were employed to calculate the stress intensity factors accurately and efficiently for a wide range of configuration parameters. The results can provide the basis for crack growth calculations and fatigue life predictions of high-performance rolling element bearings that are used in aircraft engines.

  11. Counting the number of Feynman graphs in QCD

    NASA Astrophysics Data System (ADS)

    Kaneko, T.

    2018-05-01

    Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system of perturbative calculations. A method of counting the number of Feynman graphs weighted by their symmetry factors was established based on zero-dimensional field theory, and was used in scalar theories and QED. In this article this method is generalized to more complicated models by direct calculation of the generating functions on a computer algebra system. The method is applied to QCD with and without counterterms, where many higher orders are calculated automatically.
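
    The zero-dimensional counting idea can be reproduced for a toy φ⁴ theory: the coefficient of g^n in the perturbative expansion of the zero-dimensional partition function equals the sum over vacuum graphs of 1/(symmetry factor), using the Gaussian moments ⟨φ^(2m)⟩ = (2m−1)!!. A small sketch (not the paper's QCD implementation):

    ```python
    from fractions import Fraction
    from math import factorial

    def double_factorial(n):
        """Odd double factorial (2m-1)!!; returns 1 for n <= 0."""
        result = 1
        while n > 1:
            result *= n
            n -= 2
        return result

    def phi4_vacuum_weight(order):
        """Sum of 1/(symmetry factor) over phi^4 vacuum graphs at order g^n:
        coefficient of g^n in the expansion of <exp(g*phi^4/4!)> with Gaussian <phi^(2m)> = (2m-1)!!."""
        return Fraction(double_factorial(4 * order - 1),
                        factorial(order) * factorial(4) ** order)

    for n in range(1, 5):
        print(f"order g^{n}: weighted number of vacuum graphs = {phi4_vacuum_weight(n)}")
    # order g^1 gives 1/8, the symmetry-factor weight of the single figure-eight vacuum graph.
    ```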

  12. The calculated influence of atmospheric conditions on solar cell ISC under direct and global solar irradiances

    NASA Technical Reports Server (NTRS)

    Mueller, Robert L.

    1987-01-01

    Calculations of the influence of atmospheric conditions on solar cell short-circuit current (Isc) are made using a recently developed computer model for solar spectral irradiance distribution. The results isolate the dependence of Isc on changes in the spectral irradiance distribution without the direct influence of the total irradiance level. The calculated direct normal irradiance and percent diffuse irradiance are given as a reference to indicate the expected irradiance levels. This method can be applied to the calibration of photovoltaic reference cells. Graphic examples are provided for amorphous silicon and monocrystalline silicon solar cells under direct normal and global normal solar irradiances.
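
    The underlying quantity is a spectral overlap integral: Isc is proportional to the integral of the cell's spectral response against the incident spectral irradiance, so changing the spectral shape at fixed total irradiance changes Isc. A schematic sketch with invented spectra:

    ```python
    import numpy as np

    # Toy spectra: wavelength (nm), spectral irradiance (W m^-2 nm^-1), spectral response (A/W).
    wl = np.linspace(300.0, 1200.0, 181)
    irradiance = np.interp(wl, [300, 500, 800, 1200], [0.2, 1.4, 1.0, 0.3])
    spectral_response = np.interp(wl, [300, 400, 700, 1100, 1200], [0.0, 0.2, 0.55, 0.6, 0.0])

    isc_per_area = np.trapz(spectral_response * irradiance, wl)   # A/m^2
    print(f"Isc density ~ {isc_per_area:.1f} A/m^2")
    # Changing only the spectral *shape* at fixed total irradiance changes this integral,
    # which is the sensitivity the cited study isolates.
    ```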

  13. Extension and applications of switching model: Range theory, multiple scattering model of Goudsmit-Saunderson, and lateral spread treatment of Marwick-Sigmund

    NASA Astrophysics Data System (ADS)

    Ikegami, Seiji

    2017-09-01

    The switching model (PSM) developed in the previous paper is extended to obtain an "extended switching model" (ESM). In the ESM, the mixed electronic-and-nuclear energy-loss region, in addition to the electronic and nuclear energy-loss regions of the PSM, is taken into account analytically and appropriately. This model is combined with the small-angle multiple scattering range theory of Marwick-Sigmund and Valdes-Arista, which considers both nuclear and electronic stopping effects, to formulate an improved range theory. The ESM is also combined with the Goudsmit-Saunderson multiple scattering theory, which does not rely on the small-angle approximation. Furthermore, we apply the ESM to the lateral spread model of Marwick-Sigmund. Numerical calculations of the entire distribution functions, including that of the mixed region, are possible only roughly and approximately; exact numerical calculation may be impossible. Consequently, several preliminary numerical calculations of the electronic, mixed, and nuclear regions are performed to examine their underlying behavior with respect to the incident energy, the scattering angle, the outgoing projectile intensity, and the target thickness. Numerical results are shown for both the PSM and the ESM, in the present paper for the first time. Since the theoretical relations are constructed using reduced variables, the calculations are made only for the case of C colliding with C.

  14. Analysis of typical WWER-1000 severe accident scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Yu.S.; Shchekoldin, V.V.; Borisov, L.N.

    2004-07-01

    EDO 'Gidropress' has accumulated considerable experience in performing analyses of severe accidents of WWER reactor plants using domestic and foreign codes. Important data have also been obtained from the calculational modeling of integrated experiments involving the melting of fuel assemblies containing real fuel. Systematizing and taking these data into account in the development and assimilation of codes is extremely important, given the large uncertainty that still exists in understanding and adequately describing the phenomenology of severe accidents. The present report compares analysis results for severe accidents of a WWER-1000 reactor plant for two typical scenarios, obtained with the American MELCOR code and the Russian RATEG/SVECHA/HEFEST code. The results of calculational modeling with these codes are also compared with the data of the FPT1 experiment, involving the melting of a fuel assembly containing real fuel, carried out at the Phebus facility (France). The obtained results are considered in the report from the viewpoint of: - adequacy of the calculational modeling of separate phenomena during severe accidents of reactor plants with WWER using the above codes; - influence of uncertainties (degree of detail of the calculation models, choice of model parameters, etc.); - choice of particular setup variables (options) in the codes used; - necessity of detailed modeling of processes and phenomena as applied to the design justification of safety of reactor plants with WWER. (authors)

  15. The multi-scattering model for calculations of positron spatial distribution in the multilayer stacks, useful for conventional positron measurements

    NASA Astrophysics Data System (ADS)

    Dryzek, Jerzy; Siemek, Krzysztof

    2013-08-01

    The spatial distribution of positrons emitted from radioactive isotopes into stacks or layered samples is the subject of the present report. It was found that Monte Carlo (MC) simulations using the GEANT4 code are not able to describe correctly the experimental data on positron fractions in stacks. A mathematical model is therefore proposed for calculating the implantation profile, or the positron fractions, in the separate layers or foils that make up a stack. The model takes into account only two processes, i.e., positron absorption and backscattering at interfaces. The mathematical formulas were implemented in a computer program called LYS-1 (layers profile analysis). The theoretical predictions of the model were in good agreement with the results of MC simulations for a semi-infinite sample. Experimental verification of the model was performed on symmetric and non-symmetric stacks of different foils, and good agreement between the experimental and calculated fractions of positrons in the components of a stack was achieved. The experimental implantation profile obtained using the depth-scanning positron implantation technique is also very well described by the theoretical profile obtained within the proposed model. In addition, the LYS-1 program allows the fraction of positrons that annihilate in the source to be calculated, which can be useful in positron spectroscopy.

  16. Approach for validating actinide and fission product compositions for burnup credit criticality safety analyses

    DOE PAGES

    Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...

    2014-11-01

    This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.
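
    The sampling idea can be sketched generically: perturb each calculated nuclide concentration by a factor drawn from its measured-to-calculated uncertainty distribution and re-evaluate the multiplication factor with the perturbed compositions. Everything below, including the run_keff placeholder, is illustrative and not the SCALE workflow itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder calculated concentrations (atoms/b-cm) and (mean, std) of the
    # measured-to-calculated ratio inferred from radiochemical assay comparisons.
    calc_conc = {"u235": 5.0e-4, "pu239": 1.2e-4, "sm149": 1.0e-7}
    m_to_c = {"u235": (1.00, 0.02), "pu239": (0.98, 0.04), "sm149": (1.05, 0.08)}

    def run_keff(compositions):
        # Placeholder for a criticality solver call built from the perturbed compositions;
        # here a fake linear response is used purely for illustration.
        return 0.94 + 50.0 * compositions["u235"] + 100.0 * compositions["pu239"]

    samples = []
    for _ in range(500):
        perturbed = {n: c * rng.normal(*m_to_c[n]) for n, c in calc_conc.items()}
        samples.append(run_keff(perturbed))

    nominal = run_keff(calc_conc)
    bias = np.mean(samples) - nominal
    print(f"bias = {bias:+.5f}, bias uncertainty = {np.std(samples, ddof=1):.5f}")
    ```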

  17. Application of Taguchi L32 orthogonal array design to optimize copper biosorption by using Spaghnum moss.

    PubMed

    Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil

    2014-09-01

    In this work, a Taguchi L32 experimental design was applied to optimize the biosorption of Cu(2+) ions by an easily available biosorbent, Sphagnum moss. To this end, batch biosorption tests were performed to realize the targeted experimental design with five factors (concentration, pH, biosorbent dosage, temperature and agitation time) at two levels. Optimal experimental conditions were determined from the calculated signal-to-noise ratios. The "higher is better" approach was followed in calculating the signal-to-noise ratios, since the aim was to obtain high metal removal efficiencies. The impact ratios of the factors were determined by the model. Within the study, Cu(2+) biosorption efficiencies were also predicted using the Taguchi method. The results of the model showed that the experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach. Furthermore, thermodynamic, isotherm and kinetic studies were performed to explain the biosorption mechanism. The calculated thermodynamic parameters were in good accordance with the results of the Taguchi model. Copyright © 2014 Elsevier Inc. All rights reserved.
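
    The "higher is better" signal-to-noise ratio used here has a standard closed form, S/N = −10·log10[(1/n)·Σ(1/y_i²)]. A small sketch with invented removal efficiencies for two levels of one factor:

    ```python
    import math

    def sn_higher_is_better(values):
        """Taguchi 'higher is better' S/N ratio for replicate responses y_i (e.g., % removal)."""
        n = len(values)
        return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

    # Invented Cu(2+) removal efficiencies (%) for runs at two levels of one factor, e.g. pH.
    level_1 = [62.0, 58.5, 64.2, 60.1]
    level_2 = [78.3, 80.9, 76.5, 79.4]
    for name, runs in [("pH level 1", level_1), ("pH level 2", level_2)]:
        print(f"{name}: S/N = {sn_higher_is_better(runs):.2f} dB")
    # The level with the larger mean S/N is taken as the optimum setting for that factor.
    ```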

  18. Demons and superconductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ihm, J.; Cohen, M.L.; Tuan, S.F.

    1981-04-01

    Model calculations are used to explore the role of demons (acoustic plasmons involving light and heavy mass carriers) in superconductivity. Heavy d electrons and light s and p electrons in a transition metal are used for discussion, but the calculation presented is more general, and the results can be applied to other systems. The analysis is based on the dielectric-function approach and the Bardeen-Cooper-Schrieffer theory. The dielectric function includes intraband and interband s-d scattering, and a tight-binding model is used to examine the role of s-d hybridization. The demon contribution generally reduces the Coulomb interaction between the electrons. Under suitable conditions, the model calculations indicate that the electron-electron interaction via demons can be attractive, but the results also suggest that this mechanism is probably not dominant in transition metals and transition-metal compounds. An attractive interband contribution is found, and it is proposed that this effect may lead to pairing in suitable systems.

  19. A model relating radiated power and impurity concentrations during Ne, N and Ar injection in Tore Supra

    NASA Astrophysics Data System (ADS)

    Hogan, J.; Demichelis, C.; Monier-Garbet, P.; Guirlet, R.; Hess, W.; Schunke, B.

    2000-10-01

    A model combining the MIST (core symmetric) and BBQ (SOL asymmetric) codes is used to study the relation between impurity density and radiated power for representative cases from Tore Supra experiments on strong radiation regimes using the ergodic divertor. Transport predictions of external radiation are compared with observation to estimate the absolute impurity density. BBQ provides the incoming distribution of recycling impurity charge states for the radial transport calculation. The shots studied use the ergodic divertor and high ICRH power. Power is first applied and then the extrinsic impurity (Ne, N or Ar) is injected. Separate time dependent intrinsic (C and O) impurity transport calculations match radiation levels before and during the high power and impurity injection phases. Empirical diffusivities are sought to reproduce the UV (CV R, I lines), CVI Lya, OVIII Lya, Zeff, and horizontal bolometer data. The model has been used to calculate the relative radiative efficiency (radiated power / extrinsically contributed electron) for the sample database.

  20. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
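
    A generic version of the surrogate step, training a Gaussian process on group-additivity-style fingerprints to predict adsorption energies and then using its predictive uncertainty to select the next explicit DFT calculation, could look like the following (the fingerprints and energies are invented; this is not the authors' code):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Invented group-additivity fingerprints (counts of C, H, O and metal-bonded atoms)
    # and DFT adsorption energies (eV) for a few already-computed intermediates.
    X_known = np.array([[1, 0, 0, 1], [1, 1, 0, 1], [1, 2, 0, 1], [1, 0, 1, 2]], dtype=float)
    y_known = np.array([-6.8, -4.1, -2.0, -3.5])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                  normalize_y=True).fit(X_known, y_known)

    # Candidate intermediates not yet computed with DFT.
    X_candidates = np.array([[1, 3, 0, 1], [1, 1, 1, 2], [2, 2, 0, 2]], dtype=float)
    mean, std = gp.predict(X_candidates, return_std=True)
    next_dft = int(np.argmax(std))   # compute the most uncertain candidate explicitly next
    print("predicted energies:", np.round(mean, 2), "-> next DFT candidate:", next_dft)
    ```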

  1. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE PAGES

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; ...

    2017-03-06

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  2. Detection efficiency calculation for photons, electrons and positrons in a well detector. Part I: Analytical model

    NASA Astrophysics Data System (ADS)

    Pommé, S.

    2009-06-01

    An analytical model is presented to calculate the total detection efficiency of a well-type radiation detector for photons, electrons and positrons emitted from a radioactive source at an arbitrary position inside the well. The model is well suited to treat a typical set-up with a point source or cylindrical source and vial inside a NaI well detector, with or without lead shield surrounding it. It allows for fast absolute or relative total efficiency calibrations for a wide variety of geometrical configurations and also provides accurate input for the calculation of coincidence summing effects. Depending on its accuracy, it may even be applied in 4π-γ counting, a primary standardisation method for activity. Besides an accurate account of photon interactions, precautions are taken to simulate the special case of 511 keV annihilation quanta and to include realistic approximations for the range of (conversion) electrons and β−- and β+-particles.

  3. The Relaxation Matrix for Symmetric Tops with Inversion Symmetry. II; Line Mixing Effects in the V1 Band of NH3

    NASA Technical Reports Server (NTRS)

    Boulet, C.; Ma, Q.

    2016-01-01

    Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.

  4. The elasticity and failure of fluid-filled cellular solids: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Warner, M.; Thiel, B. L.; Donald, A. M.

    2000-02-01

    We extend and apply theories of filled foam elasticity and failure to recently available data on foods. The predictions of elastic modulus and failure mode dependence on internal pressure and on wall integrity are borne out by photographic evidence of distortion and failure under compressive loading and under the localized stress applied by a knife blade, and by mechanical data on vegetables differing only in their turgor pressure. We calculate the dry modulus of plate-like cellular solids and the cross over between dry-like and fully fluid-filled elastic response. The bulk elastic properties of limp and aging cellular solids are calculated for model systems and compared with our mechanical data, which also show two regimes of response. The mechanics of an aged, limp beam is calculated, thus offering a practical procedure for comparing experiment and theory. This investigation also thereby offers explanations of the connection between turgor pressure and crispness and limpness of cellular materials.

  5. The elasticity and failure of fluid-filled cellular solids: theory and experiment.

    PubMed

    Warner, M; Thiel, B L; Donald, A M

    2000-02-15

    We extend and apply theories of filled foam elasticity and failure to recently available data on foods. The predictions of elastic modulus and failure mode dependence on internal pressure and on wall integrity are borne out by photographic evidence of distortion and failure under compressive loading and under the localized stress applied by a knife blade, and by mechanical data on vegetables differing only in their turgor pressure. We calculate the dry modulus of plate-like cellular solids and the cross over between dry-like and fully fluid-filled elastic response. The bulk elastic properties of limp and aging cellular solids are calculated for model systems and compared with our mechanical data, which also show two regimes of response. The mechanics of an aged, limp beam is calculated, thus offering a practical procedure for comparing experiment and theory. This investigation also thereby offers explanations of the connection between turgor pressure and crispness and limpness of cellular materials.

  6. The elasticity and failure of fluid-filled cellular solids: Theory and experiment

    PubMed Central

    Warner, M.; Thiel, B. L.; Donald, A. M.

    2000-01-01

    We extend and apply theories of filled foam elasticity and failure to recently available data on foods. The predictions of elastic modulus and failure mode dependence on internal pressure and on wall integrity are borne out by photographic evidence of distortion and failure under compressive loading and under the localized stress applied by a knife blade, and by mechanical data on vegetables differing only in their turgor pressure. We calculate the dry modulus of plate-like cellular solids and the cross over between dry-like and fully fluid-filled elastic response. The bulk elastic properties of limp and aging cellular solids are calculated for model systems and compared with our mechanical data, which also show two regimes of response. The mechanics of an aged, limp beam is calculated, thus offering a practical procedure for comparing experiment and theory. This investigation also thereby offers explanations of the connection between turgor pressure and crispness and limpness of cellular materials. PMID:10660680

  7. Failure Criteria for FRP Laminates in Plane Stress

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2003-01-01

    A new set of six failure criteria for fiber reinforced polymer laminates is described. Derived from Dvorak's fracture mechanics analyses of cracked plies and from Puck's action plane concept, the physically-based criteria, denoted LaRC03, predict matrix and fiber failure accurately without requiring curve-fitting parameters. For matrix failure under transverse compression, the fracture plane is calculated by maximizing the Mohr-Coulomb effective stresses. A criterion for fiber kinking is obtained by calculating the fiber misalignment under load, and applying the matrix failure criterion in the coordinate frame of the misalignment. Fracture mechanics models of matrix cracks are used to develop a criterion for matrix in tension and to calculate the associated in-situ strengths. The LaRC03 criteria are applied to a few examples to predict failure load envelopes and to predict the failure mode for each region of the envelope. The analysis results are compared to the predictions using other available failure criteria and with experimental results. Predictions obtained with LaRC03 correlate well with the experimental results.
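
    The fracture-plane search for matrix compression can be illustrated schematically: candidate plane angles are scanned and the angle maximizing a Mohr-Coulomb failure index built from the effective shear stresses is retained. The sketch below uses a simplified in-plane stress state and generic friction and strength constants, and is not the full LaRC03 formulation:

    ```python
    import numpy as np

    def matrix_compression_plane(sigma22, tau12, eta_T=0.29, eta_L=0.3, S_T=40.0, S_L=80.0):
        """Scan candidate fracture-plane angles and return the angle maximizing a
        Mohr-Coulomb index built from effective transverse/longitudinal shear stresses."""
        alphas = np.radians(np.arange(0.0, 90.0, 0.5))
        sn = sigma22 * np.cos(alphas) ** 2                  # normal stress on the candidate plane
        tT = -sigma22 * np.sin(alphas) * np.cos(alphas)     # transverse shear on the plane
        tL = tau12 * np.cos(alphas)                         # longitudinal shear on the plane
        tT_eff = np.maximum(np.abs(tT) + eta_T * sn, 0.0)   # friction reduces shear when sn < 0
        tL_eff = np.maximum(np.abs(tL) + eta_L * sn, 0.0)
        fi = (tT_eff / S_T) ** 2 + (tL_eff / S_L) ** 2
        i = int(np.argmax(fi))
        return np.degrees(alphas[i]), fi[i]

    angle, fi = matrix_compression_plane(sigma22=-120.0, tau12=30.0)   # MPa, illustrative values
    print(f"critical plane ~ {angle:.1f} deg, failure index = {fi:.2f}")
    ```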

  8. Crack propagation modelling for high strength steel welded structural details

    NASA Astrophysics Data System (ADS)

    Mecséri, B. J.; Kövesdi, B.

    2017-05-01

    Nowadays the barrier to applying HSS (High Strength Steel) material in bridge structures is its low fatigue strength relative to its yield strength. This paper focuses on the fatigue behaviour of a structural detail (a gusset plate connection) made from NSS and HSS material, which is frequently used in bridges in Hungary. An experimental research program is carried out at the Budapest University of Technology and Economics to investigate the fatigue lifetime of this structural detail type using identical test specimens made from the S235 and S420 steel grades. The main aim of the experimental research program is to study the differences in crack propagation and fatigue lifetime between normal and high strength steel structures. Based on the observed fatigue crack pattern, the main direction and velocity of the crack propagation are determined. In parallel with the tests, a finite element model (FEM) that can handle the crack propagation is also developed. Using the strain data measured in the tests and the values calculated from the FE model, the material parameters of the Paris law are approximated step by step, and the resulting values are evaluated. The same material properties are determined for both NSS and HSS specimens, and the differences are discussed. In the current paper, the results of the experiments, the calculation method for the material parameters and the calculated values are presented.
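
    Once Paris-law constants have been identified, a remaining-life estimate follows from integrating da/dN = C·(ΔK)^m between the initial and critical crack sizes. A generic numerical sketch (the constants and geometry factor are illustrative, not the fitted S235/S420 values):

    ```python
    import math

    def cycles_to_failure(a0, a_crit, delta_sigma, C, m, Y=1.12, steps=20000):
        """Integrate the Paris law da/dN = C*(dK)^m with dK = Y*dsigma*sqrt(pi*a)."""
        a, n_cycles = a0, 0.0
        da = (a_crit - a0) / steps
        while a < a_crit:
            dK = Y * delta_sigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
            n_cycles += da / (C * dK ** m)                  # dN = da / (C * dK^m)
            a += da
        return n_cycles

    # Illustrative values: 1 mm initial flaw, 20 mm critical size, 100 MPa stress range.
    N = cycles_to_failure(a0=1e-3, a_crit=20e-3, delta_sigma=100.0, C=3e-12, m=3.0)
    print(f"estimated fatigue life ~ {N:,.0f} cycles")
    ```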

  9. Main Sources and Doses of Space Radiation during Mars Missions and Total Radiation Risk for Cosmonauts

    NASA Astrophysics Data System (ADS)

    Mitrikas, Victor; Aleksandr, Shafirkin; Shurshakov, Vyacheslav

    This work contains calculated data on the generalized doses and dose equivalents in critical organs and tissues of cosmonauts produced by galactic cosmic rays (GCR), solar cosmic rays (SCR) and the Earth's radiation belts (ERB) that will impact crewmembers during a flight to Mars, while staying in the landing module and on the Martian surface, and during the return to Earth. Calculated total radiation risk values over the whole post-flight life of the cosmonauts are also presented. Radiation risk (RR) calculations are performed on the basis of a radiobiological model of radiation damage to living organisms, taking into account repair processes acting during continuous long-term exposure at various dose rates and under acute recurrent radiation impact. The RR calculations are performed for crewmembers of various ages undertaking a flight to Mars over 2-3 years at the maximum and minimum of the solar cycle. The total carcinogenic and non-carcinogenic RR and the possible life-span shortening are estimated on the basis of a model of radiation death probability for mammals. This model takes into account the decrease in the compensatory reserve of the organism as well as the increase in mortality rate and the reduction of the subsequent lifetime of the cosmonaut. The analyzed dose distributions in the shielding and body areas are applied to model calculations with tissue-equivalent spherical and anthropomorphic phantoms.

  10. Assessing local population vulnerability with branching process models: an application to wind energy development

    USGS Publications Warehouse

    Erickson, Richard A.; Eager, Eric A.; Stanton, Jessica C.; Beston, Julie A.; Diffendorfer, James E.; Thogmartin, Wayne E.

    2015-01-01

    Quantifying the impact of anthropogenic development on local populations is important for conservation biology and wildlife management. However, these local populations are often subject to demographic stochasticity because of their small population size. Traditional modeling efforts such as population projection matrices do not consider this source of variation, whereas individual-based models, which include demographic stochasticity, are computationally intense and lack analytical tractability. One compromise between these approaches is branching process models, because they accommodate demographic stochasticity and are easily calculated. These models are known within some sub-fields of probability and mathematical ecology but are not often applied in conservation biology and applied ecology. We applied branching process models to quantitatively compare and prioritize species locally vulnerable to the development of wind energy facilities. Specifically, we examined species vulnerability using branching process models for four representative species: a cave bat (a long-lived, low fecundity species), a tree bat (a short-lived, moderate fecundity species), a grassland songbird (a short-lived, high fecundity species), and an eagle (a long-lived, slow maturation species). Wind turbine-induced mortality has been observed for all of these species types, raising conservation concerns. We simulated different mortality rates from wind farms while calculating local extinction probabilities. The longer-lived species types (e.g., cave bats and eagles) had much more pronounced transitions from low extinction risk to high extinction risk than short-lived species types (e.g., tree bats and grassland songbirds). High-offspring-producing species types had a much greater variability in baseline risk of extinction than the lower-offspring-producing species types. Long-lived species types may appear stable until a critical level of incidental mortality occurs. After this threshold, the risk of extirpation for a local population may rapidly increase with only minimal increases in wind mortality. Conservation biologists and wildlife managers may need to consider this mortality pattern when issuing take permits and developing monitoring protocols for wind facilities. We also describe how our branching process models may be generalized across a wider range of species for a larger assessment project and then describe how our methods may be applied to other stressors in addition to wind.
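
    The type of calculation described above can be sketched with a simple Galton-Watson-style simulation of a small population with turbine mortality added on top of background survival; the survival, fecundity and added-mortality values below are illustrative placeholders, not the demographic rates used for the four species in the study.

      import numpy as np

      rng = np.random.default_rng(42)

      def extinction_probability(n0=50, survival=0.7, fecundity=0.6,
                                 added_mortality=0.05, years=50, trials=2000):
          # Monte Carlo estimate of local extinction risk for a female-only
          # branching-process population; turbine mortality scales annual survival.
          s = survival * (1.0 - added_mortality)
          extinct = 0
          for _ in range(trials):
              n = n0
              for _ in range(years):
                  survivors = rng.binomial(n, s)
                  recruits = rng.poisson(fecundity * survivors)
                  n = survivors + recruits
                  if n == 0:
                      extinct += 1
                      break
          return extinct / trials

      for add_mort in (0.0, 0.02, 0.05, 0.10):
          print(add_mort, extinction_probability(added_mortality=add_mort))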

  11. Fully Capitated Payment Breakeven Rate for a Mid-Size Pediatric Practice.

    PubMed

    Farmer, Steven A; Shalowitz, Joel; George, Meaghan; McStay, Frank; Patel, Kavita; Perrin, James; Moghtaderi, Ali; McClellan, Mark

    2016-08-01

    Payers are implementing alternative payment models that attempt to align payment with high-value care. This study calculates the breakeven capitated payment rate for a midsize pediatric practice and explores how several different staffing scenarios affect the rate. We supplemented a literature review and data from >200 practices with interviews of practice administrators, physicians, and payers to construct an income statement for a hypothetical, independent, midsize pediatric practice in fee-for-service. The practice was transitioned to full capitation to calculate the breakeven capitated rate, holding all practice parameters constant. Panel size, overhead, physician salary, and staffing ratios were varied to assess their impact on the breakeven per-member per-month (PMPM) rate. Finally, payment rates from an existing health plan were applied to the practice. The calculated breakeven PMPM was $24.10. When an economic simulation allowed core practice parameters to vary across a broad range, 80% of practices broke even with a PMPM of $35.00. The breakeven PMPM increased by 12% ($3.00) when the staffing ratio increased by 25% and increased by 23% ($5.50) when the staffing ratio increased by 38%. The practice was viable, even with primary care medical home staffing ratios, when rates from a real-world payer were applied. Practices are more likely to succeed in capitated models if pediatricians understand how these models alter practice finances. Staffing changes that are common in patient-centered medical home models increased the breakeven capitated rate. The degree to which team-based care will increase panel size and offset increased cost is unknown. Copyright © 2016 by the American Academy of Pediatrics.
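
    The core breakeven calculation is simple arithmetic: the capitated rate at which revenue covers cost is the annual practice cost divided by total member-months. A minimal sketch with hypothetical figures (not the study's income statement):

      def breakeven_pmpm(annual_practice_cost, panel_size):
          # Capitated per-member per-month rate at which revenue equals cost
          return annual_practice_cost / (panel_size * 12)

      # Hypothetical figures for illustration only
      print(round(breakeven_pmpm(annual_practice_cost=2_050_000, panel_size=7_000), 2))
      # Higher staffing cost raises the numerator and hence the breakeven rate
      print(round(breakeven_pmpm(annual_practice_cost=2_300_000, panel_size=7_000), 2))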

  12. Cost-effectiveness analysis in melanoma detection: A transition model applied to dermoscopy.

    PubMed

    Tromme, Isabelle; Legrand, Catherine; Devleesschauwer, Brecht; Leiter, Ulrike; Suciu, Stefan; Eggermont, Alexander; Sacré, Laurine; Baurain, Jean-François; Thomas, Luc; Beutels, Philippe; Speybroeck, Niko

    2016-11-01

    The main aim of this study is to demonstrate how our melanoma disease model (MDM) can be used for cost-effectiveness analyses (CEAs) in the melanoma detection field. In particular, we used the data of two cohorts of Belgian melanoma patients to investigate the cost-effectiveness of dermoscopy. An MDM, previously constructed to calculate the melanoma burden, was slightly modified to be suitable for CEAs. Two cohorts of patients were entered into the model to calculate morbidity, mortality and costs. These cohorts consisted of melanoma patients diagnosed by dermatologists adequately, or not adequately, trained in dermoscopy. Effectiveness and costs were calculated for each cohort and compared. Effectiveness was expressed in quality-adjusted life years (QALYs), a composite measure depending on melanoma-related morbidity and mortality. Costs included costs of treatment and follow-up as well as costs of detection in non-melanoma patients and costs of excision and pathology of benign lesions excised to rule out melanoma. Our analysis concluded that melanoma diagnosis by dermatologists adequately trained in dermoscopy resulted in both a gain of QALYs (less morbidity and/or mortality) and a reduction in costs. This study demonstrates how our MDM can be used in CEAs in the melanoma detection field. The model and the methodology suggested in this paper were applied to two cohorts of Belgian melanoma patients. Their analysis concluded that adequate dermoscopy training is cost-effective. The results should be confirmed by a large-scale randomised study. Copyright © 2016 Elsevier Ltd. All rights reserved.
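
    The comparison underlying such an analysis can be sketched as follows: when one strategy yields more QALYs at lower cost it dominates, otherwise the incremental cost-effectiveness ratio (ICER) is reported. The per-patient values below are hypothetical, not the Belgian cohort results.

      def compare_strategies(cost_a, qaly_a, cost_b, qaly_b):
          # Compare strategy B (e.g. dermoscopy-trained) against reference A
          d_cost, d_qaly = cost_b - cost_a, qaly_b - qaly_a
          if d_cost <= 0 and d_qaly >= 0:
              return "B dominates A (cheaper and more effective)"
          if d_qaly == 0:
              return "no QALY gain"
          return f"ICER = {d_cost / d_qaly:.0f} per QALY gained"

      # Hypothetical per-patient values, for illustration only
      print(compare_strategies(cost_a=1800.0, qaly_a=14.20, cost_b=1650.0, qaly_b=14.32))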

  13. Extended optical model for fission

    DOE PAGES

    Sin, M.; Capote, R.; Herman, M. W.; ...

    2016-03-07

    A comprehensive formalism to calculate fission cross sections based on the extension of the optical model for fission is presented. It can be used for description of nuclear reactions on actinides featuring multi-humped fission barriers with partial absorption in the wells and direct transmission through discrete and continuum fission channels. The formalism describes the gross fluctuations observed in the fission probability due to vibrational resonances, and can be easily implemented in existing statistical reaction model codes. The extended optical model for fission is applied for neutron induced fission cross-section calculations on 234,235,238U and 239Pu targets. A triple-humped fission barrier is used for 234,235U(n,f), while a double-humped fission barrier is used for 238U(n,f) and 239Pu(n,f) reactions as predicted by theoretical barrier calculations. The impact of partial damping of class-II/III states, and of direct transmission through discrete and continuum fission channels, is shown to be critical for a proper description of the measured fission cross sections for 234,235,238U(n,f) reactions. The 239Pu(n,f) reaction can be calculated in the complete damping approximation. Calculated cross sections for 235,238U(n,f) and 239Pu(n,f) reactions agree within 3% with the corresponding cross sections derived within the Neutron Standards least-squares fit of available experimental data. Lastly, the extended optical model for fission can be used for both theoretical fission studies and nuclear data evaluation.
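
    Although the full optical model for fission involves partial absorption in the wells, its basic ingredient, transmission through a parabolic barrier hump, is commonly approximated by the Hill-Wheeler formula; the sketch below combines two humps in the complete-damping limit, with illustrative barrier parameters rather than evaluated actinide values.

      import numpy as np

      def hill_wheeler(E, barrier_height, hbar_omega):
          # Transmission coefficient through a single inverted-parabola barrier
          return 1.0 / (1.0 + np.exp(2.0 * np.pi * (barrier_height - E) / hbar_omega))

      E = np.linspace(4.0, 8.0, 5)                                 # excitation energy, MeV
      T_A = hill_wheeler(E, barrier_height=6.0, hbar_omega=1.0)    # inner hump (illustrative)
      T_B = hill_wheeler(E, barrier_height=5.5, hbar_omega=0.6)    # outer hump (illustrative)
      # Double-humped barrier in the complete-damping limit
      T_eff = T_A * T_B / (T_A + T_B)
      print(T_eff)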

  14. An investigation into the numerical prediction of boundary layer transition using the K.Y. Chien turbulence model

    NASA Technical Reports Server (NTRS)

    Stephens, Craig A.; Crawford, Michael E.

    1990-01-01

    Assessments were made of the simulation capabilities of transition models developed at the University of Minnesota, as applied to the Launder-Sharma and Lam-Bremhorst two-equation turbulence models, and at The University of Texas at Austin, as applied to the K. Y. Chien two-equation turbulence model. A major shortcoming in the use of the basic K. Y. Chien turbulence model for low-Reynolds number flows was identified. The problem with the Chien model involved premature start of natural transition and a damped response as the simulation moved to fully turbulent flow at the end of transition. This is in contrast to the other two-equation turbulence models at comparable freestream turbulence conditions. The damping of the transition response of the Chien turbulence model leads to an inaccurate estimate of the start and end of transition for freestream turbulence levels greater than 1.0 percent and to difficulty in calculating proper model constants for the transition model.

  15. Warm Forming of Aluminum Alloys using a Coupled Thermo-Mechanical Anisotropic Material Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abedrabbo, Nader; Pourboghrat, Farhang; Carsley, John E.

    Temperature-dependent anisotropic material models for two types of automotive aluminum alloys (5754-O and 5182-O) were developed and implemented in LS-Dyna as a user material subroutine (UMAT) for coupled thermo-mechanical finite element analysis (FEA) of warm forming of aluminum alloys. The anisotropy coefficients of the Barlat YLD2000 plane stress yield function for both materials were calculated over the temperature range 25 deg. C to 260 deg. C. Curve fitting was used to express the anisotropy coefficients of YLD2000 and the flow stress as functions of temperature. This temperature-dependent material model was successfully applied to the coupled thermo-mechanical analysis of stretching of aluminum sheets and the results were compared with experiments.

  16. Dielectric properties of graphene/MoS2 heterostructures from ab initio calculations and electron energy-loss experiments

    NASA Astrophysics Data System (ADS)

    Mohn, Michael J.; Hambach, Ralf; Wachsmuth, Philipp; Giorgetti, Christine; Kaiser, Ute

    2018-06-01

    High-energy electronic excitations of graphene and MoS2 heterostructures are investigated by momentum-resolved electron energy-loss spectroscopy in the range of 1 to 35 eV. The interplay of excitations on different sheets is understood in terms of long-range Coulomb interactions and is simulated using a combination of ab initio and dielectric model calculations. In particular, the layered electron-gas model is extended to thick layers by including the spatial dependence of the dielectric response in the direction perpendicular to the sheets. We apply this model to the case of graphene/MoS2/graphene heterostructures and discuss the possibility of extracting the dielectric properties of an encapsulated monolayer from measurements of the entire stack.

  17. Model Prediction Results for 2007 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin

    2008-02-01

    The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and effects of surface curvature on the ultrasonic responses of flat-bottomed holes. To solve this year's benchmark problems, we applied multi-Gaussian beam models for the calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for the calculation of the far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparisons of model predictions with experiments for side-drilled holes and discuss the effect of interface curvature on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses for different sizes and interface curvatures.

  18. 3D thermal modeling of TRISO fuel coupled with neutronic simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jianwei; Uddin, Rizwan

    2010-01-01

    The Very High Temperature Gas Reactor (VHTR) is widely considered one of the top candidates identified in the Next Generation Nuclear Plant (NGNP) Technology Roadmap under the U.S. Department of Energy's Generation IV program. The TRISO particle is a common element among different VHTR designs and its performance is critical to the safety and reliability of the whole reactor. A TRISO particle experiences complex thermo-mechanical changes during reactor operation under high temperature and high burnup conditions. TRISO fuel performance analysis requires evaluation of these changes on the micro scale. Since most of these changes are temperature dependent, 3D thermal modeling of TRISO fuel is a crucial step of the whole analysis package. In this paper, a 3D numerical thermal model was developed to calculate the temperature distribution inside the TRISO particle and the pebble under different scenarios. 3D simulation is required because pebbles or TRISOs are always subjected to asymmetric thermal conditions, since they are randomly packed together. The numerical model was developed using the finite difference method and was benchmarked against 1D analytical results and results reported in the literature. Monte Carlo models were set up to calculate the radial power density profile. A complex convective boundary condition was applied on the pebble outer surface. Three reactors were simulated using this model to calculate temperature distributions under different power levels. Two asymmetric boundary conditions were applied to the pebble to test the 3D capabilities. A gas bubble was hypothesized inside the TRISO kernel and a 3D simulation was also carried out under this scenario. Results consistent with physical intuition were obtained and are reported in this paper.
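
    A 1D analytical benchmark of the kind mentioned above can be reproduced with a short finite-difference calculation for steady radial conduction in a homogeneous sphere with uniform heat generation; the radius, conductivity and power density below are illustrative values, not actual TRISO or pebble data.

      import numpy as np

      # Steady-state radial conduction, (1/r^2) d/dr (k r^2 dT/dr) + q''' = 0,
      # with symmetry at the centre and a fixed surface temperature.
      R, k, q, T_surf = 0.03, 15.0, 2.0e6, 900.0   # m, W/m/K, W/m^3, K (assumed)
      n = 200
      r = np.linspace(0.0, R, n)
      dr = r[1] - r[0]

      A = np.zeros((n, n))
      b = np.full(n, -q * dr**2 / k)
      for i in range(1, n - 1):
          A[i, i - 1] = 1.0 - dr / r[i]
          A[i, i] = -2.0
          A[i, i + 1] = 1.0 + dr / r[i]
      # Boundary conditions: dT/dr = 0 at r = 0, T = T_surf at r = R
      A[0, 0], A[0, 1], b[0] = -1.0, 1.0, 0.0
      A[-1, -1], b[-1] = 1.0, T_surf

      T = np.linalg.solve(A, b)
      T_exact = T_surf + q * (R**2 - r**2) / (6.0 * k)   # analytical solution
      print(T[0], T_exact[0], abs(T - T_exact).max())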

  19. Asteroseismic modelling of the solar-like star β Hydri

    NASA Astrophysics Data System (ADS)

    Doğan, G.; Brandão, I. M.; Bedding, T. R.; Christensen-Dalsgaard, J.; Cunha, M. S.; Kjeldsen, H.

    2010-07-01

    We present the results of modelling the subgiant star β Hydri using seismic observational constraints. We have computed several grids of stellar evolutionary tracks using the Aarhus STellar Evolution Code (ASTEC, Christensen-Dalsgaard in Astrophys. Space Sci. 316:13, 2008a), with and without helium diffusion and settling. For those models on each track that are located at the observationally determined position of β Hydri in the Hertzsprung-Russell (HR) diagram, we have calculated the oscillation frequencies using the Aarhus adiabatic pulsation package (ADIPLS, Christensen-Dalsgaard in Astrophys. Space Sci. 316:113, 2008b). Applying the near-surface corrections to the calculated frequencies using the empirical law presented by Kjeldsen et al. (Astrophys. J. 683:L175, 2008), we have compared the corrected model frequencies with the observed frequencies of the star. We show that after correcting the frequencies for the near-surface effects, we have a fairly good fit for both l=0 and l=2 frequencies. We also have good agreement between the observed and calculated l=1 mode frequencies, although there is room for improvement in order to fit all the observed mixed modes simultaneously.
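
    The near-surface correction of Kjeldsen et al. (2008) is an empirical power law added to the model frequencies before comparison with observations; a minimal sketch follows, using the solar-calibrated exponent b of about 4.9 but an illustrative reference frequency and amplitude rather than the values fitted for β Hydri.

      import numpy as np

      def near_surface_correction(nu_model, nu_ref, a, b=4.9):
          # Shift model frequencies by a*(nu/nu_ref)**b before comparing with observations
          return nu_model + a * (nu_model / nu_ref) ** b

      nu_model = np.array([850.0, 910.0, 970.0, 1030.0])   # muHz, hypothetical l=0 modes
      print(near_surface_correction(nu_model, nu_ref=1000.0, a=-4.0))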

  20. A new method to optimize natural convection heat sinks

    NASA Astrophysics Data System (ADS)

    Lampio, K.; Karvinen, R.

    2017-08-01

    The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates flow. Our model utilizes analytical results of forced flow and convection, and only conduction in a solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is proved by comparing the results of our model with some simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it cuts down on calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposite effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it as a basis for optimizing existing heat sinks.
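
    The particle swarm step can be sketched in a few lines; the single-objective toy problem below stands in for the multi-objective minimization of maximum temperature, size and mass described in the paper, and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          # Minimal single-objective particle swarm optimizer
          lo, hi = np.array(bounds).T
          dim = len(bounds)
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest, pbest_f.min()

      # Toy stand-in objective over fin height and fin spacing (m)
      best, fbest = pso(lambda p: (p[0] - 0.04) ** 2 + (p[1] - 0.003) ** 2,
                        bounds=[(0.01, 0.10), (0.001, 0.010)])
      print(best, fbest)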

  1. On the estimation of heating effects in the atmosphere because of seismic activities

    NASA Astrophysics Data System (ADS)

    Meister, Claudia-Veronika; Hoffmann, Dieter H. H.

    2014-05-01

    The dielectric model for waves in the Earth's ionosphere is further developed and applied to possible electromagnetic phenomena in seismic regions. In contrast to the well-known dielectric wave model by R.O. Dendy [Plasma dynamics, Oxford University Press, 1990] for homogeneous systems, the stratification of the atmosphere is taken into account. Moreover, within the framework of many-fluid magnetohydrodynamics, the momentum transfer between the charged and neutral particles is also considered. The excitation of Alfvén and magnetoacoustic waves is discussed, together with their modification by neutral gas winds; other current-driven waves such as Farley-Buneman waves are also studied. In the work, models of the altitudinal scales of the plasma parameters and of the electromagnetic wave field are derived. In the case of the electric wave field, a method is given to calculate the altitudinal scale based on the Poisson equation for the electric field and the magnetohydrodynamic description of the particles. Further, expressions are derived to estimate the density, pressure, and temperature changes in the E-layer caused by the generation of the electromagnetic waves. Last but not least, formulas are obtained to determine the dispersion and polarisation of the excited electromagnetic waves. These are applied to find quantitative results for the turbulent heating of the ionospheric E-layer. Concerning the calculation of the dispersion relation, in comparison with the earlier work of Meister et al. [Contr. Plasma Phys. 53 (4-5), 406-413, 2013], where a numerical double-iteration method was suggested to obtain the wave dispersion relations, further analytical calculations are now performed. Different polynomial dependencies of the wave frequencies on the wave vectors are treated, which made it possible to restrict the numerical calculations to a single iteration process.

  2. EXTERNAL COMPTON SCATTERING IN BLAZAR JETS AND THE LOCATION OF THE GAMMA-RAY EMITTING REGION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finke, Justin D., E-mail: justin.finke@nrl.navy.mil

    2016-10-20

    I study the location of the γ-ray emission in blazar jets by creating a Compton-scattering approximation that is valid for all anisotropic radiation fields in the Thomson through Klein–Nishina regimes, is highly accurate, and can speed up numerical calculations by up to a factor of ∼10. I apply this approximation to synchrotron self-Compton, external Compton scattering of photons from the accretion disk, broad line region (BLR), and dust torus. I use a stratified BLR model and include detailed Compton-scattering calculations of a spherical and flattened BLR. I create two dust torus models, one where the torus is an annulus and one where it is an extended disk. I present detailed calculations of the photoabsorption optical depth using my detailed BLR and dust torus models, including the full angle dependence. I apply these calculations to the emission from a relativistically moving blob traveling through these radiation fields. The ratio of γ-ray to optical flux produces a predictable pattern that could help locate the γ-ray emission region. I show that the bright flare from 3C 454.3 in 2010 November detected by the Fermi Large Area Telescope is unlikely to originate from a single blob inside the BLR. This is because it moves outside the BLR in a time shorter than the flare duration, although emission by multiple blobs inside the BLR is possible. Also, γ-rays are unlikely to originate from outside of the BLR, due to the scattering of photons from an extended dust torus, since the cooling timescale would be too long to explain the observed short variability.

  3. Thermal noise calculation method for precise estimation of the signal-to-noise ratio of ultra-low-field MRI with an atomic magnetometer.

    PubMed

    Yamashita, Tatsuya; Oida, Takenori; Hamada, Shoji; Kobayashi, Tetsuo

    2012-02-01

    In recent years, there has been considerable interest in developing an ultra-low-field magnetic resonance imaging (ULF-MRI) system using an optically pumped atomic magnetometer (OPAM). However, a precise estimation of the signal-to-noise ratio (SNR) of ULF-MRI has not been carried out. Conventionally, to calculate the SNR of an MR image, thermal noise, also called Nyquist noise, has been estimated by considering a resistor that is electrically equivalent to a biological-conductive sample and is connected in series to a pickup coil. However, this method has major limitations in that the receiver has to be a coil and that it cannot be applied directly to a system using OPAM. In this paper, we propose a method to estimate the thermal noise of an MRI system using OPAM. We calculate the thermal noise from the variance of the magnetic sensor output produced by current-dipole moments that simulate thermally fluctuating current sources in a biological sample. We assume that the random magnitude of the current dipole in each volume element of the biological sample is described by the Maxwell-Boltzmann distribution. The sensor output produced by each current-dipole moment is calculated either by an analytical formula or a numerical method based on the boundary element method. We validate the proposed method by comparing our results with those obtained by conventional methods that consider resistors connected in series to a pickup coil using single-layered sphere, multi-layered sphere, and realistic head models. Finally, we apply the proposed method to the ULF-MRI model using OPAM as the receiver with multi-layered sphere and realistic head models and estimate their SNR. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Possibility designing XNOR and NAND molecular logic gates by using single benzene ring

    NASA Astrophysics Data System (ADS)

    Abbas, Mohammed A.; Hanoon, Falah H.; Al-Badry, Lafy F.

    2017-09-01

    This study examines electronic transport through a single benzene ring and suggests how such a ring can be employed to design XNOR and NAND molecular logic gates. The single benzene ring is threaded by a magnetic flux. The magnetic flux and the applied gate voltages are considered as the key tuning parameters in the operation of the XNOR and NAND gates. All calculations are performed using a steady-state theoretical model based on a time-dependent Hamiltonian. The transmission probability and the electric current are calculated as functions of electron energy and bias voltage, respectively. The anticipated results can serve as a basis for progress in molecular electronics.

  5. Numerical Modelling of Mechanical Properties of C-Pd Film by Homogenization Technique and Finite Element Method

    NASA Astrophysics Data System (ADS)

    Rymarczyk, Joanna; Kowalczyk, Piotr; Czerwosz, Elzbieta; Bielski, Włodzimierz

    2011-09-01

    The nanomechanical properties of nanostructural carbonaceous-palladium films are studied. The nanoindentation experiments are simulated numerically using the Finite Element Method. The homogenization theory is applied to compute the properties of the composite material, which are used as the input data for the nanoindentation calculations.

  6. DEAN: A Program for Dynamic Engine Analysis.

    DTIC Science & Technology

    1985-01-01

    hardware and memory limitations. DIGTEM (ref. 4), a recently written code, allows steady-state as well as transient calculations to be performed. DIGTEM has... "Computer Program for Generating Dynamic Turbofan Engine Models (DIGTEM)," NASA TM-83446. 5. Carnahan, B., Luther, H.A., and Wilkes, J.O., Applied Numerical

  7. A New Kinetic Simulation Model with Self-Consistent Calculation of Regolith Layer Charging for Moon-Plasma Interactions

    NASA Astrophysics Data System (ADS)

    Han, D.; Wang, J.

    2015-12-01

    The moon-plasma interactions and the resulting surface charging have been subjects of extensive recent investigations. While many particle-in-cell (PIC) based simulation models have been developed, all existing PIC simulation models treat the surface of the Moon as a boundary condition to the plasma flow. In such models, the surface of the Moon is typically limited to simple geometry configurations, the surface floating potential is calculated from a simplified current balance condition, and the electric field inside the regolith layer cannot be resolved. This paper presents a new full particle PIC model to simulate local scale plasma flow and surface charging. A major feature of this new model is that the surface is treated as an "interface" between two mediums rather than a boundary, and the simulation domain includes not only the plasma but also the regolith layer and the bedrock underneath it. There are no limitations on the surface shape. An immersed-finite-element field solver is applied which calculates the regolith surface floating potential and the electric field inside the regolith layer directly from local charge deposition. The material property of the regolith layer is also explicitly included in simulation. This new model is capable of providing a self-consistent solution to the plasma flow field, lunar surface charging, the electric field inside the regolith layer and the bedrock for realistic surface terrain. This new model is applied to simulate lunar surface-plasma interactions and surface charging under various ambient plasma conditions. The focus is on the lunar terminator region, where the combined effects from the low sun elevation angle and the localized plasma wake generated by plasma flow over a rugged terrain can generate strongly differentially charged surfaces and complex dust dynamics. We discuss the effects of the regolith properties and regolith layer charging on the plasma flow field, dust levitation, and dust transport.

  8. Model for the prediction of subsurface strata movement due to underground mining

    NASA Astrophysics Data System (ADS)

    Cheng, Jianwei; Liu, Fangyuan; Li, Siyuan

    2017-12-01

    The problem of ground control stability due to large underground mining operations is often associated with large movements and deformations of strata. It is a complicated problem, and can induce severe safety or environmental hazards either at the surface or in strata. Hence, knowing the subsurface strata movement characteristics, and making any subsidence predictions in advance, are desirable for mining engineers to estimate any damage likely to affect the ground surface or subsurface strata. Based on previous research findings, this paper broadly applies a surface subsidence prediction model based on the influence function method to subsurface strata, in order to predict subsurface stratum movement. A step-wise prediction model is proposed, to investigate the movement of underground strata. The model involves a dynamic iteration calculation process to derive the movements and deformations for each stratum layer; modifications to the influence method function are also made for more precise calculations. The critical subsidence parameters, incorporating stratum mechanical properties and the spatial relationship of interest at the mining level, are thoroughly considered, with the purpose of improving the reliability of input parameters. Such research efforts can be very helpful to mining engineers’ understanding of the moving behavior of all strata over underground excavations, and assist in making any damage mitigation plan. In order to check the reliability of the model, two methods are carried out and cross-validation applied. One is to use a borehole TV monitor recording to identify the progress of subsurface stratum bedding and caving in a coal mine, the other is to conduct physical modelling of the subsidence in underground strata. The results of these two methods are used to compare with theoretical results calculated by the proposed mathematical model. The testing results agree well with each other, and the acceptable accuracy and reliability of the proposed prediction model are thus validated.
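
    The influence-function idea applied above can be illustrated with a short calculation using a classical Knothe-type Gaussian influence function for a single fully extracted panel; this is a generic surface-subsidence sketch with made-up dimensions, not the layered, step-wise model proposed in the paper.

      import numpy as np

      def subsidence_profile(x, panel_left, panel_right, s_max, r):
          # Gaussian influence function (1/r)*exp(-pi*u^2/r^2) integrated over the panel
          xi = np.linspace(panel_left, panel_right, 2000)
          dxi = xi[1] - xi[0]
          influence = np.exp(-np.pi * ((x[:, None] - xi[None, :]) / r) ** 2) / r
          return s_max * influence.sum(axis=1) * dxi

      # Illustrative geometry: 200 m wide panel, 1.5 m maximum subsidence, 120 m radius of influence
      x = np.linspace(-300.0, 300.0, 7)
      print(subsidence_profile(x, panel_left=-100.0, panel_right=100.0, s_max=1.5, r=120.0))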

  9. High Temperature Gas Energy Transfer.

    DTIC Science & Technology

    1982-08-15

    will be made. A theoretical model has been applied to the calculation of energy transfer amounts between molecules as a function of molecular size... A theoretical analysis was given of shock tube data for high temperature gas reactions. The data were analyzed to show that collisional energy transfer... Systems by I. Oref and B. S. Rabinovitch. In this report a simple theoretical model describing energy transfer probabilities is given. Conservation of

  10. Wide-Field Imaging System and Rapid Direction of Optical Zoom (WOZ)

    DTIC Science & Technology

    2011-03-25

    COMSOL Multiphysics, and ZEMAX optical design. The multiphysics design tool is nearing completion. We have demonstrated the ability to create a model in... and mechanical modeling to calculate the deformation resulting from the applied voltages. Finally, the deformed surface can be exported to ZEMAX via... MatLab. From ZEMAX, various analyses can be conducted to determine important parameters such as focal point, aberrations, and wavefront distortion

  11. Wide-field Imaging System and Rapid Direction of Optical Zoom (WOZ)

    DTIC Science & Technology

    2010-12-24

    The modeling tools are based on interaction between three commercial software packages: SolidWorks, COMSOL Multiphysics, and ZEMAX optical design... deformation resulting from the applied voltages. Finally, the deformed surface can be exported to ZEMAX via MatLab. From ZEMAX, various analyses can... results to extract from ZEMAX to support the optimization remains to be determined. Figure 1 shows the deformation calculated using a model of an

  12. On a perturbed Sparre Andersen risk model with multi-layer dividend strategy

    NASA Astrophysics Data System (ADS)

    Yang, Hu; Zhang, Zhimin

    2009-10-01

    In this paper, we consider a perturbed Sparre Andersen risk model, in which the inter-claim times are generalized Erlang(n) distributed. Under the multi-layer dividend strategy, piece-wise integro-differential equations for the discounted penalty functions are derived, and a recursive approach is applied to express the solutions. A numerical example to calculate the ruin probabilities is given to illustrate the solution procedure.

  13. Radar Cross Section Prediction for Coated Perfect Conductors with Arbitrary Geometries.

    DTIC Science & Technology

    1986-01-01

    equivalent electric and magnetic surface currents as the desired unknowns. Triangular patch modelling is applied to the boundary surfaces. The method of... matrix inversion for the unknown surface current coefficients. Huygens’ principle is again applied to calculate the scattered electric field produced...

  14. Developing a Conceptually Equivalent Type 2 Diabetes Risk Score for Indian Gujaratis in the UK

    PubMed Central

    Patel, Naina; Stone, Margaret; Barber, Shaun; Gray, Laura; Davies, Melanie; Khunti, Kamlesh

    2016-01-01

    Aims. To apply and assess the suitability of a model consisting of commonly used cross-cultural translation methods to achieve a conceptually equivalent Gujarati language version of the Leicester self-assessment type 2 diabetes risk score. Methods. Implementation of the model involved multiple stages, including pretesting of the translated risk score by conducting semistructured interviews with a purposive sample of volunteers. Interviews were conducted on an iterative basis to enable findings to inform translation revisions and to elicit volunteers' ability to self-complete and understand the risk score. Results. The pretest stage was an essential component involving recruitment of a diverse sample of 18 Gujarati volunteers, many of whom gave detailed suggestions for improving the instructions for the calculation of the risk score and BMI table. Volunteers found the standard and level of Gujarati accessible and helpful in understanding the concept of risk, although many of the volunteers struggled to calculate their BMI. Conclusions. This is the first time that a multicomponent translation model has been applied to the translation of a type 2 diabetes risk score into another language. This project provides an invaluable opportunity to share learning about the transferability of this model for translation of self-completed risk scores in other health conditions. PMID:27703985

  15. Modelling clustering of vertically aligned carbon nanotube arrays.

    PubMed

    Schaber, Clemens F; Filippov, Alexander E; Heinlein, Thorsten; Schneider, Jörg J; Gorb, Stanislav N

    2015-08-06

    Previous research demonstrated that arrays of vertically aligned carbon nanotubes (VACNTs) exhibit strong frictional properties. Experiments indicated a strong decrease of the friction coefficient from the first to the second sliding cycle in repetitive measurements on the same VACNT spot, but stable values in consecutive cycles. VACNTs form clusters under shear applied during friction tests, and self-organization stabilizes the mechanical properties of the arrays. With increasing load in the range between 300 µN and 4 mN applied normally to the array surface during friction tests the size of the clusters increases, while the coefficient of friction decreases. To better understand the experimentally obtained results, we formulated and numerically studied a minimalistic model, which reproduces the main features of the system with a minimum of adjustable parameters. We calculate the van der Waals forces between the spherical friction probe and bunches of the arrays using the well-known Morse potential function to predict the number of clusters, their size, instantaneous and mean friction forces and the behaviour of the VACNTs during consecutive sliding cycles and at different normal loads. The data obtained by the model calculations coincide very well with the experimental data and can help in adapting VACNT arrays for biomimetic applications.
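
    The pairwise interaction entering such a model can be sketched with the Morse potential named above; the force is its negative radial derivative, and the depth, stiffness and equilibrium distance used here are illustrative, not the values fitted to the VACNT experiments.

      import numpy as np

      def morse_force(r, D=1.0e-18, a=1.0e9, r0=2.0e-9):
          # Force (N) from a Morse pair potential V(r) = D*(1 - exp(-a*(r - r0)))**2 - D
          e = np.exp(-a * (r - r0))
          return -2.0 * D * a * (1.0 - e) * e     # F = -dV/dr

      r = np.linspace(1.5e-9, 6.0e-9, 6)          # probe-bundle separations, m (illustrative)
      print(morse_force(r))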

  16. A Simple Geometrical Model for Calculation of the Effective Emissivity in Blackbody Cylindrical Cavities

    NASA Astrophysics Data System (ADS)

    De Lucas, Javier

    2015-03-01

    A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented by using simple numerical tools, programmed in Microsoft Visual Basic for Application and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points, where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
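
    A quick zeroth-order cross-check of such Monte Carlo results is the classical estimate for an isothermal, diffusely reflecting cavity, in which the effective emissivity depends only on the wall emissivity and the aperture-to-cavity area ratio; the cavity dimensions below are illustrative.

      import math

      def effective_emissivity(eps_wall, aperture_area, cavity_area):
          # Zeroth-order estimate for an isothermal diffuse cavity; a sanity check,
          # not a replacement for the ray-tracing calculation
          ratio = aperture_area / cavity_area
          return eps_wall / (eps_wall + (1.0 - eps_wall) * ratio)

      # Cylindrical cavity with a lid: radius r, length L, aperture radius r_a (illustrative)
      r, L, r_a = 0.02, 0.20, 0.005
      A_internal = 2.0 * math.pi * r * L + 2.0 * math.pi * r ** 2 - math.pi * r_a ** 2
      print(round(effective_emissivity(0.85, math.pi * r_a ** 2, A_internal), 4))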

  17. Energy transfer simulation for radiantly heated and cooled enclosures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, K.S.; Zhang, P.

    1996-11-01

    This paper presents the development of a three-dimensional mathematical model to compute heat transfer within a radiantly heated or cooled room, which then calculates the mass-averaged room air temperature and the wall surface temperature distributions. The radiation formulation used in the model accommodates arbitrary placement of walls and objects within the room. The convection model utilizes Nusselt number correlations published in the open literature. The complete energy transfer model is validated by comparing calculated room temperatures to temperatures measured in a radiantly heated room. This three-dimensional model may be applied to a building to assist the heating/cooling system design engineer in sizing a radiant heating/cooling system. By coupling this model with a thermal comfort model, the comfort levels throughout the room can be easily and efficiently mapped for a given radiant heater/cooler location. In addition, obstacles such as airplanes, trucks, furniture, and partitions can be easily incorporated to determine their effect on the radiant heating system performance.

  18. Research on Modeling of Propeller in a Turboprop Engine

    NASA Astrophysics Data System (ADS)

    Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong

    2015-05-01

    In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and lifting surface theory. The modeling by strip theory focuses on three points: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory; this approximation reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory, which has the advantage in both real-time performance and accuracy, can meet the requirement.

  19. A precise integration method for solving coupled vehicle-track dynamics with nonlinear wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Gao, Q.; Tan, S. J.; Zhong, W. X.

    2012-10-01

    A new method is proposed as a solution for the large-scale coupled vehicle-track dynamic model with nonlinear wheel-rail contact. The vehicle is simplified as a multi-rigid-body model, and the track is treated as a three-layer beam model. In the track model, the rail is assumed to be an Euler-Bernoulli beam supported by discrete sleepers. The vehicle model and the track model are coupled using Hertzian nonlinear contact theory, and the contact forces of the vehicle subsystem and the track subsystem are approximated by the Lagrange interpolation polynomial. The response of the large-scale coupled vehicle-track model is calculated using the precise integration method. A more efficient algorithm based on the periodic property of the track is applied to calculate the exponential matrix and certain matrices related to the solution of the track subsystem. Numerical examples demonstrate the computational accuracy and efficiency of the proposed method.
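
    The nonlinear wheel-rail coupling mentioned above is usually written as a Hertzian force-penetration law of the form F = (delta/G)^(3/2); a minimal sketch with an illustrative contact constant G follows.

      def hertz_contact_force(delta, G=4.0e-8):
          # Nonlinear Hertzian wheel-rail normal force; G (m/N^(2/3)) is illustrative.
          # Negative penetration (wheel-rail separation) gives zero force.
          return (delta / G) ** 1.5 if delta > 0.0 else 0.0

      for delta in (0.0, 5e-5, 1e-4, 2e-4):   # wheel-rail penetration, m
          print(delta, hertz_contact_force(delta))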

  20. Model construction and superconductivity analysis of organic conductors β-(BDA-TTP)2MF6 (M = P, As, Sb and Ta) based on first-principles band calculation

    NASA Astrophysics Data System (ADS)

    Aizawa, H.; Kuroki, K.; Yasuzuka, S.; Yamada, J.

    2012-11-01

    We perform a first-principles band calculation for a group of quasi-two-dimensional organic conductors β-(BDA-TTP)2MF6 (M = P, As, Sb and Ta). The ab-initio calculation shows that the density of states is correlated with the bandwidth of the singly occupied (highest) molecular orbital, while it is not necessarily correlated with the unit-cell volume. The direction of the major axis of the cross section of the Fermi surface lies in the Γ-B-direction, which differs from that obtained by the extended Hückel calculation. Then, we construct a tight-binding model which accurately reproduces the ab-initio band structure. The obtained transfer energies give a smaller dimerization than in the extended Hückel band. As to the difference in the anisotropy of the Fermi surface, the transfer energies along the inter-stacking direction are smaller than those obtained in the extended Hückel calculation. Assuming spin-fluctuation-mediated superconductivity, we apply random phase approximation to a two-band Hubbard model. This two-band Hubbard model is composed of the tight-binding model derived from the first-principles band structure and an on-site (intra-molecule) repulsive interaction taken as a variable parameter. The obtained superconducting gap changes sign four times along the Fermi surface like in a d-wave gap, and the nodal direction is different from that obtained in the extended Hückel model. Anion dependence of Tc is qualitatively consistent with the experimental observation.

  1. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
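
    The random-sampling step at the heart of the XSUSA approach can be sketched as drawing correlated multi-group cross-section sets from a covariance matrix; the 4-group data and covariances below are illustrative, and each sampled set would correspond to one lattice calculation.

      import numpy as np

      rng = np.random.default_rng(1)

      def sample_cross_sections(mean_xs, rel_covariance, n_samples=100):
          # Draw random cross-section sets consistent with a relative covariance matrix
          cov = np.outer(mean_xs, mean_xs) * rel_covariance   # absolute covariance
          return rng.multivariate_normal(mean_xs, cov, size=n_samples)

      mean_xs = np.array([1.20, 0.80, 0.35, 0.10])            # barns, hypothetical 4-group set
      rel_cov = 0.02 * np.identity(4) + 0.005                 # ~2.5% relative variance, weak correlations
      samples = sample_cross_sections(mean_xs, rel_cov)
      # Each sampled set would drive one lattice run; the spread of the resulting
      # k-inf values gives the propagated nuclear-data uncertainty.
      print(samples.mean(axis=0), samples.std(axis=0))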

  2. Direct detection of metal-insulator phase transitions using the modified Backus-Gilbert method

    NASA Astrophysics Data System (ADS)

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    2018-03-01

    The detection of the (semi)metal-insulator phase transition can be extremely difficult if the local order parameter which characterizes the ordered phase is unknown. In some cases, it is even impossible to define a local order parameter: the most prominent example of such system is the spin liquid state. This state was proposed to exist in the Hubbard model on the hexagonal lattice in a region between the semimetal phase and the antiferromagnetic insulator phase. The existence of this phase has been the subject of a long debate. In order to detect these exotic phases we must use alternative methods to those used for more familiar examples of spontaneous symmetry breaking. We have modified the Backus-Gilbert method of analytic continuation which was previously used in the calculation of the pion quasiparticle mass in lattice QCD. The modification of the method consists of the introduction of the Tikhonov regularization scheme which was used to treat the ill-conditioned kernel. This modified Backus-Gilbert method is applied to the Euclidean propagators in momentum space calculated using the hybrid Monte Carlo algorithm. In this way, it is possible to reconstruct the full dispersion relation and to estimate the mass gap, which is a direct signal of the transition to the insulating state. We demonstrate the utility of this method in our calculations for the Hubbard model on the hexagonal lattice. We also apply the method to the metal-insulator phase transition in the Hubbard-Coulomb model on the square lattice.
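
    The role of the Tikhonov term can be illustrated on a generic ill-conditioned kernel inversion of the kind that appears in analytic continuation; this is a sketch of plain Tikhonov regularization with a toy exponential kernel, not the modified Backus-Gilbert construction itself.

      import numpy as np

      def tikhonov_solve(K, g, lam=1e-4):
          # Regularized solution of the ill-conditioned linear problem K f = g
          n = K.shape[1]
          return np.linalg.solve(K.T @ K + lam * np.identity(n), K.T @ g)

      # Toy example: a smooth spectral function convolved with an exponential kernel
      tau = np.linspace(0.05, 2.0, 40)[:, None]     # Euclidean times
      omega = np.linspace(0.0, 10.0, 60)[None, :]   # frequencies
      K = np.exp(-omega * tau) * (omega[0, 1] - omega[0, 0])
      rho_true = np.exp(-(omega[0] - 3.0) ** 2)     # hypothetical spectral peak
      g = K @ rho_true + 1e-6 * np.random.default_rng(2).standard_normal(tau.size)
      rho_est = tikhonov_solve(K, g)
      print(np.linalg.norm(K @ rho_est - g))        # misfit of the reconstruction to the noisy data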

  3. Monte Carlo renormalization-group study of the Baxter-Wu model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novotny, M.A.; Landau, D.P.; Swendsen, R.H.

    1982-07-01

    The effectiveness of a Monte Carlo renormalization-group method is studied by applying it to the Baxter-Wu model (Ising spins on a triangular lattice with three-spin interactions). The calculations yield three relevant eigenvalues in good agreement with exact or conjectured results. We demonstrate that the method is capable of distinguishing between models expected to be in the same universality class, when one of them (four-state Potts) exhibits logarithmic corrections to the usual power-law singularities and the other (Baxter-Wu) does not.

  4. Application of new nuclear de-excitation model of PHITS for prediction of isomer yield and prompt gamma-ray production

    NASA Astrophysics Data System (ADS)

    Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji

    2014-06-01

    A new nuclear de-excitation model, intended for accurate simulation of isomeric transition of excited nuclei, was incorporated into PHITS and applied to various situations to clarify the impact of the model. The case studies show that precise treatment of gamma de-excitation and consideration for isomer production are important for various applications such as detector performance prediction, radiation shielding calculations and the estimation of radioactive inventory including isomers.

  5. A model of solar energetic particles for use in calculating LET spectra developed from ONR-604 data

    NASA Technical Reports Server (NTRS)

    Chen, J.; Chenette, D.; Guzik, T. G.; Garcia-Munoz, M.; Pyle, K. R.; Sang, Y.; Wefel, J. P.

    1994-01-01

    A model of solar energetic particles (SEP) has been developed and is applied to solar flares during the 1990/1991 CRRES mission using data measured by the University of Chicago instrument, ONR-604. The model includes the time-dependent behavior, heavy-ion content, energy spectrum and fluence, and can accurately represent the observed SEP events in the energy range between 40 to 500 MeV/nucleon. Results are presented for the March and June, 1991 flare periods.

  6. Public Health Analysis Transport Optimization Model v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyeler, Walt; Finley, Patrick; Walser, Alex

    PHANTOM models the logistic functions of national public health systems. The system enables public health officials to visualize and coordinate options for public health surveillance, diagnosis, response and administration in an integrated analytical environment. Users may simulate and analyze system performance by applying scenarios that represent current conditions or future contingencies, and may run what-if analyses of potential systemic improvements. Public health networks are visualized as interactive maps, with graphical displays of relevant system performance metrics as calculated by the simulation modeling components.

  7. Notional Scoring for Technical Review Weighting As Applied to Simulation Credibility Assessment

    NASA Technical Reports Server (NTRS)

    Hale, Joseph Peter; Hartway, Bobby; Thomas, Danny

    2008-01-01

    NASA's Modeling and Simulation Standard requires a credibility assessment for critical engineering data produced by models and simulations. Credibility assessment is thus a "qualifying factor" in reporting results from simulation-based analysis. The degree to which assessors should be independent of the simulation developers, users and decision makers is a recurring question. This paper provides alternative "weighting algorithms" for calculating the value added by independence of the levels of technical review defined for the NASA Modeling and Simulation Standard.

  8. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    A visual surveillance system for law enforcement or police case investigation differs from traditional applications, for it is designed to monitor pedestrians, vehicles or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and risk entropy is introduced to model the requirements of a police surveillance task on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of a PTZ camera.

  9. A Monte Carlo-finite element model for strain energy controlled microstructural evolution - 'Rafting' in superalloys

    NASA Technical Reports Server (NTRS)

    Gayda, J.; Srolovitz, D. J.

    1989-01-01

    This paper presents a specialized microstructural lattice model, MCFET (Monte Carlo finite element technique), which simulates microstructural evolution in materials in which strain energy has an important role in determining morphology. The model is capable of accounting for externally applied stress, surface tension, misfit, elastic inhomogeneity, elastic anisotropy, and arbitrary temperatures. The MCFET analysis was found to compare well with the results of analytical calculations of the equilibrium morphologies of isolated particles in an infinite matrix.

  10. First principles calculations for liquids and solids using maximally localized Wannier functions

    NASA Astrophysics Data System (ADS)

    Swartz, Charles W., VI

    The field of condensed matter computational physics has seen an explosion of applicability over the last 50+ years. Since the very first calculations with ENIAC and MANIAC the field has continued to push the boundaries of what is possible; from the first large-scale molecular dynamics simulation, to the implementation of Density Functional Theory and large scale Car-Parrinello molecular dynamics, to million-core turbulence calculations by Stanford. These milestones represent not only technological advances but theoretical breakthroughs and algorithmic improvements as well. The work in this thesis was completed in the hopes of furthering such advancement, even by a small fraction. Here we will focus mainly on the calculation of electronic and structural properties of solids and liquids, where we shall implement a wide range of novel approaches that are both computationally efficient and physically enlightening. To this end we will routinely work with maximally localized Wannier functions (MLWFs), which have recently seen a revival in mainstream scientific literature. MLWFs present us with an interesting opportunity to calculate a localized orbital within the planewave formalism of atomistic simulations. Such a localization will prove to be invaluable in the construction of layer-based superlattice models, linear scaling hybrid functional schemes and model quasiparticle calculations. In the first application of MLWFs we will look at modeling functional piezoelectricity in superlattices. Based on the locality principle of insulating superlattices, we apply the method of Wu et al. to the piezoelectric strains of individual layers under fixed displacement field. For a superlattice of arbitrary stacking sequence an accurate model is acquired for predicting piezoelectricity. By applying the model in superlattices where ferroelectric and antiferrodistortive modes are in competition, functional piezoelectricity can be achieved. A strong nonlinear effect is observed and can be further engineered in the PbTiO3/SrTiO3 superlattice, and an interface enhancement of piezoelectricity is found in the BaTiO3/CaTiO3 superlattice. The second project looks at the ionization potential distributions of hydrated hydroxide and hydronium, which are computed within a many-body approach for electron excitations using configurations generated by ab initio molecular dynamics. The experimental features are well reproduced and found to be closely related to the molecular excitations. In the stable configurations, the ionization potential is mainly perturbed by solvent water molecules within the first solvation shell. On the other hand, electron excitation is delocalized over both the proton-receiving and proton-donating complexes during proton transfer, which shifts the excitation energies and broadens the spectra for both hydrated ions. The third project represents a work in progress, where we also make use of the previous electron excitation theory applied to ab initio x-ray emission spectroscopy. In this case we make use of a novel method to include the ultrafast core-hole electron dynamics present in such situations. At present we have shown only strong qualitative agreement with experiment.

  11. Wear Calculation Approach for Sliding - Friction Pairs

    NASA Astrophysics Data System (ADS)

    Springis, G.; Rudzitis, J.; Lungevics, J.; Berzins, K.

    2017-05-01

    Predicting the service life of different products depends critically on the choice of an adequate method. With the development of production technologies and measuring devices of ever increasing precision, one can obtain the data needed for analytic calculations. Historically, several theoretical wear calculation methods exist, but there is still no exact wear calculation model that can be applied to all wear processes, because of the difficulties connected with the variety of parameters involved in the wear of two or more surfaces. Analysing the wear prediction theories, which can be classified into distinct groups, one can state that each of them has shortcomings that might affect the results and make the theoretical calculations unreliable. The proposed wear calculation method is based on theories from different branches of science. It includes the description of 3D surface micro-topography using standardized roughness parameters, explains the regularities of particle separation from the material during the wear process using fatigue theory, and takes into account the material’s physical and mechanical characteristics and the specific conditions over the product’s working time. The proposed wear calculation model could be of value for predicting the service time of sliding friction pairs, allowing the best technologies to be chosen for many mechanical components.

  12. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    PubMed

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
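
    As a rough illustration of the kind of crossed model matrix described here, the sketch below builds Scheffé special cubic terms for two three-component mixtures and multiplies them over all combinations of generic simplex-centroid points. The design points and their pairing are the standard simplex-centroid layout; the split-plot blocking and run order of the published design are not reproduced.

```python
import itertools
import numpy as np

def special_cubic_terms(x):
    """Scheffe special cubic terms for a 3-component mixture x = (x1, x2, x3)."""
    x1, x2, x3 = x
    return np.array([x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3])

# Simplex-centroid design points for a 3-component mixture (pure, binary, ternary blends)
centroid = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
            (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5),
            (1/3, 1/3, 1/3)]

# Composite design: every combination of an extraction mixture and a mobile-phase mixture
rows = []
for extract, mobile in itertools.product(centroid, centroid):
    # Crossed (product) model: outer product of the two special cubic term vectors
    rows.append(np.outer(special_cubic_terms(extract),
                         special_cubic_terms(mobile)).ravel())

X = np.array(rows)      # 49 runs x 49 candidate coefficients (saturated model)
print(X.shape)
```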

  13. Mathematical modelling of methanogenic reactor start-up: Importance of volatile fatty acids degrading population.

    PubMed

    Jabłoński, Sławomir J; Łukaszewicz, Marcin

    2014-12-01

    The development of a balanced community of microorganisms is one of the prerequisites for stable anaerobic digestion. Application of mathematical models might be helpful in developing reliable procedures for the process start-up period. Yet the accuracy of the forecast depends on the quality of the input data and parameters. In this study, specific anaerobic activity (SAA) tests were applied in order to estimate the microbial community structure. The obtained data were applied as input conditions for a mathematical model of anaerobic digestion. The initial values of the variables describing the amount of acetate- and propionate-utilizing microorganisms could be calculated on the basis of the SAA results. Modelling based on those optimized variables could successfully reproduce the behavior of a real system during continuous fermentation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Voltage dependency of transmission probability of aperiodic DNA molecule

    NASA Astrophysics Data System (ADS)

    Wiliyanti, V.; Yudiarsah, E.

    2017-07-01

    Characteristics of electron transport in aperiodic DNA molecules have been studied. A double-stranded DNA model with the base sequence GCTAGTACGTGACGTAGCTAGGATATGCCTGA in one chain and its complement in the other chain has been used. A tight-binding Hamiltonian is used to model the DNA molecule. In the model, we consider the on-site energy of each base to depend linearly on the applied electric field. The Slater-Koster scheme is used to model the electron hopping constant between bases. The transmission probability of an electron from one electrode to the other is calculated using a transfer matrix technique and a scattering matrix method simultaneously. The results show that, generally, a higher voltage gives a slightly larger transmission probability. The applied voltage appears to shift extended states to lower energy. Meanwhile, the transmission increases as the twisting motion frequency increases.
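
    The record describes a transfer/scattering matrix calculation; as a hedged sketch of how such a transmission probability can be obtained for a short tight-binding chain, the code below uses an equivalent Green's-function (NEGF) route with semi-infinite 1D leads instead. The on-site energies, hopping constant and field coupling are placeholder values, not the published parameterization.

```python
import numpy as np

def transmission(onsite, t_hop, E):
    """Landauer transmission T(E) of a 1D tight-binding chain coupled to two 1D leads."""
    n = len(onsite)
    H = np.diag(onsite).astype(complex)
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = -t_hop
    # Retarded surface Green's function of a semi-infinite 1D lead (valid for |E| < 2*t_hop)
    g_lead = (E - 1j * np.sqrt(4 * t_hop**2 - E**2)) / (2 * t_hop**2)
    sigma = t_hop**2 * g_lead                      # lead self-energy on the contact sites
    Sig = np.zeros((n, n), complex)
    Sig[0, 0] = Sig[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(n) - H - Sig)     # retarded Green's function of the chain
    gamma = -2.0 * sigma.imag                      # broadening contributed by either contact
    return gamma**2 * abs(G[0, -1])**2

# Placeholder on-site energies (eV, illustrative only) plus an assumed linear voltage drop
eps0 = {'G': 0.00, 'C': 0.45, 'A': 0.20, 'T': 0.55}
seq = "GCTAGTACG"
onsite = np.array([eps0[b] for b in seq]) + 0.02 * np.arange(len(seq))

t_hop = 0.4                                        # assumed hopping constant (eV)
energies = np.linspace(-0.79, 0.79, 300)           # stay inside the lead band |E| < 2*t_hop
T = [transmission(onsite, t_hop, E) for E in energies]
```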

  15. Development and Current Status of the “Cambridge” Loudness Models

    PubMed Central

    2014-01-01

    This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375

  16. Thermal-capillary analysis of small-scale floating zones Steady-state calculations

    NASA Technical Reports Server (NTRS)

    Duranceau, J. L.; Brown, R. A.

    1986-01-01

    Galerkin finite element analysis of a thermal-capillary model of the floating zone crystal growth process is used to predict the dependence of molten zone shape on operating conditions for the growth of small silicon boules. The model accounts for conduction-dominated heat transport in the melt, feed rod and growing crystal and for radiation between these phases, the ambient and a heater. Surface tension acting on the shape of the melt/gas meniscus counteracts gravity to set the shape of the molten zone. The maximum diameter of the growing crystal is set by the dewetting of the melt from the feed rod when the crystal radius is large. Calculations with small Bond number show the increased zone lengths possible for growth in a microgravity environment. The sensitivity of the method to the shape and intensity of the applied heating distribution is demonstrated. The calculations are compared with experimental observations.

  17. A simplified analytical random walk model for proton dose calculation

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.
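
    A minimal sketch of the distance-squared variance idea mentioned here: a Gaussian lateral fluence whose variance grows as (angular spread x depth) squared. The angular spread, depths and normalization are illustrative assumptions, not the published formula or parameters.

```python
import numpy as np

def lateral_fluence(r, depth, sigma_theta):
    """Primary-proton lateral fluence under a simple distance-squared variance law.

    Assumes sigma^2(z) = (sigma_theta * z)**2, i.e. a Gaussian whose width scales
    with depth times the angular spread of the beam.
    """
    sigma2 = (sigma_theta * depth) ** 2
    return np.exp(-r**2 / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)

# Hypothetical numbers: 10 mrad angular spread, depths and radii in cm
r = np.linspace(0.0, 2.0, 200)
profiles = {z: lateral_fluence(r, z, sigma_theta=0.010) for z in (5.0, 10.0, 20.0)}
```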

  18. Simplified approach to the mixed time-averaging semiclassical initial value representation for the calculation of dense vibrational spectra

    NASA Astrophysics Data System (ADS)

    Buchholz, Max; Grossmann, Frank; Ceotto, Michele

    2018-03-01

    We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.

  19. Effective lattice Hamiltonian for monolayer tin disulfide: Tailoring electronic structure with electric and magnetic fields

    NASA Astrophysics Data System (ADS)

    Yu, Jin; van Veen, Edo; Katsnelson, Mikhail I.; Yuan, Shengjun

    2018-06-01

    The electronic properties of monolayer tin disulfide (ML-SnS2), a recently synthesized metal dichalcogenide, are studied by a combination of first-principles calculations and the tight-binding (TB) approximation. An effective lattice Hamiltonian based on six hybrid sp-like orbitals with trigonal rotation symmetry is proposed to calculate the band structure and density of states of ML-SnS2, which demonstrates good quantitative agreement with relativistic density-functional-theory calculations over a wide energy range. We show that the proposed TB model can be easily applied to the case of an external electric field, yielding results consistent with those obtained from the full Hamiltonian. In the presence of a perpendicular magnetic field, highly degenerate equidistant Landau levels are obtained, showing typical two-dimensional electron gas behavior. Thus, the proposed TB model provides a simple way of describing the properties of ML-SnS2.

  20. Constitutive model for porous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weston, A.M.; Lee, E.L.

    1982-01-01

    A simple pressure versus porosity compaction model is developed to calculate the response of granular porous bed materials to shock impact. The model provides a scheme for calculating compaction behavior when relatively limited material data are available. While the model was developed to study porous explosives and propellants, it has been applied to a much wider range of materials. The early development of porous material models, such as that of Hermann, required empirical dynamic compaction data. Erkman and Edwards successfully applied the early theory to unreacted porous high explosives using a Gruneisen equation of state without yield behavior and without trapped gas in the pores. Butcher included viscoelastic rate dependence in pore collapse. The theoretical treatment of Carroll and Holt is centered on the collapse of a circular pore and includes radial inertia terms and a complex set of stress, strain and strain rate constitutive parameters. Unfortunately, the data required for these parameters are generally not available. The model described here is also centered on the collapse of a circular pore, but utilizes a simpler elastic-plastic static equilibrium pore collapse mechanism without strain rate dependence or radial inertia terms. It does include trapped gas inside the pore and a solid material flow stress that creates both a yield point and a variation of the solid material pressure with radius. The solid is described by a Mie-Gruneisen type EOS. Comparisons show that this model will accurately estimate the major mechanical features observed in compaction experiments.

  1. Segmentation of kidney using C-V model and anatomy priors

    NASA Astrophysics Data System (ADS)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, varying gray levels, and unclear edges. A coarse-to-fine approach has been applied to the kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated with the level set method, is applied within these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfying results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
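
    A hedged sketch of the coarse-to-fine idea (not the authors' implementation): run scikit-image's Chan-Vese level-set segmentation inside a pre-located ROI and clean the mask with simple morphology. The file name and ROI bounds are hypothetical.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import chan_vese
from skimage.morphology import binary_opening, remove_small_objects, disk

# Hypothetical abdominal CT slice and a coarse ROI around the candidate kidney region
ct_slice = img_as_float(io.imread("abdominal_ct_slice.png", as_gray=True))
roi = ct_slice[180:320, 60:200]                 # assumed ROI bounds from pre-processing

# Chan-Vese active contour (level-set formulation) restricted to the smaller ROI
mask = chan_vese(roi, mu=0.25, lambda1=1.0, lambda2=1.0)

# Simple morphological clean-up standing in for the anatomy-prior post-processing
mask = binary_opening(mask, disk(2))
mask = remove_small_objects(mask, min_size=100)
```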

  2. Verification, Validation, and Solution Quality in Computational Physics: CFD Methods Applied to Ice Sheet Physics

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    Procedures and methods for verification of coding algebra and for validations of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.

  3. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and one polynomial is hard to make simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of "true" coordinates? In addition, is it preferable to compare models using such polynomials with traditional numerical models with even higher exponents? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
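
    As a minimal sketch of the "approximate the rule, not the coordinates" idea, and assuming a spherical Mercator-style cylindrical projection rather than the exact EPSG pipelines used in the paper, the code below fits a linear rule per graticule cell for the forward latitude transform and reuses the same cell coefficients for the inverse.

```python
import numpy as np

R = 6378137.0                                   # assumed sphere radius (m)

def forward_y(lat_deg):
    """Exact cylindrical (Mercator-style) northing for a sphere."""
    lat = np.radians(lat_deg)
    return R * np.log(np.tan(np.pi / 4 + lat / 2))

# Graticule: a linear rule y ~ a*lat + b fitted on each 1-degree cell
edges = np.arange(-80.0, 81.0, 1.0)
a = (forward_y(edges[1:]) - forward_y(edges[:-1])) / (edges[1:] - edges[:-1])
b = forward_y(edges[:-1]) - a * edges[:-1]

def lra_forward(lat_deg):
    i = np.clip(np.searchsorted(edges, lat_deg) - 1, 0, len(a) - 1)
    return a[i] * lat_deg + b[i]

def lra_inverse(y):
    i = np.clip(np.searchsorted(forward_y(edges), y) - 1, 0, len(a) - 1)
    return (y - b[i]) / a[i]                    # same coefficients, reversible by construction

lat = np.linspace(-79.5, 79.5, 1000)
max_err = np.max(np.abs(lra_forward(lat) - forward_y(lat)))   # approximation error of the rule
```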

  4. ICME for Crashworthiness of TWIP Steels: From Ab Initio to the Crash Performance

    NASA Astrophysics Data System (ADS)

    Güvenç, O.; Roters, F.; Hickel, T.; Bambach, M.

    2015-01-01

    During the last decade, integrated computational materials engineering (ICME) emerged as a field which aims to promote synergetic usage of formerly isolated simulation models, data and knowledge in materials science and engineering, in order to solve complex engineering problems. In our work, we applied the ICME approach to a crash box, a common automobile component crucial to passenger safety. A newly developed high manganese steel was selected as the material of the component and its crashworthiness was assessed by simulated and real drop tower tests. The crashworthiness of twinning-induced plasticity (TWIP) steel is intrinsically related to the strain hardening behavior caused by the combination of dislocation glide and deformation twinning. The relative contributions of those to the overall hardening behavior depend on the stacking fault energy (SFE) of the selected material. Both the deformation twinning mechanism and the stacking fault energy are individually well-researched topics, but especially for high-manganese steels, the determination of the stacking-fault energy and the occurrence of deformation twinning as a function of the SFE are crucial to understand the strain hardening behavior. We applied ab initio methods to calculate the stacking fault energy of the selected steel composition as an input to a recently developed strain hardening model which models deformation twinning based on the SFE-dependent dislocation mechanisms. This physically based material model is then applied to simulate a drop tower test in order to calculate the energy absorption capacity of the designed component. The results are in good agreement with experiments. The model chain links the crash performance to the SFE and hence to the chemical composition, which paves the way for computational materials design for crashworthiness.

  5. Theoretical Studies of Spectroscopic Line Mixing in Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Ma, Q.

    2015-12-01

    The phenomenon of collisional transfer of intensity due to line mixing is of increasing importance for atmospheric monitoring. From a theoretical point of view, all relevant information about the collisional processes is contained in the relaxation matrix, where the diagonal elements give half-widths and shifts, and the off-diagonal elements correspond to line interferences. For simple systems such as those consisting of diatom-atom or diatom-diatom, accurate fully quantum calculations based on interaction potentials are feasible. However, fully quantum calculations become unrealistic for more complex systems. On the other hand, the semi-classical Robert-Bonamy (RB) formalism, which has been widely used to calculate half-widths and shifts for decades, fails in calculating the off-diagonal matrix elements. As a result, in order to simulate atmospheric spectra where the effects from line mixing are important, semi-empirical fitting or scaling laws such as the ECS and IOS models are commonly used. Recently, while scrutinizing the development of the RB formalism, we found that its original authors applied the isolated line approximation when evaluating matrix elements of the Liouville scattering operator given in exponential form. Because the criterion for this assumption is so stringent, it is not valid for many systems of interest in atmospheric applications. Furthermore, it is this assumption that precludes calculation of the whole relaxation matrix. By removing this unjustified approximation, and accurately evaluating matrix elements of the exponential operators, we have developed a more capable formalism. With this new formalism, we are now able not only to reduce uncertainties in calculated half-widths and shifts, but also to remove a once insurmountable obstacle to calculating the whole relaxation matrix. This implies that we can address line mixing with the semi-classical theory based on interaction potentials between molecular absorber and molecular perturber. We have applied this formalism to address line mixing for Raman and infrared spectra of molecules such as N2, C2H2, CO2, NH3, and H2O. Our rigorously calculated relaxation matrices are in good agreement with both experimental data and results derived from the ECS model.

  6. Ammonia emissions from an anaerobic digestion plant estimated using atmospheric measurements and dispersion modelling.

    PubMed

    Bell, Michael W; Tang, Y Sim; Dragosits, Ulrike; Flechard, Chris R; Ward, Paul; Braban, Christine F

    2016-10-01

    Anaerobic digestion (AD) is becoming increasingly implemented in organic waste treatment operations. The storage and processing of large volumes of organic wastes through AD has been identified as a significant source of ammonia (NH3) emissions; however, the totality of ammonia emissions from an AD plant has not been previously quantified. The emissions from an AD plant processing food waste were estimated by integrating ambient NH3 concentration measurements, atmospheric dispersion modelling, and comparison with published emission factors (EFs). Two dispersion models (ADMS and a backwards Lagrangian stochastic (bLS) model) were applied to calculate emission estimates. The bLS model (WindTrax) was used to back-calculate a total (top-down) emission rate for the AD plant from a point of continuous NH3 measurement downwind from the plant. The back-calculated emission rates were then input to the ADMS forward dispersion model to make predictions of air NH3 concentrations around the site, and evaluated against weekly passive sampler NH3 measurements. As an alternative approach, emission rates from individual sources within the plant were initially estimated by applying literature EFs to the available site parameters concerning the chemical composition of waste materials, room air concentrations, ventilation rates, etc. The individual emission rates were input to ADMS and later tuned by fitting the simulated ambient concentrations to the observed (passive sampler) concentration field, which gave an excellent match to measurements after an iterative process. The total emission from the AD plant thus estimated by a bottom-up approach was 16.8 ± 1.8 mg s(-1), which was significantly higher than the back-calculated top-down estimate (7.4 ± 0.78 mg s(-1)). The bottom-up approach offered a more realistic treatment of the source distribution within the plant area, while the complexity of the site was not ideally suited to the bLS method; thus the bottom-up method is believed to give a better estimate of emissions. The storage of solid digestate and the aerobic treatment of liquid effluents at the site were the greatest sources of NH3 emissions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Coherent Microwave Scattering Model of Marsh Grass

    NASA Astrophysics Data System (ADS)

    Duan, Xueyang; Jones, Cathleen E.

    2017-12-01

    In this work, we developed an electromagnetic scattering model to analyze radar scattering from tall-grass-covered lands such as wetlands and marshes. The model adopts the generalized iterative extended boundary condition method (GIEBCM) algorithm, previously developed for buried cylindrical media such as vegetation roots, to simulate the scattering from the grass layer. The major challenge of applying GIEBCM to tall grass is the extremely time-consuming iteration among the large number of short subcylinders building up the grass. To overcome this issue, we extended the GIEBCM to multilevel GIEBCM, or M-GIEBCM, in which we first use GIEBCM to calculate a T matrix (transition matrix) database of "straws" with various lengths, thicknesses, orientations, curvatures, and dielectric properties; we then construct the grass with a group of straws from the database and apply GIEBCM again to calculate the T matrix of the overall grass scene. The grass T matrix is transferred to S matrix (scattering matrix) and combined with the ground S matrix, which is computed using the stabilized extended boundary condition method, to obtain the total scattering. In this article, we will demonstrate the capability of the model by simulating scattering from scenes with different grass densities, different grass structures, different grass water contents, and different ground moisture contents. This model will help with radar experiment design and image interpretation for marshland and wetland observations.

  8. Calculations of axisymmetric vortex sheet roll-up using a panel and a filament model

    NASA Technical Reports Server (NTRS)

    Kantelis, J. P.; Widnall, S. E.

    1986-01-01

    A method for calculating the self-induced motion of a vortex sheet using discrete vortex elements is presented. Vortex panels and vortex filaments are used to simulate two-dimensional and axisymmetric vortex sheet roll-up. A straightforward application using vortex elements to simulate the motion of a disk of vorticity with an elliptic circulation distribution yields unsatisfactory results in which the vortex elements move in a chaotic manner. The difficulty is assumed to be due to the inability of a finite number of discrete vortex elements to model the singularity at the sheet edge and due to large velocity calculation errors which result from uneven sheet stretching. A model of the inner portion of the spiral is introduced to eliminate the difficulty with the sheet edge singularity. The model replaces the outermost portion of the sheet with a single vortex of equivalent circulation and a number of higher order terms which account for the asymmetry of the spiral. The resulting discrete vortex model is applied to both two-dimensional and axisymmetric sheets. The two-dimensional roll-up is compared to the solution for a semi-infinite sheet with good results.

  9. Creep Damage Analysis of a Lattice Truss Panel Structure

    NASA Astrophysics Data System (ADS)

    Jiang, Wenchun; Li, Shaohua; Luo, Yun; Xu, Shugen

    2017-01-01

    The creep failure of a lattice truss sandwich panel structure has been predicted by the finite element method (FEM). The creep damage is calculated from three kinds of stresses: the as-brazed residual stress, the operating thermal stress and the mechanical load. The creep damage at tensile and compressive loads has been calculated and compared. The creep rates calculated by FEM and by the Gibson-Ashby and Hodge-Dunand models have also been compared. The results show that the creep failure is located at the fillet for both tensile and compressive loads. The damage rate at the fillet at tensile load is 50 times as much as that at compressive load. The lattice truss panel structure has better creep resistance to compressive load than to tensile load, because the creep and the stress triaxiality at the fillet are decreased at compressive load. The maximum creep strain at the fillet and the equivalent creep strain of the panel structure increase with increasing applied load. Among the Gibson-Ashby and Hodge-Dunand models, the modified Gibson-Ashby model gives predictions in good agreement with FEM. However, a more accurate model considering the size effect of the structure still needs to be developed.

  10. Distributed activation energy model parameters of some Turkish coals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunes, M.; Gunes, S.K.

    2008-07-01

    A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of distributed activation energy model were calculated via computer program developed for this purpose. It was observed that the values of mean of activation energy distribution vary between 218 and 248 kJ/mol, and the values of standard deviation of activation energy distribution vary between 32 and 70 kJ/mol. The correlations between kinetic parameters of the distributed activation energy model and certain properties of coal have been investigated.
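
    As a hedged illustration of how a distributed activation energy model (DAEM) with a Gaussian distribution produces a conversion curve, the sketch below evaluates the standard double integral for a constant heating rate. The kinetic parameters are placeholders chosen within the ranges reported above, not fitted values for any specific coal.

```python
import numpy as np

R = 8.314                                   # gas constant, J mol-1 K-1

def daem_unreacted(T_grid, beta, k0, E0, sigma):
    """Unreacted fraction 1 - V/V* for a Gaussian DAEM at constant heating rate beta (K/s)."""
    E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, 400)          # activation energies, J/mol
    f_E = np.exp(-(E - E0) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    unreacted = []
    for T in T_grid:
        Tp = np.linspace(300.0, T, 600)                           # heat up from ~room temperature
        # Inner temperature integral for every activation energy (trapezoid rule)
        inner = np.trapz(np.exp(-E[:, None] / (R * Tp[None, :])), Tp, axis=1)
        psi = np.exp(-(k0 / beta) * inner)                        # surviving fraction per E
        unreacted.append(np.trapz(f_E * psi, E))
    return np.array(unreacted)

# Placeholder parameters consistent with the reported ranges (E0 ~ 218-248 kJ/mol, sigma ~ 32-70 kJ/mol)
T_grid = np.linspace(400.0, 1300.0, 50)
conversion = 1.0 - daem_unreacted(T_grid, beta=10.0 / 60.0, k0=1.0e13, E0=230e3, sigma=50e3)
```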

  11. The Hubbard Model and Piezoresistivity

    NASA Astrophysics Data System (ADS)

    Celebonovic, V.; Nikolic, M. G.

    2018-02-01

    Piezoresistivity was discovered in the nineteenth century. Numerous applications of this phenomenon exist nowadays. The aim of the present paper is to explore the possibility of applying the Hubbard model to theoretical work on piezoresistivity. Results are encouraging, in the sense that numerical values of the strain gauge obtained by using the Hubbard model agree with results obtained by other methods. The calculation is simplified by the fact that it uses results for the electrical conductivity of 1D systems previously obtained within the Hubbard model by one of the present authors.

  12. Numerical estimation of cavitation intensity

    NASA Astrophysics Data System (ADS)

    Krumenacker, L.; Fortes-Patella, R.; Archer, A.

    2014-03-01

    Cavitation may appear in turbomachinery and in hydraulic orifices, venturis or valves, leading to performance losses, vibrations and material erosion. This study proposes a new method to predict the cavitation intensity of the flow, based on post-processing of unsteady CFD calculations. The paper presents analyses of the evolution of cavitating structures at two different scales: • A macroscopic scale, at which the growth of cavitating structures is calculated using URANS software based on a homogeneous model. Simulations of cavitating flows are computed using a barotropic law that accounts for the presence of air and interfacial tension, with Reboud's correction applied to the turbulence model. • A small scale, at which a Rayleigh-Plesset solver calculates the acoustic energy generated by the implosion of the vapor/gas bubbles, with input parameters taken from the macroscopic scale. The volume damage rate of the material during the incubation time is assumed to be a fraction of the cumulated acoustic energy received by the solid wall. The proposed analysis method is applied to calculations on hydrofoil and orifice geometries. Comparisons between model results and experimental works concerning flow characteristics (cavity size, pressure, velocity) as well as pitting (erosion area, relative cavitation intensity) are presented.

  13. Fully relativistic B-spline R-matrix calculations for electron collisions with xenon

    NASA Astrophysics Data System (ADS)

    Bartschat, Klaus; Zatsarinny, Oleg

    2009-05-01

    We have applied our recently developed fully relativistic Dirac B-spline R-matrix (DBSR) code [1] to calculate electron scattering from xenon atoms. Results from a 31-state close-coupling model for the excitation function of the metastable (5p^5 6s) J=0,2 states show excellent agreement with experiment [2], thereby presenting a significant improvement over the most sophisticated previous Breit-Pauli calculations [3,4]. This allows for a detailed and reliable analysis of the resonance structure. The same model is currently being used to calculate electron-impact excitation from the metastable J=2 state. The results will be compared with recent experimental data [5] and predictions from other theoretical models [6,7]. [1] O. Zatsarinny and K. Bartschat, Phys. Rev. A 77 (2008) 062701. [2] S. J. Buckman et al., J. Phys. B 16 (1983) 4219. [3] A. N. Grum-Grzhimailo and K. Bartschat, J. Phys. B 35 (2002) 3479. [4] M. Allan et al., Phys. Rev. A 74 (2006) 030701(R). [5] R. O. Jung et al., Phys. Rev. A 72 (2005) 022723. [6] R. Srivastava et al., Phys. Rev. A 74 (2006) 012715. [7] J. Jiang et al., J. Phys. B 41 (2008) 245204.

  14. A nephron-based model of the kidneys for macro-to-micro α-particle dosimetry

    NASA Astrophysics Data System (ADS)

    Hobbs, Robert F.; Song, Hong; Huso, David L.; Sundel, Margaret H.; Sgouros, George

    2012-07-01

    Targeted α-particle therapy is a promising treatment modality for cancer. Due to the short path-length of α-particles, the potential efficacy and toxicity of these agents is best evaluated by microscale dosimetry calculations instead of whole-organ, absorbed fraction-based dosimetry. Yet time-integrated activity (TIA), the necessary input for dosimetry, can still only be quantified reliably at the organ or macroscopic level. We describe a nephron- and cellular-based kidney dosimetry model for α-particle radiopharmaceutical therapy, more suited to the short range and high linear energy transfer of α-particle emitters, which takes as input kidney or cortex TIA and through a macro to micro model-based methodology assigns TIA to micro-level kidney substructures. We apply a geometrical model to provide nephron-level S-values for a range of isotopes allowing for pre-clinical and clinical applications according to the medical internal radiation dosimetry (MIRD) schema. We assume that the relationship between whole-organ TIA and TIA apportioned to microscale substructures as measured in an appropriate pre-clinical mammalian model also applies to the human. In both, the pre-clinical and the human model, microscale substructures are described as a collection of simple geometrical shapes akin to those used in the Cristy-Eckerman phantoms for normal organs. Anatomical parameters are taken from the literature for a human model, while murine parameters are measured ex vivo. The murine histological slides also provide the data for volume of occupancy of the different compartments of the nephron in the kidney: glomerulus versus proximal tubule versus distal tubule. Monte Carlo simulations are run with activity placed in the different nephron compartments for several α-particle emitters currently under investigation in radiopharmaceutical therapy. The S-values were calculated for the α-emitters and their descendants between the different nephron compartments for both the human and murine models. The renal cortex and medulla S-values were also calculated and the results compared to traditional absorbed fraction calculations. The nephron model enables a more optimal implementation of treatment and is a critical step in understanding toxicity for human translation of targeted α-particle therapy. The S-values established here will enable a MIRD-type application of α-particle dosimetry for α-emitters, i.e. measuring the TIA in the kidney (or renal cortex) will provide meaningful and accurate nephron-level dosimetry.

  15. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS. PMID:27669261

  16. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.
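
    A minimal sketch of the fitting step, assuming gridded DOV samples are already available (the grid, region and values below are synthetic): a two-dimensional second-order polynomial fitted by ordinary least squares and evaluated at a query point.

```python
import numpy as np

def fit_quadratic_surface(lat, lon, dov):
    """Fit dov(lat, lon) ~ c0 + c1*lat + c2*lon + c3*lat^2 + c4*lat*lon + c5*lon^2."""
    A = np.column_stack([np.ones_like(lat), lat, lon, lat**2, lat * lon, lon**2])
    coeffs, *_ = np.linalg.lstsq(A, dov, rcond=None)
    return coeffs

def eval_quadratic_surface(coeffs, lat, lon):
    c0, c1, c2, c3, c4, c5 = coeffs
    return c0 + c1 * lat + c2 * lon + c3 * lat**2 + c4 * lat * lon + c5 * lon**2

# Synthetic dense grid of deflection-of-the-vertical samples over a small region
lat, lon = np.meshgrid(np.linspace(39.0, 40.0, 21), np.linspace(116.0, 117.0, 21))
dov_xi = 2.0 + 0.3 * (lat - 39.5) - 0.1 * (lon - 116.5) ** 2    # arcsec, made-up field

c = fit_quadratic_surface(lat.ravel(), lon.ravel(), dov_xi.ravel())
xi_at_point = eval_quadratic_surface(c, 39.42, 116.73)          # in-flight compensation lookup
```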

  17. A combination strategy for tracking the serial criminal

    NASA Astrophysics Data System (ADS)

    He, Chuan; Zhang, Yuan-Biao; Wan, Jiadi; Yu, Wenjing

    2010-08-01

    We build a Geographic Profiling Model to generate the criminal's geographical profile by combining two complementary strategies: the Spatial Distribution Strategy and the Probability Distance Strategy. In the first strategy, we designate the mean of all the known crime sites as the anchor point and build a Standard Deviational Ellipse Model that takes the effect of the landscape into account. In the second strategy, we take many factors, such as the buffer zone and distance-decay theory, into consideration and calculate the probability of the offender's residence in a certain area by using Bayes' theorem and the Rossmo algorithm. We then combine the results of the two strategies to obtain three search areas suited to different operational conditions of the police for tracking the serial criminal. Applying the model to the case of the English serial killer Peter Sutcliffe, the calculation results show that the model can be used effectively to track a serial criminal.
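
    A hedged sketch of the Probability Distance Strategy ingredient, using one common textbook form of the Rossmo distance-decay formula with a buffer zone; the crime coordinates, buffer radius and decay exponents are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def rossmo_surface(grid_x, grid_y, crimes, B=1.0, f=1.2, g=1.2):
    """Normalized Rossmo probability surface on a grid (textbook form of the formula).

    crimes : list of (x, y) crime-site coordinates
    B      : assumed buffer-zone radius; f, g : assumed decay exponents
    """
    P = np.zeros((len(grid_y), len(grid_x)))
    for cx, cy in crimes:
        # Manhattan distance from every grid cell to this crime site
        d = np.abs(grid_x[None, :] - cx) + np.abs(grid_y[:, None] - cy)
        d = np.maximum(d, 1e-6)                        # avoid division by zero at the site itself
        P += np.where(d > B,
                      1.0 / d**f,                                    # distance decay outside the buffer
                      (B**(g - f)) / np.maximum(2*B - d, 1e-6)**g)   # rising term inside the buffer
    return P / P.sum()

# Hypothetical crime coordinates on an arbitrary km grid
crimes = [(2.0, 3.5), (4.1, 2.2), (3.3, 5.0), (5.6, 4.4)]
gx = np.linspace(0.0, 8.0, 160)
gy = np.linspace(0.0, 8.0, 160)
P = rossmo_surface(gx, gy, crimes)
anchor_idx = np.unravel_index(np.argmax(P), P.shape)   # most likely residence cell
```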

  18. Virtual screening of B-Raf kinase inhibitors: A combination of pharmacophore modelling, molecular docking, 3D-QSAR model and binding free energy calculation studies.

    PubMed

    Zhang, Wen; Qiu, Kai-Xiong; Yu, Fang; Xie, Xiao-Guang; Zhang, Shu-Qun; Chen, Ya-Juan; Xie, Hui-Ding

    2017-10-01

    B-Raf kinase has been identified as an important target in recent cancer treatment. In order to discover structurally diverse and novel B-Raf inhibitors (BRIs), a virtual screening of BRIs against the ZINC database was performed in this work using a combination of pharmacophore modelling, molecular docking, 3D-QSAR modelling and binding free energy (ΔGbind) calculation studies. After the virtual screening, six promising hit compounds were obtained, which were then tested for inhibitory activity against A375 cell lines. As a result, five hit compounds showed good biological activities (IC50 < 50 μM). The present virtual screening method can be applied to find structurally diverse inhibitors, and the five structurally diverse compounds obtained are expected to aid the development of novel BRIs. Copyright © 2017. Published by Elsevier Ltd.

  19. An experimental comparison of several current viscoplastic constitutive models at elevated temperature

    NASA Technical Reports Server (NTRS)

    James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.

  20. Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering

    PubMed Central

    Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.

    2011-01-01

    Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761

  1. Computational Cosmology: From the Early Universe to the Large Scale Structure.

    PubMed

    Anninos, Peter

    2001-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusive, is on those calculations designed to test different models of cosmology against the observed Universe.

  2. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    NASA Technical Reports Server (NTRS)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, while others, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet, are of particular interest. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
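
    As a hedged sketch of the weighting idea described here (biased emission toward a region of interest, with packet energies reweighted so the mean flux is conserved), not the specific schemes developed in the work:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_emission_direction(bias_fraction=0.5, cone_cos=0.9):
    """Draw a packet direction cosine mu with biased emission toward a cone of interest.

    With probability bias_fraction the packet is forced into the cone mu > cone_cos
    (e.g. toward an under-sampled region); its statistical weight w = p_true / p_biased
    corrects the bias so the average energy flux is conserved.
    """
    p_true = 0.5                                   # isotropic density in mu on [-1, 1]
    if rng.random() < bias_fraction:
        mu = rng.uniform(cone_cos, 1.0)
        p_biased = bias_fraction / (1.0 - cone_cos)
    else:
        mu = rng.uniform(-1.0, cone_cos)
        p_biased = (1.0 - bias_fraction) / (1.0 + cone_cos)
    return mu, p_true / p_biased

# The weighted fraction of packets entering the cone should recover the unbiased value (1 - 0.9)/2 = 0.05
draws = [sample_emission_direction() for _ in range(200_000)]
estimate = np.mean([w * (mu > 0.9) for mu, w in draws])
```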

  3. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
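
    A hedged numpy sketch of the pooling rule as described above (softmax-weighted expectation across parallel feature maps, mixed with the elementwise maximum through a Bernoulli gate); the gate probability, shapes and test-time blending are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def mixout_pool(feature_maps, p_max=0.5, training=True):
    """Pool K parallel feature maps of shape (K, N, H, W) into one map.

    expected : elementwise softmax-weighted mean across the K maps ("exponential probabilities")
    maximum  : elementwise max across the K maps (the maxout rule)
    A Bernoulli(p_max) gate chooses between them per element during training; at test time
    the two are blended deterministically with weight p_max (an assumption in this sketch).
    """
    z = np.asarray(feature_maps)
    probs = np.exp(z - z.max(axis=0, keepdims=True))
    probs /= probs.sum(axis=0, keepdims=True)
    expected = (probs * z).sum(axis=0)
    maximum = z.max(axis=0)
    if training:
        gate = rng.binomial(1, p_max, size=maximum.shape)
        return gate * maximum + (1 - gate) * expected
    return p_max * maximum + (1 - p_max) * expected

# Five feature maps produced by different convolutions over the same input (shapes assumed)
maps = rng.normal(size=(5, 8, 32, 32))
pooled = mixout_pool(maps)
```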

  4. Theoretical analysis of the electrical aspects of the basic electro-impulse problem in aircraft de-icing applications

    NASA Technical Reports Server (NTRS)

    Henderson, Robert A.; Schrag, Robert L.

    1987-01-01

    A method of modelling a system consisting of a cylindrical coil with its axis perpendicular to a metal plate of finite thickness, and a simple electrical circuit for producing a transient current in the coil, is discussed in the context of using such a system for de-icing aircraft surfaces. A transmission line model of the coil and metal plate is developed as the heart of the system model. It is shown that this transmission model is central to calculation of the coil impedance, the coil current, the magnetic fields established on the surfaces of the metal plate, and the resultant total force between the coil and the plate. FORTRAN algorithms were developed for numerical calculation of each of these quantities, and the algorithms were applied to an experimental prototype system in which these quantities had been measured. Good agreement is seen to exist between the predicted and measured results.

  5. Application of Gauss's law space-charge limited emission model in iterative particle tracking method

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.; Ponomarev, V. A.

    2016-11-01

    The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution within the considered method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces and also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter, carried out in axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
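
    This is not the authors' Gauss's-law iteration itself, but a standard analytical reference such simulations are compared against is the planar-diode Child-Langmuir law; a quick sketch of the space-charge-limited current density it predicts (the voltage and gap are example numbers):

```python
import numpy as np

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) of an ideal planar vacuum diode."""
    return (4.0 * EPS0 / 9.0) * np.sqrt(2.0 * E_CHARGE / M_E) * voltage**1.5 / gap**2

# Example: 10 kV across a 5 mm gap -> roughly 9e4 A/m^2
j_limit = child_langmuir_j(10e3, 5e-3)
```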

  6. Dynamic Analysis of Geared Rotors by Finite Elements

    NASA Technical Reports Server (NTRS)

    Kahraman, A.; Ozguven, H. Nevzat; Houser, D. R.; Zakrajsek, J. J.

    1992-01-01

    A finite element model of a geared rotor system on flexible bearings has been developed. The model includes the rotary inertia of shaft elements, the axial loading on shafts, the flexibility and damping of bearings, the material damping of shafts, and the stiffness and damping of the gear mesh. The coupling between the torsional and transverse vibrations of the gears was considered in the model. A constant mesh stiffness was assumed. The analysis procedure can be used for forced vibration analysis of geared rotors by calculating the critical speeds and determining the response of any point on the shafts to mass unbalances, geometric eccentricities of gears, and displacement transmission error excitation at the mesh point. The dynamic mesh forces due to these excitations can also be calculated. The model has been applied to several systems for the demonstration of its accuracy and for studying the effect of bearing compliances on system dynamics.

  7. Supersonic flow calculation using a Reynolds-stress and an eddy thermal diffusivity turbulence model

    NASA Technical Reports Server (NTRS)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1993-01-01

    A second-order model for the velocity field and a two-equation model for the temperature field are used to calculate supersonic boundary layers assuming negligible real gas effects. The modeled equations are formulated on the basis of an incompressible assumption and then extended to supersonic flows by invoking Morkovin's hypothesis, which proposes that compressibility effects are completely accounted for by mean density variations alone. In order to calculate the near-wall flow accurately, correction functions are proposed to render the modeled equations asymptotically consistent with the behavior of the exact equations near a wall and, at the same time, display the proper dependence on the molecular Prandtl number. Thus formulated, the near-wall second order turbulence model for heat transfer is applicable to supersonic flows with different Prandtl numbers. The model is validated against flows with different Prandtl numbers and supersonic flows with free-stream Mach numbers as high as 10 and wall temperature ratios as low as 0.3. Among the flow cases considered, the momentum thickness Reynolds number varies from approximately 4,000 to approximately 21,000. Good correlation with measurements of mean velocity, temperature, and its variance is obtained. Discernible improvements in the law-of-the-wall are observed, especially in the range where the log-law applies.

  8. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter jacobians, and model update. The forward simulator, the jacobian calculations, and synthetic and real data inversions are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated. It allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, MT forward response and jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step versus the model space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic inversions we examine the importance of including the topography in the inversion and we test different regularization schemes using a weighted second norm of the model gradient as well as inverting for a static distortion matrix following the Miensopust/Avdeeva approach. We also apply our algorithm to invert MT data collected at Mt St Helens.

  9. Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas

    PubMed Central

    Philibert, Aurore; Loyce, Chantal; Makowski, David

    2012-01-01

    Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
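
    As a hedged numerical illustration of why an exponential response implies an N-dependent emission factor (the functional form and coefficients below are placeholders, not the fitted values from the 13-model comparison):

```python
import numpy as np

def n2o_emission(n_applied, a=-0.7, b=0.0065):
    """Assumed exponential response: annual N2O-N emission (kg N/ha) vs applied N (kg N/ha)."""
    return np.exp(a + b * n_applied)

def emission_factor(n_applied):
    """Fertilizer-induced emission factor: (emission - background emission) / applied N."""
    return (n2o_emission(n_applied) - n2o_emission(0.0)) / n_applied

for n in (80.0, 160.0, 240.0):
    print(n, round(100 * emission_factor(n), 2), "%")   # the emission factor rises with the N rate
```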

  10. Two-dimensional simulation of clastic and carbonate sedimentation, consolidation, subsidence, fluid flow, heat flow and solute transport during the formation of sedimentary basins

    NASA Astrophysics Data System (ADS)

    Bitzer, Klaus

    1999-05-01

    Geological processes that create sedimentary basins or act during their formation can be simulated using the public domain computer code `BASIN'. For a given set of geological initial and boundary conditions the sedimentary basin evolution is calculated in a forward modeling approach. The basin is represented in a two-dimensional vertical cross section with individual layers. The stratigraphic, tectonic, hydrodynamic and thermal evolution is calculated beginning at an initial state, and subsequent changes of basin geometry are calculated from sedimentation rates, compaction and pore fluid mobilization, isostatic compensation, fault movement and subsidence. The sedimentologic, hydraulic and thermal parameters are stored at discrete time steps allowing the temporal evolution of the basin to be analyzed. A maximum flexibility in terms of geological conditions is achieved by using individual program modules representing geological processes which can be switched on and off depending on the data available for a specific simulation experiment. The code incorporates a module for clastic and carbonate sedimentation, taking into account the impact of clastic sediment supply on carbonate production. A maximum of four different sediment types, which may be mixed during sedimentation, can be defined. Compaction and fluid flow are coupled through the consolidation equation and the nonlinear form of the equation of state for porosity, allowing nonequilibrium compaction and overpressuring to be calculated. Instead of empirical porosity-effective stress equations, a physically consistent consolidation model is applied which incorporates a porosity dependent sediment compressibility. Transient solute transport and heat flow are calculated as well, applying calculated fluid flow rates from the hydraulic model. As a measure for hydrocarbon generation, the Time-Temperature Index (TTI) is calculated. Three postprocessing programs are available to provide graphic output in PostScript format: BASINVIEW is used to display the distribution of parameters in the simulated cross-section of the basin for defined time steps. It is used in conjunction with the Ghostview software, which is freeware and available on most computer systems. AIBASIN provides PostScript output for Adobe Illustrator®, taking advantage of the layer-concept which facilitates further graphic manipulation. BASELINE is used to display parameter distribution at a defined well or to visualize the temporal evolution of individual elements located in the simulated sedimentary basin. The modular structure of the BASIN code allows additional processes to be included. A module to simulate reactive transport and diagenetic reactions is planned for future versions. The program has been applied to existing sedimentary basins, and it has also shown a high potential for classroom instruction, giving the possibility to create hypothetical basins and to interpret basin evolution in terms of sequence stratigraphy or petroleum potential.

  11. Assessing the prospective resource base for enhanced geothermal systems in Europe

    NASA Astrophysics Data System (ADS)

    Limberger, J.; Calcagno, P.; Manzella, A.; Trumpy, E.; Boxem, T.; Pluymaekers, M. P. D.; van Wees, J.-D.

    2014-12-01

    In this study the resource base for EGS (enhanced geothermal systems) in Europe was quantified and economically constrained, applying a discounted cash-flow model to different techno-economic scenarios for future EGS in 2020, 2030, and 2050. Temperature is a critical parameter that controls the amount of thermal energy available in the subsurface. Therefore, the first step in assessing the European resource base for EGS is the construction of a subsurface temperature model of onshore Europe. Subsurface temperatures were computed to a depth of 10 km below ground level for a regular 3-D hexahedral grid with a horizontal resolution of 10 km and a vertical resolution of 250 m. Vertical conductive heat transport was considered as the main heat transfer mechanism. Surface temperature and basal heat flow were used as boundary conditions for the top and bottom of the model, respectively. Where publicly available, the most recent and comprehensive regional temperature models, based on data from wells, were incorporated. With the modeled subsurface temperatures and future technical and economic scenarios, the technical potential and minimum levelized cost of energy (LCOE) were calculated for each grid cell of the temperature model. Calculations for a typical EGS scenario yield costs of EUR 215 MWh-1 in 2020, EUR 127 MWh-1 in 2030, and EUR 70 MWh-1 in 2050. Cutoff values of EUR 200 MWh-1 in 2020, EUR 150 MWh-1 in 2030, and EUR 100 MWh-1 in 2050 are imposed on the calculated LCOE values in each grid cell to limit the technical potential, resulting in an economic potential for Europe of 19 GWe in 2020, 22 GWe in 2030, and 522 GWe in 2050. The results of our approach not only provide an indication of prospective areas for future EGS in Europe, but also show a more realistic, cost-determined and depth-dependent distribution of the technical potential, obtained by applying different well cost models for 2020, 2030, and 2050.
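
    The levelized cost of energy used to constrain the resource base is, in essence, the ratio of discounted lifetime costs to discounted lifetime energy production. The sketch below shows that calculation with hypothetical capital, operating, and production figures; it does not reproduce the study's well cost models or scenario parameters.

        def lcoe(capex, annual_opex, annual_energy_mwh, lifetime_years, discount_rate):
            """Levelized cost of energy: discounted lifetime costs / discounted lifetime energy."""
            disc_costs = capex + sum(annual_opex / (1.0 + discount_rate) ** t
                                     for t in range(1, lifetime_years + 1))
            disc_energy = sum(annual_energy_mwh / (1.0 + discount_rate) ** t
                              for t in range(1, lifetime_years + 1))
            return disc_costs / disc_energy  # EUR per MWh

        # Hypothetical EGS doublet: 30 MEUR capex, 1 MEUR/yr O&M, 40 GWh/yr, 30 years, 7% discount rate
        print(lcoe(30e6, 1e6, 40e3, 30, 0.07))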

  12. SU-G-TeP1-08: LINAC Head Geometry Modeling for Cyber Knife System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, B; Li, Y; Liu, B

    Purpose: Knowledge of the LINAC head geometry is critical for model-based dose calculation algorithms. However, the geometries are difficult to measure precisely. The purpose of this study is to develop LINAC head models for the Cyber Knife system (CKS). Methods: For CKS, the commissioning data were measured in water at 800 mm SAD. The measured full width at half maximum (FWHM) for each cone was found to be greater than the nominal value; this was further confirmed by additional film measurement in air. Diameter correction, cone shift and source shift models (DCM, CSM and SSM) are proposed to account for the differences. In DCM, a cone-specific correction is applied. For CSM and SSM, a single shift is applied to the cone or source physical position. All three models were validated with an in-house developed pencil beam dose calculation algorithm, and further evaluated by the collimator scatter factor (Sc) correction. Results: The mean square errors (MSE) between the nominal diameter and the FWHM derived from commissioning data and from in-air measurement are 0.54 mm and 0.44 mm, with the discrepancy increasing with cone size. The optimal shift for CSM and SSM is found to be 9 mm upward and 18 mm downward, respectively. The MSE in FWHM is reduced to 0.04 mm and 0.14 mm for DCM and CSM (SSM). Both DCM and CSM result in the same set of Sc values. Combining all cones at SAD 600–1000 mm, the average deviation from 1 in Sc of DCM (CSM) and SSM is 2.6% and 2.2%, and is reduced to 0.9% and 0.7% for the cones with diameter greater than 15 mm. Conclusion: We developed three geometrical models for CKS. All models can handle the discrepancy between vendor specifications and commissioning data, and SSM has the best performance for Sc correction. The study also validated that a point source can be used in CKS dose calculation algorithms.
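
    The diameter-versus-FWHM comparison reported above can be reproduced with a simple profile analysis. The sketch below extracts the FWHM of a one-dimensional beam profile by linear interpolation at the half-maximum crossings and computes the mean square error against a nominal cone diameter; the profile and diameter are hypothetical, not the commissioning data.

        import numpy as np

        def fwhm(positions_mm, dose):
            """Full width at half maximum of a 1-D profile via linear interpolation."""
            half = 0.5 * np.max(dose)
            above = np.where(dose >= half)[0]
            i, j = above[0], above[-1]
            # interpolate the half-maximum crossing on both penumbra sides
            left = np.interp(half, [dose[i - 1], dose[i]], [positions_mm[i - 1], positions_mm[i]])
            right = np.interp(half, [dose[j + 1], dose[j]], [positions_mm[j + 1], positions_mm[j]])
            return right - left

        def mse(nominal_mm, measured_mm):
            nominal, measured = np.asarray(nominal_mm), np.asarray(measured_mm)
            return np.mean((measured - nominal) ** 2)

        # Hypothetical 30 mm cone profile
        x = np.linspace(-30.0, 30.0, 601)
        profile = 1.0 / (1.0 + np.exp((np.abs(x) - 15.3) / 1.2))  # sigmoid penumbra, ~30.6 mm FWHM
        print(fwhm(x, profile), mse([30.0], [fwhm(x, profile)]))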

  13. Modeling Three-Dimensional Shock Initiation of PBX 9501 in ALE3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leininger, L; Springer, H K; Mace, J

    A recent SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has provided 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate and study code predictions. These SMIS tests used a powder gun to shoot scaled NATO standard fragments into a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. This SMIS real-world shot scenario creates a unique test-bed because (1) SMIS tests facilitate the investigation of 3D Shock to Detonation Transition (SDT) within the context of a considerable suite of diagnostics, and (2) many of the fragments arrive at the impact plate off-center and at an angle of impact. A particular goal of these model validation experiments is to demonstrate the predictive capability of the ALE3D implementation of the Tarver-Lee Ignition and Growth reactive flow model [2] within a fully 3-dimensional regime of SDT. The 3-dimensional Arbitrary Lagrange Eulerian (ALE) hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations reproduce observed 'Go/No-Go' 3D Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied for the response of heterogeneous high explosives in the SDT regime.

  14. Interior and exterior ballistics coupled optimization with constraints of attitude control and mechanical-thermal conditions

    NASA Astrophysics Data System (ADS)

    Liang, Xin-xin; Zhang, Nai-min; Zhang, Yan

    2016-07-01

    To improve solid launch vehicle performance, a modeling method for coupled interior and exterior ballistics optimization with constraints on attitude control and mechanical-thermal conditions is proposed. First, the interior and exterior ballistic models of the solid launch vehicle are established, together with the attitude control model for the high-wind region and stage separation, the load calculation model of the drag reduction device, and the in-flight thermal condition model. Second, an optimization model is established to maximize range, with interior and exterior ballistic design parameters (selected by sensitivity analysis) as variables and attitude control and mechanical-thermal conditions as constraints. Finally, the method is applied to the optimal design of a three-stage solid launch vehicle simulation using a differential evolution algorithm. Simulation results show that range capability is improved by 10.8% while both the attitude control and mechanical-thermal constraints are satisfied.
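
    The optimization step relies on a differential evolution algorithm. A generic DE/rand/1/bin loop is sketched below on a stand-in objective, since the ballistic models and constraint evaluations themselves are not available here; in practice the attitude control and mechanical-thermal constraints could, for example, be folded into the objective as penalty terms.

        import numpy as np

        def differential_evolution(objective, bounds, pop_size=30, F=0.7, CR=0.9,
                                   generations=200, seed=0):
            """Minimize `objective` over box `bounds` with the classic DE/rand/1/bin scheme."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            dim = len(bounds)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            cost = np.array([objective(x) for x in pop])
            for _ in range(generations):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True      # ensure at least one mutated gene
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = objective(trial)
                    if f_trial <= cost[i]:
                        pop[i], cost[i] = trial, f_trial
            best = np.argmin(cost)
            return pop[best], cost[best]

        # Stand-in objective in place of the range calculation (sphere-like placeholder)
        x_best, f_best = differential_evolution(lambda x: np.sum((x - 0.3) ** 2), [(-1, 1)] * 5)
        print(x_best, f_best)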

  15. Modelling of spatial contaminant probabilities of occurrence of chlorinated hydrocarbons in an urban aquifer.

    PubMed

    Greis, Tillman; Helmholz, Kathrin; Schöniger, Hans Matthias; Haarstrick, Andreas

    2012-06-01

    In this study, a 3D urban groundwater model is presented that serves to calculate multispecies contaminant transport in the subsurface at the regional scale. The total model consists of two submodels, a groundwater flow model and a reactive transport model, and is validated against field data. The model equations are solved with the finite element method. A sensitivity analysis is carried out to identify the governing flow, transport and reaction parameters. Building on this, stochastic variation of the flow, transport, and reaction input parameters combined with Monte Carlo simulation is used to calculate probabilities of pollutant occurrence in the domain. These probabilities can help identify likely future contamination hotspots and estimate the associated damage. Application and validation are demonstrated for a contaminated site in Braunschweig (Germany), where a vast plume of chlorinated ethenes pollutes the groundwater. With respect to field application, the modelling methods prove to be feasible and helpful tools for assessing monitored natural attenuation (MNA) and the risk that might be reduced by remediation actions.
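
    The probabilistic step can be read as a Monte Carlo exceedance calculation: draw flow, transport, and reaction parameters from their assumed distributions, run the transport model, and count how often a concentration threshold is exceeded at each node. The sketch below shows only this bookkeeping, with a one-dimensional placeholder in place of the 3D finite element model and hypothetical parameter distributions.

        import numpy as np

        def exceedance_probability(run_model, sample_parameters, threshold, n_runs=1000, seed=0):
            """Monte Carlo estimate of P(concentration > threshold) at every model node."""
            rng = np.random.default_rng(seed)
            counts = None
            for _ in range(n_runs):
                conc = run_model(sample_parameters(rng))   # concentration field, one value per node
                hits = (conc > threshold).astype(float)
                counts = hits if counts is None else counts + hits
            return counts / n_runs

        # Placeholder stand-ins for the reactive-transport model and its uncertain inputs
        def sample_parameters(rng):
            return {"source_strength": rng.uniform(0.5, 1.5), "decay_rate": rng.uniform(0.001, 0.01)}

        def run_model(params):
            x = np.linspace(0.0, 1000.0, 50)               # 50 nodes along a flow line (m)
            return 100.0 * params["source_strength"] * np.exp(-params["decay_rate"] * x)

        print(exceedance_probability(run_model, sample_parameters, threshold=10.0, n_runs=200))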

  16. Framework for cascade size calculations on random networks

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Schweitzer, Frank

    2018-04-01

    We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations, and, in the case of threshold models, arbitrary threshold distributions. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capturing cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) constant load models that cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not tractable by branching process approximations before. Our derivations cover the whole cascade dynamics, not only the steady state. This allows us to include interventions in time or further model complexity in the analysis.
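
    For context, the branching-process-style fixed point that this framework generalizes can be stated compactly for the Watts threshold model on a configuration-model network. The sketch below iterates that standard update for a Poisson degree distribution and a uniform threshold; it is meant as the baseline being improved upon, not as the paper's own method.

        import numpy as np
        from math import comb, exp, factorial

        def watts_cascade_size(mean_degree=5.0, threshold=0.18, seed_fraction=0.01,
                               k_max=40, iterations=300):
            """Fixed-point iteration for the expected final cascade size of the Watts
            threshold model on a configuration-model network with Poisson degrees."""
            p_k = np.array([exp(-mean_degree) * mean_degree**k / factorial(k)
                            for k in range(k_max + 1)])
            excess = np.arange(k_max + 1) * p_k / mean_degree   # degree distribution of a neighbour

            def responds(m, k):                                 # node of degree k with m active neighbours
                return 1.0 if k > 0 and m / k >= threshold else 0.0

            q = seed_fraction                                   # probability a random neighbour is active
            for _ in range(iterations):
                s = sum(excess[k] * sum(comb(k - 1, m) * q**m * (1 - q)**(k - 1 - m) * responds(m, k)
                                        for m in range(k))
                        for k in range(1, k_max + 1))
                q = seed_fraction + (1 - seed_fraction) * s
            rho = seed_fraction + (1 - seed_fraction) * sum(
                p_k[k] * sum(comb(k, m) * q**m * (1 - q)**(k - m) * responds(m, k)
                             for m in range(k + 1))
                for k in range(k_max + 1))
            return rho                                          # expected final fraction of active nodes

        print(watts_cascade_size())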

  17. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It describes the effect that a particular active brain region has on others. In this paper, a new method based on the dual Kalman filter is proposed. In this method, active regions are first extracted by applying a brain source localization method (standardized low-resolution brain electromagnetic tomography) to the EEG signal, and an appropriate time-series model (a multivariate autoregressive model) is fitted to the extracted active sources to describe their activity and the temporal dependence between them. A dual Kalman filter is then used to estimate the model parameters, i.e. the effective connectivity between active regions. The advantage of this method is that the activity of different brain regions is estimated simultaneously with the calculation of the effective connectivity between them: by combining the dual Kalman filter with source localization, the source activity is updated over time in addition to the connectivity estimates. The performance of the proposed method was first evaluated on simulated EEG signals with prescribed interactions between active regions. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sliding window was calculated. Across both simulated and real signals, the proposed method gives acceptable results with the lowest mean square error in noisy and real conditions.
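
    The parameter-estimation step can be illustrated with a standard Kalman filter that tracks time-varying MVAR coefficients under a random-walk state model; the dual filter in the paper additionally re-estimates the source activity itself, which is omitted here. Dimensions, noise levels, and the toy signals below are hypothetical.

        import numpy as np

        def kalman_mvar(y, order=2, q=1e-4, r=1e-2):
            """Track time-varying MVAR coefficients A_1..A_p with a random-walk Kalman filter.

            y: (T, n) array of source activities. Returns (T, n, n*order) coefficient estimates.
            """
            T, n = y.shape
            dim = n * n * order
            theta = np.zeros(dim)                  # vectorized [A_1 ... A_p]
            P = np.eye(dim)
            Q, R = q * np.eye(dim), r * np.eye(n)
            history = np.zeros((T, n, n * order))
            for t in range(order, T):
                x = y[t - order:t][::-1].ravel()   # [y_{t-1}, ..., y_{t-p}] stacked
                H = np.kron(np.eye(n), x.reshape(1, -1))
                P = P + Q                          # predict (random-walk state)
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
                theta = theta + K @ (y[t] - H @ theta)
                P = (np.eye(dim) - K @ H) @ P
                history[t] = theta.reshape(n, n * order)
            return history

        # Hypothetical two-source example with a fixed directed influence source 1 -> source 2
        rng = np.random.default_rng(0)
        sig = np.zeros((500, 2))
        for t in range(2, 500):
            sig[t, 0] = 0.6 * sig[t - 1, 0] + rng.normal(scale=0.1)
            sig[t, 1] = 0.8 * sig[t - 1, 0] + 0.2 * sig[t - 1, 1] + rng.normal(scale=0.1)
        print(kalman_mvar(sig)[-1])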

  18. PBSM3D: A finite volume, scalar-transport blowing snow model for use with variable resolution meshes

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Wayand, N. E.; Pomeroy, J. W.; Wheater, H. S.; Spiteri, R. J.

    2017-12-01

    Blowing snow redistribution results in heterogeneous snowcovers that are ubiquitous in cold, windswept environments. Capturing this spatial and temporal variability is important for melt and runoff simulations. Point scale blowing snow transport models are difficult to apply in fully distributed hydrological models due to landscape heterogeneity and complex wind fields. Many existing distributed snow transport models have empirical wind flow and/or simplified wind direction algorithms that perform poorly in calculating snow redistribution where there are divergent wind flows, sharp topography, and over large spatial extents. Herein, a steady-state scalar transport model is discretized using the finite volume method (FVM), using parameterizations from the Prairie Blowing Snow Model (PBSM). PBSM has been applied in hydrological response units and grids to prairie, arctic, glacier, and alpine terrain and shows a good capability to represent snow redistribution over complex terrain. The FVM discretization takes advantage of the variable resolution mesh in the Canadian Hydrological Model (CHM) to ensure efficient calculations over small and large spatial extents. Variable resolution unstructured meshes preserve surface heterogeneity but result in fewer computational elements versus high-resolution structured (raster) grids. Snowpack, soil moisture, and streamflow observations were used to evaluate CHM-modelled outputs in a sub-arctic and an alpine basin. Newly developed remotely sensed snowcover indices allowed for validation over large basins. CHM simulations of snow hydrology were improved by inclusion of the blowing snow model. The results demonstrate the key role of snow transport processes in creating pre-melt snowcover heterogeneity and therefore governing post-melt soil moisture and runoff generation dynamics.
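
    As a much-reduced illustration of a finite volume discretization of steady-state scalar transport, the sketch below solves one-dimensional advection of a suspended-snow concentration with first-order upwind fluxes on a non-uniform mesh. The real model is multi-dimensional, unstructured, and uses PBSM parameterizations for the saltation and sublimation source terms; the mesh and source values here are placeholders.

        import numpy as np

        def upwind_advection_1d(dx, u, source, c_inflow=0.0):
            """Steady-state 1-D finite volume advection with first-order upwind fluxes.

            Solves d(u*c)/dx = source on a non-uniform mesh (u > 0 assumed), marching
            downstream cell by cell: u*c_i - u*c_{i-1} = source_i * dx_i.
            """
            c = np.zeros(len(dx))
            upstream = c_inflow
            for i, (dxi, si) in enumerate(zip(dx, source)):
                c[i] = upstream + si * dxi / u
                upstream = c[i]
            return c

        # Variable-resolution mesh: fine cells where erosion/deposition is active (metres)
        dx = np.array([50.0] * 5 + [10.0] * 10 + [50.0] * 5)
        # Source (+) and sink (-) terms, kg m^-3 s^-1, placeholder values
        source = np.array([0.0] * 5 + [2e-4] * 5 + [-2e-4] * 5 + [0.0] * 5)
        print(upwind_advection_1d(dx, u=5.0, source=source))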

  19. Optical modeling based on mean free path calculations for quantum dot phosphors applied to optoelectronic devices.

    PubMed

    Shin, Min-Ho; Kim, Hyo-Jun; Kim, Young-Joo

    2017-02-20

    We proposed an optical simulation model for the quantum dot (QD) nanophosphor based on the mean free path concept to precisely understand the optical performance of optoelectronic devices. A measurement methodology was also developed to obtain the required optical characteristics, such as the mean free path and absorption spectra, of the QD nanophosphors to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with the experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinates, CCT, and CRI. The proposed simulation model and measurement methodology can be easily applied to the design of a wide range of optoelectronic devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.
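
    In a ray-tracing setting, a mean-free-path model typically decides how far a ray travels before interacting with a quantum dot by sampling an exponential free path, while Beer-Lambert attenuation gives the fraction transmitted through a layer. The sketch below shows both operations with hypothetical numbers rather than the measured values from the paper.

        import numpy as np

        def sample_free_path(mean_free_path_um, rng):
            """Draw a distance to the next QD interaction from an exponential distribution."""
            return -mean_free_path_um * np.log(rng.random())

        def transmitted_fraction(layer_thickness_um, mean_free_path_um):
            """Beer-Lambert fraction of excitation light crossing a QD layer unabsorbed."""
            return np.exp(-layer_thickness_um / mean_free_path_um)

        rng = np.random.default_rng(1)
        mfp = 120.0                                  # hypothetical mean free path, micrometres
        paths = np.array([sample_free_path(mfp, rng) for _ in range(100000)])
        print(paths.mean(), transmitted_fraction(300.0, mfp))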

  20. Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools

    NASA Astrophysics Data System (ADS)

    Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.

    2017-12-01

    The spatial variability evaluation of the water table of an aquifer provides useful information in water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques the selection of the optimal variogram is very important for the optimal method performance. This work compares three different criteria to assess the theoretical variogram that fits to the experimental one: the Least Squares Sum method, the Akaike Information Criterion and the Cressie's Indicator. Moreover, variable distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis are applied to calculate the distance between the observation and the prediction points, that affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the Power-law variogram model and Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at local scale. Finally, maps of hydraulic head spatial variability and of predictions uncertainty are constructed for the area with the two different approaches comparing their advantages and drawbacks.
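
    The distance metrics compared in the study all have simple closed forms, and the power-law variogram selected by the conventional analysis is equally compact; the sketch below collects them, with placeholder coordinates and variogram parameters.

        import numpy as np

        def minkowski(a, b, p=2):
            return np.sum(np.abs(a - b) ** p) ** (1.0 / p)   # p=1 Manhattan, p=2 Euclidean

        def canberra(a, b):
            d = np.abs(a) + np.abs(b)
            return np.sum(np.where(d > 0, np.abs(a - b) / d, 0.0))

        def bray_curtis(a, b):
            return np.sum(np.abs(a - b)) / np.sum(np.abs(a + b))

        def power_law_variogram(h, scale=1.0, exponent=1.5, nugget=0.0):
            """gamma(h) = nugget + scale * h**exponent, with 0 < exponent < 2
            (the range associated with fractional Brownian motion behaviour)."""
            return nugget + scale * np.power(h, exponent)

        p1, p2 = np.array([385.0, 4120.0]), np.array([640.0, 3980.0])  # hypothetical coordinates (m)
        for name, d in [("Euclidean", minkowski(p1, p2, 2)), ("Manhattan", minkowski(p1, p2, 1)),
                        ("Canberra", canberra(p1, p2)), ("Bray-Curtis", bray_curtis(p1, p2))]:
            print(name, d, power_law_variogram(d, scale=0.02, exponent=1.4))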

  1. Internal friction and dislocation collective pinning in disordered quenched solid solutions

    NASA Astrophysics Data System (ADS)

    D'Anna, G.; Benoit, W.; Vinokur, V. M.

    1997-12-01

    We introduce the collective pinning of dislocations in disordered quenched solid solutions and calculate the macroscopic mechanical response to a small dc or ac applied stress. This work is a generalization of the Granato-Lücke string model, able to describe self-consistently short and long range dislocation motion. Under dc applied stress the long distance dislocation creep has at the microscopic level avalanche features, which result in a macroscopic nonlinear "glassy" velocity-stress characteristic. Under ac conditions the model predicts, in addition to the anelastic internal friction relaxation in the high frequency regime, a linear internal friction background which remains amplitude-independent down to a crossover frequency to a strongly nonlinear internal friction regime.

  2. [The study of medical supplies automation replenishment algorithm in hospital on medical supplies supplying chain].

    PubMed

    Sheng, Xi

    2012-07-01

    This thesis studies an automatic replenishment algorithm for hospitals within the medical supplies supply chain. The mathematical model and algorithm for automatic replenishment of medical supplies are designed with reference to practical hospital data, on the basis of inventory theory, a greedy algorithm and a partition algorithm. The automatic replenishment algorithm is shown to calculate the medical supplies distribution amount automatically and to optimize the distribution scheme. It can be concluded that models and algorithms from inventory theory, if applied in the field of medical supplies circulation, can provide theoretical and technical support for automatic replenishment of medical supplies in hospitals along the supply chain.
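
    The abstract does not give the algorithm itself, so as a neutral illustration of the inventory-theoretic ingredients it mentions, the sketch below computes a reorder point from demand statistics and lead time and derives an order quantity from an order-up-to level. All names and figures are hypothetical and are not taken from the thesis.

        import math

        def reorder_point(daily_demand_mean, daily_demand_std, lead_time_days, service_z=1.65):
            """Reorder point = expected lead-time demand + safety stock (z * sigma * sqrt(L))."""
            lead_time_demand = daily_demand_mean * lead_time_days
            safety_stock = service_z * daily_demand_std * math.sqrt(lead_time_days)
            return lead_time_demand + safety_stock

        def replenishment_quantity(on_hand, on_order, order_up_to_level):
            """Order enough to bring the inventory position back up to the target level."""
            return max(0.0, order_up_to_level - (on_hand + on_order))

        rop = reorder_point(daily_demand_mean=40.0, daily_demand_std=12.0, lead_time_days=3)
        print(rop, replenishment_quantity(on_hand=90.0, on_order=0.0, order_up_to_level=rop + 120.0))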

  3. Numerical Analysis of Small Deformation of Flexible Helical Flagellum of Swimming Bacteria

    NASA Astrophysics Data System (ADS)

    Takano, Yasunari; Goto, Tomonobu

    Formulations are developed to numerically analyze the effect of the flexible flagellum of swimming bacteria. In the present model, a single-flagellate bacterium is assumed to consist of a rigid cell body of prolate spheroidal shape and a flexible flagellum of helical form. Resistive force theory is applied to estimate the force exerted on the flagellum. The torsional and bending moments determine the curvature and torsion of the deformed flagellum according to the Kirchhoff model for an elastic rod. The unit tangent vector along the deformed flagellum is calculated by applying evolution equations for space curves, from which the deformed shape of the flagellum is obtained.

  4. Theoretical study on perpendicular magnetoelectric coupling in ferroelectromagnet system

    NASA Astrophysics Data System (ADS)

    Zhong, Chonggui; Jiang, Qing

    2002-06-01

    We apply the Heisenberg model for the antiferromagnetic interaction and the Diffour model for the ferroelectric interaction to analyze the magnetic, electric, and magnetoelectric properties of a system in which ferroelectric and antiferromagnetic orders spontaneously coexist below a certain temperature. Soft mode theory is used to calculate the on-site polarization and mean field theory is applied to treat the on-site magnetization. We also present the perpendicular magnetoelectric susceptibility χme⊥ and the polarization susceptibility χp as functions of temperature, and discuss the effect of the intrinsic magnetoelectric coupling on them. In addition, an anomaly is found in the polarization susceptibility curve due to the coupling between the ferroelectric and antiferromagnetic orders.

  5. Low-energy electron scattering from CO. 2: Ab-initio study using the frame-transformation theory

    NASA Technical Reports Server (NTRS)

    Chandra, N.

    1976-01-01

    The Wigner-Eisenbud R matrix method has been combined with the frame transformation theory to study electron scattering from molecular systems. The R matrix, calculated at the boundary point of the molecular core radius, has been transformed to the space frame in order to continue the solution of the scattering equations in the outer region where rotational motion of the nuclei is taken into account. This procedure has been applied to a model calculation of thermal energy electron scattering from CO.

  6. Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6. [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, C.E.

    Comparisons between RELAP4/MOD6, Update 4 code-calculated and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.

  7. AGARD standard aeroelastic configurations for dynamic response. 1: Wing 445.6

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.

    1988-01-01

    This report contains experimental flutter data for the AGARD 3-D swept tapered standard configuration Wing 445.6, along with related descriptive data of the model properties required for comparative flutter calculations. As part of a cooperative AGARD-SMP program, guided by the Sub-Committee on Aeroelasticity, this standard configuration may serve as a common basis for comparison of calculated and measured aeroelastic behavior. These comparisons will promote a better understanding of the assumptions, approximations and limitations underlying the various aerodynamic methods applied, thus pointing the way to further improvements.

  8. A hybrid Reynolds averaged/PDF closure model for supersonic turbulent combustion

    NASA Technical Reports Server (NTRS)

    Frankel, Steven H.; Hassan, H. A.; Drummond, J. Philip

    1990-01-01

    A hybrid Reynolds averaged/assumed pdf approach has been developed and applied to the study of turbulent combustion in a supersonic mixing layer. This approach is used to address the 'laminar-like' treatment of the thermochemical terms that appear in the conservation equations. Calculations were carried out for two experiments involving H2-air supersonic turbulent mixing. Two different forms of the pdf were implemented. In general, the results show modest improvement from previous calculations. Moreover, the results appear to be somewhat independent of the form of the assumed pdf.

  9. Calculation of wind-driven surface currents in the North Atlantic Ocean

    NASA Technical Reports Server (NTRS)

    Rees, T. H.; Turner, R. E.

    1976-01-01

    Calculations to simulate the wind driven near surface currents of the North Atlantic Ocean are described. The primitive equations were integrated on a finite difference grid with a horizontal resolution of 2.5 deg in longitude and latitude. The model ocean was homogeneous with a uniform depth of 100 m and with five levels in the vertical direction. A form of the rigid-lid approximation was applied. Generally, the computed surface current patterns agreed with observed currents. The development of a subsurface equatorial countercurrent was observed.

  10. Hybrid High-Fidelity Modeling of Radar Scenarios Using Atemporal, Discrete-Event, and Time-Step Simulation

    DTIC Science & Technology

    2016-12-01

    The time T1 for the mover to travel from the current position to the next waypoint is calculated as T1 = Distance / MaxSpeed. The "EndMove" event will ... speed of light in a real atmosphere. The factor of 1/2 is the result of the round-trip travel time of the signal. The maximum detection range (Rmax) is ... "EnterRange" event is triggered by the referee, the time tm for the target to travel to the midpoint towards its waypoint is calculated and applied
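
    Two pieces of arithmetic recoverable from this excerpt are the movement event time (distance divided by maximum speed) and the conversion of a round-trip echo delay to range, where the factor of 1/2 appears. A minimal sketch under that reading follows; the vacuum speed of light is used here, whereas the cited work models propagation in a real atmosphere.

        C_LIGHT = 299_792_458.0          # m/s, vacuum value; a real-atmosphere model would adjust this

        def end_move_time(distance_m, max_speed_m_s):
            """Time for a mover to reach its next waypoint at maximum speed: T1 = distance / max speed."""
            return distance_m / max_speed_m_s

        def range_from_round_trip(delay_s):
            """Target range from a round-trip echo delay: R = c * t / 2 (the factor of 1/2)."""
            return 0.5 * C_LIGHT * delay_s

        print(end_move_time(1500.0, 12.0), range_from_round_trip(200e-6))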

  11. A New Method with General Diagnostic Utility for the Calculation of Immunoglobulin G Avidity

    PubMed Central

    Korhonen, Maria H.; Brunstein, John; Haario, Heikki; Katnikov, Alexei; Rescaldani, Roberto; Hedman, Klaus

    1999-01-01

    The reference method for immunoglobulin G (IgG) avidity determination includes reagent-consuming serum titration. Aiming at better IgG avidity diagnostics, we applied a logistic model for the reproduction of antibody titration curves. This method was tested with well-characterized serum panels for cytomegalovirus, Epstein-Barr virus, rubella virus, parvovirus B19, and Toxoplasma gondii. This approach for IgG avidity calculation is generally applicable and attains the diagnostic performance of the reference method while being less laborious and twice as cost-effective. PMID:10473525
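
    Antibody titration curves are commonly reproduced with a four-parameter logistic function. The sketch below fits such a curve to a dilution series and reads off an end-point titre at an assay cutoff, which is one way a titration-sparing avidity calculation could be assembled; the data, starting values, and cutoff are hypothetical and are not those of the published method.

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(log_dilution, bottom, top, slope, inflection):
            """Four-parameter logistic model of an antibody titration curve."""
            return bottom + (top - bottom) / (1.0 + np.exp(slope * (log_dilution - inflection)))

        def endpoint_titre(params, cutoff):
            """Log-dilution at which the fitted curve crosses the assay cutoff."""
            bottom, top, slope, inflection = params
            return inflection + np.log((top - bottom) / (cutoff - bottom) - 1.0) / slope

        log_dil = np.log10([100, 300, 900, 2700, 8100, 24300])   # serum dilution series
        od = np.array([2.10, 1.95, 1.52, 0.88, 0.35, 0.16])      # hypothetical absorbances
        params, _ = curve_fit(four_pl, log_dil, od, p0=[0.1, 2.2, 2.0, 3.3])
        print(params, endpoint_titre(params, cutoff=0.5))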

  12. Hybrid transfer-matrix FDTD method for layered periodic structures.

    PubMed

    Deinega, Alexei; Belousov, Sergei; Valuev, Ilya

    2009-03-15

    A hybrid transfer-matrix finite-difference time-domain (FDTD) method is proposed for modeling the optical properties of finite-width planar periodic structures. This method can also be applied for calculation of the photonic bands in infinite photonic crystals. We describe the procedure of evaluating the transfer-matrix elements by a special numerical FDTD simulation. The accuracy of the new method is tested by comparing computed transmission spectra of a 32-layered photonic crystal composed of spherical or ellipsoidal scatterers with the results of direct FDTD and layer-multiple-scattering calculations.

  13. Probabilistic model of nonlinear penalties due to collision-induced timing jitter for calculation of the bit error ratio in wavelength-division-multiplexed return-to-zero systems

    NASA Astrophysics Data System (ADS)

    Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.

    2006-12-01

    We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.

  14. Joint inversion of seismic refraction and resistivity data using layered models - applications to hydrogeology

    NASA Astrophysics Data System (ADS)

    Juhojuntti, N. G.; Kamm, J.

    2010-12-01

    We present a layered-model approach to joint inversion of shallow seismic refraction and resistivity (DC) data, which we believe is a seldom tested method of addressing the problem. This method has been developed as we believe that for shallow sedimentary environments (roughly <100 m depth) a model with a few layers and sharp layer boundaries better represents the subsurface than a smooth minimum-structure (grid) model. Due to the strong assumption our model parameterization imposes on the subsurface, only a small number of well-resolved model parameters has to be estimated, and provided that this assumption holds our method can also be applied to other environments. We are using a least-squares inversion, with lateral smoothness constraints, allowing lateral variations in the seismic velocity and the resistivity but no vertical variations. One exception is a positive gradient in the seismic velocity in the uppermost layer in order to get diving rays (the refractions in the deeper layers are modeled as head waves). We assume no connection between seismic velocity and resistivity, and these parameters are allowed to vary individually within the layers. The layer boundaries are, however, common for both parameters. During the inversion lateral smoothing can be applied to the layer boundaries as well as to the seismic velocity and the resistivity. The number of layers is specified before the inversion, and typically we use models with three layers. Depending on the type of environment it is possible to apply smoothing either to the depth of the layer boundaries or to the thickness of the layers, although normally the former is used for shallow sedimentary environments. The smoothing parameters can be chosen independently for each layer. For the DC data we use a finite-difference algorithm to perform the forward modeling and to calculate the Jacobian matrix, while for the seismic data the corresponding entities are retrieved via ray-tracing, using components from the RAYINVR package. The modular layout of the code makes it straightforward to include other types of geophysical data, e.g. gravity. The code has been tested using synthetic examples with fairly simple 2D geometries, mainly for checking the validity of the calculations. The inversion generally converges towards the correct solution, although there can be stability problems if the starting model is too far from the true solution. We have also applied the code to field data from seismic refraction and multi-electrode resistivity measurements at typical sand-gravel groundwater reservoirs. The tests are promising, as the calculated depths agree fairly well with information from drilling and the velocity and resistivity values appear reasonable. Current work includes better regularization of the inversion as well as defining individual weight factors for the different datasets, as the present algorithm tends to constrain the depths mainly by using the seismic data. More complex synthetic examples will also be tested, including models addressing the seismic hidden-layer problem.
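
    A single iteration of the laterally smoothed least-squares update described here can be written compactly. The sketch below shows a damped Gauss-Newton step with a first-difference lateral roughness penalty; the Jacobian and residuals would come from the seismic and DC forward solvers, which are replaced by an identity stand-in, and all names and numbers are illustrative.

        import numpy as np

        def gauss_newton_step(jacobian, residual, model, smooth_matrix, beta=1.0):
            """One damped least-squares update: minimize ||r - J dm||^2 + beta * ||L (m + dm)||^2."""
            J, r, L = jacobian, residual, smooth_matrix
            lhs = J.T @ J + beta * L.T @ L
            rhs = J.T @ r - beta * L.T @ (L @ model)
            return model + np.linalg.solve(lhs, rhs)

        def lateral_first_difference(n_columns):
            """Roughness operator penalizing lateral changes of a per-column parameter."""
            L = np.zeros((n_columns - 1, n_columns))
            for i in range(n_columns - 1):
                L[i, i], L[i, i + 1] = -1.0, 1.0
            return L

        # Toy example: 8 lateral columns of one layer-boundary depth, fitted to noisy "data"
        rng = np.random.default_rng(2)
        true_depth = np.full(8, 25.0)
        J = np.eye(8)                                  # identity Jacobian as a stand-in forward operator
        data = true_depth + rng.normal(scale=2.0, size=8)
        m = np.full(8, 10.0)
        for _ in range(5):
            m = gauss_newton_step(J, data - J @ m, m, lateral_first_difference(8), beta=5.0)
        print(m)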

  15. Vertical profiles of aerosol absorption coefficient from micro-Aethalometer data and Mie calculation over Milan.

    PubMed

    Ferrero, L; Mocnik, G; Ferrini, B S; Perrone, M G; Sangiorgi, G; Bolzacchini, E

    2011-06-15

    Vertical profiles of aerosol number-size distribution and black carbon (BC) concentration were measured between ground-level and 500m AGL over Milan. A tethered balloon was fitted with an instrumentation package consisting of the newly-developed micro-Aethalometer (microAeth® Model AE51, Magee Scientific, USA), an optical particle counter, and a portable meteorological station. At the same time, PM(2.5) samples were collected both at ground-level and at a high altitude sampling site, enabling particle chemical composition to be determined. Vertical profiles and PM(2.5) data were collected both within and above the mixing layer. Absorption coefficient (b(abs)) profiles were calculated from the Aethalometer data: in order to do so, an optical enhancement factor (C), accounting for multiple light-scattering within the filter of the new microAeth® Model AE51, was determined for the first time. The value of this parameter C (2.05±0.03 at λ=880nm) was calculated by comparing the Aethalometer attenuation coefficient and aerosol optical properties determined from OPC data along vertical profiles. Mie calculations were applied to the OPC number-size distribution data, and the aerosol refractive index was calculated using the effective medium approximation applied to aerosol chemical composition. The results compare well with AERONET data. The BC and b(abs) profiles showed a sharp decrease at the mixing height (MH), and fairly constant values of b(abs) and BC were found above the MH, representing 17±2% of those values measured within the mixing layer. The BC fraction of aerosol volume was found to be lower above the MH: 48±8% of the corresponding ground-level values. A statistical mean profile was calculated, both for BC and b(abs), to better describe their behaviour; the model enabled us to compute their average behaviour as a function of height, thus laying the foundations for valid parametrizations of vertical profile data which can be useful in both remote sensing and climatic studies.
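
    The correction described above amounts to dividing the filter attenuation coefficient by the multiple-scattering enhancement factor C to obtain the aerosol absorption coefficient; dividing further by a mass absorption cross-section gives a BC mass concentration. The sketch uses the C value reported in the abstract but a placeholder mass absorption cross-section and attenuation value.

        def absorption_coefficient(b_atn_Mm_inv, C=2.05):
            """Absorption coefficient from the Aethalometer attenuation coefficient, b_abs = b_atn / C.

            C = 2.05 at 880 nm is the enhancement factor reported for the microAeth AE51 in this study.
            """
            return b_atn_Mm_inv / C

        def black_carbon_ug_m3(b_abs_Mm_inv, mac_m2_g=7.7):
            """BC mass from b_abs via an assumed mass absorption cross-section (placeholder value)."""
            return b_abs_Mm_inv / mac_m2_g        # (1/Mm) / (m^2/g) gives ug/m^3

        b_abs = absorption_coefficient(41.0)      # hypothetical attenuation coefficient, Mm^-1
        print(b_abs, black_carbon_ug_m3(b_abs))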

  16. Radiative Heating Methodology for the Huygens Probe

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth

    2007-01-01

    The radiative heating environment for the Huygens probe near peak heating conditions for Titan entry is investigated in this paper. The task of calculating the radiation-coupled flowfield, accounting for non-Boltzmann and non-optically thin radiation, is simplified to a rapid yet accurate calculation. This is achieved by using the viscous-shock layer (VSL) technique for the stagnation-line flowfield calculation and a modified smeared rotational band (SRB) model for the radiation calculation. These two methods provide a computationally efficient alternative to a Navier-Stokes flowfield and line-by-line radiation calculation. The results of the VSL technique are shown to provide an excellent comparison with the Navier-Stokes results of previous studies. It is shown that a conventional SRB approach is inadequate for the partially optically-thick conditions present in the Huygens shock-layer around the peak heating trajectory points. A simple modification is proposed to the SRB model that improves its accuracy in these partially optically-thick conditions. This modified approach, labeled herein as SRBC, is compared throughout this study with a detailed line-by-line (LBL) calculation and is shown to compare within 5% in all cases. The SRBC method requires many orders-of-magnitude less computational time than the LBL method, which makes it ideal for coupling to the flowfield. The application of a collisional-radiative (CR) model for determining the population of the CN electronic states, which govern the radiation for Huygens entry, is discussed and applied. The non-local absorption term in the CR model is formulated in terms of an escape factor, which is then curve-fit with temperature. Although the curve-fit is an approximation, it is shown to compare well with the exact escape factor calculation, which requires a computationally intensive iteration procedure.

  17. A hybrid deep neural network and physically based distributed model for river stage prediction

    NASA Astrophysics Data System (ADS)

    hitokoto, Masayuki; sakuraba, Masaaki

    2016-04-01

    We developed a real-time river stage prediction model using a hybrid of a deep neural network and a physically based distributed model. The basic model is a 4-layer feed-forward artificial neural network (ANN), trained with deep learning techniques. The network weights are optimized by stochastic gradient descent based on back propagation, with a denoising autoencoder used for pre-training. The inputs of the ANN model are the hourly change of water level and hourly rainfall; the output is the water level at the downstream station. In general, desirable ANN inputs correlate strongly with the output. In conceptual hydrological models such as the tank model and the storage-function model, river discharge is governed by catchment storage. Therefore, the change of catchment storage, i.e. rainfall minus downstream discharge, is a potent input candidate for the ANN model instead of rainfall. From this point of view, the hybrid deep neural network and physically based distributed model was developed. The prediction procedure of the hybrid model is as follows: first, the downstream discharge is calculated by the distributed model; then the hourly change of catchment storage is estimated from rainfall and the calculated discharge as the input of the ANN model; and finally the ANN model is evaluated. In the training phase, the hourly change of catchment storage can be calculated from the observed rainfall and discharge data. The developed model was applied to a catchment of the OOYODO River, one of the first-class rivers in Japan. The modeled catchment is 695 square km. For the training data, 5 water-level gauging stations and 14 rain-gauge stations in the catchment were used. The 24 largest flood events during the period 2005-2014 were selected for training. Predictions were made up to 6 hours ahead, and 6 models were developed, one for each lead time. To set proper learning parameters and the network architecture of the ANN model, a sensitivity analysis was carried out using a case-study approach. The predictions were evaluated on the 4 largest flood events using leave-one-out cross-validation. The prediction results of the basic 4-layer ANN were better than those of the conventional 3-layer ANN model. However, the model did not reproduce the biggest flood event well, presumably because of the lack of sufficiently high water-level flood events in the training data. The hybrid model outperformed both the basic ANN model and the distributed model, and in particular improved on the basic ANN model for the biggest flood event.
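
    A minimal version of the basic network described here, a 4-layer feed-forward ANN trained by stochastic gradient descent with backpropagation, is sketched below. The inputs would be hourly rainfall (or, in the hybrid case, the hourly change of catchment storage) and hourly water-level changes, and the output the downstream water level; layer sizes, activation, learning rate, and the random training data are placeholders, and the denoising-autoencoder pretraining is omitted.

        import numpy as np

        class FourLayerANN:
            """Feed-forward net with two hidden layers, trained by SGD with backpropagation."""
            def __init__(self, n_in, n_h1, n_h2, seed=0):
                rng = np.random.default_rng(seed)
                self.W1 = rng.normal(scale=0.1, size=(n_in, n_h1)); self.b1 = np.zeros(n_h1)
                self.W2 = rng.normal(scale=0.1, size=(n_h1, n_h2)); self.b2 = np.zeros(n_h2)
                self.W3 = rng.normal(scale=0.1, size=(n_h2, 1));    self.b3 = np.zeros(1)

            def forward(self, x):
                self.h1 = np.tanh(x @ self.W1 + self.b1)
                self.h2 = np.tanh(self.h1 @ self.W2 + self.b2)
                return self.h2 @ self.W3 + self.b3               # linear output: predicted stage

            def sgd_step(self, x, y, lr=0.01):
                pred = self.forward(x)
                d3 = pred - y                                    # gradient of squared error w.r.t. output
                dW3 = np.outer(self.h2, d3)
                d2 = (self.W3 @ d3) * (1 - self.h2 ** 2)         # backprop through tanh
                dW2 = np.outer(self.h1, d2)
                d1 = (self.W2 @ d2) * (1 - self.h1 ** 2)
                dW1 = np.outer(x, d1)
                self.W3 -= lr * dW3; self.b3 -= lr * d3
                self.W2 -= lr * dW2; self.b2 -= lr * d2
                self.W1 -= lr * dW1; self.b1 -= lr * d1

        # Hypothetical training pairs: 20 input features -> downstream stage
        rng = np.random.default_rng(1)
        X, y = rng.normal(size=(500, 20)), rng.normal(size=(500, 1))
        net = FourLayerANN(20, 16, 8)
        for epoch in range(5):
            for xi, yi in zip(X, y):
                net.sgd_step(xi, yi)
        print(net.forward(X[0]))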

  18. Superconductivity in an almost localized Fermi liquid of quasiparticles with spin-dependent masses and effective-field induced by electron correlations

    NASA Astrophysics Data System (ADS)

    Kaczmarczyk, Jan; Spałek, Jozef

    2009-06-01

    Paired state of nonstandard quasiparticles is analyzed in detail in two model situations. Namely, we consider the Cooper-pair bound state and the condensed phase of an almost localized Fermi liquid composed of quasiparticles in a narrow band with the spin-dependent masses and an effective field, both introduced earlier and induced by strong electronic correlations. Each of these novel characteristics is calculated in a self-consistent manner. We analyze the bound states as a function of Cooper-pair momentum |Q| in applied magnetic field in the strongly Pauli limiting case (i.e., when the orbital effects of applied magnetic field are disregarded). The spin-direction dependence of the effective mass makes the quasiparticles comprising Cooper-pair spin distinguishable in the quantum-mechanical sense, whereas the condensed gas of pairs may still be regarded as composed of identical entities. The Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) condensed phase of moving pairs is by far more robust in the applied field for the case with spin-dependent masses than in the situation with equal masses of quasiparticles. Relative stability of the Bardeen-Cooper-Schrieffer vs FFLO phase is analyzed in detail on temperature-applied field plane. Although our calculations are carried out for a model situation, we can conclude that the spin-dependent masses should play an important role in stabilizing high-field low-temperature unconventional superconducting phases (FFLO, for instance) in systems such as CeCoIn5 , organic metals, and possibly others.

  19. Beta-decay half-lives for short neutron rich nuclei involved into the r-process

    NASA Astrophysics Data System (ADS)

    Panov, I.; Lutostansky, Yu; Thielemann, F.-K.

    2018-01-01

    The beta-strength function model based on the Theory of Finite Fermi Systems is applied to calculate the beta-decay half-lives of short-lived neutron-rich nuclei involved in the r-process. It is shown that the accuracy of the calculated beta-decay half-lives improves with increasing neutron excess, and that they can be used for modeling the nucleosynthesis of heavy nuclei in the r-process.

  20. A new mathematical modeling approach for the energy of threonine molecule

    NASA Astrophysics Data System (ADS)

    Sahiner, Ahmet; Kapusuz, Gulden; Yilmaz, Nurullah

    2017-07-01

    In this paper, we propose an improved methodology for energy conformation problems aimed at finding optimum energy values. First, we construct Bezier surfaces near local minimizers based on data obtained from Density Functional Theory (DFT) calculations. Second, we blend the constructed surfaces to obtain a single smooth model. Finally, we apply a global optimization algorithm to find the two torsion angles that minimize the energy of the molecule.
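
    A Bezier surface over a grid of control points is evaluated with Bernstein polynomials; the sketch below shows that evaluation together with a crude grid search over the two torsion angles, standing in for the blending and global optimization stages. The control grid is synthetic, not DFT data.

        import numpy as np
        from math import comb

        def bernstein(n, i, t):
            """Bernstein basis polynomial B_{i,n}(t)."""
            return comb(n, i) * t**i * (1.0 - t)**(n - i)

        def bezier_surface(control, u, v):
            """Evaluate S(u, v) = sum_i sum_j B_i(u) B_j(v) P_ij for u, v in [0, 1]."""
            n, m = control.shape[0] - 1, control.shape[1] - 1
            return sum(bernstein(n, i, u) * bernstein(m, j, v) * control[i, j]
                       for i in range(n + 1) for j in range(m + 1))

        # Synthetic 4x4 control grid standing in for local DFT energy samples (hypothetical, kcal/mol)
        control = np.array([[3.0, 2.4, 2.6, 3.1],
                            [2.2, 1.1, 1.4, 2.5],
                            [2.3, 1.3, 0.9, 2.4],
                            [3.2, 2.6, 2.5, 3.3]])

        # Crude global search over the two torsion angles mapped to (u, v) in [0, 1]^2
        grid = np.linspace(0.0, 1.0, 101)
        u_best, v_best = min(((u, v) for u in grid for v in grid),
                             key=lambda uv: bezier_surface(control, *uv))
        print(u_best, v_best, bezier_surface(control, u_best, v_best))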
