Accurate spectral modeling for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Gupta, S. K.
1977-01-01
Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of the 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths, and are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The ranges of validity of the other models and correlations are discussed.
NASA Astrophysics Data System (ADS)
Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.
2013-10-01
Recent observations of Saturn's stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; and a warm "beacon" associated with the powerful 2010 storm. Those signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, particular emphasis was put on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and a spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing, and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities, and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of temperature fields obtained with this new radiative equilibrium model to those inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (20 mbar), the modeled temperature is 5-10 K too low compared to measurements. This suggests that processes other than radiative heating/cooling by trace gases are at play.
Fu, Q.; Sun, W.B.; Yang, P.
1998-09-01
An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (D_ge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is approximately 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
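The parameterization described above maps bulk microphysics (ice water content, generalized effective size) to single-scattering properties. A minimal sketch of that functional form follows; the coefficient values are hypothetical placeholders (the published scheme tabulates band-dependent coefficients), so this illustrates only the structure, not the actual fit:

```python
# Sketch of an IWC/D_ge parameterization of cirrus single-scattering
# properties. Coefficients below are invented for illustration; the real
# scheme fits them per spectral band against composite Mie/ADT/GOM results.
A = (-6.7e-3, 3.33)       # extinction: beta_ext = IWC * (a0 + a1 / Dge)
B = (1.0e-2, 2.0, 10.0)   # absorption: beta_abs = IWC * (b0 + b1/Dge + b2/Dge**2)
C = (0.74, 1.0e-3)        # asymmetry factor: g = c0 + c1 * Dge

def cirrus_ir_properties(iwc, dge):
    """Single-scattering properties from ice water content (g m^-3)
    and generalized effective size (micron)."""
    beta_ext = iwc * (A[0] + A[1] / dge)
    beta_abs = iwc * (B[0] + B[1] / dge + B[2] / dge ** 2)
    g = C[0] + C[1] * dge
    return beta_ext, beta_abs, g
```

With such a form, a radiation code needs only two prognostic cloud variables per layer rather than full size-resolved optics.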
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Spurr, R. J. D.; Shia, R. L.; Yung, Y. L.
2014-12-01
Radiative transfer (RT) computations are an essential component of energy budget calculations in climate models. However, full treatment of RT processes is computationally expensive, prompting usage of 2-stream approximations in operational climate models. This simplification introduces errors of the order of 10% in the top of the atmosphere (TOA) fluxes [Randles et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT simulations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those (few) optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Here, we extend the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Comparisons between the new model, called the Universal Principal Component Analysis model for Radiative Transfer (UPCART), 2-stream models (such as those used in climate applications), and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the TOA for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and solar and viewing geometries. We demonstrate that very accurate radiative forcing estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases as compared to an exact line-by-line RT model. The model is comparable in speed to 2-stream models, potentially rendering UPCART useful for operational General Circulation Models (GCMs). The operational speed and accuracy of UPCART can be further improved.
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work that is in progress.
A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region
NASA Astrophysics Data System (ADS)
Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.
2016-04-01
Accurate radiative transfer (RT) calculations are necessary for many earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order of magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we have extended the PCA method for RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that the RT performance runtimes are shorter by factors between 10 and 100, while root mean square errors are of order 0.01%.
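The PCA acceleration described in these abstracts — costly RT only at the bin-mean optical state and a few EOF perturbations, with a low-order correction applied to a cheap solver at every wavelength — can be sketched as follows. The two "solvers" below are toy stand-ins for a line-by-line multiple-scattering code and a 2-stream code, not real RT models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy optical states (assumption): log optical-depth profiles for one spectral bin.
n_wav, n_lay = 400, 20
F = rng.normal(0.0, 0.5, size=(n_wav, n_lay))

def rt_exact(profile):   # placeholder for a costly multiple-scattering solver
    return np.exp(profile).sum() ** 0.80

def rt_cheap(profile):   # placeholder for a fast 2-stream solver
    return np.exp(profile).sum() ** 0.78

# PCA of the optical-state matrix within the bin.
mean = F.mean(axis=0)
U, s, Vt = np.linalg.svd(F - mean, full_matrices=False)
n_eof = 2
scores = U[:, :n_eof] * s[:n_eof]          # per-wavelength PC scores

# Costly RT only at the mean state and +/- each leading EOF; build a
# second-order correction (in PC space) to the cheap solver's output.
def logratio(p):
    return np.log(rt_exact(p) / rt_cheap(p))

r0 = logratio(mean)
corr = np.full(n_wav, r0)
for k in range(n_eof):
    e = Vt[k]
    rp, rm = logratio(mean + e), logratio(mean - e)
    corr += scores[:, k] * (rp - rm) / 2 + scores[:, k] ** 2 * (rp - 2 * r0 + rm) / 2

I_cheap = np.array([rt_cheap(f) for f in F])
I_exact = np.array([rt_exact(f) for f in F])
err_raw = np.abs(I_cheap / I_exact - 1).max()
err_corr = np.abs(I_cheap * np.exp(corr) / I_exact - 1).max()
```

The expensive solver runs O(n_eof) times per bin instead of once per wavelength, which is the source of the order-of-magnitude speedup the papers report.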
Hu, Y.X.; Stamnes, K.
1993-04-01
A new parameterization of the radiative properties of water clouds is presented. Cloud optical properties for both solar and terrestrial spectra and for cloud equivalent radii in the range 2.5-60 μm are calculated from Mie theory. It is found that cloud optical properties depend mainly on equivalent radius throughout the solar and terrestrial spectrum and are insensitive to the details of the droplet size distribution, such as shape, skewness, width, and modality (single or bimodal). This suggests that in cloud models aimed at predicting the evolution of cloud microphysics with climate change, it is sufficient to determine the third and the second moments of the size distribution (the ratio of which determines the equivalent radius). It also implies that measurements of the cloud liquid water content and the extinction coefficient are sufficient to determine cloud optical properties experimentally (i.e., measuring the complete droplet size distribution is not required). Based on the detailed calculations, the optical properties are parameterized as a function of cloud liquid water path and equivalent cloud droplet radius by using a nonlinear least-squares fitting. The parameterization is performed separately for the radius ranges 2.5-12 μm, 12-30 μm, and 30-60 μm. Cloud heating and cooling rates are computed from this parameterization by using a comprehensive radiation model. Comparison with similar results obtained from exact Mie scattering calculations shows that this parameterization yields very accurate results and that it is several thousand times faster. This parameterization separates the dependence of cloud optical properties on droplet size and liquid water content, and is suitable for inclusion in climate models. 22 refs., 7 figs., 6 tabs.
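The nonlinear least-squares step in such a parameterization can be illustrated with SciPy's `curve_fit`. Both the "Mie" data and the power-law form `a*r**b + c` below are stand-ins invented for this sketch, not the coefficients or data of the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for exact Mie results: optical depth per unit liquid
# water path versus equivalent radius, on the small-droplet branch.
re = np.linspace(2.5, 12.0, 40)          # equivalent radius, micron
tau_per_lwp = 1.5e3 / re + 2.0           # invented "Mie" values

def model(r, a, b, c):
    # Power-law fitting form: tau / LWP = a * r**b + c
    return a * r ** b + c

p, _ = curve_fit(model, re, tau_per_lwp, p0=(1.5e3, -1.0, 0.0))
fit = model(re, *p)
rel_err = np.abs(fit / tau_per_lwp - 1).max()
```

Fitting separate coefficient sets per radius range, as the abstract describes, is then just three such fits on the corresponding data subsets.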
Remote balance weighs accurately amid high radiation
NASA Technical Reports Server (NTRS)
Eggenberger, D. N.; Shuck, A. B.
1969-01-01
Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1982-01-01
The circulation of the middle atmosphere of the earth (15-90 km) is driven by the unequal distribution of net radiative heating. Calculations have shown that local radiative heating is nearly balanced by radiative cooling throughout parts of the stratosphere and mesosphere. The 15 micrometer band of CO2 is the dominant component of the infrared cooling. The present investigation is concerned with an algorithm for computing this cooling. The algorithm was designed for the semispectral primitive equation model of the stratosphere and mesosphere described by Holton and Wehrbein (1980). The model consists of 16 layers, each nominally 5 km thick, between the base of the stratosphere at 100 mb (approximately 16 km) and the base of the thermosphere (approximately 96 km). The considered algorithm provides a convenient means of incorporating cooling due to CO2 into dynamical models of the middle atmosphere.
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
Anatomically accurate individual face modeling.
Zhang, Yu; Prakash, Edmond C; Sung, Eric
2003-01-01
This paper presents a new 3D face model of a specific person constructed from the anatomical perspective. By exploiting the laser range data, a 3D facial mesh precisely representing the skin geometry is reconstructed. Based on the geometric facial mesh, we develop a deformable multi-layer skin model. It takes into account the nonlinear stress-strain relationship and dynamically simulates the non-homogenous behavior of the real skin. The face model also incorporates a set of anatomically-motivated facial muscle actuators and underlying skull structure. Lagrangian mechanics governs the facial motion dynamics, dictating the dynamic deformation of facial skin in response to the muscle contraction. PMID:15455936
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the imprecision of the optical model on top of modeling resist development. Optical model imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
The Greenhouse Effect - Determination From Accurate Surface Longwave Radiation Measurements
NASA Astrophysics Data System (ADS)
Philipona, R.
Longwave radiation measurements have been drastically improved in recent years. Uncertainty levels down to 2 Wm-2 are realistic and are achieved during long-term longwave irradiance measurements. Longwave downward irradiance measurements, together with temperature and humidity measurements at the station, are used to separate clear-sky from cloudy-sky situations. Longwave net radiation separated between clear-sky and all-sky situations allows the longwave cloud radiative forcing at the station to be determined. For clear-sky situations, radiative transfer models demonstrate a linear relation between longwave downward radiation and the greenhouse radiative flux. Clear-sky longwave radiation, temperature, and humidity for different atmospheres and different altitudes were modeled with the MODTRAN radiative transfer code and compared to longwave radiation, temperature, and humidity measured at 4 radiation stations of the Alpine Surface Radiation Budget (ASRB) network at similar altitudes and with corresponding atmospheres. At the 11 ASRB stations, the clear-sky greenhouse effect was determined by using clear-sky longwave downward measurements and MODTRAN model calculations. The all-sky greenhouse effect was determined by adding the longwave cloud radiative forcing to the clear-sky greenhouse radiative flux. The altitude dependence of annual and seasonal mean values of the greenhouse effect will be shown for the altitude range of 400 to 3600 meters a.s.l. in the Alps.
ERIC Educational Resources Information Center
James, W. G. G.
1970-01-01
Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy, and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric, and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.
GORRAM: Introducing accurate operational-speed radiative transfer Monte Carlo solvers
NASA Astrophysics Data System (ADS)
Buras-Schnell, Robert; Schnell, Franziska; Buras, Allan
2016-06-01
We present a new approach for solving the radiative transfer equation in horizontally homogeneous atmospheres. The motivation was to develop a fast yet accurate radiative transfer solver to be used in operational retrieval algorithms for next-generation meteorological satellites. The core component is the program GORRAM (Generator Of Really Rapid Accurate Monte-Carlo), which generates solvers individually optimized for the intended task. These solvers consist of a Monte Carlo model capable of path recycling and a representative set of photon paths. The latter is generated using the simulated annealing technique. GORRAM automatically takes advantage of limitations on the variability of the atmosphere. Due to this optimization, the number of photon paths necessary for accurate results can be reduced by several orders of magnitude. For the shown example of a forward model intended for an aerosol satellite retrieval, comparison with an exact yet slow solver shows that a precision of better than 1% can be achieved with only 36 photons. The computational time is at least an order of magnitude faster than that of any other type of radiative transfer solver. Only the lookup table approach often used in satellite retrieval is faster, but it suffers from limited accuracy. This makes GORRAM-generated solvers an eligible candidate as a forward model in operational-speed retrieval algorithms and data assimilation applications. GORRAM also has the potential to create fast solvers of other integrable equations.
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue method on an ordinary personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment in comparison with a past criticality accident and a hypothesized exposure. PMID:17510203
Accurate astronomical atmospheric dispersion models in ZEMAX
NASA Astrophysics Data System (ADS)
Spanò, P.
2014-07-01
ZEMAX provides a standard built-in atmospheric model to simulate atmospheric refraction and dispersion. This model has been compared with others to assess its intrinsic accuracy, which is critical for very demanding applications such as ADCs for AO-assisted extremely large telescopes. A revised simple model, based on updated published data on air refractivity, is proposed using the "Gradient 5" surface of ZEMAX. At large zenith angles (65 deg), discrepancies of up to 100 mas in the differential refraction are expected near the UV atmospheric transmission cutoff. When high-accuracy modeling is required, the latter model should be preferred.
Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
Radiation environment models and the atmospheric cutoff
NASA Technical Reports Server (NTRS)
Konradi, Andrei; Hardy, Alva C.; Atwell, William
1987-01-01
The limitations of radiation environment models are examined by applying the model to the South Atlantic anomaly (SAA). The local magnetic-field-intensity (in gauss) and McIlwain (1961) drift-shell-parameter contours in the SAA are analyzed. It is noted that it is necessary to decouple the atmospheric absorption effects from the trapped radiation models in order to obtain accurate radiation dose predictions. Two methods for obtaining more accurate results are proposed.
Modeling the Space Radiation Environment
NASA Technical Reports Server (NTRS)
Xapsos, Michael A.
2006-01-01
There has been a renaissance of interest in space radiation environment modeling. This has been fueled by the growing need to replace the longtime-standard AP-8 and AE-8 trapped particle models, the interplanetary exploration initiative, the modern satellite instrumentation that has led to unprecedented measurement accuracy, and the pervasive use of Commercial off the Shelf (COTS) microelectronics that require more accurate predictive capabilities. The objective of this viewgraph presentation was to provide basic understanding of the components of the space radiation environment and their variations, review traditional radiation effects application models, and present recent developments.
ACCURATE TEMPERATURE MEASUREMENTS IN A NATURALLY-ASPIRATED RADIATION SHIELD
Kurzeja, R.
2009-09-09
Experiments and calculations were conducted with a 0.13 mm fine-wire thermocouple within a naturally-aspirated Gill radiation shield to assess and improve the accuracy of air temperature measurements without the use of mechanical aspiration, wind speed, or radiation measurements. It was found that this thermocouple measured the air temperature with root-mean-square errors of 0.35 K within the Gill shield without correction. A linear temperature correction was evaluated based on the difference between the interior plate and thermocouple temperatures. This correction was found to be relatively insensitive to shield design and yielded an error of 0.16 K for combined day and night observations. The correction was reliable in the daytime, when the wind speed usually exceeds 1 m s-1, but occasionally performed poorly at night during very light winds. Inspection of the standard deviation in the thermocouple wire temperature identified these periods but did not unambiguously locate the most serious events. However, estimates of sensor accuracy during these periods are complicated by the much larger sampling volume of the mechanically-aspirated sensor compared with the naturally-aspirated sensor and by the presence of significant near-surface temperature gradients. The root-mean-square errors therefore are upper limits to the aspiration error, since they include intrinsic sensor differences and intermittent volume sampling differences.
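A linear correction of this kind, fit on the plate-minus-thermocouple temperature difference against a reference aspirated sensor, can be sketched with synthetic data. All numbers below are invented for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic example (assumption): radiative loading warms both the shield
# plate and the thermocouple; the plate excess serves as a proxy signal.
n = 200
t_air = 290 + 10 * rng.random(n)            # reference (aspirated) temperature, K
solar = 800 * rng.random(n)                 # radiative loading proxy, W m-2
t_tc = t_air + 0.0005 * solar + 0.05 * rng.standard_normal(n)
t_plate = t_air + 0.0020 * solar + 0.05 * rng.standard_normal(n)

# Fit T_air ~= T_tc - k * (T_plate - T_tc): least-squares for the single slope k.
d = t_plate - t_tc
k = np.sum(d * (t_tc - t_air)) / np.sum(d * d)
t_corr = t_tc - k * d

rmse_raw = np.sqrt(np.mean((t_tc - t_air) ** 2))
rmse_corr = np.sqrt(np.mean((t_corr - t_air) ** 2))
```

The single fitted slope is what makes the scheme usable without wind speed or radiation measurements at deployment time.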
Phase-function normalization for accurate analysis of ultrafast collimated radiative transfer.
Hunter, Brian; Guo, Zhixiong
2012-04-20
The scattering of radiation from collimated irradiation is accurately treated via normalization of phase function. This approach is applicable to any numerical method with directional discretization. In this study it is applied to the transient discrete-ordinates method for ultrafast collimated radiative transfer analysis in turbid media. A technique recently developed by the authors, which conserves a phase-function asymmetry factor as well as scattered energy for the Henyey-Greenstein phase function in steady-state diffuse radiative transfer analysis, is applied to the general Legendre scattering phase function in ultrafast collimated radiative transfer. Heat flux profiles in a model tissue cylinder are generated for various phase functions and compared to those generated when normalization of the collimated phase function is neglected. Energy deposition in the medium is also investigated. Lack of conservation of scattered energy and the asymmetry factor for the collimated scattering phase function causes overpredictions in both heat flux and energy deposition for highly anisotropic scattering media. In addition, a discussion is presented to clarify the time-dependent formulation of divergence of radiative heat flux. PMID:22534933
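The underlying problem, that a directionally discretized phase function fails to conserve scattered energy for highly anisotropic scattering, and the simplest normalization fix can be sketched as follows. Note this minimal rescaling conserves energy only; the authors' technique additionally conserves the asymmetry factor, which requires a more elaborate per-direction adjustment:

```python
import numpy as np

def hg(mu, g):
    """Henyey-Greenstein phase function of the scattering-angle cosine mu."""
    return (1 - g ** 2) / (1 + g ** 2 - 2 * g * mu) ** 1.5

# Gauss-Legendre quadrature over the scattering-angle cosine (a 1-D sketch;
# a discrete-ordinates code applies this per pair of ordinates).
n, g = 16, 0.9
mu, w = np.polynomial.legendre.leggauss(n)

p = hg(mu, g)
energy_raw = 0.5 * np.sum(w * p)      # should be exactly 1; discretization errs
asym_raw = 0.5 * np.sum(w * mu * p)   # should equal g; also errs

# Simplest normalization: rescale so scattered energy is conserved exactly.
p_norm = p / energy_raw
energy = 0.5 * np.sum(w * p_norm)
```

For strongly forward-peaked scattering (g near 1), `energy_raw` departs noticeably from unity, which is exactly the overprediction mechanism the abstract describes for heat flux and energy deposition.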
Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay
2012-07-01
Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes, such as hypofractionated breast therapy, have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against measurements by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS XiO treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between absorbed doses measured by TLD and doses calculated by the TPS (p > 0.05, one-tailed). Dose accuracy of up to 2.21% was found. The deviations from the calculated absorbed doses were larger overall (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess the accuracy of surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.
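The TLD-versus-TPS comparison reduces to a percent-deviation calculation; a trivial sketch (the dose values here are hypothetical, not taken from the study):

```python
def percent_deviation(measured, calculated):
    """Percent deviation of a TLD-measured dose from the TPS-calculated dose."""
    return 100.0 * (measured - calculated) / calculated

# Hypothetical point doses in cGy for one TLD position
print(round(percent_deviation(204.5, 200.0), 2))  # 2.25
```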
Chandra Radiation Environment Modeling
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Blackwell, W. C.
2003-01-01
CRMFLX (Chandra Radiation Model of ion FluX) is a radiation environment risk mitigation tool for use as a decision aid in planning the operating times for Chandra's Advanced CCD Imaging Spectrometer (ACIS) detector. Accurate prediction of the proton flux environment at energies of 100-200 keV is needed in order to protect the ACIS detector against proton degradation. Unfortunately, protons of this energy are abundant in the region of space in which Chandra must operate, and on-board particle detectors do not measure proton flux levels in the required energy range. This presentation will describe the plasma environment data analysis and modeling basis of the CRMFLX engineering environment model developed to predict the proton flux in the solar wind, magnetosheath, and magnetosphere phenomenological regions of geospace. The recently released CRMFLX Version 2 implementation includes an algorithm that propagates flux from an observation location to other regions of the magnetosphere based on convective E×B and ∇B-curvature particle drift motions. This technique has the advantage of more completely filling out the database and makes maximum use of the limited data obtained during high Kp periods or in areas of the magnetosphere with poor satellite flux measurement coverage.
RRTMGP: A fast and accurate radiation code for the next decade
NASA Astrophysics Data System (ADS)
Mlawer, E. J.; Pincus, R.; Wehe, A.; Delamere, J.
2015-12-01
Atmospheric radiative processes are key drivers of the Earth's climate and must be accurately represented in global circulation models (GCMs) to allow faithful simulations of the planet's past, present, and future. The radiation code RRTMG is widely utilized by global modeling centers for both climate and weather prediction, but it has become increasingly out of date. The code's structure is not well suited to the current generation of computer architectures, and its stored absorption coefficients are not consistent with the most recent spectroscopic information. We are developing a new broadband radiation code for the current generation of computational architectures. This code, called RRTMGP, will be a completely restructured and modernized version of RRTMG. The new code preserves the strengths of the existing RRTMG parameterization, especially the high accuracy of the k-distribution treatment of absorption by gases, but the entire code is being rewritten to provide highly efficient computation across a range of architectures. Our redesign includes refactoring the code into discrete kernels corresponding to fundamental computational elements (e.g., gas optics), optimizing the code for operating on multiple columns in parallel, simplifying the subroutine interface, revisiting the existing gas optics interpolation scheme to reduce branching, and adding flexibility with respect to run-time choices of streams, the need to consider scattering, aerosol and cloud optics, etc. The result of the proposed development will be a single, well-supported and well-validated code amenable to optimization across a wide range of platforms. Our main emphasis is on highly parallel platforms, including Graphical Processing Units (GPUs) and Many-Integrated-Core (MIC) processors, which experience shows can accelerate broadband radiation calculations by as much as a factor of fifty. RRTMGP will provide highly efficient and accurate radiative flux calculations for coupled global
Water wave model with accurate dispersion and vertical vorticity
NASA Astrophysics Data System (ADS)
Bokhove, Onno
2010-05-01
Cotter and Bokhove (Journal of Engineering Mathematics, 2010) derived a variational water wave model with accurate dispersion and vertical vorticity. In one limit, it leads to Luke's variational principle for potential-flow water waves. In another limit, it leads to the depth-averaged shallow water equations including vertical vorticity. Here, the focus is on the Hamiltonian formulation of the variational model and its boundary conditions.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant-volume limitations to the muscles and constant-geometry limitations to the tendons.
NASA Astrophysics Data System (ADS)
Kasai, Hidetaka; Nishibori, Eiji
2016-04-01
In recent years, multiple synchrotron radiation (SR) powder x-ray diffraction profiles have been successfully applied to advanced structural studies, such as accurate charge density studies and structure determination from powder diffraction. The results are presented with several examples, and capabilities and future prospects are discussed using state-of-the-art powder diffraction data.
NASA Astrophysics Data System (ADS)
Maloney, James G.; Smith, Glenn S.; Scott, Waymond R., Jr.
1990-07-01
Two antennas are considered: a cylindrical monopole and a conical monopole. Both are driven through an image plane from a coaxial transmission line. Each of these antennas corresponds to a well-posed theoretical electromagnetic boundary value problem and a realizable experimental model. These antennas are analyzed by a straightforward application of the time-domain finite-difference method. The computed results for these antennas are shown to be in excellent agreement with accurate experimental measurements in both the time domain and the frequency domain. The graphical displays presented for the transient near-zone and far-zone radiation from these antennas provide physical insight into the radiation process.
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
On the importance of having accurate data for astrophysical modelling
NASA Astrophysics Data System (ADS)
Lique, Francois
2016-06-01
The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation at wavelengths ranging from the far infrared to the sub-millimeter, with previously unmatched spatial and spectral resolutions. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.
Accurate method of modeling cluster scaling relations in modified gravity
NASA Astrophysics Data System (ADS)
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications for constraining gravity using cluster surveys.
Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations
Baglietto, Emilio
2006-07-01
An improved anisotropic eddy viscosity model has been developed for accurate prediction of the thermal-hydraulic performance of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to reproduce anisotropic phenomena, combined with an optimized low-Reynolds-number formulation, based on Direct Numerical Simulation (DNS) data, to produce correct damping of the turbulent viscosity in the near-wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence-driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very-low-scale secondary motion is responsible for increased turbulence transport, which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare-bundle test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally, the applicability of the model to practical bundle calculations is evaluated through its application in high-Reynolds form on coarse grids, with excellent results. (author)
Saturn Radiation (SATRAD) Model
NASA Technical Reports Server (NTRS)
Garrett, H. B.; Ratliff, J. M.; Evans, R. W.
2005-01-01
The Saturnian radiation belts have not received as much attention as the Jovian radiation belts because they are not nearly as intense; the famous Saturnian particle rings tend to deplete the belts near where their peak would occur. As a result, there has not been a systematic development of engineering models of the Saturnian radiation environment for mission design. A primary exception is that of Divine (1990). That study used published data from several charged-particle experiments aboard the Pioneer 11, Voyager 1, and Voyager 2 spacecraft during their flybys at Saturn to generate numerical models for the electron and proton radiation belts between 2.3 and 13 Saturn radii. The Divine Saturn radiation model described the electron distributions at energies between 0.04 and 10 MeV and the proton distributions at energies between 0.14 and 80 MeV. The model was intended to predict particle intensity, flux, and fluence for the Cassini orbiter. Divine carried out hand calculations using the model but never formally developed a computer program that could be used for general mission analyses. This report seeks to fill that void by formally developing a FORTRAN version of the model that can be used as a computer design tool for missions to Saturn that require estimates of the radiation environment around the planet. The results of that effort and the program listings are presented here, along with comparisons with the original estimates carried out by Divine. In addition, Pioneer and Voyager data were scanned in from the original references and compared with the FORTRAN model's predictions. The results were statistically analyzed in a manner consistent with Divine's approach to provide estimates of the ability of the model to reproduce the original data. Results of a formal review of the model by a panel of experts are also presented. Their recommendations for further tests, analyses, and extensions to the model are discussed.
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
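To illustrate the kind of exactness claimed above, the hydrostatic equation can be integrated in closed form when the layer has constant potential temperature; a sketch under that isentropic assumption, with standard dry-air constants:

```python
# Closed-form hydrostatic thickness of an isentropic layer: integrating
# dz = -(R*T)/(g*p) dp with T = theta * (p/P0)**KAPPA gives
# dz = cp * theta * (Exner_bottom - Exner_top) / g.
R = 287.05    # J kg^-1 K^-1, gas constant of dry air
CP = 1004.0   # J kg^-1 K^-1, specific heat at constant pressure
KAPPA = R / CP
G = 9.80665   # m s^-2
P0 = 1.0e5    # Pa, reference pressure

def thickness_constant_theta(theta, p_bottom, p_top):
    """Layer thickness (m) for constant potential temperature theta (K)."""
    exner_bottom = (p_bottom / P0) ** KAPPA
    exner_top = (p_top / P0) ** KAPPA
    return CP * theta * (exner_bottom - exner_top) / G

# 1000-850 hPa layer at theta = 300 K: roughly 1.4 km thick
print(thickness_constant_theta(300.0, 1.0e5, 8.5e4))
```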
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
An accurate model potential for alkali neon systems.
Zanuttini, D; Jacquet, E; Giglio, E; Douady, J; Gervais, B
2009-12-01
We present a detailed investigation of the ground and lowest excited states of M-Ne dimers, for M = Li, Na, and K. We show that the potential energy curves of these van der Waals dimers can be obtained accurately by considering the alkali-neon systems as one-electron systems. Following previous authors, the model describes the evolution of the alkali valence electron in the combined potentials of the alkali and neon cores by means of core polarization pseudopotentials. The key parameter for an accurate model is the M(+)-Ne potential energy curve, which was obtained by means of an ab initio CCSD(T) calculation using a large basis set. For each M-Ne dimer, a systematic comparison with ab initio computation of the potential energy curves for the X, A, and B states shows the remarkable accuracy of the model. The vibrational analysis and the comparison with existing experimental data strengthen this conclusion and allow for a precise assignment of the vibrational levels. PMID:19968334
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model, and the Menter SST model. For the k-ω model and the SST model, the compressibility correction, pressure dilatation, and low-Reynolds-number correction were considered. The influence of these corrections on flow properties is discussed by comparison with results obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in the prediction of heat transfer across a range of hypersonic flows, with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.
Shumway, R.W.
1987-10-01
The ATHENA computer program has many features that make it desirable to use as a space reactor evaluation tool. One of the missing features was a surface-to-surface thermal radiation model. A model was developed that allows any of the regular ATHENA heat slabs to radiate to any other heat slab. The view factors and surface emissivities must be specified by the user. To verify that the model was properly accounting for radiant energy transfer, two different types of test calculations were performed. Both calculations gave excellent results. The updates have been used on both the INEL CDC-176 and the Livermore Cray. 7 refs., 2 figs., 6 tabs.
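The kind of user-specified exchange described above can be sketched with the standard two-gray-surface resistance network (view factor and emissivities supplied by the user); this is a generic radiation formula, not ATHENA's actual implementation:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exchange(t1, t2, a1, a2, f12, eps1, eps2):
    """Net radiant heat transfer (W) from gray surface 1 to gray surface 2
    using the two-surface enclosure resistance network."""
    r_total = ((1.0 - eps1) / (eps1 * a1)
               + 1.0 / (a1 * f12)
               + (1.0 - eps2) / (eps2 * a2))
    return SIGMA * (t1**4 - t2**4) / r_total

# Hypothetical slabs: 1 m^2 each, F12 = 1, emissivity 0.8, 600 K vs 400 K
print(round(radiant_exchange(600.0, 400.0, 1.0, 1.0, 1.0, 0.8, 0.8)))  # 3931
```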
Radiation risk estimation models
Hoel, D.G.
1987-11-01
Cancer risk models and their relationship to ionizing radiation are discussed. There are many model assumptions and risk factors that have a large quantitative impact on the cancer risk estimates. Other health end points such as mental retardation may be an even more serious risk than cancer for those with in utero exposures. 8 references.
NASA Astrophysics Data System (ADS)
Smirnova, Olga
Biologically motivated mathematical models, which describe the dynamics of the major hematopoietic lineages (the thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems) in acutely/chronically irradiated humans, are developed. These models are implemented as systems of nonlinear differential equations whose variables and constant parameters have clear biological meaning. It is shown that the developed models are capable of reproducing clinical data on the dynamics of these systems in humans exposed to acute radiation as a result of incidents and accidents, as well as in humans exposed to low-level chronic radiation. Moreover, the averaged value of the "lethal" dose rates of chronic irradiation evaluated within the models of these four major hematopoietic lineages coincides with the real minimal dose rate of lethal chronic irradiation. The demonstrated ability of the models of the human thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems to predict the dynamical response of these systems to acute/chronic irradiation over wide ranges of doses and dose rates implies that these mathematical models form a universal tool for the investigation and prediction of the dynamics of the major human hematopoietic lineages under a wide variety of irradiation scenarios. In particular, these models could be applied to radiation risk assessment for the health of astronauts exposed to space radiation during long-term space missions, such as voyages to Mars or lunar colonies, as well as for the health of people exposed to acute/chronic irradiation due to environmental radiological events.
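As a toy illustration (emphatically not the paper's model), the general shape of such nonlinear dynamics can be sketched with a single compartment whose renewal competes with a dose-rate-driven kill term; all parameter values here are hypothetical:

```python
# Toy one-compartment sketch: dn/dt = alpha*n*(1 - n) - k*D(t)*n,
# with n normalized so the unirradiated steady state is n = 1.
# alpha (renewal rate) and k (radiosensitivity) are hypothetical.

def simulate(days=60.0, dt=0.01, alpha=0.05, k=0.5):
    """Forward-Euler integration; returns the trajectory of n."""
    n, t, history = 1.0, 0.0, []
    while t < days:
        dose_rate = 2.0 if t < 1.0 else 0.0   # acute: 2 Gy/day for one day
        n += dt * (alpha * n * (1.0 - n) - k * dose_rate * n)
        t += dt
        history.append(n)
    return history

traj = simulate()
# Depression after the acute dose, followed by slow recovery toward n = 1
print(min(traj) < 0.5, traj[-1] > 0.8)  # True True
```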
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer, and the superficial musculo-aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, the eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model, and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data. PMID:26355331
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
NASA Astrophysics Data System (ADS)
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High-efficiency, high-concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site-specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time-sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and the interactions between system-level design decisions and the inverter. We also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
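A minimal time-step sketch of the inverter's role in such a model (clipping at the DC rating plus a load-dependent efficiency); the efficiency parameters are hypothetical, not from any vendor datasheet:

```python
def inverter_ac_power(p_dc, p_rated, eff_peak=0.96, p_selfuse=0.01):
    """AC output (kW) for one time step: clip DC input at the inverter
    rating, then subtract a constant self-consumption term (fraction of
    rating), which degrades efficiency at low load."""
    p_dc = min(p_dc, p_rated)               # clipping at the rating
    return max(eff_peak * p_dc - p_selfuse * p_rated, 0.0)

# Hypothetical hourly DC power (kW) into a 100 kW-rated inverter
dc_series = [0.0, 20.0, 60.0, 105.0, 80.0, 10.0]
energy_kwh = sum(inverter_ac_power(p, 100.0) for p in dc_series)  # 1 h steps
print(round(energy_kwh, 1))  # 254.2 (note the clipping loss at 105 kW)
```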
Accurate, low-cost 3D-models of gullies
NASA Astrophysics Data System (ADS)
Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine
2015-04-01
Soil erosion is a widespread problem in arid and semi-arid areas, and its most severe form is gully erosion. Gullies often cut into agricultural farmland and can render an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in southern Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded series of Full HD videos at 25 fps. Afterwards, we used Structure from Motion (SfM) to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Neighboring pixels of a blurry image tend to have similar color values, so we used a MATLAB script to compare the derivatives of the images: for images of similar objects, the higher the sum of the derivatives, the sharper the image. MATLAB subdivides the video into image intervals and, from each interval, the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single frames; the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. MeshLab was then used to build a surface from it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
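The frame-selection step described above can be sketched as follows. This is an illustrative Python rendering of the idea (the authors used a MATLAB script); the gradient-magnitude sharpness measure and the function names are assumptions, not the original code:

```python
import numpy as np

def sharpness(img):
    """Sum of absolute finite differences; for similar scenes,
    a higher value indicates a sharper frame."""
    img = img.astype(float)
    gx = np.abs(np.diff(img, axis=1)).sum()
    gy = np.abs(np.diff(img, axis=0)).sum()
    return gx + gy

def select_sharpest(frames, interval=20):
    """From each block of `interval` consecutive frames,
    keep only the frame with the highest sharpness score."""
    selected = []
    for start in range(0, len(frames), interval):
        block = frames[start:start + interval]
        selected.append(max(block, key=sharpness))
    return selected
```

With a 20 min video at 25 fps (30,000 frames) and `interval=20`, this yields the 1500 frames mentioned above.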
Status of LDEF radiation modeling
NASA Technical Reports Server (NTRS)
Watts, John W.; Armstrong, T. W.; Colborn, B. L.
1995-01-01
The current status of model predictions and comparisons with LDEF radiation dosimetry measurements is summarized, with emphasis on major results obtained in evaluating the uncertainties of present radiation environment models. The consistency of results and conclusions obtained from model comparisons with different sets of LDEF radiation data (dose, activation, fluence, LET spectra) is discussed. Examples are given where LDEF radiation data and modeling results can be utilized to provide improved radiation assessments for planned LEO missions (e.g., Space Station).
Towards Accurate Molecular Modeling of Plastic Bonded Explosives
NASA Astrophysics Data System (ADS)
Chantawansri, T. L.; Andzelm, J.; Taylor, D.; Byrd, E.; Rice, B.
2010-03-01
There is substantial interest in identifying the controlling factors that influence the susceptibility of polymer bonded explosives (PBXs) to accidental initiation. Numerous Molecular Dynamics (MD) simulations of PBXs using the COMPASS force field have been reported in recent years, where the validity of the force field in modeling the solid EM fill has been judged solely on its ability to reproduce lattice parameters, which is an insufficient metric. Performance of the COMPASS force field in modeling EMs and the polymeric binder has been assessed by calculating structural, thermal, and mechanical properties, where only fair agreement with experimental data is obtained. We performed MD simulations using the COMPASS force field for the polymer binder hydroxyl-terminated polybutadiene and five EMs: cyclotrimethylenetrinitramine, 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane, 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane, 2,4,6-trinitro-1,3,5-benzenetriamine, and pentaerythritol tetranitrate. Predicted EM crystallographic and molecular structural parameters, as well as calculated properties for the binder, will be compared with experimental results for different simulation conditions. We also present novel simulation protocols, which improve agreement between experimental and computational results, thus leading to the accurate modeling of PBXs.
An accurate and simple quantum model for liquid water.
Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A
2006-11-14
The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single-molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics
A novel approach for accurate radiative transfer in cosmological hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Petkova, Margarita; Springel, Volker
2011-08-01
accurately deal with non-equilibrium effects. We discuss several tests of the new method, including shadowing configurations in two and three dimensions, ionized sphere expansion in static and dynamic density fields and the ionization of a cosmological density field. The tests agree favourably with analytical expectations and results based on other numerical radiative transfer approximations.
The dynamic radiation environment assimilation model (DREAM)
Reeves, Geoffrey D; Koller, Josef; Tokar, Robert L; Chen, Yue; Henderson, Michael G; Friedel, Reiner H
2010-01-01
The Dynamic Radiation Environment Assimilation Model (DREAM) is a 3-year effort sponsored by the US Department of Energy to provide global, retrospective, or real-time specification of the natural and potential nuclear radiation environments. The DREAM model uses Kalman filtering techniques that combine the strengths of new physical models of the radiation belts with electron observations from long-term satellite systems such as GPS and geosynchronous systems. DREAM includes a physics model for the production and long-term evolution of artificial radiation belts from high altitude nuclear explosions. DREAM has been validated against satellites in arbitrary orbits and consistently produces more accurate results than existing models. Tools for user-specific applications and graphical displays are in beta testing and a real-time version of DREAM has been in continuous operation since November 2009.
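DREAM's core assimilation step combines a physics-model forecast with satellite observations via Kalman filtering. A minimal one-dimensional sketch of that update, with purely illustrative numbers and function names (the operational model is far more elaborate):

```python
def kalman_update(x_prior, p_prior, z, r):
    """One scalar Kalman analysis step.

    x_prior, p_prior: model forecast and its error variance
    z, r:             observation and its error variance
    Returns the analysis estimate and its (reduced) variance.
    """
    k = p_prior / (p_prior + r)           # Kalman gain: weight given to the observation
    x_post = x_prior + k * (z - x_prior)  # blend forecast and observation
    p_post = (1.0 - k) * p_prior          # analysis variance is always <= forecast variance
    return x_post, p_post
```

With equally trusted forecast and observation (`p_prior == r`), the analysis is simply their average, and the variance halves.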
Application of Improved Radiation Modeling to General Circulation Models
Michael J Iacono
2011-04-07
This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.
Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M
2014-12-01
Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed a Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit; more precisely, the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization currents obtained with simulations were compared against experimental measurements; further tests were carried out, such as a comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with discrepancies lower than 4% for all the tested parameters. This shows that accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first-instance determination of new calibration factors for non-standard radionuclides and for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regard to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the container configuration. PMID:25195174
Accurate Accumulation of Dose for Improved Understanding of Radiation Effects in Normal Tissue
Jaffray, David A.; Lindsay, Patricia E.; Brock, Kristy K.; Deasy, Joseph O.; Tomé, W. A.
2013-01-01
The actual distribution of radiation dose accumulated in normal tissues over the complete course of radiation therapy is, in general, poorly quantified. Differences in the patient anatomy between planning and treatment can occur gradually (e.g., tumor regression, resolution of edema) or relatively rapidly (e.g., bladder filling, breathing motion) and these undermine the accuracy of the planned dose distribution. Current efforts to maximize the therapeutic ratio require models that relate the true accumulated dose to clinical outcome. The needed accuracy can only be achieved through the development of robust methods that track the accumulation of dose within the various tissues in the body. Specific needs include the development of segmentation methods, tissue-mapping algorithms, uncertainty estimation, optimal schedules for image-based monitoring, and the development of informatics tools to support subsequent analysis. These developments will not only improve radiation outcomes modeling but will address the technical demands of the adaptive radiotherapy paradigm. The next 5 years need to see academia and industry bring these tools into the hands of the clinician and the clinical scientist. PMID:20171508
Kouznetsov, Alexei; Tambasco, Mauro
2011-03-15
Purpose: To develop and validate a fast and accurate method that uses computed tomography (CT) voxel data to estimate absorbed radiation dose at a point of interest (POI) or series of POIs from a kilovoltage (kV) imaging procedure. Methods: The authors developed an approach that computes absorbed radiation dose at a POI by numerically evaluating the linear Boltzmann transport equation (LBTE) using a combination of deterministic and Monte Carlo (MC) techniques. This hybrid approach accounts for material heterogeneity with a level of accuracy comparable to the general MC algorithms. Also, the dose at a POI is computed within seconds on an Intel Core i7 CPU 920 2.67 GHz quad-core architecture, and the calculations are performed using CT voxel data, making the method flexible and feasible for clinical applications. To validate the method, the authors constructed and acquired a CT scan of a heterogeneous block phantom consisting of a succession of slab densities: tissue (1.29 cm), bone (2.42 cm), lung (4.84 cm), bone (1.37 cm), and tissue (4.84 cm). Using the hybrid transport method, the authors computed the absorbed doses at a set of points along the central axis and x direction of the phantom for an isotropic 125 kVp photon spectral point source located along the central axis 92.7 cm above the phantom surface. The accuracy of the results was compared to those computed with MCNP, which was cross-validated with EGSnrc and served as the benchmark for validation. Results: The error in the depth dose ranged from -1.45% to +1.39% with a mean and standard deviation of -0.12% and 0.66%, respectively. The error in the x profile ranged from -1.3% to +0.9%, with a mean and standard deviation of -0.3% and 0.5%, respectively. The number of photons required to achieve these results was 1×10^6. Conclusions: The voxel-based hybrid method evaluates the LBTE rapidly and accurately to estimate the absorbed x-ray dose at any POI or series of POIs from a kV imaging procedure.
Applying an accurate spherical model to gamma-ray burst afterglow observations
NASA Astrophysics Data System (ADS)
Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.
2013-05-01
We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r^-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.
Density Functional Theory Models for Radiation Damage
NASA Astrophysics Data System (ADS)
Dudarev, S. L.
2013-07-01
Density functional theory models developed over the past decade provide unique information about the structure of nanoscale defects produced by irradiation and about the nature of short-range interaction between radiation defects, clustering of defects, and their migration pathways. These ab initio models, involving no experimental input parameters, appear to be as quantitatively accurate and informative as the most advanced experimental techniques developed for the observation of radiation damage phenomena. Density functional theory models have effectively created a new paradigm for the scientific investigation and assessment of radiation damage effects, offering new insight into the origin of temperature- and dose-dependent response of materials to irradiation, a problem of pivotal significance for applications.
New process model proves accurate in tests on catalytic reformer
Aguilar-Rodriguez, E.; Ancheyta-Juarez, J.
1994-07-25
A mathematical model has been devised to represent the process that takes place in a fixed-bed, tubular, adiabatic catalytic reforming reactor. Since its development, the model has been applied to the simulation of a commercial semiregenerative reformer. The development of mass and energy balances for this reformer led to a model that predicts both concentration and temperature profiles along the reactor. A comparison of the model's results with experimental data illustrates its accuracy at predicting product profiles. Simple steps show how the model can be applied to simulate any fixed-bed catalytic reformer.
Coupling Efforts to the Accurate and Efficient Tsunami Modelling System
NASA Astrophysics Data System (ADS)
Son, S.
2015-12-01
In the present study, we couple two different types of tsunami models: the nondispersive shallow-water model in characteristic form (MOST ver. 4) and the dispersive Boussinesq model in non-characteristic form (Son et al. (2011)), in an attempt to improve modelling accuracy and efficiency. Since each model deals with a different type of primary variable, additional care in matching the boundary condition is required. Model coupling and integration is achieved using an absorbing-generating boundary condition developed by Van Dongeren and Svendsen (1997). Characteristic variables (i.e., Riemann invariants) in MOST are converted to non-characteristic variables for the Boussinesq solver without any loss of physical consistency. The established modelling system has been validated on cases ranging from typical test problems to realistic tsunami events, and the simulated results reveal good performance. Since the coupled system offers flexibility during implementation, considerable gains in efficiency and accuracy are expected through spot-focused application of the Boussinesq model within the overall domain of tsunami propagation.
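The variable conversion at the coupling boundary can be illustrated with the standard 1D shallow-water Riemann invariants, R± = u ± 2√(gh). This sketch only shows the textbook characteristic-to-primitive mapping, not the actual MOST/Boussinesq interface code, and all names are illustrative:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def invariants_from_primitives(h, u):
    """Characteristic variables R+ and R- from depth h and velocity u."""
    c = math.sqrt(G * h)            # shallow-water wave celerity
    return u + 2.0 * c, u - 2.0 * c

def primitives_from_invariants(r_plus, r_minus):
    """Recover (h, u) from the Riemann invariants, inverting the map above."""
    u = 0.5 * (r_plus + r_minus)
    c = 0.25 * (r_plus - r_minus)   # celerity sqrt(g*h)
    return c * c / G, u
```

The two maps are exact inverses, so the conversion at the model interface introduces no loss of physical consistency.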
Accurate calculation of conductive conductances in complex geometries for spacecraft thermal models
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel
2016-02-01
The thermal subsystem of spacecraft and payloads is always designed with the help of thermal mathematical models. In the case of the Thermal Lumped Parameter (TLP) method, the non-linear system of equations that is created is solved to calculate the temperature distribution and the heat power exchanged between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods for the determination of conductive conductances exist, but they present some limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of these two new methods.
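For context, the traditional baseline that such methods improve on is the textbook lumped-parameter conductance of a uniform bar, G = kA/L, with node-to-node heat flow Q = G(Ti − Tj). A minimal sketch under that assumption (the paper's Extended Far Field and Mid-Section methods are finite-element based and not shown here):

```python
def conductance(k, area, length):
    """Conductive conductance of a uniform prismatic link, G = k*A/L (W/K)."""
    return k * area / length

def series(*conductances):
    """Links in series combine like electrical conductances in series."""
    return 1.0 / sum(1.0 / g for g in conductances)

def heat_flow(g, t_i, t_j):
    """Heat power exchanged between two nodes, Q = G*(T_i - T_j) (W)."""
    return g * (t_i - t_j)
```

For complex geometries this simple formula breaks down, which is precisely the limitation the two new methods address.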
Infrared radiation models for atmospheric methane
NASA Technical Reports Server (NTRS)
Cess, R. D.; Kratz, D. P.; Caldwell, J.; Kim, S. J.
1986-01-01
Mutually consistent line-by-line, narrow-band and broad-band infrared radiation models are presented for methane, a potentially important anthropogenic trace gas within the atmosphere. Comparisons of the modeled band absorptances with existing laboratory data produce the best agreement when, within the band models, spurious band intensities are used which are consistent with the respective laboratory data sets, but which are not consistent with current knowledge concerning the intensity of the infrared fundamental band of methane. This emphasizes the need for improved laboratory band absorptance measurements. Since, when applied to atmospheric radiation calculations, the line-by-line model does not require the use of scaling approximations, the mutual consistency of the band models provides a means of appraising the accuracy of scaling procedures. It is shown that Curtis-Godson narrow-band and Chan-Tien broad-band scaling provide accurate means of accounting for atmospheric temperature and pressure variations.
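The Curtis-Godson approximation mentioned above replaces an inhomogeneous atmospheric path by an equivalent homogeneous one at an absorber-weighted mean pressure. A minimal discrete form of that scaling (layer quantities and function name are illustrative; temperature scaling is handled analogously):

```python
def curtis_godson(pressures, absorber_amounts):
    """Curtis-Godson scaling over discrete path layers.

    pressures:        layer pressures p_i
    absorber_amounts: layer absorber amounts du_i along the path
    Returns (u, p_bar): total absorber amount and the
    absorber-weighted mean pressure p_bar = sum(p_i*du_i)/sum(du_i).
    """
    u = sum(absorber_amounts)
    p_bar = sum(p * du for p, du in zip(pressures, absorber_amounts)) / u
    return u, p_bar
```

The scaled pair (u, p_bar) is then fed to the homogeneous-path band model in place of the true inhomogeneous path.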
Accurate patient dosimetry of kilovoltage cone-beam CT in radiation therapy
Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.
2008-03-15
The increased utilization of x-ray imaging in image-guided radiotherapy has dramatically improved the radiation treatment and the lives of cancer patients. Daily imaging procedures, such as cone-beam computed tomography (CBCT), for patient setup may significantly increase the dose to the patient's normal tissues. This study investigates the dosimetry from a kilovoltage (kV) CBCT for real patient geometries. Monte Carlo simulations were used to study the kV beams from a Varian on-board imager integrated into the Trilogy accelerator. The Monte Carlo calculated results were benchmarked against measurements and good agreement was obtained. The authors developed a novel method to calibrate Monte Carlo simulated beams with measurements using an ionization chamber in which the air-kerma calibration factors are obtained from an Accredited Dosimetry Calibration Laboratory. The authors have introduced a new Monte Carlo calibration factor, f{sub MCcal}, which is determined from the calibration procedure. The accuracy of the new method was validated by experiment. When a Monte Carlo simulated beam has been calibrated, the simulated beam can be used to accurately predict absolute dose distributions in the irradiated media. Using this method the authors calculated dose distributions to patient anatomies from a typical CBCT acquisition for different treatment sites, such as head and neck, lung, and pelvis. Their results have shown that, from a typical head and neck CBCT, doses to soft tissues, such as eye, spinal cord, and brain can be up to 8, 6, and 5 cGy, respectively. The dose to the bone, due to the photoelectric effect, can be as much as 25 cGy, about three times the dose to the soft tissue. The study provides detailed information on the additional doses to the normal tissues of a patient from a typical kV CBCT acquisition. The methodology of the Monte Carlo beam calibration developed and introduced in this study allows the user to calculate both relative and absolute
Dynamic Radiation Environment Assimilation Model: DREAM
NASA Astrophysics Data System (ADS)
Reeves, G. D.; Chen, Y.; Cunningham, G. S.; Friedel, R. W. H.; Henderson, M. G.; Jordanova, V. K.; Koller, J.; Morley, S. K.; Thomsen, M. F.; Zaharia, S.
2012-03-01
The Dynamic Radiation Environment Assimilation Model (DREAM) was developed to provide accurate, global specification of the Earth's radiation belts and to better understand the physical processes that control radiation belt structure and dynamics. DREAM is designed using a modular software approach in order to provide a computational framework that makes it easy to change components such as the global magnetic field model, radiation belt dynamics model, boundary conditions, etc. This paper provides a broad overview of the DREAM model and a summary of some of the principal results to date. We describe the structure of the DREAM model, describe the five major components, and illustrate the various options that are available for each component. We discuss how the data assimilation is performed and the data preprocessing and postprocessing that are required for producing the final DREAM outputs. We describe how we apply global magnetic field models for conversion between flux and phase space density and, in particular, the benefits of using a self-consistent, coupled ring current-magnetic field model. We discuss some of the results from DREAM including testing of boundary condition assumptions and effects of adding a source term to radial diffusion models. We also describe some of the testing and validation of DREAM and prospects for future development.
RRTM: A rapid radiative transfer model
Mlawer, E.J.; Taubman, S.J.; Clough, S.A.
1996-04-01
A rapid radiative transfer model (RRTM) for the calculation of longwave clear-sky fluxes and cooling rates has been developed. The model, which uses the correlated-k method, is both accurate and computationally fast. The foundation for RRTM is the line-by-line radiative transfer model (LBLRTM), from which the relevant k-distributions are obtained. LBLRTM, which has been extensively validated against spectral observations, e.g., from the high-resolution sounder and the Atmospheric Emitted Radiance Interferometer, is used to validate the flux and cooling rate results from RRTM. Validations of RRTM's results have been performed for the tropical, midlatitude summer, and midlatitude winter atmospheres, as well as for the four Intercomparison of Radiation Codes in Climate Models (ICRCCM) cases from the Spectral Radiance Experiment (SPECTRE). Details of some of these validations are presented below. RRTM has the identical atmospheric input module as LBLRTM, facilitating intercomparisons with LBLRTM and application of the model at the Atmospheric Radiation Measurement Cloud and Radiation Testbed sites.
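The speed of the correlated-k method comes from replacing a fine spectral integral with a short quadrature over the cumulative k-distribution: band transmittance becomes T(u) = Σ_i w_i exp(−k_i u). A minimal sketch of that sum, with made-up quadrature points (real codes like RRTM derive the (k_i, w_i) pairs from line-by-line calculations):

```python
import math

def band_transmittance(k_points, weights, u):
    """Correlated-k band transmittance for absorber amount u:
    T(u) = sum_i w_i * exp(-k_i * u),
    where (k_i, w_i) are quadrature samples of the cumulative
    k-distribution g(k) over the band, with weights summing to 1."""
    return sum(w * math.exp(-k * u) for k, w in zip(k_points, weights))
```

A handful of (k_i, w_i) pairs per band replaces thousands of monochromatic evaluations, which is the source of the model's computational efficiency.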
Accurate modelling of flow induced stresses in rigid colloidal aggregates
NASA Astrophysics Data System (ADS)
Vanni, Marco
2015-07-01
A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to account accurately for the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases, and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation of the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. Very different behaviour was observed between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with highly hyperstatic configurations. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence gives rise to fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however
Magnetic field models of nine CP stars from "accurate" measurements
NASA Astrophysics Data System (ADS)
Glagolevskij, Yu. V.
2013-01-01
The dipole models of magnetic fields in nine CP stars are constructed based on measurements of metal lines taken from the literature, performed by the LSD method with an accuracy of 10-80 G. The model parameters are compared with the parameters obtained for the same stars from hydrogen line measurements. For six out of nine stars the same type of structure was obtained. Some parameters, such as the field strength at the poles B_p and the average surface magnetic field B_s, differ considerably in some stars due to differences in the amplitudes of the phase dependences B_e(Φ) and B_s(Φ) obtained by different authors. It is noted that a significant increase in the measurement accuracy has little effect on the modelling of the large-scale structures of the field. By contrast, it is more important to construct the shape of the phase dependence from a fairly large number of field measurements, evenly distributed over the rotation period phases. It is concluded that the Zeeman component measurement methods have a strong effect on the shape of the phase dependence, and that measurements of the magnetic field based on hydrogen lines are preferable for modelling the large-scale structures of the field.
An Accurate In Vitro Model of the E. coli Envelope
Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H
2015-01-01
Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir–Blodgett and Langmuir–Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292
Leidenfrost effect: accurate drop shape modeling and new scaling laws
NASA Astrophysics Data System (ADS)
Sobac, Benjamin; Rednikov, Alexey; Dorbolo, Stéphane; Colinet, Pierre
2014-11-01
In this study, we theoretically investigate the shape of a drop in a Leidenfrost state, focusing on the geometry of the vapor layer. The drop geometry is modeled by numerically matching the solution for the hydrostatic shape of a superhydrophobic drop (for the upper part) with the solution of the lubrication equation for the vapor flow underlying the drop (for the bottom part). The results highlight that the vapor layer, fed by evaporation, forms a concave depression in the drop interface that becomes increasingly marked with drop size. The vapor layer then consists of a gas pocket in the center and a thin annular neck surrounding it. The film thickness increases with the size of the drop, and the thickness at the neck appears to be of the order of 10-100 μm in the case of water. The model is compared to recent experimental results [Burton et al., Phys. Rev. Lett., 074301 (2012)] and shows excellent agreement, without any fitting parameter. New scaling laws also emerge from this model. The geometry of the vapor pocket is only weakly dependent on the superheat (and thus on the evaporation rate), this weak dependence being more pronounced in the neck region. In turn, the vapor layer characteristics strongly depend on the drop size.
NASA Astrophysics Data System (ADS)
Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.
2015-12-01
We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10 h Mpc⁻¹ and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
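One of the physically motivated modifications in this class of optimized halo model is a smoothed transition between the two-halo and one-halo terms, of the form Δ²(k) = [(Δ²_2h)^α + (Δ²_1h)^α]^(1/α). The sketch below uses illustrative numbers only (the fitted value of α and the actual halo-model terms come from the paper and its code):

```python
def blend(d2_two_halo, d2_one_halo, alpha=0.7):
    # Smoothed transition between two-halo and one-halo power contributions:
    # alpha = 1 recovers the plain halo-model sum, while alpha < 1 adds
    # power where the two terms are comparable (the quasi-linear regime),
    # mimicking the extra power seen in N-body simulations there.
    return (d2_two_halo ** alpha + d2_one_halo ** alpha) ** (1.0 / alpha)

# at a "transition" scale where the two terms are of comparable size,
# the blended power exceeds the naive sum
p_sum = blend(0.8, 1.2, alpha=1.0)
p_hm = blend(0.8, 1.2, alpha=0.7)
```

Far from the transition, where one term dominates, the blending leaves the power essentially unchanged, so the modification acts only where the standard halo model is known to underpredict.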
Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael
2014-05-01
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products calls for validation methods, which in turn require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics that reflect deficits in the employed force models. Following an analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies in a dusk-dawn orbit at an altitude of approximately 510 km. In this constellation, the Sun illuminates the satellite almost constantly, which causes strong across-track accelerations in the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of
Toon, O.B.
1996-12-31
We conducted modeling work in radiative transfer and cloud microphysics. Our work in radiative transfer included performance comparisons with other high-accuracy methods and with measurements under cloudy, partly cloudy, and cloud-free conditions. Our modeling efforts have aimed to (1) develop an accurate and rapid radiative transfer model; (2) develop three-dimensional radiative transfer models; and (3) develop microphysics-resolving cloud and aerosol models. We applied our models to investigate solar clear-sky model biases, aerosol direct and indirect effects, the microphysical properties of cirrus and stratus, relationships between cloud properties, and the effects of cloud structure.
Analytical modeling of the steady radiative shock
NASA Astrophysics Data System (ADS)
Boireau, L.; Bouquet, S.; Michaut, C.; Clique, C.
2006-06-01
In a paper dated 2000 [1], a fully analytical theory of the radiative shock was presented. This early model was used to design [2] radiative shock experiments at the Laboratory for the Use of Intense Lasers (LULI) [3-5]. It became obvious from numerical simulations [6, 7] that this model had to be improved in order to accurately recover experiments. In this communication, we present a new theory in which the ionization rates in the unshocked (Z̄_1) and shocked (Z̄_2 ≠ Z̄_1) material, respectively, are included. Associated changes in excitation energy are also taken into account. We study the influence of these effects on the compression and temperature in the shocked medium.
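For orientation, the non-radiative Rankine-Hugoniot jump conditions to which any such shock model reduces in the ideal-gas limit can be written down directly (a sketch only; the paper's model adds ionization and excitation-energy terms on top of this):

```python
def rankine_hugoniot(mach, gamma=5.0 / 3.0):
    # density and pressure jumps across an ideal-gas shock of Mach number M
    m2 = mach * mach
    rho_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    p_ratio = (2.0 * gamma * m2 - (gamma - 1.0)) / (gamma + 1.0)
    return rho_ratio, p_ratio
```

In the strong-shock limit the compression saturates at (gamma + 1)/(gamma - 1), i.e. 4 for gamma = 5/3; sinks such as ionization and excitation lower the effective gamma and therefore raise the attainable compression, which is the kind of effect the improved model quantifies.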
Radiation Belt Electron Dynamics: Modeling Atmospheric Losses
NASA Technical Reports Server (NTRS)
Selesnick, R. S.
2003-01-01
The first year of work on this project has been completed. This report provides a summary of the progress made and the plan for the coming year. Also included with this report is a preprint of an article that was accepted for publication in the Journal of Geophysical Research and describes in detail most of the results from the first year of effort. The goal for the first year was to develop a radiation belt electron model, for fitting to data from the SAMPEX and Polar satellites, that would provide an empirical description of the electron losses into the upper atmosphere. This was largely accomplished according to the original plan (one exception being that, for reasons described below, the inclusion of the loss cone electrons in the model was deferred). The main concerns at the start were to accurately represent the balance between pitch angle diffusion and eastward drift that determines the dominant features of the low altitude data, and then to accurately convert the model into simulated data based on the characteristics of the particular electron detectors. Considerable effort was devoted to achieving these ends. Once the model was providing accurate results, it was applied to data sets selected from appropriate periods in 1997, 1998, and 1999. For each interval of ~30 to 60 days, the model parameters were calculated daily, thus providing good short- and long-term temporal resolution, for a range of radial locations from L = 2.7 to 3.9.
Validation of the Poisson Stochastic Radiative Transfer Model
NASA Technical Reports Server (NTRS)
Zhuravleva, Tatiana; Marshak, Alexander
2004-01-01
A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure, the cloud aspect ratio, is determined entirely by matching measurements and calculations of the direct solar radiation. When measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.
Takahashi, F; Shigemori, Y; Seki, A
2009-01-01
A system has been developed to assess the radiation dose distribution inside the body of an exposed person in a radiological accident by utilising the radiation transport calculation codes MCNP and MCNPX. The system consists mainly of two parts, a pre-processor and a post-processor for the radiation transport calculation. Programs in the pre-processor are used to set up a 'problem-dependent' input file, which defines the accident conditions and the dosimetric quantities to be estimated. The program developed for the post-processor part can effectively present dose information based upon the output file of the code. All of the programs in the dosimetry system can be executed on an ordinary personal computer and accurately give the dose profile of an exposed person in a radiological accident without complicated procedures. An experiment using a physical phantom was carried out to verify the availability of the dosimetry system with the developed programs in a gamma-ray irradiation field. PMID:19181661
O`Brien, E.; Lissauer, D.; McCorkle, S.; Polychronakos, V.; Takai, H.; Chi, C.Y.; Nagamiya, S.; Sippach, W.; Toy, M.; Wang, D.; Wang, Y.F.; Wiggins, C.; Willis, W.; Cherniatin, V.; Dolgoshein, B.; Bennett, M.; Chikanian, A.; Kumar, S.; Mitchell, J.T.; Pope, K.
1991-12-31
We describe the results of a test run involving a Transition Radiation Detector (TRD) that can both distinguish electrons from pions with momenta greater than 0.7 GeV/c and simultaneously track particles passing through the detector. The particle identification is accomplished through a combination of the detection of transition radiation from the electron and the differences in electron and pion energy loss (dE/dx) in the detector. The dE/dx particle separation is most efficient below 2 GeV/c, while particle ID utilizing transition radiation is effective above 1.5 GeV/c. Combined, the electron-pion separation is better than 5 × 10². The single-wire, track-position resolution for the TRD is approximately 230 μm.
NASA Astrophysics Data System (ADS)
Zhu, P.; Karatekin, O.; Noel, J.-P.; van Ruymbeke, M.; Dehant, V.
2012-04-01
Radiometers have been broadly applied to the study of the Total Solar Irradiance (TSI). Since electromagnetic radiation is the main external driver of Earth's climate, the Imbalance of the Earth's Radiation Budget (IERB) is a key to better understanding our climate system. The PICARD mission studies the Sun-Earth climate connections. For this mission we developed a Bolometric Oscillation Sensor (BOS), which is currently flying side by side with the radiometer SOlar Variability for Picard (SOVAP; an updated version of DIARAD/VIRGO on SOHO) to study the solar constant as well as the radiation of the Earth. The BOS is composed of two detectors: a light-mass detector (m1), which responds rapidly to thermal-flux changes, and a heavy-mass detector (m2), which is slowly modulated by the electromagnetic energy. In addition, the m1 detector can stand alone to precisely monitor the ambient temperature. The original goal of BOS-PICARD is to study the irradiance of the Sun and the Earth. After nearly two years of observations, the variations of the long-wave radiation of the Earth can be well determined from the BOS measurements. This confirms that the BOS can be applied to measure electromagnetic radiation in the near infrared. Encouraged by these results, we are now working on a second generation of the BOS for a nano-satellite project and future planetary missions. The new sensor will be able to determine the albedo (visible) and infrared radiation, as well as to detect the thermal inertia of a target, either by remote sensing on board a satellite or by in-situ measurements from a lander.
NASA Technical Reports Server (NTRS)
Horwitz, James L.
1992-01-01
The purpose of this work was to assist with the development of analytical techniques for the interpretation of infrared observations. We have done the following: (1) helped to develop models for continuum absorption calculations for water vapor in the far infrared spectral region; (2) worked on models for pressure-induced absorption for O2 and N2 and their comparison with available observations; and (3) developed preliminary studies of non-local thermal equilibrium effects in the upper stratosphere and mesosphere for infrared gases. These new techniques were employed for analysis of balloon-borne far infrared data by a group at the Harvard-Smithsonian Center for Astrophysics. The empirical continuum absorption model for water vapor in the far infrared spectral region and the pressure-induced N2 absorption model were found to give satisfactory results in the retrieval of the mixing ratios of a number of stratospheric trace constituents from balloon-borne far infrared observations.
Accurate tumor localization and tracking in radiation therapy using wireless body sensor networks.
Pourhomayoun, Mohammad; Jin, Zhanpeng; Fowler, Mark
2014-07-01
Radiation therapy is an effective method to combat cancerous tumors by killing the malignant cells or controlling their growth. Knowing the exact position of the tumor is a critical prerequisite in radiation therapy. Since the position of the tumor changes during radiation therapy due to the patient's movements and respiration, a real-time tumor tracking method is highly desirable in order to deliver a sufficient dose of radiation to the tumor region without damaging the surrounding healthy tissues. In this paper, we develop a novel tumor positioning method based on spatial sparsity. We estimate the position by processing the received signals from only one implantable RF transmitter. The proposed method uses fewer sensors than common magnetic-transponder-based approaches. The performance of the proposed method is evaluated in two different cases: (1) when the tissue configuration is perfectly determined (acquired beforehand by MRI or CT) and (2) when there are some uncertainties about the tissue boundaries. The results demonstrate the high accuracy and performance of the proposed method, even when the tissue boundaries are imperfectly known. PMID:24832352
Mouse models for radiation-induced cancers.
Rivina, Leena; Davoren, Michael J; Schiestl, Robert H
2016-09-01
Potential ionising radiation exposure scenarios are varied, but all bring risks beyond the simple issues of short-term survival. Whether accidentally exposed to a single, whole-body dose in an act of terrorism or purposefully exposed to fractionated doses as part of a therapeutic regimen, radiation exposure carries the consequence of elevated cancer risk. The long-term impact of both intentional and unintentional exposure could potentially be mitigated by treatments specifically developed to limit the mutations and precancerous replication that ensue in the wake of irradiation. The development of such agents would undoubtedly require a substantial degree of in vitro testing, but in order to accurately recapitulate the complex process of radiation-induced carcinogenesis, well-understood animal models are necessary. Inbred strains of the laboratory mouse, Mus musculus, present the most logical choice due to the high number of molecular and physiological similarities they share with humans. Their small size, high rate of breeding, and fully sequenced genome further increase their value for use in cancer research. This chapter will review relevant M. musculus inbred and F1 hybrid models of radiation-induced myeloid leukemia, thymic lymphoma, and breast and lung cancers. Methods of cancer induction and associated molecular pathologies will also be described for each model. PMID:27209205
Optimum satellite orbits for accurate measurement of the earth's radiation budget, summary
NASA Technical Reports Server (NTRS)
Campbell, G. G.; Vonderhaar, T. H.
1978-01-01
The optimum sets of orbit inclinations for the measurement of the earth radiation budget from spatially integrating sensor systems were estimated for two- and three-satellite systems. The best set of two consisted of satellites at orbit inclinations of 80 deg and 50 deg; the best set of three had inclinations of 80 deg, 60 deg, and 50 deg. These were chosen on the basis of a simulation of flat plate and spherical detectors flying over a daily varying earth radiation field as measured by the Nimbus 3 medium resolution scanners. A diurnal oscillation was also included in the emitted flux and albedo to give a source field as realistic as possible. Twenty-three satellites with different inclinations and equator crossings were simulated, allowing the results of thousands of multisatellite sets to be intercompared. All were circular orbits of radius 7178 kilometers.
NASA Astrophysics Data System (ADS)
Oh, K.; Han, M.; Kim, K.; Heo, Y.; Moon, C.; Park, S.; Nam, S.
2016-02-01
For quality assurance in radiation therapy, several types of dosimeters are used, such as ionization chambers, radiographic films, thermoluminescent dosimeters (TLDs), and semiconductor dosimeters. Among them, semiconductor dosimeters are particularly useful for in vivo dosimetry or for high-dose-gradient areas such as the penumbra region, because they are more sensitive and smaller than typical dosimeters. In this study, we developed and evaluated cadmium telluride (CdTe) dosimeters, one of the most promising semiconductor dosimeters due to their high quantum efficiency and charge collection efficiency. CdTe dosimeters come in single-crystal and polycrystalline forms, depending on the fabrication process. Both types are commercially available, but only the polycrystalline form is suitable for radiation dosimetry, since it is less affected by the volumetric effect and energy dependence. To develop and evaluate polycrystalline CdTe dosimeters, polycrystalline CdTe films were prepared by thermal evaporation. A thin oxide layer of CdTeO3 was then deposited on top of the CdTe film by RF sputtering to improve charge carrier transport properties and to reduce leakage current. The CdTeO3 layer, which acts as a passivation layer, also helps reduce the sensitivity changes that arise with repeated use due to radiation damage. Finally, In/Ti and Pt top and bottom electrodes were used to form a Schottky contact. Subsequently, the electrical properties under high-energy photon beams from a linear accelerator (LINAC), such as response coincidence, dose linearity, dose rate dependence, reproducibility, and percentage depth dose, were measured to evaluate the polycrystalline CdTe dosimeters. In addition, we compared the experimental data of the dosimeter fabricated in this study with those of the silicon diode dosimeter and thimble ionization chamber widely used in routine dosimetry systems and dose measurements for radiation
MONA: An accurate two-phase well flow model based on phase slippage
Asheim, H.
1984-10-01
In two-phase flow, holdup and pressure loss are related to interfacial slippage. A model based on the slippage concept has been developed and tested using production well data from Forties and the Ekofisk area, and flowline data from Prudhoe Bay. The model proved considerably more accurate than the standard models used for comparison.
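A minimal slip-based holdup calculation in the spirit of such a model can be sketched as follows. The drift-flux form and the parameter values below are generic illustrative choices, not MONA's fitted slip relation or coefficients:

```python
G = 9.81  # gravitational acceleration, m/s^2

def gas_fraction(v_sg, v_sl, c0=1.2, v_d=0.35):
    # slip relation: in-situ gas velocity v_g = c0 * v_m + v_d, so the
    # gas void fraction is alpha = v_sg / v_g (drift-flux form);
    # c0 and v_d are assumed, generic values
    v_m = v_sg + v_sl  # mixture superficial velocity, m/s
    return v_sg / (c0 * v_m + v_d)

def hydrostatic_gradient(v_sg, v_sl, rho_l=800.0, rho_g=50.0):
    # gravitational pressure gradient from the slip-corrected mixture density
    alpha = gas_fraction(v_sg, v_sl)
    rho_m = rho_l * (1.0 - alpha) + rho_g * alpha
    return rho_m * G  # Pa/m
```

Because the gas travels faster than the mixture (c0 > 1, v_d > 0), the in-situ gas fraction is lower than the no-slip value v_sg/v_m, which raises the mixture density and hence the hydrostatic pressure gradient relative to a homogeneous (no-slip) model.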
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.
2007-09-01
BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of the calculated cross section data used in our model with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.
An urban radiation obstruction model
NASA Astrophysics Data System (ADS)
Frank, Randall S.; Gerding, R. Bruce; O'Rourke, Patricia A.; Terjung, Werner H.
1981-03-01
An urban street canyon radiation obstruction model has been developed. The model can describe community structure in terms of the type and dimensions of every building, block, road, park, etc. The need for massive data acquisition in regard to obstruction modeling calls for computerized algorithms, relieving the researcher of the needless tedium of hand calculations and the accompanying high degree of error and labor costs. The model program OBSTRUCT was written in FORTRAN IV for use on the IBM 3033. To facilitate changes or modifications, OBSTRUCT was written in modular form.
An Improved Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to those of other, more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focusing on the question of absorption of solar radiation by gases and aerosols.
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-01
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate the effects of various physiological properties of the skin in the case of subcutaneous vein imaging more accurately than existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces, and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling, and compare our results with those obtained with a well-established Monte Carlo model and with real skin reflectance images.
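The core of any such Monte Carlo light-propagation model is a photon random walk: exponentially distributed free paths, absorption decided by the single-scattering albedo, and random redirection at each scattering event. A minimal 1D slab version (isotropic scattering, no refractive-index mismatch; far simpler than a layered skin model, and with arbitrary illustrative optical coefficients) looks like:

```python
import math
import random

def mc_slab(n_photons=20000, thickness=1.0, mu_a=1.0, mu_s=9.0, seed=7):
    # returns (reflected, transmitted, absorbed) photon fractions for a slab
    rng = random.Random(seed)
    mu_t = mu_a + mu_s                 # total interaction coefficient
    albedo = mu_s / mu_t               # probability a collision scatters
    refl = trans = absorbed = 0
    for _ in range(n_photons):
        z, cos_th = 0.0, 1.0           # launch straight down at the surface
        while True:
            # free path length sampled from an exponential distribution
            step = -math.log(1.0 - rng.random()) / mu_t
            z += step * cos_th
            if z < 0.0:
                refl += 1
                break
            if z > thickness:
                trans += 1
                break
            if rng.random() > albedo:  # this collision absorbs the photon
                absorbed += 1
                break
            cos_th = 2.0 * rng.random() - 1.0  # isotropic re-direction
    return refl / n_photons, trans / n_photons, absorbed / n_photons
```

Realistic tissue models replace the isotropic phase function with an anisotropic one (e.g. Henyey-Greenstein) and track the full 3D geometry, but the bookkeeping above is the same.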
Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young
2015-07-01
This article presents an accurate finite-difference time-domain (FDTD) dispersive modelling approach suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersion relations. To obtain accurate QCRF coefficients, we use an analytical approach and particle swarm optimisation (PSO) simultaneously. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples illustrate the validity of the proposed FDTD dispersion model.
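The analytical half of such a procedure (before any PSO refinement of the weighting) can be sketched in a few lines: sample a known dispersive permittivity, linearise ε(s)(1 + B1·s + B2·s²) = A0 + A1·s + A2·s² in the five unknown coefficients, and solve the normal equations. The two-pole Debye target and the normalised frequency units below are illustrative assumptions; the paper fits measured material data:

```python
def target_eps(w):
    # two-pole Debye medium in normalised frequency units (illustrative)
    s = 1j * w
    return 2.0 + 6.0 / (1.0 + s * 1.0) + 3.0 / (1.0 + s * 0.1)

def fit_qcrf(omegas):
    # least-squares fit eps(s) ~ (A0 + A1*s + A2*s^2)/(1 + B1*s + B2*s^2)
    rows, rhs = [], []
    for w in omegas:
        s = 1j * w
        e = target_eps(w)
        rows.append([1.0, s, s * s, -e * s, -e * s * s])
        rhs.append(e)
    n = 5
    # normal equations M x = b with M = A^H A, b = A^H rhs
    M = [[sum(r[i].conjugate() * r[j] for r in rows) for j in range(n)]
         for i in range(n)]
    b = [sum(r[i].conjugate() * v for r, v in zip(rows, rhs)) for i in range(n)]
    # Gaussian elimination with partial pivoting (complex arithmetic)
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p], b[c], b[p] = M[p], M[c], b[p], b[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
            b[r] -= f * b[c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x  # A0, A1, A2, B1, B2

def qcrf_eps(w, coef):
    a0, a1, a2, b1, b2 = coef
    s = 1j * w
    return (a0 + a1 * s + a2 * s * s) / (1.0 + b1 * s + b2 * s * s)
```

Because a two-pole Debye medium is itself a second-order rational function of s, the fit here recovers the exact coefficients; for media that are not exactly of QCRF form, the residual depends on the frequency weighting, which is what the paper's PSO step tunes.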
Models for infrared atmospheric radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.
1976-01-01
Line and band models for infrared spectral absorption are discussed. Radiative transmittance and integrated absorptance of Lorentz, Doppler, and Voigt line profiles were compared over a range of parameters. It was found that, for intermediate path lengths, the combined Lorentz-Doppler (Voigt) profile is essential in calculating the atmospheric transmittance. Narrow band model relations for absorptance were used to develop exact formulations for total absorption by four wide band models. Several continuous correlations for the absorption of a wide band model were compared with the numerical solutions of the wide band models. By employing the line-by-line and quasi-random band model formulations, computational procedures were developed for evaluating transmittance and upwelling atmospheric radiance. Homogeneous path transmittances were calculated for selected bands of CO, CO2, and N2O and compared with experimental measurements. The upwelling radiance and signal change in the wave number interval of the CO fundamental band were also calculated.
Slot Region Radiation Environment Models
NASA Astrophysics Data System (ADS)
Sandberg, Ingmar; Daglis, Ioannis; Heynderickx, Daniel; Evans, Hugh; Nieminen, Petteri
2013-04-01
Herein we present the main characteristics and first results of the Slot Region Radiation Environment Models (SRREMs) project. The statistical models developed in SRREMs aim to address the variability of trapped electron and proton fluxes in the region between the inner and the outer electron radiation belt. The energetic charged particle fluxes in the slot region are highly dynamic and are known to vary by several orders of magnitude on both short and long timescales. During quiet times, the particle fluxes are much lower than those found at the peak of the inner and outer belts and the region is considered benign. During geospace magnetic storms, though, this region can fill with energetic particles as the peak of the outer belt is pushed Earthwards and the fluxes can increase drastically. There has been a renewed interest in the potential operation of commercial satellites in orbits that are at least partially contained within the slot region. Hence, there is a need to improve the current radiation belt models, most of which do not model the extreme variability of the slot region and instead provide long-term averages between the better-known low and medium Earth orbits (LEO and MEO). The statistical models developed in the SRREMs project are based on the analysis of a large volume of available data and on the construction of a virtual database of slot region particle fluxes. The analysis that we have followed retains the long-term temporal, spatial and spectral variations in electron and proton fluxes as well as the short-term enhancement events at altitudes and inclinations relevant for satellites in the slot region. A large number of datasets have been used for the construction, evaluation and inter-calibration of the SRREMs virtual dataset. Special emphasis has been given to the use and analysis of ESA Standard Radiation Environment Monitor (SREM) data from the units on-board PROBA-1, INTEGRAL, and GIOVE-B due to the sufficient spatial and long temporal
Radiation dosimetry and biophysical models of space radiation effects
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wu, Honglu; Shavers, Mark R.; George, Kerry
2003-01-01
Estimating the biological risks from space radiation remains a difficult problem because of the many radiation types including protons, heavy ions, and secondary neutrons, and the absence of epidemiology data for these radiation types. Developing useful biophysical parameters or models that relate energy deposition by space particles to the probabilities of biological outcomes is a complex problem. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra. In contrast to conventional dosimetric methods, models of radiation track structure provide descriptions of energy deposition events in biomolecules, cells, or tissues, which can be used to develop biophysical models of radiation risks. In this paper, we address the biophysical description of heavy particle tracks in the context of the interpretation of both space radiation dosimetry and radiobiology data, which may provide insights into new approaches to these problems.
Space shuttle main engine plume radiation model
NASA Technical Reports Server (NTRS)
Reardon, J. E.; Lee, Y. C.
1978-01-01
The methods used to predict the thermal radiation received by space shuttle surfaces from the plumes of the main engines are described. Radiation to representative surface locations was predicted using the NASA gaseous plume radiation (GASRAD) program. The plume model is used with the radiative view factor (RAVFAC) program to predict sea-level radiation at specified body points. The GASRAD program is described along with the predictions, and the RAVFAC model is also discussed.
Identification of accurate nonlinear rainfall-runoff models with unique parameters
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N.
2009-04-01
We propose a strategy to identify models with unique parameters that yield accurate streamflow predictions, given a time series of rainfall inputs. The procedure consists of five general steps. First, an a priori range of model structures is specified based on prior general and site-specific hydrologic knowledge. To this end, we rely on a flexible model code that allows a specification of a wide range of model structures, from simple to complex. Second, using global optimization each model structure is calibrated to a record of rainfall-runoff data, yielding optimal parameter values for each model structure. Third, accuracy of each model structure is determined by estimating model prediction errors using independent validation and statistical theory. Fourth, parameter identifiability of each calibrated model structure is estimated by means of Markov chain Monte Carlo (MCMC) simulation. Finally, an assessment is made about each model structure in terms of its accuracy of mimicking rainfall-runoff processes (step 3), and the uniqueness of its parameters (step 4). The procedure results in the identification of the most complex and accurate model supported by the data, without causing parameter equifinality. As such, it provides insight into the information content of the data for identifying nonlinear rainfall-runoff models. We illustrate the method using rainfall-runoff data records from several MOPEX basins in the US.
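Steps 2 and 4 above (calibration, then sampling the parameter posterior to judge identifiability) can be sketched on a toy problem. The single-parameter linear reservoir and bare-bones Metropolis sampler below are illustrative stand-ins, not the flexible model code or sampler used in the study.

```python
import math
import random

def simulate(k, rain):
    """Linear-reservoir runoff: storage drains at fraction k per step."""
    storage, flow = 0.0, []
    for r in rain:
        storage += r
        q = k * storage
        storage -= q
        flow.append(q)
    return flow

def metropolis(log_post, start, step, n, seed=2):
    """Bare-bones Metropolis sampler over a single parameter."""
    rng = random.Random(seed)
    x, lp = start, log_post(start)
    samples = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

rng = random.Random(0)
rain = [rng.random() for _ in range(200)]
obs = simulate(0.3, rain)                  # synthetic "observed" flow
sigma = 0.02                               # assumed error scale

def log_post(k):
    if not 0.0 < k < 1.0:
        return -math.inf                   # flat prior on (0, 1)
    resid = sum((o - s) ** 2 for o, s in zip(obs, simulate(k, rain)))
    return -resid / (2.0 * sigma ** 2)

samples = metropolis(log_post, start=0.5, step=0.02, n=3000)[1000:]
k_hat = sum(samples) / len(samples)
```

A tight posterior spread around the true value (here k = 0.3) signals an identifiable parameter; a broad or multimodal spread is the equifinality symptom the procedure is designed to detect.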
Technology Transfer Automated Retrieval System (TEKTRAN)
Solar radiation plays a key role in the Earth’s energy balance and is used as an essential input data in radiation-based evapotranspiration (ET) models. Accurate gridded solar radiation data at high spatial and temporal resolution are needed to retrieve ET over large domains. In this work we present...
Material Models for Accurate Simulation of Sheet Metal Forming and Springback
NASA Astrophysics Data System (ADS)
Yoshida, Fusahito
2010-06-01
For anisotropic sheet metals, modeling of anisotropy and the Bauschinger effect is discussed in the framework of the Yoshida-Uemori kinematic hardening model combined with anisotropic yield functions. The performance of the models in predicting yield loci and cyclic stress-strain responses for several types of steel and aluminum sheets is demonstrated by comparing numerical simulation results with the corresponding experimental observations. From examples of FE simulation of sheet metal forming and springback, it is concluded that modeling both the anisotropy and the Bauschinger effect is essential for accurate numerical simulation.
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
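The direct SRP term discussed above is often first approximated with a "cannonball" model before a detailed macro model is introduced. The sketch below uses that simple form; the area, mass, and reflectivity values are illustrative, not the actual TerraSAR-X macro model.

```python
SOLAR_FLUX = 1361.0        # W/m^2, total solar irradiance at 1 AU
C_LIGHT = 299_792_458.0    # speed of light, m/s

def srp_acceleration(area_m2, mass_kg, reflectivity):
    """Cannonball solar-radiation-pressure acceleration (m/s^2) for a fully
    illuminated satellite. A macro model replaces the single area and
    reflectivity pair with per-surface geometric and optical contributions."""
    pressure = SOLAR_FLUX / C_LIGHT        # radiation pressure, ~4.5e-6 N/m^2
    return (1.0 + reflectivity) * pressure * area_m2 / mass_kg

# Illustrative spacecraft values (hypothetical, not TerraSAR-X):
accel = srp_acceleration(area_m2=10.0, mass_kg=1200.0, reflectivity=0.3)
```

Accelerations of a few 1e-8 m/s^2 accumulate to metre-level orbit displacements over a day, which is why the lateral orbit stability of a dusk-dawn satellite is sensitive to this term.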
Development of modified cable models to simulate accurate neuronal active behaviors
2014-01-01
In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted. PMID:25277743
NASA Astrophysics Data System (ADS)
Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent
2013-11-01
The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.
Methodology to set up accurate OPC model using optical CD metrology and atomic force microscopy
NASA Astrophysics Data System (ADS)
Shim, Yeon-Ah; Kang, Jaehyun; Lee, Sang-Uk; Kim, Jeahee; Kim, Keeho
2007-03-01
For the 90 nm node and beyond, a smaller critical dimension (CD) control budget is required, and ways to control good CD uniformity are needed. Moreover, optical proximity correction (OPC) for the sub-90 nm node demands more accurate wafer CD data in order to improve the accuracy of the OPC model. Scanning electron microscopy (SEM) has been the typical method for measuring CD through the ArF process. However, SEM can cause serious damage, such as shrinkage of the photoresist (PR), because the high-energy electron beam burns the weak chemical structure of ArF PR. In fact, about 5 nm of CD narrowing occurs when we measure CD using CD-SEM in the ArF photo process. Optical CD metrology (OCD) and atomic force microscopy (AFM) have been considered as methods for measuring CD without damaging organic materials. The OCD and AFM measurement systems also have the merits of speed, ease of use, and accurate data. For model-based OPC, the model is generated using CD data of test patterns transferred onto the wafer. In this study we discuss how to generate an accurate OPC model using the OCD and AFM measurement systems.
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
Collins, William; Iacono, Michael J.; Delamere, Jennifer S.; Mlawer, Eli J.; Shephard, Mark W.; Clough, Shepard A.; Collins, William D.
2008-04-01
A primary component of the observed, recent climate change is the radiative forcing from increased concentrations of long-lived greenhouse gases (LLGHGs). Effective simulation of anthropogenic climate change by general circulation models (GCMs) is strongly dependent on the accurate representation of radiative processes associated with water vapor, ozone and LLGHGs. In the context of the increasing application of the Atmospheric and Environmental Research, Inc. (AER) radiation models within the GCM community, their capability to calculate longwave and shortwave radiative forcing for clear-sky scenarios previously examined by the radiative transfer model intercomparison project (RTMIP) is presented. Forcing calculations with the AER line-by-line (LBL) models are very consistent with the RTMIP line-by-line results in the longwave and shortwave. The AER broadband models, in all but one case, calculate longwave forcings within a range of -0.20 to 0.23 W m⁻² of LBL calculations and shortwave forcings within a range of -0.16 to 0.38 W m⁻² of LBL results. These models also perform well at the surface, which RTMIP identified as a level at which GCM radiation models have particular difficulty reproducing LBL fluxes. Heating profile perturbations calculated by the broadband models generally reproduce high-resolution calculations within a few hundredths K d⁻¹ in the troposphere and within 0.15 K d⁻¹ in the peak stratospheric heating near 1 hPa. In most cases, the AER broadband models provide radiative forcing results that are in closer agreement with high-resolution calculations than the GCM radiation codes examined by RTMIP, which supports the application of the AER models to climate change research.
Building an accurate 3D model of a circular feature for robot vision
NASA Astrophysics Data System (ADS)
Li, L.
2012-06-01
In this paper, an accurate 3D model of a circular feature is built, with error compensation, for robot vision. We propose an efficient method of fitting ellipses to data points by minimizing the algebraic distance subject to the constraint that the conic is an ellipse, solving for the ellipse parameters through a direct ellipse fitting method. By analysing the 3D geometrical representation in a perspective projection scheme, the 3D position of a circular feature with known radius can be obtained. A set of identical circles, machined on a calibration board with known centres, was imaged with a camera and analysed with the proposed model. Experimental results show that our method is more accurate than other methods.
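A minimal sketch of algebraic-distance fitting, reduced from the ellipse case to a circle so the normal equations stay 3 x 3 (the paper's direct ellipse fit adds the ellipse constraint and solves a generalized eigenproblem instead):

```python
def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method): solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) by minimising the
    summed squared algebraic distance over all points."""
    # Accumulate the 3x3 normal equations A^T A p = A^T b (augmented matrix).
    a = [[0.0] * 4 for _ in range(3)]
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                a[i][j] += row[i] * row[j]
            a[i][3] += row[i] * rhs
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    p = [0.0] * 3
    for i in (2, 1, 0):                      # back substitution
        p[i] = (a[i][3] - sum(a[i][j] * p[j] for j in range(i + 1, 3))) / a[i][i]
    d, e, f = p
    cx, cy = -d / 2.0, -e / 2.0
    radius = (cx * cx + cy * cy - f) ** 0.5
    return cx, cy, radius
```

With the fitted circle (or ellipse) parameters and a known physical radius, the perspective projection equations then yield the feature's 3D position, as the abstract describes.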
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z
2016-09-01
The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-01-01
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson’s ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers. PMID:26510769
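For context, the well-established beam-theory baseline that the plate-theory calibration refines is a one-line formula. The sketch below computes it for illustrative silicon cantilever dimensions; the plate-theory corrections (Poisson effect, normalized width and load position) discussed in the abstract are not reproduced here.

```python
def beam_spring_constant(E, width, thickness, length):
    """Euler-Bernoulli spring constant for a rectangular cantilever with an
    end load: k = E*w*t^3 / (4*L^3), in N/m for SI inputs. The plate-theory
    result modifies this by factors depending on Poisson's ratio and the
    normalized dimensions and load coordinate."""
    return E * width * thickness ** 3 / (4.0 * length ** 3)

# Typical silicon cantilever (illustrative values): E = 169 GPa,
# 30 um wide, 2 um thick, 200 um long.
k = beam_spring_constant(E=169e9, width=30e-6, thickness=2e-6, length=200e-6)
```

The cubic dependence on thickness and length explains why small fabrication tolerances translate into large spring-constant uncertainty, motivating the accurate calibration the paper targets.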
Accurate and efficient halo-based galaxy clustering modelling with simulations
NASA Astrophysics Data System (ADS)
Zheng, Zheng; Guo, Hong
2016-06-01
Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
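A common ingredient of the halo occupation distribution (HOD) framework mentioned above is the mean occupation function. The sketch below implements one standard parameterisation (error-function central term plus power-law satellites); the functional form and parameter names are conventional choices for illustration, not necessarily the exact ones used in this paper.

```python
import math

def mean_occupation(m_halo, m_min, m1, alpha, sigma_logm=0.2):
    """Expected galaxy count for a halo of mass m_halo (solar masses):
    a smoothed step for the central galaxy plus a power law for satellites,
    a widely used HOD parameterisation."""
    if m_halo <= 0:
        return 0.0
    n_cen = 0.5 * (1.0 + math.erf(math.log10(m_halo / m_min) / sigma_logm))
    n_sat = n_cen * (m_halo / m1) ** alpha
    return n_cen + n_sat
```

Tabulating the simulation haloes once, a model evaluation then reduces to weighting the stored halo pair counts by these occupation numbers, which is what makes the parameter-space exploration fast.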
Ultraviolet radiation therapy and UVR dose models
Grimes, David Robert
2015-01-15
Ultraviolet radiation (UVR) has been an effective treatment for a number of chronic skin disorders, and its ability to alleviate these conditions has been well documented. Although nonionizing, exposure to ultraviolet (UV) radiation is still damaging to deoxyribonucleic acid integrity, and has a number of unpleasant side effects ranging from erythema (sunburn) to carcinogenesis. As the conditions treated with this therapy tend to be chronic, exposures are repeated and can be high, increasing the lifetime probability of an adverse event or mutagenic effect. Despite the potential detrimental effects, quantitative ultraviolet dosimetry for phototherapy is an underdeveloped area and better dosimetry would allow clinicians to maximize biological effect whilst minimizing the repercussions of overexposure. This review gives a history and insight into the current state of UVR phototherapy, including an overview of biological effects of UVR, a discussion of UVR production, illness treated by this modality, cabin design and the clinical implementation of phototherapy, as well as clinical dose estimation techniques. Several dose models for ultraviolet phototherapy are also examined, and the need for an accurate computational dose estimation method in ultraviolet phototherapy is discussed.
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weighted spherical-harmonic waveform modes ₋₂Yℓm resolved by the NR code up to ℓ = 8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2016-10-01
The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility of flowering or setting new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because this information is very scarce. Here, we evaluated the ability of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization results in much more accurate prediction of the latter, with, however, a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios, as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results point to the urgent need for extensive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707
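A minimal sketch of a sequential two-phase model of the kind discussed here: chill units accumulate until an endodormancy-break requirement is met, then growing degree-days accumulate until budbreak. The thresholds, unit definitions, and requirements below are illustrative, not the calibrated formulations of the study.

```python
def predict_budbreak(daily_temp, chill_req, forcing_req,
                     chill_threshold=7.2, base_temp=5.0):
    """Sequential two-phase phenology sketch. Phase 1: one chill unit per
    day below chill_threshold (deg C) until chill_req is met, breaking
    endodormancy. Phase 2: growing degree-days above base_temp until
    forcing_req is met. Returns (endodormancy_break_day, budbreak_day),
    with None for any requirement never met."""
    chill, forcing = 0.0, 0.0
    dormancy_day = None
    for day, t in enumerate(daily_temp):
        if dormancy_day is None:
            if t < chill_threshold:
                chill += 1.0
            if chill >= chill_req:
                dormancy_day = day
        else:
            forcing += max(0.0, t - base_temp)
            if forcing >= forcing_req:
                return dormancy_day, day
    return dormancy_day, None
```

A warm winter that never satisfies the chill requirement leaves both dates undefined, which is exactly the failure mode (delayed or compromised endodormancy break) that budbreak-only calibration can hide.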
Accurate protein structure modeling using sparse NMR data and homologous structure information
Thompson, James M.; Sgourakis, Nikolaos G.; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L.; Szyperski, Thomas; Montelione, Gaetano T.; Baker, David
2012-01-01
While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining 1H, 13C, and 15N backbone and 13Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2–1.9 Å relative to the conventionally determined NMR ensembles and of 0.9–1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without the need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments. PMID:22665781
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95 percent of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particle sizes vary from less than 0.2 micron to greater than 3.0 microns. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment used in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration over k-space of f(k) can be computed more quickly than the integration of k_nu over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
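The core of the exponential-sum idea is that the frequency-averaged band transmittance becomes a short weighted sum of exponentials, which is cheap to evaluate inside a multiple-scattering code. A minimal sketch, with hypothetical coefficients rather than a fitted Martian CO2 band:

```python
import math

def band_transmittance(u, k_terms, weights):
    """Exponential-sum (k-distribution) approximation to the mean band
    transmittance over absorber amount u:
        T(u) = sum_i w_i * exp(-k_i * u),  with sum_i w_i = 1.
    This replaces a costly line-by-line integration of k_nu over
    frequency by a sum over a few representative k values."""
    return sum(w * math.exp(-k * u) for k, w in zip(k_terms, weights))

# Illustrative 3-term fit (hypothetical values, not real CO2 data)
K = [0.1, 1.0, 10.0]   # representative absorption coefficients
W = [0.5, 0.3, 0.2]    # quadrature weights over the k-distribution f(k)
```

In practice the `(k_i, w_i)` pairs are fitted to line-by-line transmittances for each band, pressure, and temperature.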
Estimating solar radiation for plant simulation models
NASA Technical Reports Server (NTRS)
Hodges, T.; French, V.; Leduc, S.
1985-01-01
Five algorithms producing daily solar radiation surrogates using daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of accuracy of daily solar radiation estimates and terms of response when used in a plant growth simulation model (CERES-wheat). Requirements for accuracy of solar radiation for plant growth simulation models are discussed. One algorithm is recommended as being best suited for use in these models when neither measured nor satellite estimated solar radiation values are available.
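The abstract does not name the five algorithms, but a representative member of this family estimates daily solar radiation from the diurnal temperature range. A Bristow-Campbell-style sketch, with purely illustrative site coefficients:

```python
import math

def solar_radiation_estimate(tmax, tmin, ra, a=0.7, b=0.004, c=2.4):
    """Temperature-based daily solar radiation surrogate:
        Rs = Ra * a * (1 - exp(-b * dT**c))
    where dT = tmax - tmin is the diurnal temperature range and Ra is
    the extraterrestrial radiation for the day/latitude. a, b, c are
    empirical site coefficients; the defaults here are illustrative,
    not values from the evaluated algorithms."""
    dT = tmax - tmin
    return ra * a * (1.0 - math.exp(-b * dT ** c))
```

Large diurnal ranges (clear skies) map to estimates near the clear-sky ceiling `a * Ra`; small ranges (cloud, rain) map to low estimates.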
Coarse-grained red blood cell model with accurate mechanical properties, rheology and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George E
2009-01-01
We present a coarse-grained red blood cell (RBC) model with accurate and realistic mechanical properties, rheology and dynamics. The modeled membrane is represented by a triangular mesh which incorporates shear inplane energy, bending energy, and area and volume conservation constraints. The macroscopic membrane elastic properties are imposed through semi-analytic theory, and are matched with those obtained in optical tweezers stretching experiments. Rheological measurements characterized by time-dependent complex modulus are extracted from the membrane thermal fluctuations, and compared with those obtained from the optical magnetic twisting cytometry results. The results allow us to define a meaningful characteristic time of the membrane. The dynamics of RBCs observed in shear flow suggests that a purely elastic model for the RBC membrane is not appropriate, and therefore a viscoelastic model is required. The set of proposed analyses and numerical tests can be used as a complete model testbed in order to calibrate the modeled viscoelastic membranes to accurately represent RBCs in health and disease. PMID:19965026
High-fidelity chemistry and radiation modeling for oxy-combustion scenarios
NASA Astrophysics Data System (ADS)
Abdul Sater, Hassan A.
To account for the thermal and chemical effects associated with the high CO2 concentrations in an oxy-combustion atmosphere, several refined gas-phase chemistry and radiative property models have been formulated for laminar to highly turbulent systems. This thesis examines the accuracy of several chemistry and radiative property models employed in computational fluid dynamic (CFD) simulations of laminar to transitional oxy-methane diffusion flames by comparing their predictions against experimental data. The literature on chemistry and radiation modeling in oxy-combustion atmospheres has considered turbulent systems, where predictions are affected by the interplay and accuracies of the turbulence, radiation, and chemistry models. By considering a laminar system, we minimize the impact of turbulence and the uncertainties associated with turbulence models. In the first section of this thesis, an assessment and validation of gray and non-gray formulations of a recently proposed weighted-sum-of-gray-gases model in oxy-combustion scenarios was undertaken. Predictions of gas and wall temperatures and of flame lengths were in good agreement with experimental measurements. The temperature and flame length predictions were not sensitive to the radiative property model employed. However, there were significant variations between the gray and non-gray model radiant fraction predictions, with the variations generally increasing as the Reynolds number decreases, possibly attributable to shorter flames and steeper temperature gradients. The results of this section confirm that non-gray model predictions of radiative heat fluxes are more accurate than gray model predictions, especially at steeper temperature gradients. In the second section, the accuracies of three gas-phase chemistry models were assessed by comparing their predictions against experimental measurements of temperature, species concentrations, and flame lengths. The chemistry was modeled employing the Eddy
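The weighted-sum-of-gray-gases (WSGG) idea referenced above represents the total gas emissivity as a few fictitious gray gases plus a transparent gas. A minimal sketch; the coefficients below are illustrative, not a published CO2/H2O fit:

```python
import math

def wsgg_emissivity(pL, a, k):
    """Weighted-sum-of-gray-gases total emissivity for pressure
    path length pL (atm*m):
        eps = sum_i a_i * (1 - exp(-k_i * pL))
    a_i: temperature-dependent gray-gas weights (the transparent-gas
    weight is 1 - sum(a)); k_i: gray-gas absorption coefficients.
    In a non-gray RTE solve, each gray gas is transported separately;
    a 'gray' formulation collapses them to one effective coefficient."""
    return sum(ai * (1.0 - math.exp(-ki * pL)) for ai, ki in zip(a, k))

# Illustrative 3-gas set (hypothetical weights and coefficients)
A = [0.4, 0.3, 0.2]
K = [0.5, 5.0, 50.0]
```

The emissivity rises from 0 at zero path length toward the saturation value `sum(a)`, which is the qualitative behavior the gray/non-gray comparison in the thesis probes.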
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have derived analytic expressions for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied to the case of small genome length N, as well as to cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure, this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. PMID:15931680
NASA Astrophysics Data System (ADS)
Lazzeroni, Marta; Brahme, Anders
2015-09-01
In the present study we develop a new technique for the production of clean quasi-monochromatic 11C positron emitter beams for accurate radiation therapy and PET-CT dose delivery imaging and treatment verification. The 11C ion beam is produced by projectile fragmentation using a primary 12C ion beam. The practical elimination of the energy spread of the secondary 11C fragments and other beam-contaminating fragments is described. Monte Carlo calculations with the SHIELD-HIT10+ code and analytical methods for the transport of the ions in matter are used in the analysis. Production yields, as well as energy, velocity and magnetic rigidity distributions of the fragments generated in a cylindrical target, are scored as a function of depth within 1 cm thick slices for an optimal target consisting of a fixed 20 cm section of liquid hydrogen followed by a variable-thickness section of polyethylene. The wide energy and magnetic rigidity spread of the 11C ion beam can be reduced to values around 1% by using a variable monochromatizing wedge-shaped degrader in the beam line. Finally, magnetic rigidity and particle species selection, as well as discrimination of the particle velocity through a combined Time-of-Flight and Radio-Frequency-driven velocity filter, purify the beam of contaminating fragments of similar magnetic rigidity (mainly 7Be and 3He fragments). A beam purity of about 99% is expected with the combined method.
NASA Astrophysics Data System (ADS)
Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.
2016-06-01
We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 10 h Mpc-1, and we present theoretically motivated extensions to cover non-minimally coupled scalar fields, massive neutrinos and Vainshtein screened modified gravity models that result in few per cent accurate power spectra for k < 10 h Mpc-1. For chameleon screened models, we achieve only 10 per cent accuracy for the same range of scales. Finally, we use our halo model to investigate degeneracies between different extensions to the standard cosmological model, finding that the impact of baryonic feedback on the non-linear matter power spectrum can be considered independently of modified gravity or massive neutrino extensions. In contrast, considering the impact of modified gravity and massive neutrinos independently results in biased estimates of power at the level of 5 per cent at scales k > 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man
2014-04-15
Given the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained via improved-GA optimization, so that the global parallel searching capability of the stochastic algorithm and the fast local convergence of the deterministic algorithm are combined to enhance training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built with the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
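The hybrid GA-then-GD training scheme can be sketched on a toy single-neuron WNN. Everything below (the Mexican-hat wavelet, the GA operators, all hyperparameters, and the synthetic data standing in for measured SRM characteristics) is an illustrative assumption, not the paper's network:

```python
import math
import random

def mexican_hat(u):
    """Mexican-hat mother wavelet."""
    return (1.0 - u * u) * math.exp(-0.5 * u * u)

def predict(params, x):
    w, t, s = params  # output weight, translation, dilation
    return w * mexican_hat((x - t) / s)

def loss(params, data):
    return sum((predict(params, x) - y) ** 2 for x, y in data) / len(data)

def ga_search(data, pop_size=30, gens=40, seed=0):
    """Global stage: a simple GA (elitist selection + Gaussian
    mutation) locates a good basin for the network parameters."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2), rng.uniform(-2, 2), rng.uniform(0.2, 3.0)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: loss(p, data))
        elite = pop[:pop_size // 3]
        children = []
        while len(elite) + len(children) < pop_size:
            child = [g + rng.gauss(0.0, 0.1) for g in rng.choice(elite)]
            child[2] = max(child[2], 0.05)  # keep dilation positive
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: loss(p, data))

def gd_refine(params, data, lr=0.05, steps=300, eps=1e-5):
    """Local stage: gradient descent (numeric gradients) polishes
    the GA result for fast, accurate final convergence."""
    p = list(params)
    for _ in range(steps):
        base = loss(p, data)
        grad = []
        for i in range(len(p)):
            q = list(p)
            q[i] += eps
            grad.append((loss(q, data) - base) / eps)
        p = [pi - lr * gi for pi, gi in zip(p, grad)]
        p[2] = max(p[2], 0.05)
    return p

# Synthetic "measured" characteristic from known params (1.5, 0.3, 1.0);
# real SRM flux/torque data would replace this.
data = [(x / 10.0 - 2.0, predict((1.5, 0.3, 1.0), x / 10.0 - 2.0))
        for x in range(41)]
fitted = gd_refine(ga_search(data), data)
```

The division of labor mirrors the abstract: the GA supplies robust initial weights, and GD supplies the fast, precise local convergence.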
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
NASA Astrophysics Data System (ADS)
Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo
2014-04-01
We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z; Chen, Ronald C; Shen, Dinggang
2016-06-01
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531
Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL
NASA Astrophysics Data System (ADS)
Ciambur, B. C.
2015-09-01
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen as representative case studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher-order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxiness/diskiness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.
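The eccentric-anomaly parameterization can be illustrated with a schematic sampler: a pure ellipse is traced as x = a cos(E), y = b sin(E), and deviations from ellipticity are Fourier perturbations expressed in E rather than in the polar angle. This is a sketch of the idea only, not the ISOFIT implementation:

```python
import math

def isophote_point(a, b, E, harmonics=()):
    """Sample a quasi-elliptical isophote at eccentric anomaly E.
    a, b: semi-major and semi-minor axes.
    harmonics: iterable of (n, amplitude, phase) radial Fourier
    perturbations in E; e.g. an n = 4 cosine term captures
    boxy (negative amplitude) or disky (positive) shapes."""
    dr = sum(amp * math.cos(n * E + phase) for n, amp, phase in harmonics)
    return (a + dr) * math.cos(E), (b + dr) * math.sin(E)
```

Because the perturbations live in E, they track arc length around a flattened ellipse far more evenly than polar-angle harmonics do, which is what removes the cross-like residual artifacts.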
A Method for Accurate in silico modeling of Ultrasound Transducer Arrays
Guenther, Drake A.; Walker, William F.
2009-01-01
This paper presents a new approach to improve the in silico modeling of ultrasound transducer arrays. While current simulation tools accurately predict the theoretical element spatio-temporal pressure response, transducers do not always behave as theorized. In practice, using the probe's physical dimensions and published specifications in silico often results in unsatisfactory agreement between simulation and experiment. We describe a general optimization procedure used to maximize the correlation between the observed and simulated spatio-temporal response of a pulsed single element in a commercial ultrasound probe. A linear systems approach is employed to model element angular sensitivity, lens effects, and diffraction phenomena. A numerical deconvolution method is described to characterize the intrinsic electro-mechanical impulse response of the element. Once the response of the element and optimal element characteristics are known, prediction of the pressure response for arbitrary apertures and excitation signals is performed through direct convolution using available tools. We achieve a correlation of 0.846 between the experimental emitted waveform and the simulated waveform when using the probe's physical specifications in silico. A far superior correlation of 0.988 is achieved when using the optimized in silico model. Electronic noise appears to be the main effect preventing the realization of higher correlation coefficients. More accurate in silico modeling will improve the evaluation and design of ultrasound transducers as well as aid in the development of sophisticated beamforming strategies. PMID:19041997
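The numerical deconvolution step, recovering an impulse response from a measured output and a known excitation, is commonly done by regularized spectral division. A Wiener-style sketch (the regularization constant and synthetic signals are illustrative; the paper's exact deconvolution scheme may differ):

```python
import numpy as np

def wiener_deconvolve(measured, excitation, noise_reg=1e-3):
    """Recover an impulse response h from measured = h * excitation
    (convolution) by regularized spectral division:
        H = Y * conj(X) / (|X|^2 + noise_reg)
    noise_reg guards against division by near-zero spectral bins."""
    n = len(measured)
    X = np.fft.rfft(excitation, n)   # zero-padded to length n
    Y = np.fft.rfft(measured, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + noise_reg)
    return np.fft.irfft(H, n)

# Synthetic check: a known impulse response convolved with a short pulse
h_true = np.array([0.0, 1.0, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0])
x = np.array([1.0, -0.5])
y = np.convolve(h_true, x)[:len(h_true)]
h_est = wiener_deconvolve(y, x, noise_reg=1e-6)
```

With noisy experimental waveforms, `noise_reg` trades bias against noise amplification; the electronic noise mentioned in the abstract is exactly what this term suppresses.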
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
Radiation-induced myeloid leukemia in murine models
2014-01-01
The use of radiation therapy is a cornerstone of modern cancer treatment. The number of patients that undergo radiation as a part of their therapy regimen is only increasing every year, but this does not come without cost. As this number increases, so too does the incidence of secondary, radiation-induced neoplasias, creating a need for therapeutic agents targeted specifically towards incidence reduction and treatment of these cancers. Development and efficacy testing of these agents requires not only extensive in vitro testing but also a set of reliable animal models to accurately recreate the complex situations of radiation-induced carcinogenesis. As radiation-induced leukemic progression often involves genomic changes such as rearrangements, deletions, and changes in methylation, the laboratory mouse Mus musculus, with its fully sequenced genome, is a powerful tool in cancer research. This fact, combined with the molecular and physiological similarities it shares with man and its small size and high rate of breeding in captivity, makes it the most relevant model to use in radiation-induced leukemia research. In this work, we review relevant M. musculus inbred and F1 hybrid animal models, as well as methods of induction of radiation-induced myeloid leukemia. Associated molecular pathologies are also included. PMID:25062865
Session on modeling of radiative transfer processes
NASA Technical Reports Server (NTRS)
Flatau, Piotr
1993-01-01
The session on modeling of radiative transfer processes is reviewed. Six critical issues surfaced in the discussion concerning scale-interactive radiative processes relevant to mesoscale convective systems (MCSs). These issues are the need to expand basic knowledge of how MCSs influence climate through extensive cloud shields and increased humidity in the upper troposphere; to improve radiation parameterizations used in mesoscale and general circulation models (GCMs); to improve our basic understanding of the influence of radiation on MCS dynamics due to diabatic heating, production of condensate, and vertical and horizontal heat fluxes; to quantify our understanding of the radiative impacts of MCSs on the surface and free-atmosphere energy budgets; to quantify and identify radiative and microphysical processes important in the evolution of MCSs; and to improve the capability to remotely sense MCS radiative properties from space- and ground-based systems.
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756
A Computational Model of Cellular Response to Modulated Radiation Fields
McMahon, Stephen J.; Butterworth, Karl T.; McGarry, Conor K.; Trainor, Colman; O'Sullivan, Joe M.; Hounsell, Alan R.; Prise, Kevin M.
2012-09-01
Purpose: To develop a model to describe the response of cell populations to spatially modulated radiation exposures of relevance to advanced radiotherapies. Materials and Methods: A Monte Carlo model of cellular radiation response was developed. This model incorporated damage from both direct radiation and intercellular communication including bystander signaling. The predictions of this model were compared to previously measured survival curves for a normal human fibroblast line (AGO1522) and prostate tumor cells (DU145) exposed to spatially modulated fields. Results: The model was found to be able to accurately reproduce cell survival both in populations which were directly exposed to radiation and those which were outside the primary treatment field. The model predicts that the bystander effect makes a significant contribution to cell killing even in uniformly irradiated cells. The bystander effect contribution varies strongly with dose, falling from a high of 80% at low doses to 25% and 50% at 4 Gy for AGO1522 and DU145 cells, respectively. This was verified using the inducible nitric oxide synthase inhibitor aminoguanidine to inhibit the bystander effect in cells exposed to different doses, which showed significantly larger reductions in cell killing at lower doses. Conclusions: The model presented in this work accurately reproduces cell survival following modulated radiation exposures, both in and out of the primary treatment field, by incorporating a bystander component. In addition, the model suggests that the bystander effect is responsible for a significant portion of cell killing in uniformly irradiated cells, 50% and 70% at doses of 2 Gy in AGO1522 and DU145 cells, respectively. This description is a significant departure from accepted radiobiological models and may have a significant impact on optimization of treatment planning approaches if proven to be applicable in vivo.
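The dose dependence of the bystander contribution described above can be illustrated with a simple direct-plus-bystander survival model. The functional forms and all parameter values below are illustrative stand-ins, not the fitted AGO1522/DU145 parameters from the study:

```python
import math

def surviving_fraction(dose, alpha=0.2, beta=0.05, by_max=0.3, by_k=1.0):
    """Illustrative survival model: direct killing follows the
    linear-quadratic model, multiplied by a bystander term whose
    killing saturates with dose:
        S = exp(-(alpha*d + beta*d^2)) * (1 - by_max*(1 - exp(-by_k*d)))
    """
    direct = math.exp(-(alpha * dose + beta * dose * dose))
    bystander = 1.0 - by_max * (1.0 - math.exp(-by_k * dose))
    return direct * bystander

def bystander_share(dose, **kw):
    """Fraction of total killing attributable to the bystander term."""
    s = surviving_fraction(dose, **kw)
    s_direct = surviving_fraction(dose, **{**kw, "by_max": 0.0})
    total_kill = 1.0 - s
    return (s_direct - s) / total_kill if total_kill > 0 else 0.0
```

Because the bystander term saturates while direct (especially quadratic) killing keeps growing, the bystander share of total killing is largest at low doses, which is the qualitative trend the Monte Carlo model and the aminoguanidine experiments both show.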
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks
Fu, Jun-Song; Liu, Yun
2015-01-01
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. Then, the results are sent to the base station, where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads are added to the blacklist, and the cluster heads must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and delete compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
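The base-station consistency check at the heart of DCHM can be sketched as follows. The dissimilarity measure and threshold semantics here are illustrative; the paper's exact coefficient may be defined differently:

```python
def dissimilarity(fusion_a, fusion_b):
    """Illustrative dissimilarity coefficient: mean absolute
    difference between the two cluster heads' fusion results."""
    if not fusion_a or len(fusion_a) != len(fusion_b):
        raise ValueError("fusion vectors must be non-empty, equal length")
    return sum(abs(a - b) for a, b in zip(fusion_a, fusion_b)) / len(fusion_a)

def check_cluster(fusion_a, fusion_b, threshold):
    """Base-station decision: accept the fused data if the two
    independently computed results agree; otherwise blacklist the
    cluster heads and trigger re-election within the cluster."""
    if dissimilarity(fusion_a, fusion_b) > threshold:
        return "blacklist_and_reelect"
    return "accept"
```

Because a single compromised head cannot forge both results, disagreement above the user-set threshold is treated as evidence of compromise.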
NASA Astrophysics Data System (ADS)
Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.
2012-11-01
A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.
NASA Astrophysics Data System (ADS)
Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.
2012-07-01
Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high-resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of the resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to the continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie points and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can substantially reduce the processing time for stereo matching. In this paper, an approach is presented that allows all of these steps to be performed fully automatically. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1992-01-01
The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
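The key property, that higher atomic moments can be added to any charge model while preserving the molecular moments, can be illustrated numerically. The geometry, charges, and "true" molecular dipole below are invented for the demonstration, and distributing the residual dipole evenly over the atoms is a simplification; real CAMM derives the atomic moments from the electron density.

```python
import numpy as np

# Hypothetical CO geometry (bond along z) and an illustrative charge model;
# all numbers are made up for the demonstration.
positions = np.array([[0.0, 0.0, 0.0],     # C
                      [0.0, 0.0, 2.132]])  # O
charges   = np.array([+0.2, -0.2])         # point-charge model (assumed)
mu_mol    = np.array([0.0, 0.0, 0.06])     # "true" molecular dipole (assumed)

# Charge-only contribution to the molecular dipole
mu_from_charges = (charges[:, None] * positions).sum(axis=0)

# CAMM-style correction: atomic dipoles absorb the residual, so the
# molecular dipole is reproduced exactly whatever charges were chosen.
residual = mu_mol - mu_from_charges
atomic_dipoles = np.tile(residual / len(charges), (len(charges), 1))

mu_reconstructed = mu_from_charges + atomic_dipoles.sum(axis=0)
```

Whatever atomic charges are used, the supplementary atomic dipoles make the total molecular dipole exact, which is the sense in which CAMM "preserves the corresponding molecular moments".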
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
An accurate and comprehensive model of thin fluid flows with inertia on curved substrates
NASA Astrophysics Data System (ADS)
Roberts, A. J.; Li, Zhenquan
2006-04-01
Consider the three-dimensional flow of a viscous Newtonian fluid upon a curved two-dimensional substrate when the fluid film is thin, as occurs in many draining, coating and biological flows. We derive a comprehensive model of the dynamics of the film, the model being expressed in terms of the film thickness η and the average lateral velocity ū. Centre manifold theory assures us that the model accurately and systematically includes the effects of the curvature of the substrate, gravitational body force, fluid inertia and dissipation. The model resolves wavelike phenomena in the dynamics of viscous fluid flows over arbitrarily curved substrates such as cylinders, tubes and spheres. We briefly illustrate its use in simulating drop formation on cylindrical fibres, wave transitions, three-dimensional instabilities, Faraday waves, viscous hydraulic jumps, flow vortices in a compound channel and flow down and up a step. These are the most complete models for the thin-film flow of a Newtonian fluid; many other thin-film models can be obtained by different restrictions and truncations of the model derived here.
Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Gu, Lizhi
2015-09-01
The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these modeling methods have shortcomings, such as a large amount of calculation, complex processes, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of a spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm for the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and described digitally, and coupling coalescence of the surfaces with multi-coupling point clusters is performed under the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter using the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and
Predictive models of radiative neutrino masses
NASA Astrophysics Data System (ADS)
Julio, J.
2016-06-01
We discuss two models of radiative neutrino mass generation. The first is a one-loop Zee model with a Z4 symmetry. The second is a two-loop neutrino mass model with singly- and doubly-charged scalars. Both models fit neutrino oscillation data well and predict interesting rates for lepton flavor violation processes.
NASA Astrophysics Data System (ADS)
Selcuk, Nevin
1993-02-01
Four flux-type models for radiative heat transfer in cylindrical configurations were applied to the prediction of the radiative flux density and the radiative energy source term of a cylindrical enclosure problem based on data reported previously for a pilot-scale experimental combustor with steep temperature gradients. The models, namely the Schuster-Hamaker type four-flux model of Lockwood and Spalding, two Schuster-Schwarzschild type four-flux models due to Siddall and Selcuk and to Richter and Quack, and the spherical harmonics approximation, were evaluated for predictive accuracy by comparing their predictions with exact solutions produced previously. The comparisons showed that the spherical harmonics approximation produces more accurate results than the other models for the radiative energy source term, and that the four-flux models of Lockwood and Spalding and of Siddall and Selcuk for an isotropic radiation field are more accurate for the prediction of the radiative flux density to the side wall.
Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary, rendering the previous models inaccurate. In the course of this research, it has been demonstrated that, using the simulation code MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various
NASA Astrophysics Data System (ADS)
Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.
2015-04-01
We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples; the Milky Way thin and thick disc, and a discy dwarf galaxy.
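The construction described above can be sketched directly: a single Miyamoto-Nagai disc has the closed-form potential Φ(R, z) = -GM / sqrt(R² + (a + sqrt(z² + b²))²), and the model sums three such discs. The (M, a, b) triplets below are placeholders for illustration, not the paper's fitted parameters.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def mn_potential(R, z, M, a, b):
    """Potential of a single Miyamoto-Nagai disc at cylindrical (R, z)."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def triple_mn_potential(R, z, components):
    """Sum of three Miyamoto-Nagai discs approximating an exponential disc.
    The (M, a, b) triplets would come from the paper's fitting tables;
    the values below are placeholders, not the published parameters."""
    return sum(mn_potential(R, z, M, a, b) for M, a, b in components)

# Placeholder components; note that the fitting framework permits
# negative-mass terms in the combination.
components = [(5.0e10, 3.0, 0.3), (-2.0e10, 6.0, 0.3), (1.0e10, 1.5, 0.3)]
phi_solar = triple_mn_potential(8.0, 0.0, components)
```

Because each term is a simple algebraic expression, the summed potential is fully analytic and differentiable everywhere, which is exactly the property that makes it convenient for N-body and orbit integration codes.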
Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.
Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit
2015-05-01
A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m(2)) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m(2)) and 4 temperatures (10-30 °C), yielding an accuracy of ± 15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full-scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model herein validated also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of only requiring one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step for refining feasibility assessments of algae biotechnologies. PMID:25502920
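A productivity model coupling light and temperature, as described above, can be sketched with standard response curves. The functional forms (a Steele-type photoinhibition curve and the cardinal temperature model) and all parameter values here are assumptions for illustration, not the paper's fitted C. vulgaris parameterization.

```python
import math

def light_response(I, I_opt=250.0):
    """Steele-type photoinhibition curve; peaks at I = I_opt (W/m^2).
    Illustrative functional form, not the paper's."""
    return (I / I_opt) * math.exp(1.0 - I / I_opt)

def temperature_response(T, T_min=2.0, T_opt=28.0, T_max=42.0):
    """Cardinal temperature model with inflexion (CTMI); the cardinal
    temperatures here are assumed, not fitted values."""
    if T <= T_min or T >= T_max:
        return 0.0
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                             - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
    return num / den

def productivity(I, T, P_max=1.0):
    """Productivity accounting for BOTH light and temperature, the point
    the abstract makes against light-only models."""
    return P_max * light_response(I) * temperature_response(T)
```

With this structure, two cultures at the same light intensity but different temperatures yield different productivities, which light-only models cannot capture.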
Radiation dose modeling using IGRIP and Deneb/ERGO
Vickers, D.S.; Davis, K.R.; Breazeal, N.L.; Watson, R.A.; Ford, M.S.
1995-12-31
The Radiological Environment Modeling System (REMS) quantifies dose to humans in radiation environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO (Ergonomics) simulation software products. These commercially available products are augmented with custom C code to provide the radiation exposure information to, and collect the radiation dose information from, the workcell simulations. The emphasis of this paper is on the IGRIP and Deneb/ERGO parts of REMS, since they represent the authors' extension of existing capabilities. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these database files to compute and accumulate dose to human devices (Deneb's ERGO human) during simulated operations around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. REMS was developed because the proposed reduction in the yearly radiation exposure limit will preclude, or require changes in, many of the manual operations currently utilized in the Weapons Complex. This is particularly relevant in the area of dismantlement activities at the Pantex Plant in Amarillo, TX. Therefore, a capability was needed to quantify the dose associated with certain manual processes so that the benefits of automation could be identified and understood.
Davis, J.L.; Grant, J.W.
2014-01-01
Anatomically correct turtle utricle geometry was incorporated into two finite element models. The geometrically accurate model included an appropriately shaped macular surface and otoconial layer, compact gel and column filament (or shear) layer thicknesses and thickness distributions. The first model included a shear layer where the effects of hair bundle stiffness were included as part of the shear layer modulus. This solid model's undamped natural frequency was matched to an experimentally measured value. This frequency match established a realistic value of the effective shear layer Young's modulus of 16 Pascals. We feel this is the most accurate prediction of this shear layer modulus, and it fits with other estimates (Kondrachuk, 2001b). The second model incorporated only beam elements in the shear layer to represent hair cell bundle stiffness. The beam element stiffnesses were further distributed to represent their location on the neuroepithelial surface. Experimentally measured mean stiffness values of striolar hair cell bundles were used in the striolar region, and mean stiffness values of extrastriolar hair cell bundles were used in the extrastriolar region. The results from this second model indicated that hair cell bundle stiffness contributes approximately 40% to the overall stiffness of the shear layer-hair cell bundle complex. This analysis shows that high-mass saccules, in general, achieve high gain at the sacrifice of frequency bandwidth. We propose that the mechanism by which this is achieved is an increase in otoconial layer mass. The theoretical difference in gain (deflection per acceleration) is shown for saccules with large otoconial layer mass relative to saccules and utricles with small otoconial layer mass. Also discussed is the necessity for these high-mass saccules to increase their overall system shear layer stiffness. Undamped natural frequencies and mode shapes for these sensors are shown. PMID:25445820
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2016-01-01
In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate for directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedures for error sources but still promises accurate 3D measurement. It is realized by a mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using a least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval for spatially isolated objects. The results demonstrate the validity and accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553
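The least-squares estimation step can be sketched on synthetic data. The actual extended model maps phase to full 3D coordinates; the sketch below fits a simpler 1D phase-to-height polynomial, and all numbers are invented, to show how a fitted mapping absorbs the error sources without modeling each one explicitly.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data: measured phase vs. known height from a
# calibration target (all values invented for the demonstration).
phase = np.linspace(0.0, 20.0, 50)
height_true = 0.5 + 1.2 * phase - 0.01 * phase**2   # "unknown" system
height_meas = height_true + rng.normal(0.0, 0.005, phase.size)

# Least-squares estimate of the extended-model parameters. The fitted
# mapping converts any measured phase to height, absorbing distortion,
# defocus and nonsinusoidality effects into the polynomial coefficients.
A = np.column_stack([np.ones_like(phase), phase, phase**2])
params, *_ = np.linalg.lstsq(A, height_meas, rcond=None)
residual_rms = np.sqrt(np.mean((A @ params - height_meas) ** 2))
```

Once calibrated, measurement requires only evaluating the fitted mapping, which is the sense in which the approach "avoids the complex and laborious compensation procedure".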
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
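One common way to launch photons so that the bundle obeys Gaussian-beam statistics is sketched below. This is an illustrative recipe, not necessarily the authors' exact scheme: each photon aims at a point drawn from the waist-plane profile rather than at a single focal point, which is what removes the unphysical point focus of naive ray launching.

```python
import numpy as np

rng = np.random.default_rng(0)

def launch_gaussian_photons(n, w0, wavelength, z_launch):
    """Sample photon launch positions and directions reproducing a
    Gaussian beam with waist w0 at z = 0 (illustrative sketch)."""
    z_R = np.pi * w0**2 / wavelength                   # Rayleigh range
    w_z = w0 * np.sqrt(1.0 + (z_launch / z_R) ** 2)    # 1/e^2 radius at launch
    # Intensity-weighted transverse positions at the launch plane
    xy0 = rng.normal(0.0, w_z / 2.0, size=(n, 2))
    # Each photon targets a point drawn from the waist-plane profile,
    # so the bundle converges to a spot of radius ~w0, not to a point.
    xy_waist = rng.normal(0.0, w0 / 2.0, size=(n, 2))
    d = np.column_stack([xy_waist - xy0, np.full(n, -z_launch)])
    d /= np.linalg.norm(d, axis=1, keepdims=True)      # unit directions
    return xy0, d

positions, directions = launch_gaussian_photons(20000, 1.0, 0.5, 10.0)
```

Each photon then propagates through the turbid medium exactly as in a traditional Monte Carlo code, which is why the modification can be dropped into existing simulators with minimal effort.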
Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.
2016-01-01
The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761
NASA Technical Reports Server (NTRS)
Kopasakis, George
2014-01-01
The presentation covers a recently developed methodology for modeling atmospheric turbulence as a disturbance for aero vehicle gust loads and for controls development, such as flutter and inlet shock position control. The approach models atmospheric turbulence in its natural fractional-order form, which provides more accuracy than traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation for and the methodology used to develop the fractional-order atmospheric turbulence modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.
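The contrast between integer-order and fractional-order turbulence spectra can be made concrete with the standard longitudinal gust power spectral densities. The Dryden form is rational (integer-order), while the von Karman form carries the 5/6 fractional exponent that a fractional-order model preserves; parameter values in the comparison are arbitrary.

```python
import numpy as np

def dryden_u_psd(omega, sigma, L, V):
    """Dryden longitudinal gust PSD: rational, integer-order form."""
    x = L * omega / V
    return sigma**2 * (2.0 * L / (np.pi * V)) / (1.0 + x**2)

def von_karman_u_psd(omega, sigma, L, V):
    """von Karman longitudinal gust PSD; the 5/6 exponent is the
    fractional-order behaviour that rational approximations miss."""
    x = 1.339 * L * omega / V
    return sigma**2 * (2.0 * L / (np.pi * V)) / (1.0 + x**2) ** (5.0 / 6.0)
```

At low frequency the two spectra agree, but at high frequency the Dryden PSD rolls off as omega^-2 while the von Karman PSD rolls off as omega^-5/3, so a rational model underestimates high-frequency gust energy, the regime most relevant for high-speed vehicles.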
Near-Earth Space Radiation Models
NASA Technical Reports Server (NTRS)
Xapsos, Michael A.; O'Neill, Patrick M.; O'Brien, T. Paul
2012-01-01
Review of models of the near-Earth space radiation environment is presented, including recent developments in trapped proton and electron, galactic cosmic ray and solar particle event models geared toward spacecraft electronics applications.
Method for modeling radiative transport in luminescent particulate media.
Hughes, Michael D; Borca-Tasciuc, Diana-Andra; Kaminski, Deborah A
2016-04-20
Modeling radiative transport in luminescent particulate media is important to a variety of applications, from biomedical imaging to solar power harvesting. When absorption and scattering from individual particles must be considered, the description of radiative transport is not straightforward. For large particles and interparticle spacing, geometrical optics can be employed. However, this approach requires accurate knowledge of several particle properties, such as index of refraction and absorption coefficient, along with particle geometry and positioning. Because the determination of these variables is often nontrivial, we developed an approach for modeling radiative transport in such media, which combines two simple experiments with Monte Carlo simulations to determine the particle extinction coefficient (Γ) and the probability of absorption of light by a particle (P_{A}). The method is validated on samples consisting of luminescent phosphor powder dispersed in a silicone matrix. PMID:27140095
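The two-parameter description above (an extinction coefficient and a per-particle absorption probability) lends itself to a very compact Monte Carlo. The sketch below reduces the geometry to one dimension and treats re-emission as isotropic scattering, both deliberate simplifications relative to the real model.

```python
import math
import random

def transmittance(gamma, p_abs, thickness, n_photons=100000, seed=1):
    """1-D Monte Carlo sketch: photons take exponential free paths set by
    the extinction coefficient gamma; at each particle encounter the
    photon is absorbed with probability p_abs, otherwise re-emitted
    isotropically (reduced here to +/- directions in 1-D)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        z, direction = 0.0, 1.0
        while True:
            z += direction * (-math.log(rng.random()) / gamma)
            if z >= thickness:
                transmitted += 1
                break
            if z <= 0.0:
                break  # escaped through the front face
            if rng.random() < p_abs:
                break  # absorbed by a particle
            direction = rng.choice((-1.0, 1.0))
    return transmitted / n_photons
```

As a sanity check, setting p_abs = 1 removes scattering entirely and the simulated transmittance collapses to the Beer-Lambert value exp(-gamma * thickness).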
Burward-Hoy, J. M.; Geist, W. H.; Krick, M. S.; Mayo, D. R.
2004-01-01
Neutron multiplicity counting is a technique for the rapid, nondestructive measurement of plutonium mass in pure and impure materials. This technique is very powerful because it uses the measured coincidence count rates to determine the sample mass without requiring a set of representative standards for calibration. Interpreting measured singles, doubles, and triples count rates using the three-parameter standard point model accurately determines plutonium mass, neutron multiplication, and the ratio of (α,n) to spontaneous-fission neutrons (alpha) for oxides of moderate mass. However, underlying standard point model assumptions, including constant neutron energy and constant multiplication throughout the sample, cause significant biases in the mass, multiplication, and alpha in measurements of metal and large, dense oxides.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, J. A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Highly physical penumbra solar radiation pressure modeling with atmospheric effects
NASA Astrophysics Data System (ADS)
Robertson, Robert; Flury, Jakob; Bandikova, Tamara; Schilling, Manuel
2015-10-01
We present a new method for highly physical solar radiation pressure (SRP) modeling in Earth's penumbra. The fundamental geometry and approach mirror past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. However, we aim to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects are tabulated to significantly reduce computational cost. We present new, more efficient and accurate approaches to modeling atmospheric effects, which allow us to consider the high spatial and temporal variability in lower atmospheric conditions. Modeled penumbra SRP accelerations for the Gravity Recovery and Climate Experiment (GRACE) satellites are compared to the sub-nm/s^2 precision GRACE accelerometer data. Comparisons to accelerometer data and a traditional penumbra SRP model illustrate the improved accuracy that our methods provide. Sensitivity analyses illustrate the significance of various atmospheric parameters and modeled effects on penumbra SRP. While this model is more complex than a traditional penumbra SRP model, we demonstrate its utility and propose that a highly physical model which considers atmospheric effects should be the basis for any simplified approach to penumbra SRP modeling.
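The traditional conical-shadow baseline that the ray-based model refines can be sketched from circle-overlap geometry: the SRP acceleration is scaled by the fraction of the solar disc left unocculted by the Earth. This is the classical textbook construction, not the authors' ray-tracing model, and the spacecraft parameters below are placeholders.

```python
import math

def shadow_factor(r_sun_app, r_earth_app, sep):
    """Fraction of the solar disc visible from the spacecraft (0 = umbra,
    1 = full Sun), given the apparent angular radii of the Sun and Earth
    and the angular separation of their centres (all in radians)."""
    a, b, c = r_sun_app, r_earth_app, sep
    if c >= a + b:
        return 1.0                    # no occultation
    if c <= b - a:
        return 0.0                    # total eclipse (umbra)
    if c <= a - b:
        return 1.0 - (b / a) ** 2     # Earth disc fully inside Sun disc
    x = (c * c + a * a - b * b) / (2.0 * c)
    y = math.sqrt(a * a - x * x)
    area = (a * a * math.acos(x / a)
            + b * b * math.acos((c - x) / b)
            - c * y)                  # lens-shaped overlap area
    return 1.0 - area / (math.pi * a * a)

def srp_acceleration(nu, flux=1361.0, cr=1.3, area_to_mass=0.01):
    """SRP acceleration magnitude [m/s^2] scaled by the shadow factor;
    cr and area_to_mass are placeholder spacecraft values."""
    return nu * (flux / 299792458.0) * cr * area_to_mass
```

The ray-based penumbra model in effect replaces this single shadow factor with a sum over many rays, each refracted and attenuated by the atmosphere, which is where the accuracy gain against the GRACE accelerometer data comes from.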
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979
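The reduced-order idea behind such a surrogate can be illustrated on a toy one-parameter waveform family (a hypothetical sketch, not the actual NR surrogate; the damped sinusoids and the frequency parameter q below merely stand in for waveforms and mass ratio): build a linear basis from training waveforms via an SVD, then interpolate the projection coefficients across parameter space.

```python
import numpy as np

# Toy one-parameter waveform family standing in for NR waveforms
# (hypothetical damped sinusoids; q plays the role of the mass ratio).
t = np.linspace(0.0, 10.0, 500)
def waveform(q):
    return np.exp(-0.1 * t) * np.sin(q * t)

# Training set sampled across the parameter range.
q_train = np.linspace(1.0, 3.0, 60)
W = np.vstack([waveform(q) for q in q_train])       # (60, 500)

# Reduced basis from an SVD of the training set.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 20
basis = Vt[:r]                                      # (r, 500)

# Project onto the basis and interpolate coefficients over q:
# this interpolant IS the surrogate, evaluated nearly instantly.
coeffs = W @ basis.T                                # (60, r)
def surrogate(q):
    c = np.array([np.interp(q, q_train, coeffs[:, j]) for j in range(r)])
    return c @ basis

# Accuracy at a parameter value not in the training set.
q_test = 2.17
err = (np.linalg.norm(surrogate(q_test) - waveform(q_test))
       / np.linalg.norm(waveform(q_test)))
print(f"relative surrogate error: {err:.2e}")
```

The actual construction uses greedy basis selection and empirical interpolation rather than plain SVD plus linear interpolation, but the cost structure is the same: all expensive simulation happens offline, and evaluation reduces to a small matrix-vector product.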
Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit: they have large numbers of heterogeneous parameters, are non-linear, and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems-level properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
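The core trick of Approximate Bayesian Computation is to replace likelihood evaluation with simulation plus a distance threshold. A minimal rejection-ABC sketch on a toy one-parameter decay model (hypothetical rates and tolerances, not the methionine-cycle model, which would use a more sophisticated sequential sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: metabolite decay x(t) = x0 * exp(-k t) with a
# true rate k = 0.5 plus measurement noise (toy system).
t_obs = np.linspace(0.0, 5.0, 20)
k_true = 0.5
data = 10.0 * np.exp(-k_true * t_obs) + rng.normal(0.0, 0.1, t_obs.size)

def simulate(k):
    return 10.0 * np.exp(-k * t_obs)

# Rejection ABC: draw k from the prior, keep draws whose simulated
# trajectory lies within tolerance eps of the data. No likelihood
# function is ever evaluated.
prior = rng.uniform(0.0, 2.0, 20000)
eps = 1.0
accepted = [k for k in prior if np.linalg.norm(simulate(k) - data) < eps]

posterior = np.array(accepted)
print(f"posterior mean k = {posterior.mean():.3f} "
      f"({posterior.size} of {prior.size} draws accepted)")
```

Shrinking eps tightens the approximation to the true posterior at the cost of a lower acceptance rate, which is exactly the trade-off that sequential ABC schemes manage adaptively.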
An Accurate Model for Biomolecular Helices and Its Application to Helix Visualization
Wang, Lincong; Qiao, Hui; Cao, Chen; Xu, Shutan; Zou, Shuxue
2015-01-01
Helices are the most abundant secondary structural elements in proteins and the structural forms assumed by double stranded DNAs (dsDNA). Though the mathematical expression for a helical curve is simple, none of the previous models for the biomolecular helices in either proteins or DNAs use a genuine helical curve, likely because of the complexity of fitting backbone atoms to helical curves. In this paper we model a helix as a series of different but all bona fide helical curves; each one best fits the coordinates of four consecutive backbone Cα atoms for a protein or P atoms for a DNA molecule. An implementation of the model demonstrates that it is more accurate than the previous ones for the description of the deviation of a helix from a standard helical curve. Furthermore, the accuracy of the model makes it possible to correlate deviations with structural and functional significance. When applied to helix visualization, the ribbon diagrams generated by the model are less choppy or have smaller side chain detachment than those by the previous visualization programs that typically model a helix as a series of low-degree splines. PMID:26126117
Dean, J; Welsh, L; Gulliford, S; Harrington, K; Nutting, C
2014-06-01
Purpose: The significant morbidity caused by radiation-induced acute oral mucositis means that studies aiming to elucidate dose-response relationships in this tissue are a high priority. However, there is currently no standardized method for delineating the mucosal structures within the oral cavity. This report describes the development of a methodology to delineate the oral mucosa accurately on CT scans in a semi-automated manner. Methods: An oral mucosa atlas for automated segmentation was constructed using the RayStation Atlas-Based Segmentation (ABS) module. A radiation oncologist manually delineated the full surface of the oral mucosa on a planning CT scan of a patient receiving radiotherapy (RT) to the head and neck region. A 3 mm fixed annulus was added to incorporate the mucosal wall thickness. This structure was saved as an atlas template. ABS followed by model-based segmentation was performed on four further patients sequentially, adding each patient to the atlas. Manual editing of the automatically segmented structure was performed. A dose comparison between these contours and previously used oral cavity volume contours was performed. Results: The new approach was successful in delineating the mucosa, as assessed by an experienced radiation oncologist, when applied to a new series of patients receiving head and neck RT. Reductions in the mean doses obtained when using the new delineation approach, compared with the previously used technique, were demonstrated for all patients (median: 36.0%, range: 25.6%–39.6%) and were of a magnitude that might be expected to be clinically significant. Differences in the maximum dose that might reasonably be expected to be clinically significant were observed for two patients. Conclusion: The method developed provides a means of obtaining the dose distribution delivered to the oral mucosa more accurately than has previously been achieved. This will enable the acquisition of high quality dosimetric data for use in
A rapid radiative transfer model for reflection of solar radiation
NASA Technical Reports Server (NTRS)
Xiang, X.; Smith, E. A.; Justus, C. G.
1994-01-01
A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands of times faster than that of the more precise model when its stream resolution is set to generate precise calculations.
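The delta-function transformation underlying this class of models is compact enough to show directly. A sketch of the standard delta-Eddington scaling of layer optical properties, taking the truncated forward-peak fraction as f = g² (the Sobolev-based radiance expansion itself is not reproduced here):

```python
def delta_scale(tau, omega, g):
    """Delta-Eddington scaling of layer optical properties.

    A fraction f = g**2 of the scattering is treated as unscattered
    forward radiation; optical depth tau, single-scattering albedo
    omega, and asymmetry factor g are rescaled accordingly before
    being passed to the two-stream solution.
    """
    f = g * g
    tau_p = (1.0 - omega * f) * tau
    omega_p = (1.0 - f) * omega / (1.0 - omega * f)
    g_p = (g - f) / (1.0 - f)
    return tau_p, omega_p, g_p

# Example: a moderately thick, strongly forward-scattering cloud layer.
tau_p, omega_p, g_p = delta_scale(tau=10.0, omega=0.99, g=0.85)
print(tau_p, omega_p, g_p)
```

The scaling sharply reduces the effective optical depth and asymmetry, which is why low-order stream expansions become accurate for strongly peaked phase functions.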
Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.
Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek
2016-02-01
Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674
Are Quasi-Steady-State Approximated Models Suitable for Quantifying Intrinsic Noise Accurately?
Sengupta, Dola; Kar, Sandip
2015-01-01
Large gene regulatory networks (GRNs) are often modeled with the quasi-steady-state approximation (QSSA) to reduce the huge computational time required for intrinsic noise quantification using the Gillespie stochastic simulation algorithm (SSA). However, the question remains whether a stochastic QSSA model measures intrinsic noise as accurately as the SSA performed on a detailed mechanistic model. To address this issue, we have constructed mechanistic and QSSA models for a few frequently observed GRNs exhibiting switching behavior and performed stochastic simulations with them. Our results strongly suggest that the performance of a stochastic QSSA model relative to the SSA performed on a mechanistic model critically relies on the absolute values of the mRNA and protein half-lives involved in the corresponding GRN. The accuracy achieved by the stochastic QSSA model depends on the level of bursting frequency generated by the absolute half-life of the mRNA, the protein, or both species. For the GRNs considered, the stochastic QSSA quantifies intrinsic noise at the protein level with greater accuracy and over larger combinations of mRNA and protein half-life values, whereas at the mRNA level satisfactory accuracy is reached only for limited combinations of absolute half-life values. Further, we have clearly demonstrated that the abundance levels of mRNA and protein hardly matter for such comparisons between QSSA and mechanistic models. Based on our findings, we conclude that the QSSA model can be a good choice for evaluating intrinsic noise for other GRNs as well, provided we make a rational choice based on experimental half-life values available in the literature. PMID:26327626
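For reference, the kind of mechanistic baseline being compared against is the exact Gillespie SSA. A minimal two-species (mRNA/protein) birth-death sketch with hypothetical rate constants (not one of the switching GRNs from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal mechanistic model: mRNA transcription/decay and
# translation/protein decay. Rates are illustrative only.
k_m, d_m = 2.0, 0.2     # mRNA production / degradation
k_p, d_p = 2.0, 0.2     # protein production per mRNA / degradation

def gillespie(t_end):
    """One exact SSA trajectory; returns the final (mRNA, protein) state."""
    t, m, p = 0.0, 0, 0
    while t < t_end:
        r1, r2, r3, r4 = k_m, d_m * m, k_p * m, d_p * p
        total = r1 + r2 + r3 + r4
        t += rng.exponential(1.0 / total)       # time to next reaction
        u = rng.uniform(0.0, total)             # which reaction fires
        if u < r1:
            m += 1
        elif u < r1 + r2:
            m -= 1
        elif u < r1 + r2 + r3:
            p += 1
        else:
            p -= 1
    return m, p

# Sample the stationary distribution from independent runs.
samples = np.array([gillespie(50.0) for _ in range(200)])
m_mean = samples[:, 0].mean()
p_mean = samples[:, 1].mean()
print("mean mRNA   :", m_mean)   # analytic stationary mean k_m/d_m = 10
print("mean protein:", p_mean)   # analytic mean (k_m/d_m)*(k_p/d_p) = 100
```

A QSSA version would eliminate the fast species and simulate a reduced reaction set; the abstract's point is that the noise statistics of the two only agree for suitable half-life combinations, even when the means match.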
Argudo, David; Bethel, Neville P; Marcoline, Frank V; Grabe, Michael
2016-07-01
Biological membranes deform in response to resident proteins leading to a coupling between membrane shape and protein localization. Additionally, the membrane influences the function of membrane proteins. Here we review contributions to this field from continuum elastic membrane models focusing on the class of models that couple the protein to the membrane. While it has been argued that continuum models cannot reproduce the distortions observed in fully-atomistic molecular dynamics simulations, we suggest that this failure can be overcome by using chemically accurate representations of the protein. We outline our recent advances along these lines with our hybrid continuum-atomistic model, and we show the model is in excellent agreement with fully-atomistic simulations of the nhTMEM16 lipid scramblase. We believe that the speed and accuracy of continuum-atomistic methodologies will make it possible to simulate large scale, slow biological processes, such as membrane morphological changes, that are currently beyond the scope of other computational approaches. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. PMID:26853937
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina
Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish
2016-01-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
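The pipeline described here (project the stimulus onto a low-dimensional subspace, then apply a nonlinearity) can be sketched on simulated data. The paper estimates the subspace with principal components analysis; the sketch below uses the simpler spike-triggered average to recover a one-dimensional ERF, with all electrode counts, weights, and thresholds purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 20 electrodes, a ground-truth electrical
# receptive field (ERF) concentrated on three neighbouring electrodes.
n_elec, n_trials = 20, 20000
erf_true = np.zeros(n_elec)
erf_true[[3, 4, 5]] = [1.0, 2.0, 1.0]

# Random stimulation patterns; spikes drawn through a sigmoid
# nonlinearity applied to the 1D projection onto the ERF.
stim = rng.normal(0.0, 1.0, (n_trials, n_elec))
drive = stim @ erf_true
spikes = rng.uniform(size=n_trials) < 1.0 / (1.0 + np.exp(-(drive - 2.0)))

# Recover the ERF direction from the spike-triggered average (the
# simplest one-dimensional stand-in for the paper's PCA subspace).
sta = stim[spikes].mean(axis=0)
erf_est = sta / np.linalg.norm(sta)
cos = erf_est @ erf_true / np.linalg.norm(erf_true)
print(f"alignment of estimated ERF with truth: {cos:.3f}")

# Empirical nonlinearity: spiking probability vs. projection value.
proj = stim @ erf_est
edges = np.linspace(-3.0, 3.0, 13)
idx = np.digitize(proj, edges)
rate = [spikes[idx == i].mean() for i in range(1, len(edges))]
```

Once the subspace and nonlinearity are fitted, predicting the response to an arbitrary multi-electrode pattern is a single dot product followed by a lookup, which is what makes the model suitable for real-time use.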
Hybridization modeling of oligonucleotide SNP arrays for accurate DNA copy number estimation
Wan, Lin; Sun, Kelian; Ding, Qi; Cui, Yuehua; Li, Ming; Wen, Yalu; Elston, Robert C.; Qian, Minping; Fu, Wenjiang J
2009-01-01
Affymetrix SNP arrays have been widely used for single-nucleotide polymorphism (SNP) genotype calling and DNA copy number variation inference. Although numerous methods have achieved high accuracy in these fields, most studies have paid little attention to the modeling of hybridization of probes to off-target allele sequences, which can affect the accuracy greatly. In this study, we address this issue and demonstrate that hybridization with mismatch nucleotides (HWMMN) occurs in all SNP probe-sets and has a critical effect on the estimation of allelic concentrations (ACs). We study sequence binding through binding free energy and then binding affinity, and develop a probe intensity composite representation (PICR) model. The PICR model allows the estimation of ACs at a given SNP through statistical regression. Furthermore, we demonstrate with cell-line data of known true copy numbers that the PICR model can achieve reasonable accuracy in copy number estimation at a single SNP locus, by using the ratio of the estimated AC of each sample to that of the reference sample, and can reveal subtle genotype structure of SNPs at abnormal loci. We also demonstrate with HapMap data that the PICR model yields accurate SNP genotype calls consistently across samples, laboratories and even across array platforms. PMID:19586935
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and it may also be ill-posed. Due to all these complexities, direct solution of the damage detection and identification problem in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Because of the complexities involved in the forward problem of Lamb wave scattering from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and impractical for use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resulting curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
NASA Astrophysics Data System (ADS)
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widespread and well established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry setups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron, including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and by comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?
Searcy, Christopher A; Shaffer, H Bradley
2016-04-01
Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071
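Permutation importance, the variable ranking this study validates, is model-agnostic and easy to state: shuffle one predictor at a time and measure how much the model's fit degrades. A self-contained sketch on synthetic presence data (the variables and the logistic stand-in for Maxent are illustrative, not the actual CTS predictors):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic occurrence data: presence depends strongly on variable 0,
# weakly on variable 1; variable 2 is pure noise.
X = rng.normal(size=(2000, 3))
logit = 2.0 * X[:, 0] + 0.5 * X[:, 1]
y = rng.uniform(size=2000) < 1.0 / (1.0 + np.exp(-logit))

# A fitted model standing in for Maxent: logistic scoring with the
# generating coefficients (for brevity, no fitting step is shown).
def score(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] + 0.5 * X[:, 1])))

def log_loss(y, p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

base = log_loss(y, score(X))

# Permutation importance: shuffle each column in turn and record the
# increase in loss relative to the unpermuted baseline.
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(log_loss(y, score(Xp)) - base)
print(importance)
```

Because shuffling destroys only the association between one variable and the response, a near-zero importance (as for variable 2 here) indicates a predictor the model does not actually use, which is the property that makes the ranking interpretable.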
Linaro, Daniele; Storace, Marco; Giugliano, Michele
2011-01-01
Stochastic channel gating is the major source of intrinsic neuronal noise, whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next-generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion conductances into an effective stochastic version whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation accurately reproduces the statistical properties of the exact microscopic simulations, under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us for the first time to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here. PMID:21423712
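The flavour of a Langevin-like channel-noise approximation can be shown on the simplest possible case: a population of N independent two-state channels with opening rate alpha and closing rate beta. This generic Euler-Maruyama sketch (with illustrative rates, not the paper's improved scheme) reproduces the exact Markov model's stationary mean and variance of the open fraction:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state channel population: open fraction n, rates in 1/ms.
alpha, beta, N = 2.0, 1.0, 1000
dt, steps = 0.01, 20000          # time step (ms), number of steps

n = alpha / (alpha + beta)       # start at the steady-state fraction
trace = np.empty(steps)
for i in range(steps):
    drift = alpha * (1.0 - n) - beta * n
    # Diffusion term from the sum of opening and closing propensities,
    # scaled by 1/N (finite-size channel noise).
    diff = np.sqrt((alpha * (1.0 - n) + beta * n) / N)
    n += drift * dt + diff * np.sqrt(dt) * rng.normal()
    n = min(max(n, 0.0), 1.0)    # keep the fraction in [0, 1]
    trace[i] = n

# Exact Markov stationary statistics for comparison:
# mean alpha/(alpha+beta), variance mean*(1-mean)/N.
mean_th = alpha / (alpha + beta)
var_th = mean_th * (1.0 - mean_th) / N
print("mean:", trace.mean(), "(theory:", mean_th, ")")
print("var :", trace.var(), "(theory:", var_th, ")")
```

The hard clipping at the boundaries is exactly the kind of ad hoc fix the improved approach avoids; for this parameter regime the boundaries are essentially never reached, so the stationary statistics come out correctly anyway.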
Accurate integral equation theory for the central force model of liquid water and ionic solutions
NASA Astrophysics Data System (ADS)
Ichiye, Toshiko; Haymet, A. D. J.
1988-10-01
The atom-atom pair correlation functions and thermodynamics of the central force model of water, introduced by Lemberg, Stillinger, and Rahman, have been calculated accurately by an integral equation method which incorporates two new developments. First, a rapid new scheme has been used to solve the Ornstein-Zernike equation. This scheme combines the renormalization methods of Allnatt, and Rossky and Friedman with an extension of the trigonometric basis-set solution of Labik and co-workers. Second, by adding approximate "bridge" functions to the hypernetted-chain (HNC) integral equation, we have obtained predictions for liquid water in which the hydrogen bond length and number are in good agreement with "exact" computer simulations of the same model force laws. In addition, for dilute ionic solutions, the ion-oxygen and ion-hydrogen coordination numbers display both the physically correct stoichiometry and good agreement with earlier simulations. These results represent a measurable improvement over both a previous HNC solution of the central force model and the ex-RISM integral equation solutions for the TIPS and other rigid-molecule models of water.
Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models
NASA Technical Reports Server (NTRS)
Arya, Vinod K.
1994-01-01
Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the second-order Runge-Kutta method, lower-order Runge-Kutta methods of orders one and two, and the exponential integration method. The algorithms are applied to the viscoplastic models put forth by Freed and Verrilli, and by Bodner and Partom, for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). In general, however, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy worked as or more efficiently and accurately than the other integration algorithms. Using this strategy for integrating viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
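The ingredients named here (an explicit second-order Runge-Kutta step plus a self-adaptive step-size strategy) can be sketched generically. The sketch below uses step doubling for the error estimate on a scalar linear relaxation problem; it is an illustration of the idea, not the specific strategy or viscoplastic equations of the paper:

```python
import numpy as np

def rk2_step(f, t, y, h):
    """Midpoint (second-order Runge-Kutta) step."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    return y + h * k2

def integrate(f, t0, y0, t_end, h=0.1, tol=1e-6):
    """Self-adaptive stepping by step doubling: compare one full step
    against two half steps and grow/shrink h from the error estimate."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        y_full = rk2_step(f, t, y, h)
        y_half = rk2_step(f, t + 0.5 * h,
                          rk2_step(f, t, y, 0.5 * h), 0.5 * h)
        if abs(y_half - y_full) < tol:   # accept and grow the step
            t, y = t + h, y_half
            h *= 1.5
        else:                            # reject and shrink the step
            h *= 0.5
    return y

# Linear "relaxation" test problem with known solution y = exp(-t).
y_end = integrate(lambda t, y: -y, 0.0, 1.0, 5.0)
print(y_end, np.exp(-5.0))
```

Viscoplastic rate equations are stiff during rapid loading and mild during relaxation, which is precisely the situation where an adaptive strategy like this pays off: the step size shrinks only where the local error demands it.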
An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion
NASA Astrophysics Data System (ADS)
Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.
2014-11-01
Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations, which is essential to correctly reproduce the dynamics of multi-particle dispersion, and is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is an important parameter in this case, as it must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases with symptomatic crises. PMID:27260782
NASA Technical Reports Server (NTRS)
Tsay, Si-Chee; Ji, Q. Jack
2011-01-01
Earth's climate is driven primarily by solar radiation. As summarized in various IPCC reports, the global average of radiative forcing for different agents and mechanisms, such as aerosols or CO2 doubling, is in the range of a few W/sq m. However, when solar irradiance is measured by broadband radiometers, such as the fleet of Eppley Precision Solar Pyranometers (PSP) and equivalent instrumentation employed worldwide, the measurement uncertainty is larger than 2% (e.g., WMO specification of pyranometer, 2008). Thus, out of the approx. 184 W/sq m (approx. 263 W/sq m if cloud-free) surface solar insolation (Trenberth et al. 2009), the measurement uncertainty is greater than +/-3.6 W/sq m, overwhelming the climate change signals. To discern these signals, less than a 1% measurement uncertainty is required and is currently achievable only by means of a newly developed methodology employing a modified PSP-like pyranometer and an updated calibration equation to account for its thermal effects (Li and Tsay, 2010). In this talk, we will show that some auxiliary measurements, such as those from a collocated pyrgeometer or air temperature sensors, can help correct historical datasets. Additionally, we will demonstrate that a pyrheliometer is not free of the thermal effect; therefore, compared with the high-cost, yet still not thermal-effect-free, "direct + diffuse" approach to measuring surface solar irradiance, our new method is more economical and more likely to be suitable for correcting a wide variety of historical datasets. Modeling simulations will be presented showing that a corrected solar irradiance measurement has a significant impact on aerosol forcing, and thus plays an important role in climate studies.
Santolini, Marc; Mora, Thierry; Hakim, Vincent
2014-01-01
The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIP-seq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and predominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the accurate description of TFBS statistics obtained. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
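The two energy models can be written down compactly: the PWM energy is a sum of single-position terms, and the PIM adds pairwise couplings between positions. The sketch below uses random placeholder fields and couplings, not fitted parameters from the paper, and a made-up 8-base site.

```python
import numpy as np

# Sketch of the two binding-energy models for a TFBS of length L over
# the alphabet ACGT. The PWM energy is a sum of single-position terms;
# the pairwise interaction model (PIM) adds couplings J between pairs
# of positions. All parameter values here are random placeholders.

rng = np.random.default_rng(1)
L, A = 8, 4                      # site length, alphabet size
h = rng.normal(size=(L, A))      # single-position "fields" (PWM part)
J = rng.normal(scale=0.1, size=(L, L, A, A))  # pairwise couplings

def pwm_energy(seq):
    """Additive PWM-style energy: sum of per-position contributions."""
    return sum(h[i, b] for i, b in enumerate(seq))

def pim_energy(seq):
    """PIM energy: PWM part plus pairwise terms over all position pairs."""
    e = pwm_energy(seq)
    for i in range(L):
        for j in range(i + 1, L):
            e += J[i, j, seq[i], seq[j]]
    return e

site = [0, 2, 3, 1, 1, 0, 2, 3]   # a sequence encoded as A=0, C=1, G=2, T=3
e_pwm = pwm_energy(site)
e_pim = pim_energy(site)
```

In the PWM model, mutating a single position shifts the energy by the difference of the two field values at that position; in the PIM, the shift also depends on the identities at every other position through the couplings.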
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
NASA Astrophysics Data System (ADS)
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared with the preceding quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The obtained post-fit residuals for the UWM maps are an order of magnitude lower than for the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
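The variational TPS described above (penalized second derivatives plus data misfit) is available off the shelf, for example in SciPy's `RBFInterpolator` with the thin-plate-spline kernel and a nonzero smoothing parameter. The sketch below fits a synthetic, made-up "TEC" surface over an invented station layout; it illustrates the TPS idea only and is not the UWM-rt1 implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Generic thin-plate-spline surface fit with smoothing, illustrating
# the TPS idea behind TEC mapping. The synthetic "TEC" surface and
# station layout are made up; this is not the UWM-rt1 implementation.

rng = np.random.default_rng(2)
stations = rng.uniform([14.0, 49.0], [24.0, 55.0], size=(200, 2))  # lon, lat
true_tec = 20.0 + 3.0 * np.sin(stations[:, 0] / 3.0) + 0.5 * stations[:, 1]
obs = true_tec + rng.normal(scale=0.3, size=200)   # noisy "observations"

# smoothing > 0 trades fidelity to the data for smoothness of the surface
tps = RBFInterpolator(stations, obs, kernel="thin_plate_spline", smoothing=1.0)

# evaluate on a 0.2 x 0.2 degree grid, matching the map resolution above
lon, lat = np.meshgrid(np.arange(14.0, 24.0, 0.2), np.arange(49.0, 55.0, 0.2))
grid = np.column_stack([lon.ravel(), lat.ravel()])
tec_map = tps(grid).reshape(lon.shape)
```

Increasing `smoothing` suppresses noise-driven wiggles at the cost of larger residuals at the stations, which is exactly the trade-off in the variational formulation.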
Felmy, Andrew R.; Mason, Marvin; Qafoku, Odeta; Xia, Yuanxian; Wang, Zheming; MacLean, Graham
2003-03-27
Developing accurate thermodynamic models for predicting the chemistry of the high-level waste tanks at Hanford is an extremely daunting challenge in electrolyte and radionuclide chemistry. These challenges stem from the extremely high ionic strength of the tank waste supernatants, the presence of chelating agents in selected tanks, the wide temperature range of processing conditions, and the presence of important actinide species in multiple oxidation states. This presentation summarizes the progress made to date in developing accurate models for these tank waste solutions, describes how these models are being used at Hanford, and outlines the important challenges that remain. New thermodynamic measurements on Sr and actinide complexation with specific chelating agents (EDTA, HEDTA and gluconate) will also be presented.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to those of Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements, which may be calculated more accurately than with competing codes.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy. PMID:27230942
O’Connor, James PB; Boult, Jessica KR; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff JM; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P
2015-01-01
There is a clinical need for non-invasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning and therapy monitoring. Oxygen enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed “Oxy-R fraction”) would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here we demonstrate that OE-MRI signals are accurate, precise and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia non-invasively and is immediately translatable to the clinic. PMID:26659574
Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-14
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rates [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a Fortran 2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions, we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
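The role of pairwise adlayer interactions in a KMC simulation can be illustrated with a toy rejection-free (Gillespie-type) simulation of adsorption/desorption on a 1D periodic lattice, where first-nearest-neighbor repulsion raises the desorption rate. This is only a minimal illustration of the kind of energetics discussed above, not the graph-theoretical algorithm implemented in Zacros; all rate constants and the interaction energy are arbitrary.

```python
import math
import random

# Toy rejection-free KMC: adsorption/desorption on a 1D periodic
# lattice with first-nearest-neighbor pairwise interactions. Not the
# Zacros algorithm; all parameters are arbitrary illustrative values.

random.seed(3)
N = 100                    # lattice sites
k_ads, k_des0 = 1.0, 0.5   # adsorption rate; bare desorption rate
eps_beta = 1.0             # repulsive NN interaction in units of kT
occ = [0] * N

def site_rate(i):
    """Rate of the single event possible at site i."""
    if occ[i] == 0:
        return k_ads
    nn = occ[(i - 1) % N] + occ[(i + 1) % N]
    # repulsion between occupied neighbors raises the desorption rate
    return k_des0 * math.exp(eps_beta * nn)

t = 0.0
for _ in range(20000):
    # recomputing all rates each step is O(N); fine for a toy model
    rates = [site_rate(i) for i in range(N)]
    total = sum(rates)
    # pick an event with probability proportional to its rate
    r = random.random() * total
    acc = 0.0
    for i, ri in enumerate(rates):
        acc += ri
        if acc >= r:
            break
    occ[i] ^= 1                              # flip: adsorb or desorb
    t += -math.log(random.random()) / total  # advance the KMC clock

coverage = sum(occ) / N
```

With the repulsion switched off, the steady-state coverage would settle near k_ads/(k_ads + k_des0); the interaction term pushes it lower, which is the kind of adlayer effect the cluster expansion captures systematically.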
The S-model: A highly accurate MOST model for CAD
NASA Astrophysics Data System (ADS)
Satter, J. H.
1986-09-01
A new MOST model which combines simplicity and a logical structure with a high accuracy of only 0.5-4.5% is presented. The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation, as well as the influence of the intrinsic source and drain series resistances. The decrease of the drain current due to substrate bias is also incorporated. The model is primarily intended for digital applications. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described, and a new cluster parameter is introduced, which is responsible for the high accuracy of the model. The total number of parameters is seven. A still simpler β expression is derived, which is suitable for only one value of the substrate bias and contains only three parameters, while maintaining the accuracy. The way in which the parameters are determined is readily suited to automatic measurement. A simple linear regression procedure, programmed in the computer which controls the measurements, produces the parameter values.
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy, but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal, several articles have explored GLM-based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have found little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM-based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
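The core bagging-plus-random-subspace idea can be sketched in a few lines: fit many simple linear "GLMs", each on a bootstrap sample and a random subset of features, then average their predictions. This toy numpy version omits the forward variable selection and interaction terms of the actual randomGLM package, and all data and sizes are synthetic.

```python
import numpy as np

# Toy sketch of an RGLM-style ensemble: bag ordinary least squares
# models, each fit on a bootstrap sample and a random feature subset,
# then average predictions. Omits the forward selection and interaction
# terms of the real RGLM; data and dimensions are synthetic.

rng = np.random.default_rng(4)
n, p = 300, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]                  # only 3 informative features
y = X @ beta + rng.normal(scale=0.5, size=n)

def fit_bagged(X, y, n_models=50, subspace=7):
    """Fit n_models OLS models, each on a bootstrap sample restricted
    to a random subspace of `subspace` features."""
    models = []
    n, p = X.shape
    for _ in range(n_models):
        rows = rng.integers(0, n, size=n)                   # bootstrap sample
        cols = rng.choice(p, size=subspace, replace=False)  # random subspace
        coef, *_ = np.linalg.lstsq(X[rows][:, cols], y[rows], rcond=None)
        models.append((cols, coef))
    return models

def predict(models, X):
    """Average the member predictions (the bagging step)."""
    preds = [X[:, cols] @ coef for cols, coef in models]
    return np.mean(preds, axis=0)

models = fit_bagged(X, y)
r2 = 1.0 - np.sum((y - predict(models, X)) ** 2) / np.sum((y - y.mean()) ** 2)
```

Counting how often each feature appears among the fitted members (weighted by coefficient size) gives a crude analogue of the variable importance measure used to "thin" the ensemble.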
Discrete state model and accurate estimation of loop entropy of RNA secondary structures.
Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie
2008-03-28
Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone based on known RNA structures for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length and is only marginally affected by the size asymmetry of the two strands. This finding suggests that the significant asymmetry effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
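The Jacobson-Stockmayer extrapolation referenced above extends a loop penalty logarithmically in loop length beyond a tabulated reference length. The sketch below uses the commonly quoted coefficient of 1.75 and an illustrative reference penalty; these are generic textbook values, not parameters fitted in this paper.

```python
import math

# Jacobson-Stockmayer extrapolation of a loop free-energy penalty
# beyond a reference length n0. The coefficient 1.75 and the reference
# penalty are commonly quoted illustrative numbers, not values from
# this paper.

R = 1.987e-3   # gas constant, kcal/(mol K)
T = 310.15     # temperature, K

def js_loop_dg(n, n0=6, dg_n0=4.0, c=1.75):
    """Free-energy penalty of a loop of length n, extrapolated from
    the reference value dg_n0 at length n0."""
    return dg_n0 + c * R * T * math.log(n / n0)

dg30 = js_loop_dg(30)   # penalty for a 30-nt loop
```

The paper's finding is that this single-logarithm form works well for long hairpin loops but accumulates large errors for bulge, internal, and multibranch loops, motivating the loop-type-specific empirical formulae.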
A biokinetic model for zinc for use in radiation protection
Leggett, Richard Wayne
2012-01-01
The physiology of the essential trace element zinc has been studied extensively in human subjects using kinetic analysis of time-dependent measurements of administered zinc tracers. A number of biokinetic models describing zinc exchange between plasma and tissues and loss of systemic zinc in excreta have been developed from the derived data. More rudimentary biokinetic models for zinc have been developed to estimate radiation doses from internally deposited radioisotopes of zinc. The latter models are designed to provide broadly accurate estimates of cumulative decays of zinc radioisotopes in tissues and are not intended as realistic descriptions of the directions of movement of zinc in the body. This paper reviews biokinetic data for zinc and proposes a physiologically meaningful biokinetic model for systemic zinc for use in radiation protection. The proposed model bears some resemblance to zinc models developed in physiological studies but depicts a finer division of systemic zinc and is based on a broader spectrum of data than previous models. The proposed model and current radiation protection model for zinc yield broadly similar estimates of effective dose from internally deposited radioisotopes of zinc but substantially different dose estimates for several individual tissues, particularly the liver.
Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline
2012-01-01
In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B–like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B–like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy. PMID:22412019
Wijma, Hein J; Marrink, Siewert J; Janssen, Dick B
2014-07-28
Computational approaches could decrease the need for the laborious high-throughput experimental screening that is often required to improve enzymes by mutagenesis. Here, we report that using multiple short molecular dynamics (MD) simulations makes it possible to accurately model enantioselectivity for large numbers of enzyme-substrate combinations at low computational cost. We chose four different haloalkane dehalogenases as model systems because of the availability of a large set of experimental data on the enantioselective conversion of 45 different substrates. To model the enantioselectivity, we quantified the frequency of occurrence of catalytically productive conformations (near attack conformations) for pairs of enantiomers during MD simulations. We found that the angle of nucleophilic attack that leads to carbon-halogen bond cleavage was a critical variable that limited the occurrence of productive conformations; enantiomers for which this angle reached values close to 180° were preferentially converted. A cluster of 20-40 very short (10 ps) MD simulations allowed adequate conformational sampling and resulted in much better agreement with experimental enantioselectivities than single long MD simulations (22 ns), while the computational costs were 50- to 100-fold lower. With single long MD simulations, the dynamics of enzyme-substrate complexes remained confined to a conformational subspace that rarely changed significantly, whereas with multiple short MD simulations a larger diversity of conformations of enzyme-substrate complexes was observed. PMID:24916632
Accurate models for P-gp drug recognition induced from a cancer cell line cytotoxicity screen.
Levatić, Jurica; Ćurak, Jasna; Kralj, Marijeta; Šmuc, Tomislav; Osmak, Maja; Supek, Fran
2013-07-25
P-glycoprotein (P-gp, MDR1) is a promiscuous drug efflux pump of substantial pharmacological importance. Taking advantage of large-scale cytotoxicity screening data involving 60 cancer cell lines, we correlated the differential biological activities of ∼13,000 compounds against cellular P-gp levels. We created a large set of 934 high-confidence P-gp substrates or nonsubstrates by enforcing agreement with an orthogonal criterion involving P-gp overexpressing ADR-RES cells. A support vector machine (SVM) was 86.7% accurate in discriminating P-gp substrates on independent test data, exceeding previous models. Two molecular features had an overarching influence: nearly all P-gp substrates were large (>35 atoms including H) and dense (specific volume of <7.3 Å(3)/atom) molecules. Seven other descriptors and 24 molecular fragments ("effluxophores") were found enriched in the (non)substrates and incorporated into interpretable rule-based models. Biological experiments on an independent P-gp overexpressing cell line, the vincristine-resistant VK2, allowed us to reclassify six compounds previously annotated as substrates, validating our method's predictive ability. Models are freely available at http://pgp.biozyne.com . PMID:23772653
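The two overarching molecular features reported above translate directly into a crude prefilter: nearly all P-gp substrates were large (more than 35 atoms including H) and dense (specific volume below 7.3 Å³ per atom). The sketch below implements only that necessary-condition screen, not the SVM or the full rule-based model, and the example molecules are hypothetical.

```python
# Crude prefilter based on the two overarching features reported for
# P-gp substrates: > 35 atoms (H included) and specific volume
# < 7.3 A^3 per atom. A necessary-condition screen only; not the SVM
# or the published rule-based model.

def passes_pgp_size_rules(n_atoms, volume_a3):
    """True if a molecule satisfies both size/density conditions."""
    if n_atoms <= 0:
        return False
    specific_volume = volume_a3 / n_atoms  # A^3 per atom
    return n_atoms > 35 and specific_volume < 7.3

# hypothetical molecules: (atom count incl. H, molecular volume in A^3)
big_dense = passes_pgp_size_rules(60, 400.0)   # 6.67 A^3/atom
small = passes_pgp_size_rules(20, 120.0)       # too few atoms
```

Molecules passing this screen would still need the descriptor- and fragment-based rules (the "effluxophores") or the SVM to be classified with the reported 86.7% accuracy.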
Modelling of ground-level UV radiation
NASA Astrophysics Data System (ADS)
Koepke, P.; Schwander, H.; Thomalla, E.
1996-06-01
A number of modifications were made to the STAR radiation transmission model for greater ease of use while keeping its susceptibility to errors low. The improvements concern the entire aerosol description part of the model, the option of radiation calculations for different receiver geometries, the option of switching off temperature-dependent ozone absorption, and simplifications of the STAR menu. The benefits of using STAR are documented in studies on the accuracy of the radiation transmission model. One of these studies gives a detailed comparison of the present model with a simple radiation model, which reveals the limitations of approximation models. The other examines the error margin of radiation transmission models as a function of the input parameters available. It was found that errors can be expected to range between 5 and 15%, depending on the quality of the input data sets. A comparative study of measured and modeled values proved this judgement correct, with the relative errors lying within the predicted range. Attached to this final report is a comprehensive sensitivity study which quantifies the effect of various atmospheric parameters relevant to UV radiation, thus contributing to an elucidation of the process.
Radiation models for thermal flows at low Mach number
Teleaga, Ioan; Seaid, Mohammed; Gasser, Ingenuin; Klar, Axel; Struckmeier, Jens
2006-07-01
Simplified approximate models for radiation are proposed to study thermal effects in low-Mach-number flows in open tunnels. The governing equations for the fluid dynamics are derived by applying a low-Mach-number asymptotic expansion to the compressible Navier-Stokes equations. Based on an asymptotic analysis, we show that the integro-differential equation for radiative transfer can be replaced by a set of differential equations which are independent of the angle variable and easy to solve using standard numerical discretizations. As an application we consider a simplified fire model for vehicular tunnels. The results presented in this paper show that the proposed models are able to predict the temperature in the tunnels accurately with low computational cost.
2011-01-01
Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
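The analysis step at the heart of such a forecast/update cycle can be illustrated with the stochastic ensemble Kalman filter, the simplest relative of the Local Ensemble Transform Kalman Filter used in the paper. The state, observation operator, and error levels below are synthetic toys, not the glioblastoma models or MRI observations.

```python
import numpy as np

# Sketch of a stochastic ensemble Kalman filter analysis step, the
# simplest relative of the LETKF used in the paper. State, observation
# operator, and error levels are synthetic, not the tumor models.

rng = np.random.default_rng(5)
n_state, n_obs, n_ens = 4, 2, 50
H = np.zeros((n_obs, n_state))
H[0, 0] = H[1, 1] = 1.0                  # observe the first two variables
R = 0.1 * np.eye(n_obs)                  # observation error covariance

truth = np.array([1.0, -2.0, 0.5, 3.0])
y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

# prior ("forecast") ensemble scattered around a biased guess
Xf = truth[None, :] + 1.0 + rng.normal(scale=1.0, size=(n_ens, n_state))

# ensemble mean, anomalies, and sample forecast covariance
Xm = Xf.mean(axis=0)
A = Xf - Xm
Pf = A.T @ A / (n_ens - 1)

# Kalman gain and perturbed-observation update of every member
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
Ys = y + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens)
Xa = Xf + (Ys - (H @ Xf.T).T) @ K.T

prior_err = np.linalg.norm(Xf.mean(axis=0)[:2] - truth[:2])
post_err = np.linalg.norm(Xa.mean(axis=0)[:2] - truth[:2])
```

The LETKF differs in performing this update deterministically and locally (per grid point with distance-tapered observations), which is what makes it tractable for spatiotemporal fields such as imaged tumor density.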
Mass model of the LDEF satellite spacecraft and experiments for ionizing radiation analyses.
Colborn, B L; Armstrong, T W
1996-11-01
A three-dimensional (3D) mass model of the LDEF spacecraft and selected experiments has been developed to allow the influence of material shielding on ionizing radiation measurements and analyses to be determined accurately. This computer model has been applied in a stand-alone mode to provide 3D shielding distributions around radiation dosimeters to aid data interpretation, and has been interfaced with radiation transport codes for a variety of different types of radiation predictions. This paper summarizes the methodology used, the level of detail incorporated, and some example model applications. PMID:11540514
Elvira, L; Hernandez, F; Cuesta, P; Cano, S; Gonzalez-Martin, J-V; Astiz, S
2013-06-01
Although the intensive production system for Lacaune dairy sheep is the only profitable method for producers outside of the French Roquefort area, little is known about this type of system. This study evaluated yield records of 3677 Lacaune sheep under intensive management between 2005 and 2010 in order to describe the lactation curve of this breed and to investigate the suitability of different mathematical functions for modeling this curve. A total of 7873 complete lactations over a 40-week lactation period, corresponding to 201 281 weekly yield records, were used. First, five mathematical functions were evaluated on the basis of the residual mean square, determination coefficient, and Durbin-Watson and runs test values. The two better models were found to be the Pollott Additive and fractional polynomial (FP) models. In the second part of the study, the milk yield, peak milk yield, day of peak and persistency of the lactations were calculated with the Pollott Additive and FP models and compared with the real data. The results indicate that both models gave an extremely accurate fit to Lacaune lactation curves for predicting milk yields (P = 0.871), with the FP model being the best choice to provide a good fit to an extensive amount of real data, and applicable on-farm without specific statistical software. On the other hand, the interpretation of the parameters of the Pollott Additive function helps in understanding the biology of the udder of the Lacaune sheep. The characteristics of the Lacaune lactation curve and milk yield are affected by lactation number and length. The lactation curves obtained in the present study allow the early identification of ewes with low milk yield potential, which will help to optimize farm profitability. PMID:23257242
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that might include perturbing forces, such as the gravitational effect of multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
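Numerically propagating an STM with an adaptive Dormand-Prince integrator amounts to integrating the variational equations Φ' = AΦ alongside the state. The sketch below uses SciPy's DOP853 (an embedded eighth-order Dormand-Prince method) on a harmonic oscillator, chosen because its STM has a known closed form; this is not the mission-design tool's force model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: propagate a state transition matrix with an adaptive
# eighth-order Dormand-Prince integrator (scipy's DOP853). The dynamics
# are a harmonic oscillator, chosen so the exact STM is known; this is
# not the mission-design tool's high-fidelity force model.

omega = 2.0
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])   # linear dynamics x' = A x

def rhs(t, z):
    """Augmented state: [x (2 entries), flattened STM Phi (4 entries)].
    The STM obeys the variational equation Phi' = A Phi."""
    x, Phi = z[:2], z[2:].reshape(2, 2)
    return np.concatenate([A @ x, (A @ Phi).ravel()])

z0 = np.concatenate([[1.0, 0.0], np.eye(2).ravel()])   # Phi(0) = identity
tf = 0.75
sol = solve_ivp(rhs, (0.0, tf), z0, method="DOP853", rtol=1e-10, atol=1e-12)
Phi = sol.y[2:, -1].reshape(2, 2)

# For the harmonic oscillator the exact STM is known in closed form
c, s = np.cos(omega * tf), np.sin(omega * tf)
Phi_exact = np.array([[c, s / omega], [-omega * s, c]])
err = np.abs(Phi - Phi_exact).max()
```

For a nonlinear force model, A would be replaced by the state-dependent Jacobian of the dynamics evaluated along the trajectory; the augmented-integration structure is unchanged, which is what makes this approach far better conditioned than finite differencing the flow map.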
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Accurate modeling of cache replacement policies in a Data-Grid.
Otoo, Ekow J.; Shoshani, Arie
2003-01-23
Caching techniques have been used to bridge the performance gap between the levels of storage hierarchies in computing systems. In data-intensive applications that access large data files over wide area network environments, such as a data grid, caching mechanisms can significantly improve data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain or cache the data files being used by local applications. Under a workload of shared access and high locality of reference, the performance of the caching techniques depends heavily on the replacement policies being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the file objects being cached have varying sizes, and transfer and processing costs that vary with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as ''Least Cost Beneficial based on K backward references'' (LCB-K). Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), GreedyDual-Size (GDS), etc., using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost-effective cache replacement policies for storage resource management in data grids.
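To illustrate the kind of trace-driven policy comparison the abstract describes, the sketch below replays a request trace against LRU and GreedyDual-Size caches and totals the miss costs. LCB-K itself is not specified in the abstract, so it is not reproduced here; GDS follows its standard published form, and the trace, sizes, and costs are invented for illustration.

```python
# Sketch: trace-driven comparison of cache replacement policies with
# variable object sizes and miss costs, in the spirit of the abstract.
from collections import OrderedDict

def simulate_lru(trace, sizes, costs, capacity):
    cache, used, miss_cost = OrderedDict(), 0, 0.0
    for obj in trace:
        if obj in cache:
            cache.move_to_end(obj)          # hit: refresh recency
            continue
        miss_cost += costs[obj]
        while used + sizes[obj] > capacity and cache:
            _, s = cache.popitem(last=False)  # evict least recently used
            used -= s
        if sizes[obj] <= capacity:
            cache[obj] = sizes[obj]
            used += sizes[obj]
    return miss_cost

def simulate_gds(trace, sizes, costs, capacity):
    # GreedyDual-Size: credit H = L + cost/size; evict min H; L inflates
    H, used, L, miss_cost = {}, 0, 0.0, 0.0
    for obj in trace:
        if obj in H:
            H[obj] = L + costs[obj] / sizes[obj]   # hit: restore credit
            continue
        miss_cost += costs[obj]
        while used + sizes[obj] > capacity and H:
            victim = min(H, key=H.get)             # lowest remaining credit
            L = H.pop(victim)
            used -= sizes[victim]
        if sizes[obj] <= capacity:
            H[obj] = L + costs[obj] / sizes[obj]
            used += sizes[obj]
    return miss_cost
```

On a toy trace where one object is ten times costlier to refetch, GDS retains the expensive object where LRU evicts it, which is the qualitative behavior cost-aware policies are designed for.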
An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).
Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert
2015-08-01
The active site of mammalian purple acid phosphatases (PAPs) contains a dinuclear iron center with two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)), the heterovalent form being the active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, with hydrogen bond donors to enable the fixation of the substrate and release of the product, are believed to be required for catalytically competent model systems. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular, also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate concentration dependent studies, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255
NASA Astrophysics Data System (ADS)
Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-01
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.
ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS
Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Boerner, G.
2009-12-10
A large body of observations has constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power-law index of the power spectrum varies dramatically with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass.
An Earth radiation budget climate model
NASA Technical Reports Server (NTRS)
Bartman, Fred L.
1988-01-01
A 2-D Earth Radiation Budget Climate Model has been constructed from an OLWR (Outgoing Longwave Radiation) model and an Earth albedo model. Each of these models uses the same cloud cover climatology modified by a factor GLCLC which adjusts the global annual average cloud cover. The two models are linked by a set of equations which relate the cloud albedos to the cloud top temperatures of the OLWR model. These equations are derived from simultaneous narrow band satellite measurements of cloud top temperature and albedo. Initial results include global annual average values of albedo and latitude/longitude radiation for 45 percent and 57 percent global annual average cloud cover and two different forms of the cloud albedo-cloud top temperature equations.
Band models and correlations for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.
1975-01-01
Absorption of infrared radiation by various line and band models is briefly reviewed. Narrow band model relations for absorptance are used to develop 'exact' formulations for total absorption by four wide band models. Application of a wide band model to a particular gas largely depends upon the spectroscopic characteristics of the absorbing-emitting molecule. Seven continuous correlations for the absorption of a wide band model are presented and each one of these is compared with the exact (numerical) solutions of the wide band models. Comparison of these results indicates the validity of a correlation for a particular radiative transfer application. In radiative transfer analyses, use of continuous correlations for total band absorptance provides flexibility in various mathematical operations.
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Nuclear model calculations and their role in space radiation research.
Townsend, L W; Cucinotta, F A; Heilbronn, L H
2002-01-01
Proper assessments of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality/impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutrons and ions) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. PMID:12539757
Nuclear model calculations and their role in space radiation research
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessments of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality/impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutrons and ions) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
Critical ingredients of Type Ia supernova radiative-transfer modelling
NASA Astrophysics Data System (ADS)
Dessart, Luc; Hillier, D. John; Blondin, Stéphane; Khokhlov, Alexei
2014-07-01
We explore the physics of Type Ia supernova (SN Ia) light curves and spectra using the 1D non-local thermodynamic equilibrium (non-LTE) time-dependent radiative-transfer code CMFGEN. Rather than adjusting ejecta properties to match observations, we select as input one `standard' 1D Chandrasekhar-mass delayed-detonation hydrodynamical model, and then explore the sensitivity of radiation and gas properties of the ejecta on radiative-transfer modelling assumptions. The correct computation of SN Ia radiation is not exclusively a solution to an `opacity problem', characterized by the treatment of a large number of lines. We demonstrate that the key is to identify and treat important atomic processes consistently. This is not limited to treating line blanketing in non-LTE. We show that including forbidden-line transitions of metals, and in particular Co, is increasingly important for the temperature and ionization of the gas beyond maximum light. Non-thermal ionization and excitation are also critical since they affect the colour evolution and the ΔM15 decline rate of our model. While impacting little the bolometric luminosity, a more complete treatment of decay routes leads to enhanced line blanketing, e.g. associated with 48Ti in the U and B bands. Overall, we find that SN Ia radiation properties are influenced in a complicated way by the atomic data we employ, so that obtaining converged results is a real challenge. Nonetheless, with our fully fledged CMFGEN model, we obtain good agreement with the golden standard Type Ia SN 2005cf in the optical and near-IR, from 5 to 60 d after explosion, suggesting that assuming spherical symmetry is not detrimental to SN Ia radiative-transfer modelling at these times. Multi-D effects no doubt matter, but they are perhaps less important than accurately treating the non-LTE processes that are crucial to obtain reliable temperature and ionization structures.
Analytical modeling of worldwide medical radiation use
Mettler, F.A. Jr.; Davis, M.; Kelsey, C.A.; Rosenberg, R.; Williams, A.
1987-02-01
An analytical model was developed to estimate the availability and frequency of medical radiation use on a worldwide basis. This model includes medical and dental x-ray, nuclear medicine, and radiation therapy. The development of an analytical model is necessary as the first step in estimating the radiation dose to the world's population from this source. Since there are no data about the frequency of medical radiation use in more than half the countries in the world, and only fragmentary data in an additional one-fourth of the world's countries, such a model can be used to predict the uses of medical radiation in these countries. The model indicates that there are approximately 400,000 medical x-ray machines worldwide and that approximately 1.2 billion diagnostic medical x-ray examinations are performed annually. Dental x-ray examinations are estimated at 315 million annually, and in-vivo diagnostic nuclear medicine examinations at approximately 22 million. Approximately 4 million radiation therapy procedures or courses of treatment are undertaken annually.
The NIAID Radiation Countermeasures Program business model.
Hafer, Nathaniel; Maidment, Bert W; Hatchett, Richard J
2010-12-01
The National Institute of Allergy and Infectious Diseases (NIAID) Radiation/Nuclear Medical Countermeasures Development Program has developed an integrated approach to providing the resources and expertise required for the research, discovery, and development of radiation/nuclear medical countermeasures (MCMs). These resources and services lower the opportunity costs and reduce the barriers to entry for companies interested in working in this area and accelerate translational progress by providing goal-oriented stewardship of promising projects. In many ways, the radiation countermeasures program functions as a "virtual pharmaceutical firm," coordinating the early and mid-stage development of a wide array of radiation/nuclear MCMs. This commentary describes the radiation countermeasures program and discusses a novel business model that has facilitated product development partnerships between the federal government and academic investigators and biopharmaceutical companies. PMID:21142762
The NIAID Radiation Countermeasures Program Business Model
Hafer, Nathaniel; Maidment, Bert W.
2010-01-01
The National Institute of Allergy and Infectious Diseases (NIAID) Radiation/Nuclear Medical Countermeasures Development Program has developed an integrated approach to providing the resources and expertise required for the research, discovery, and development of radiation/nuclear medical countermeasures (MCMs). These resources and services lower the opportunity costs and reduce the barriers to entry for companies interested in working in this area and accelerate translational progress by providing goal-oriented stewardship of promising projects. In many ways, the radiation countermeasures program functions as a “virtual pharmaceutical firm,” coordinating the early and mid-stage development of a wide array of radiation/nuclear MCMs. This commentary describes the radiation countermeasures program and discusses a novel business model that has facilitated product development partnerships between the federal government and academic investigators and biopharmaceutical companies. PMID:21142762
Walter, Johannes; Thajudeen, Thaseem; Süss, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-21
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles. PMID:25789666
NASA Astrophysics Data System (ADS)
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be seen as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers a means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars enter the analysis, and which errors are drawn for the calibrator diameters, and then performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
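The bootstrap procedure outlined in this abstract can be illustrated with a toy one-parameter fit: resample the raw measurements with replacement, redo the reduction and fit each time, and read uncertainties off the scatter of the refit parameters. The data and the mean-value "model" below are illustrative stand-ins for the interferometric pipeline, not the PIONIER reduction itself.

```python
# Sketch: bootstrap resampling to obtain a sampled PDF of a fitted
# parameter without assuming Gaussian, independent observables.
import random

def fit_mean(sample):
    # toy stand-in for the model fit (here: a single mean parameter)
    return sum(sample) / len(sample)

def bootstrap(data, n_boot=2000, seed=1):
    rng = random.Random(seed)
    draws = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]  # sample w/ replacement
        draws.append(fit_mean(resample))
    return draws  # sampled PDF of the fitted parameter

data = [1.02, 0.98, 1.05, 0.97, 1.01, 1.03, 0.99, 1.00]
draws = sorted(bootstrap(data))
# central ~68% interval read directly off the sampled PDF
lo, hi = draws[int(0.16 * len(draws))], draws[int(0.84 * len(draws))]
```

Because the interval comes from the empirical distribution of refit parameters, it automatically reflects asymmetry and correlations introduced by the processing, which is the point the abstract makes about calibrated observables not being Gaussian.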
Stable, accurate and efficient computation of normal modes for horizontal stratified models
NASA Astrophysics Data System (ADS)
Wu, Bo; Chen, Xiaofei
2016-08-01
We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, at the same time, no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.
Stable, accurate and efficient computation of normal modes for horizontal stratified models
NASA Astrophysics Data System (ADS)
Wu, Bo; Chen, Xiaofei
2016-06-01
We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions" that we herein call "adaptive mode observers" is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, at the same time, no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.
Modeling Impaired Hippocampal Neurogenesis after Radiation Exposure.
Cacao, Eliedonna; Cucinotta, Francis A
2016-03-01
Radiation impairment of neurogenesis in the hippocampal dentate gyrus is one of several factors associated with cognitive detriments after treatment of brain cancers in children and adults with radiation therapy. Mouse models have been used to study radiation-induced changes in neurogenesis; however, the models are limited in the number of doses, dose fractions, ages and time-after-exposure conditions that have been studied. The purpose of this study is to develop a novel predictive mathematical model of radiation-induced changes to neurogenesis using a system of nonlinear ordinary differential equations (ODEs) to represent the time-, age- and dose-dependent changes to several cell populations participating in neurogenesis, as reported in mouse experiments exposed to low-LET radiation. We considered four compartments to model hippocampal neurogenesis and, consequently, the effects of radiation treatment in altering neurogenesis: (1) neural stem cells (NSCs), (2) neuronal progenitor cells or neuroblasts (NB), (3) immature neurons (ImN) and (4) glioblasts (GB). Because neurogenesis decreases with increasing mouse age, a description of the age-related dynamics of hippocampal neurogenesis is considered in the model, which is shown to be an important factor in comparisons to experimental data. A key feature of the model is the description of negative feedback regulation on early and late neuronal proliferation after radiation exposure. The model is augmented with parametric descriptions of the dose and time-after-irradiation dependences of activation of microglial cells and a possible shift of NSC proliferation from neurogenesis to gliogenesis reported at higher doses (∼10 Gy). Predictions for dose-fractionation regimes and for different mouse ages, and prospects for future work are then discussed. PMID:26943452
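A minimal sketch of a compartment ODE model in the spirit described above: NSC, NB, ImN and GB pools with a radiation loss term, integrated by forward Euler. All rate constants, the dose-response term, and the coupling structure below are illustrative placeholders, not the paper's fitted parameters or equations.

```python
# Sketch: four-compartment neurogenesis ODEs (NSC -> NB -> ImN, plus a
# glioblast pool fed by a dose-dependent shift), forward-Euler stepped.
# Every rate constant here is an invented placeholder for illustration.
def step(state, dt, dose_rate=0.0):
    nsc, nb, imn, gb = state
    p, d1, d2, g, k = 0.10, 0.30, 0.05, 0.02, 0.50   # illustrative rates
    kill = k * dose_rate                              # radiation loss term
    d_nsc = p * nsc * (1 - nsc) - g * nsc - kill * nsc  # logistic self-renewal
    d_nb  = 2 * g * nsc - d1 * nb - kill * nb           # neuroblasts
    d_imn = d1 * nb - d2 * imn                          # immature neurons
    d_gb  = 0.5 * kill * nsc                            # shift to gliogenesis
    return [nsc + dt * d_nsc, nb + dt * d_nb,
            imn + dt * d_imn, gb + dt * d_gb]

def run(days, dt=0.1, dose_rate=0.0):
    s = [0.5, 0.1, 0.1, 0.0]   # illustrative initial pool fractions
    for _ in range(int(days / dt)):
        s = step(s, dt, dose_rate)
    return s
```

Even this toy version reproduces the qualitative behavior the abstract models: chronic irradiation depletes the immature-neuron pool relative to control and diverts stem-cell output toward the glial compartment.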
The JPL Uranian Radiation Model (UMOD)
NASA Technical Reports Server (NTRS)
Garrett, Henry; Martinez-Sierra, Luz Maria; Evans, Robin
2015-01-01
The objective of this study is the development of a comprehensive radiation model (UMOD) of the Uranian environment for JPL mission planning. The ultimate goal is to provide a description of the high energy electron and proton environments and the magnetic field at Uranus that can be used for engineering design. Currently, no such model exists at JPL. A preliminary electron radiation model employing Voyager 2 data was developed by Selesnick and Stone in 1991. The JPL Uranian Radiation Model extends that analysis, which modeled electrons between 0.7 MeV and 2.5 MeV based on the Voyager Cosmic Ray Subsystem electron telescope, down to an energy of 0.022 MeV for electrons and from 0.028 MeV to 3.5 MeV for protons. These latter energy ranges are based on measurements by the Applied Physics Laboratory Low Energy Charged Particle Detector on Voyager 2. As in previous JPL radiation models, the form of the Uranian model is based on magnetic field coordinates and requires a conversion from spacecraft coordinates to Uranian-centered magnetic "B-L" coordinates. Two magnetic field models have been developed for Uranus: 1) a simple "offset, tilted dipole" (OTD), and 2) a complex, multi-pole expansion model ("Q3"). A review of the existing data on Uranus and a search of the NASA Planetary Data System (PDS) were completed to obtain the most up-to-date descriptions of the Uranian high energy particle environment. These data were fit in terms of the Q3 B-L coordinates to extend and update the original Selesnick and Stone electron model in energy and to develop the companion proton flux model. The flux predictions of the new model were used to estimate the total ionizing dose for the Voyager 2 flyby, and a movie illustrating the complex radiation belt variations was produced to document the uses of the model for planning purposes.
Accurate calculation and modeling of the adiabatic connection in density functional theory
NASA Astrophysics Data System (ADS)
Teale, A. M.; Coriani, S.; Helgaker, T.
2010-04-01
AC. When parametrized in terms of the same input data, the AC-CI model offers improved performance over the corresponding AC-D model, which is shown to be the lowest-order contribution to the AC-CI model. The utility of the accurately calculated AC curves for the analysis of standard density functionals is demonstrated for the BLYP exchange-correlation functional and the interaction-strength-interpolation (ISI) model AC integrand. From the results of this analysis, we investigate the performance of our proposed two-parameter AC-D and AC-CI models when a simple density functional for the AC at infinite interaction strength is employed in place of information at the fully interacting point. The resulting two-parameter correlation functionals offer a qualitatively correct behavior of the AC integrand with much improved accuracy over previous attempts. The AC integrands in the present work are recommended as a basis for further work, generating functionals that avoid spurious error cancellations between exchange and correlation energies and give good accuracy for the range of densities and types of correlation contained in the systems studied here.
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
nighttime to well mixed conditions during the day presents a big challenge to NWP models. Fast decrease and successive increase in hub-height wind speed after sunrise, and the formation of nocturnal low level jets will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3d case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV-developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de
Effective Rat Lung Tumor Model for Stereotactic Body Radiation Therapy.
Zhang, Zhang; Wodzak, Michelle; Belzile, Olivier; Zhou, Heling; Sishc, Brock; Yan, Hao; Stojadinovic, Strahinja; Mason, Ralph P; Brekken, Rolf A; Chopra, Rajiv; Story, Michael D; Timmerman, Robert; Saha, Debabrata
2016-06-01
Stereotactic body radiation therapy (SBRT) has found an important role in the treatment of patients with non-small cell lung cancer, demonstrating improvements in dose distribution and even tumor cure rates, particularly for early-stage disease. Despite its emerging clinical efficacy, SBRT has primarily evolved due to advances in medical imaging and more accurate dose delivery, leaving a void in knowledge of the fundamental biological mechanisms underlying its activity. Thus, there is a critical need for the development of orthotopic animal models to further probe the biology associated with high-dose-per-fraction treatment typical of SBRT. We report here on an improved surgically based methodology for generating solitary intrapulmonary nodule tumors, which can be treated with simulated SBRT using the X-RAD 225Cx small animal irradiator and Small Animal RadioTherapy (SmART) Plan treatment system. Over 90% of rats developed solitary tumors in the right lung. Furthermore, the tumor response to radiation was monitored noninvasively via bioluminescence imaging (BLI), and complete ablation of tumor growth was achieved with 36 Gy (3 fractions of 12 Gy each). We report a reproducible, orthotopic, clinically relevant lung tumor model, which better mimics patient treatment regimens. This system can be utilized to further explore the underlying biological mechanisms relevant to SBRT and high-dose-per-fraction radiation exposure and to provide a useful model to explore the efficacy of radiation modifiers in the treatment of non-small cell lung cancer. PMID:27223828
Assessment of diffuse radiation models in Azores
NASA Astrophysics Data System (ADS)
Magarreiro, Clarisse; Brito, Miguel; Soares, Pedro; Azevedo, Eduardo
2014-05-01
Measured irradiance databases usually consist of global solar radiation data with limited spatial coverage. Hence, solar radiation models have been developed to estimate the diffuse fraction from the measured global irradiation. This information is critical for the assessment of the potential of solar energy technologies; for example, the decision to use photovoltaic systems with tracking systems. The different solar radiation models for this purpose differ in the parameters used as input. The simplest, and most common, are models which use global radiation information only. More sophisticated models require meteorological parameters such as information from clouds, atmospheric turbidity, temperature or precipitable water content. Most of these models comprise correlations with the clearness index, kt (portion of horizontal extra-terrestrial radiation reaching the Earth's surface), to obtain the diffuse fraction kd (portion of diffuse component from global radiation). The applicability of these different models is related to the local atmospheric conditions and climatic characteristics. The models are not of general validity and are applicable only to locations where the albedo of the surrounding terrain and the atmospheric contamination by dust are not significantly different from those where the corresponding methods were developed. Thus, models of diffuse fraction exhibit a relevant degree of location dependence: e.g. models developed considering data acquired in Europe are mainly linked to Northern, Central or, more recently, Mediterranean areas. The Azores Archipelago, with its particular climate and cloud cover characteristics, different from mainland Europe, has not yet been considered for the development or testing of such models. The Azorean climate reveals large amounts of cloud cover in its annual cycle, with spatial and temporal variabilities more complex than the common Summer/Winter pattern. This study explores the applicability of different
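The kt → kd correlations described above are typically piecewise fits. The sketch below follows the widely cited Erbs et al. form as an example of the genre; treat the coefficients as an illustrative correlation from the literature, not the one evaluated in this study, and note that such fits are location-dependent, which is exactly the point the abstract makes.

```python
# Illustrative clearness-index -> diffuse-fraction correlation
# (Erbs-type piecewise polynomial; coefficients shown as an example).

def erbs_diffuse_fraction(kt):
    """Diffuse fraction kd of global horizontal irradiance,
    given clearness index kt."""
    if kt <= 0.22:          # overcast: almost all radiation is diffuse
        return 1.0 - 0.09 * kt
    if kt <= 0.80:          # intermediate skies: quartic fit
        return (0.9511 - 0.1604 * kt + 4.388 * kt**2
                - 16.638 * kt**3 + 12.336 * kt**4)
    return 0.165            # very clear skies: small residual diffuse part

kd_overcast = erbs_diffuse_fraction(0.1)
kd_clear = erbs_diffuse_fraction(0.9)
```

For a persistently cloudy climate like the Azores, most observations would fall in the first two branches, which is where correlations developed elsewhere diverge the most.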
Kang, Chaogui; Liu, Yu; Guo, Diansheng; Qin, Kun
2015-01-01
We generalized the recently introduced “radiation model”, as an analog to the generalization of the classic “gravity model”, to consolidate its nature of universality for modeling diverse mobility systems. By imposing the appropriate scaling exponent λ, normalization factor κ and system constraints including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicated that the generalized radiation model outperformed alternative mobility models in various empirical analyses. PMID:26600153
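The baseline being generalized is the parameter-free radiation model of Simini et al., in which flux from origin i to destination j depends only on the two populations and the population within the intervening circle. A minimal sketch of that baseline (before the λ and κ generalizations introduced in this paper) is:

```python
# Original (ungeneralized) radiation model of human mobility.

def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected trips from i to j.
    T_i  : total trips leaving origin i
    m_i  : population of origin i
    n_j  : population of destination j
    s_ij : population inside the circle of radius d(i,j) centered
           on i, excluding m_i and n_j."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# More intervening opportunities (larger s_ij) absorb more trips:
near = radiation_flux(100.0, 1000.0, 1000.0, 0.0)
far = radiation_flux(100.0, 1000.0, 1000.0, 5000.0)
```

The generalization described in the abstract rescales this expression with an exponent λ and normalization κ; the exact generalized form should be taken from the paper.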
The simplest models of radiative neutrino mass
NASA Astrophysics Data System (ADS)
Law, Sandy S. C.; McDonald, Kristian L.
2014-04-01
The complexity of radiative neutrino-mass models can be judged by: (i) whether they require the imposition of ad hoc symmetries, (ii) the number of new multiplets they introduce and (iii) the number of arbitrary parameters that appear. Considering models that do not employ new symmetries, the simplest models have two new multiplets and a minimal number of new parameters. With this in mind, we search for the simplest models of radiative neutrino mass. We are led to two models, containing a real scalar triplet and a charged scalar doublet (respectively), in addition to the charged singlet scalar considered by Zee [h+ (1, 1, 2)]. These models are essentially simplified versions of the Zee model and appear to be the simplest models of radiative neutrino mass. However, despite successfully generating nonzero masses, present-day data is sufficient to rule these simple models out. The Zee and Zee-Babu models therefore remain as the simplest viable models. Moving beyond the minimal cases, we find a new model of two-loop masses that employs the charged doublet Φ (1, 2, 3) and the doubly-charged scalar k++ (1, 1, 4). This is the sole remaining model that employs only three new noncolored multiplets.
Radiation budget measurement/model interface
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.
1983-01-01
This final report includes research results from the period February, 1981 through November, 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing (E-mail: jing.xiong@siat.ac.cn); Zhang, Jianwei
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
Evaluation of Two New Models of Net Radiometers and Comparison to a Model to Predict Net Radiation
NASA Astrophysics Data System (ADS)
Blonquist, J. M.; Tanner, B. D.; Bugbee, B.
2007-12-01
Net radiation is a key component to the surface energy balance, but it is difficult and expensive to measure accurately. Two new net radiometer models (Hukseflux NR01 and Kipp & Zonen CNR2) have been released in the past year. We evaluated and compared these models to two Kipp and Zonen model CNR1 net radiometers, and to two less expensive, older model net radiometers (Kipp & Zonen NR-Lite and REBS Q*7.1). Additionally, we predicted net radiation from solar radiation, air temperature, and absolute humidity measurements using a commonly used model that calculates net longwave radiation using a Brunt (1932; 1952) approach for predicting net emissivity. The model uses the ratio of measured solar radiation to predicted clear-sky solar radiation as a surrogate for cloud cover. Net shortwave radiation is determined by direct measurement of solar radiation and the albedo of the surface. Hourly averages and daily totals (over the course of the study; 33 days) from three replicate sensors of the two new net radiometers compared quite well to the CNR1 radiometers. The difference was generally less than +/- 5 %. Three replicates of the two older model net radiometers did not agree as well with the newer models, with differences generally less than +/- 15 %. Our data matched what others (Cobos and Baker, 2003; Brotzge and Duchon, 2000) have shown for these older radiometers. The net radiation model yielded hourly average and daily total values that were 10-15 % higher than the CNR1 radiometers. Our findings indicate that accuracy increases with increasing cost. Prediction of net radiation from the model yielded adequate results for some applications, such as evapotranspiration predictions and irrigation scheduling, but the model has considerable error at night due to some simplifying assumptions. Accurate net radiation measurements depend on proper placement of the sensor, proper leveling, and routine maintenance to keep the sensing surfaces clean.
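The prediction scheme described above can be sketched compactly: net shortwave from measured solar radiation and albedo, net longwave from a Brunt-type net emissivity scaled by a clearness-based cloud factor. The coefficients below follow the common FAO-56 variant of the Brunt approach and are an assumption for illustration, not necessarily those used in this study.

```python
# Net radiation from solar radiation, air temperature, and humidity.
# Brunt-type net emissivity with FAO-56-style coefficients (assumed).

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def net_longwave(T_air_K, e_a_kPa, rs_over_rso):
    """Net outgoing longwave flux (W m-2).
    e_a_kPa     : actual vapor pressure
    rs_over_rso : measured / clear-sky solar radiation (cloud surrogate)."""
    emissivity_net = 0.34 - 0.14 * e_a_kPa ** 0.5     # Brunt: a - b*sqrt(e)
    cloud_factor = 1.35 * min(rs_over_rso, 1.0) - 0.35
    return SIGMA * T_air_K ** 4 * emissivity_net * cloud_factor

def net_radiation(solar, albedo, lw_net):
    """Rn = net shortwave - net longwave."""
    return (1.0 - albedo) * solar - lw_net

lw = net_longwave(293.15, 1.5, 1.0)           # clear midday example
rn = net_radiation(500.0, 0.23, lw)
```

At night rs_over_rso is undefined, which is one source of the nighttime error the abstract mentions; implementations typically carry the last daytime cloud factor forward.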
String Fragmentation Model in Space Radiation Problems
NASA Technical Reports Server (NTRS)
Tang, Alfred; Johnson, Eloise (Editor); Norbury, John W.; Tripathi, R. K.
2002-01-01
String fragmentation models such as the Lund Model fit experimental particle production cross sections very well in the high-energy limit. This paper gives an introduction of the massless relativistic string in the Lund Model and shows how it can be modified with a simple assumption to produce formulas for meson production cross sections for space radiation research. The results of the string model are compared with inclusive pion production data from proton-proton collision experiments.
Modelling the martian cosmic radiation environment
NASA Astrophysics Data System (ADS)
Dartnell, L. R.; Desorgher, L.; Ward, J. M.; Coates, A. J.
2013-09-01
The martian surface is no longer protected by a global magnetic field or substantial atmosphere and so is essentially unshielded to the flux of cosmic rays. This creates an ionising radiation field on the surface and subsurface that is hazardous to life and the operation of spacecraft instruments. Here we report the modelling approach used to characterise this complex and time-variable radiation environment and discuss the wider applications of the results generated.
Towards accurate kinetic modeling of prompt NO formation in hydrocarbon flames via the NCN pathway
Sutton, Jeffrey A.; Fleming, James W.
2008-08-15
A basic kinetic mechanism that can predict the appropriate prompt-NO precursor NCN, as shown by experiment, with relative accuracy while still producing postflame NO results that can be calculated as accurately as or more accurately than through the former HCN pathway is presented for the first time. The basic NCN submechanism should be a starting point for future NCN kinetic and prompt NO formation refinement.
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple and efficient approach is presented to conduct estimations of the cosmic ray induced failure rate for high voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by reliability, resulting in a better and overall more efficient design methodology. Starting from an experimental and theoretical background brought forth a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to combination with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.
Infrared radiation models for atmospheric ozone
NASA Technical Reports Server (NTRS)
Kratz, David P.; Cess, Robert D.
1988-01-01
A hierarchy of line-by-line, narrow-band, and broadband infrared radiation models is discussed for ozone, a radiatively important atmospheric trace gas. It is shown that the narrow-band (Malkmus) model is in near-exact agreement with the line-by-line model, thus providing a means of testing narrow-band Curtis-Godson scaling, and it is found that this scaling procedure leads to errors in atmospheric fluxes of up to 10 percent. This error is a direct consequence of the altitude dependence of the ozone mixing ratio. Somewhat greater flux errors arise with use of the broadband model, due both to the lesser accuracy of the broadband scaling procedure and to inherent errors within the broadband model, despite the fact that this model has been tuned to the line-by-line model.
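The Malkmus statistical model referenced above has a closed-form mean transmittance, which is why it can be benchmarked so cheaply against line-by-line results. One common form of it is sketched below; notation for the band parameters varies between authors, so take the parameter names as an assumption.

```python
import math

def malkmus_transmittance(su, pib):
    """Mean narrow-band transmittance of the Malkmus model.
    su  : optical-path parameter (mean line strength x absorber amount)
    pib : line-width-to-spacing parameter (pi * beta).
    One common form; parameter conventions differ across the literature."""
    return math.exp(-(pib / 2.0) * (math.sqrt(1.0 + 4.0 * su / pib) - 1.0))

# Weak-line limit: for su << pib the exponent reduces to -su (Beer's law).
tau_weak = malkmus_transmittance(1e-6, 1.0)
```

The square-root dependence in the strong-line regime is what the statistical model adds over simple Beer-law absorption, capturing line-center saturation.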
Development of a new Global RAdiation Belt model: GRAB
NASA Astrophysics Data System (ADS)
Sicard-Piet, Angelica; Lazaro, Didier; Maget, Vincent; Rolland, Guy; Ecoffet, Robert; Bourdarie, Sébastien; Boscher, Daniel; Standarovski, Denis
2016-07-01
The well-known AP8 and AE8 NASA models are commonly used in industry to specify the radiation belt environment. Unfortunately, there are some limitations in the use of these models, first due to the covered energy range, but also because in some regions of space there are discrepancies between the predicted average values and the measurements. Therefore, our aim is to develop a radiation belt model covering a large region of space and energy, from LEO altitudes to GEO and above, and from plasma to relativistic particles. The aim for the first version is to correct the AP8 and AE8 models where they are deficient or not defined. For geostationary orbit, we developed the IGE-2006 electron model ten years ago; it was proven to be more accurate than AE8, is commonly used in industry, and covers a broad energy range from 1 keV to 5 MeV. Since then, a proton model for geostationary orbit was also developed for material applications, followed by the OZONE model covering a narrower energy range but the whole outer electron belt, and a SLOT model to assess average electron values for 2
Threshold models in radiation carcinogenesis
Hoel, D.G.; Li, P.
1998-09-01
Cancer incidence and mortality data from the atomic bomb survivors cohort have been analyzed to allow for the possibility of a threshold dose response. The same dose-response models as used in the original papers were fit to the data. The estimated cancer incidence from the fitted models over-predicted the observed cancer incidence in the lowest exposure group. This is consistent with a threshold or nonlinear dose response at low doses. Thresholds were added to the dose-response models, and the range of possible thresholds is shown for both solid tumor cancers and the different leukemia types. This analysis suggests that the A-bomb cancer incidence data agree more with a threshold or nonlinear dose-response model than with a purely linear model, although the linear model is statistically equivalent. This observation is not found with the mortality data. For both the incidence data and the mortality data, the addition of a threshold term significantly improves the fit to the linear or linear-quadratic dose response for both total leukemias and for the leukemia subtypes ALL, AML, and CML.
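Adding a threshold to a linear-quadratic dose response amounts to shifting the dose axis and clipping at zero. A minimal sketch of the resulting model family (parameter names are generic placeholders, not the paper's fitted values):

```python
# Linear-quadratic dose response with an optional threshold D0:
# zero excess risk below D0, LQ in the excess dose (D - D0) above it.

def excess_relative_risk(dose, alpha, beta, threshold=0.0):
    """ERR(D) = alpha*(D - D0) + beta*(D - D0)^2 for D > D0, else 0."""
    d = max(0.0, dose - threshold)
    return alpha * d + beta * d * d

below = excess_relative_risk(0.05, 1.0, 0.5, threshold=0.1)  # under D0
plain_lq = excess_relative_risk(1.0, 1.0, 0.5)               # D0 = 0 case
```

Setting threshold=0 recovers the ordinary linear-quadratic model, so the threshold fit nests the original one and can be compared by a likelihood-ratio test.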
A first-order radiative transfer model for microwave radiometry of forest canopies at L-band
Technology Transfer Automated Retrieval System (TEKTRAN)
In this study, a first-order radiative transfer (RT) model is developed to more accurately account for vegetation canopy scattering by modifying the basic radiative transfer model (the zero-order RT solution). In order to optimally utilize microwave radiometric data in soil moisture (SM) retrievals ...
NASA Astrophysics Data System (ADS)
Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.
2015-07-01
Routine measurements of the beam irradiance at normal incidence (DNI) include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and that from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and a collocated Sun and Aureole Measurement (SAM) instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both radiative transfer models, libRadtran and SMARTS, offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 5 %, a relative bias of +1 % and a coefficient of determination greater than 0.97. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and a bias of 22 % and −19 %, respectively, and a coefficient of determination of 0.89. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard DNI measurements.
Helium Reionization Simulations. I. Modeling Quasars as Radiation Sources
NASA Astrophysics Data System (ADS)
La Plante, Paul; Trac, Hy
2016-09-01
We introduce a new project to understand helium reionization using fully coupled N-body, hydrodynamics, and radiative transfer simulations. This project aims to capture correctly the thermal history of the intergalactic medium as a result of reionization and make predictions about the Lyα forest and baryon temperature–density relation. The dominant sources of radiation for this transition are quasars, so modeling the source population accurately is very important for making reliable predictions. In this first paper, we present a new method for populating dark matter halos with quasars. Our set of quasar models includes two different light curves, a lightbulb (simple on/off) and symmetric exponential model, and luminosity-dependent quasar lifetimes. Our method self-consistently reproduces an input quasar luminosity function given a halo catalog from an N-body simulation, and propagates quasars through the merger history of halo hosts. After calibrating quasar clustering using measurements from the Baryon Oscillation Spectroscopic Survey, we find that the characteristic mass of quasar hosts is M_h ∼ 2.5 × 10¹² h⁻¹ M_⊙ for the lightbulb model, and M_h ∼ 2.3 × 10¹² h⁻¹ M_⊙ for the exponential model. In the latter model, the peak quasar luminosity for a given halo mass is larger than that in the former, typically by a factor of 1.5–2. The effective lifetime for quasars in the lightbulb model is 59 Myr, and in the exponential case, the effective time constant is about 15 Myr. We include semi-analytic calculations of helium reionization, and discuss how to include these quasars as sources of ionizing radiation for full hydrodynamics with radiative transfer simulations in order to study helium reionization.
Radiative transfer model: matrix operator method.
Liu, Q; Ruprecht, E
1996-07-20
A radiative transfer model, the matrix operator method, is discussed here. The matrix operator method is applied to a plane-parallel atmosphere within three spectral ranges: the visible, the infrared, and the microwave. For a homogeneous layer with spherical scattering, the radiative transfer equation can be solved analytically. The vertically inhomogeneous atmosphere can be subdivided into a set of homogeneous layers. The solution of the radiative transfer equation for the vertically inhomogeneous atmosphere is obtained recurrently from the analytical solutions for the subdivided layers. As an example of the application of the matrix operator method, the effects of cirrus and stratocumulus clouds on the net radiation at the surface and at the top of the atmosphere are investigated. The relationship between the polarization in the microwave range and the rain rates is also studied. Copies of the FORTRAN program and its documentation are available on diskette. PMID:21102832
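The recurrent layer combination at the heart of the method is the "adding" (interaction) formula: two layers exchange radiation through a geometric series of inter-layer reflections, summed by the factor 1/(1 − r1·r2). The scalar sketch below shows this for symmetric layers characterized by a single reflectance/transmittance pair; the full matrix operator method replaces these scalars with reflection and transmission operators (matrices over angle), but the algebra has the same shape.

```python
# Scalar "adding" step of the matrix operator / adding-doubling method,
# for layers assumed symmetric (same reflectance from above and below).

def add_layers(r1, t1, r2, t2):
    """Combine layer 1 (on top) with layer 2 (below) into one layer.
    The 1/(1 - r1*r2) factor sums the infinite series of reflections
    bouncing between the two layers."""
    denom = 1.0 - r1 * r2
    r12 = r1 + t1 * r2 * t1 / denom
    t12 = t1 * t2 / denom
    return r12, t12

# Two conservative (non-absorbing) layers: r + t = 1 for each.
r12, t12 = add_layers(0.3, 0.7, 0.2, 0.8)
```

For conservative layers the combined layer is also conservative (r12 + t12 = 1), a useful sanity check when building up an inhomogeneous atmosphere recursively.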
Atmospheric radiation model for water surfaces
NASA Technical Reports Server (NTRS)
Turner, R. E.; Gaskill, D. W.; Lierzer, J. R.
1982-01-01
An atmospheric correction model was extended to account for various atmospheric radiation components in remotely sensed data. Components such as the atmospheric path radiance which results from singly scattered sky radiation specularly reflected by the water surface are considered. A component which is referred to as the virtual Sun path radiance, i.e. the singly scattered path radiance which results from the solar radiation which is specularly reflected by the water surface is also considered. These atmospheric radiation components are coded into a computer program for the analysis of multispectral remote sensor data over the Great Lakes of the United States. The user must know certain parameters, such as the visibility or spectral optical thickness of the atmosphere and the geometry of the sensor with respect to the Sun and the target elements under investigation.
Sengupta, Manajit; Clothiaux, Eugene E.; Ackerman, Thomas P.; Kato, Seiji; Min, Qilong
2003-09-15
A one-year observational study of overcast boundary layer stratus at the U.S. Department of Energy Atmospheric Radiation Measurement Program Southern Great Plains site illustrates that surface radiation is primarily sensitive to cloud liquid water path, with cloud drop effective radius having a secondary influence. The mean, median and standard deviation of cloud liquid water path and cloud drop effective radius for the dataset are 0.120 mm, 0.101 mm, 0.108 mm, and 7.38 {micro}m, 7.13 {micro}m, 2.39 {micro}m, respectively. Radiative transfer calculations demonstrate that cloud optical depth and cloud normalized forcing are respectively three and six times as sensitive to liquid water path variations as they are to effective radius variations, when the observed ranges of each of those variables is considered. Overall, there is a 79% correlation between observed and computed surface fluxes when using a fixed effective radius of 7.5 {micro}m and observed liquid water paths in the calculations. One conclusion from this study is that measurement of the indirect aerosol effect will be problematic at the site, as variations in cloud liquid water path will most likely mask effects of variations in particle size.
Radiation model for row crops: II. Model evaluation
Technology Transfer Automated Retrieval System (TEKTRAN)
Relatively few radiation transfer studies have considered the impact of varying vegetation cover that typifies row crops, and methods to account for partial row crop cover have not been well investigated. Our objective was to evaluate a widely used radiation model that was modified for row crops ha...
Modeling of Radiative Transfer in Protostellar Disks
NASA Technical Reports Server (NTRS)
VonAllmen, Paul; Turner, Neal
2007-01-01
This program implements a spectral line, radiative transfer tool for interpreting Spitzer Space Telescope observations by matching them with models of protostellar disks, for improved understanding of planet and star formation. The Spitzer Space Telescope detects gas phase molecules in the infrared spectra of protostellar disks, with spectral lines carrying information on the chemical composition of the material from which planets form. Input to the software includes chemical models developed at JPL. The products are synthetic images and spectra for comparison with Spitzer measurements. Radiative transfer in a protostellar disk is primarily affected by absorption and emission processes in the dust and in molecular gases such as H2, CO, and HCO. The magnitude of the optical absorption and emission is determined by the population of the electronic, vibrational, and rotational energy levels. The population of the molecular levels is in turn determined by the intensity of the radiation field. Therefore, the intensity of the radiation field and the population of the molecular levels are inter-dependent quantities. To meet the computational challenges of solving for the coupled radiation field and electronic level populations in disks having wide ranges of optical depths and spatial scales, the tool runs in parallel on the JPL Dell cluster supercomputer, using C++ and Fortran compilers with a Message Passing Interface. Because this software has been developed on a distributed computing platform, the modeling of systems previously beyond the reach of available computational resources is possible.
ICRCCM Phase 2: Verification and calibration of radiation codes in climate models
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1991-01-01
Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). SPECTRE will establish an absolute standard against which to compare models, and will aim to remove the "hidden variables" (unknown humidities, aerosols, etc.) which radiation modelers have invoked to excuse disagreements with observation. The data to be collected during SPECTRE will form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects.
Technology Transfer Automated Retrieval System (TEKTRAN)
The three evapotranspiration (ET) measurement/retrieval techniques used in this study (lysimeter, scintillometer, and remote sensing) vary in their level of complexity, accuracy, resolution, and applicability. The lysimeter, with its point measurement, is the most accurate and direct method to measure ET...
Jovian S emission: Model of radiation source
NASA Astrophysics Data System (ADS)
Ryabov, B. P.
1994-04-01
A physical model of the radiation source and an excitation mechanism have been suggested for the S component in Jupiter's sporadic radio emission. The model provides a unique explanation for most of the interrelated phenomena observed, allowing a consistent interpretation of the emission cone structure, behavior of the integrated radio spectrum, occurrence probability of S bursts, location and size of the radiation source, and fine structure of the dynamic spectra. The mechanism responsible for the S bursts is also discussed in connection with the L type emission. Relations are traced between parameters of the radio emission and geometry of the Io flux tube. Fluctuations in the current amplitude through the tube are estimated, along with the refractive index value and mass density of the plasma near the radiation source.
Shuttle Spacesuit (Radiation) Model Development
NASA Technical Reports Server (NTRS)
Anderson, Brooke M.; Nealy, J. E.; Qualls, G. D.; Staritz, P. J.; Wilson, J. W.; Kim, M.-H. Y.; Cucinotta, F. A.; Atwell, W.; DeAngelis, G.; Ware, J.
2001-01-01
A detailed spacesuit computational model is being developed at the Langley Research Center for exposure evaluation studies. The details of the construction of the spacesuit are critical to an estimate of exposures and for assessing the health risk to the astronaut during extravehicular activity (EVA). Fine detail of the basic fabric structure, helmet, and backpack is required to assure a valid evaluation. The exposure fields within the Computerized Anatomical Male (CAM) and Female (CAF) are evaluated at 148 and 156 points, respectively, to determine the dose fluctuations within critical organs. Exposure evaluations for ambient environments will be given and potential implications for geomagnetic storm conditions discussed.
Knorr, K L; Hilsenbeck, S G; Wenger, C R; Pounds, G; Oldaker, T; Vendely, P; Pandian, M R; Harrington, D; Clark, G M
1992-01-01
Determining an appropriate level of adjuvant therapy is one of the most difficult facets of treating breast cancer patients. Although the myriad of prognostic factors aid in this decision, often they give conflicting reports of a patient's prognosis. What we need is a survival model which can properly utilize the information contained in these factors and give an accurate, reliable account of the patient's probability of recurrence. We also need a method of evaluating these models' predictive ability instead of simply measuring goodness-of-fit, as is currently done. Often, prognostic factors are broken into two categories such as positive or negative. But this dichotomization may hide valuable prognostic information. We investigated whether continuous representations of factors, including standard transformations--logarithmic, square root, categorical, and smoothers--might more accurately estimate the underlying relationship between each factor and survival. We chose the logistic regression model, a special case of the commonly used Cox model, to test our hypothesis. The model containing continuous transformed factors fit the data more closely than the model containing the traditional dichotomized factors. In order to appropriately evaluate these models, we introduce three predictive validity statistics--the Calibration score, the Overall Calibration score, and the Brier score--designed to assess the model's accuracy and reliability. These standardized scores showed the transformed factors predicted three year survival accurately and reliably. The scores can also be used to assess models or compare across studies. PMID:1391991
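The Brier score the authors introduce can be illustrated with a minimal sketch (hypothetical data; this shows the generic score for binary outcomes, not the authors' standardized variant):

```python
import numpy as np

def brier_score(outcomes, predicted_probs):
    """Mean squared difference between predicted probabilities and
    observed binary outcomes (e.g. 1 = recurrence within 3 years)."""
    outcomes = np.asarray(outcomes, dtype=float)
    predicted_probs = np.asarray(predicted_probs, dtype=float)
    return float(np.mean((predicted_probs - outcomes) ** 2))

# A sharp, well-calibrated model scores near 0; an uninformative model
# predicting p = 0.5 for everyone scores 0.25.
outcomes = [0, 0, 1, 1]
score = brier_score(outcomes, [0.1, 0.2, 0.8, 0.9])  # ≈ 0.025
```

Lower is better, so the score rewards both calibration (probabilities that match observed frequencies) and sharpness (probabilities near 0 or 1).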
A radiation damage repair model for normal tissues
NASA Astrophysics Data System (ADS)
Partridge, Mike
2008-07-01
A cellular Monte Carlo model describing radiation damage and repair in normal epithelial tissues is presented. The deliberately simplified model includes cell cycling, cell motility and radiation damage response (cell cycle arrest and cell death) only. Results demonstrate that the model produces a stable equilibrium system for mean cell cycle times in the range 24-96 h. Simulated irradiation of these stable equilibrium systems produced a range of responses that are shown to be consistent with experimental and clinical observation, including (i) re-epithelialization of radiation-induced lesions by a mixture of cell migration into the wound and repopulation at the periphery; (ii) observed radiosensitivity that is quantitatively consistent with both the rate of induction of irreparable DNA lesions and, independently, with the observed acute oral and pharyngeal mucosal reactions to radiotherapy; (iii) an observed time between irradiation and maximum toxicity that is consistent with experimental data for skin; (iv) quantitatively accurate predictions of low-dose hyper-radiosensitivity; (v) Gompertzian repopulation for very small lesions (~2000 cells) and (vi) a linear rate of re-epithelialization of 5-10 µm h-1 for large lesions (>15 000 cells).
Status of Galileo interim radiation electron model
NASA Technical Reports Server (NTRS)
Garrett, H. B.; Jun, I.; Ratliff, J. M.; Evans, R. W.; Clough, G. A.; McEntire, R. W.
2003-01-01
Measurements of the high-energy, omni-directional electron environment by the Galileo spacecraft Energetic Particle Detector (EPD) were used to develop a new model of Jupiter's trapped electron radiation in the jovian equatorial plane for the range 8 to 16 Jupiter radii.
Some analytical models of radiating collapsing spheres
Herrera, L.; Di Prisco, A; Ospino, J.
2006-08-15
We present some analytical solutions to the Einstein equations describing radiating collapsing spheres in the diffusion approximation. The solutions allow for the modeling of physically reasonable situations. The temperature is calculated for each solution using a hyperbolic transport equation, which makes it possible to exhibit the influence of relaxational effects on the dynamics of the system.
Modeling and Laboratory Investigations of Radiative Shocks
NASA Astrophysics Data System (ADS)
Grun, Jacob; Laming, J. Martin; Manka, Charles; Moore, Christopher; Jones, Ted; Tam, Daniel
2001-10-01
Supernova remnants are often inhomogeneous, with knots or clumps of material expanding in ambient plasma. This structure may be initiated by hydrodynamic instabilities occurring during the explosion, but it may plausibly be amplified by instabilities of the expanding shocks such as, for example, corrugation instabilities described by D’yakov in 1954, Vishniac in 1983, and observed in the laboratory by Grun et al. in 1991. Shock instability can occur when radiation lowers the effective adiabatic index of the gas. In view of the difficulty of modeling radiation in non-equilibrium plasmas, and the dependence of shock instabilities on such radiation, we are performing a laboratory experiment to study radiative shocks. The shocks are generated in a miniature, laser-driven shock tube. The gas density inside the tube at any instant in time is measured using time and space-resolved interferometry, and the emission spectrum of the gas is measured with time-resolved spectroscopy. We simulate the experiment with a 1D code that models time dependent post-shock ionization and non-equilibrium radiative cooling. S. P. D’yakov, Zhurnal Eksperimentalnoi Teoreticheskoi Fiziki 27, 288 (1954); see also section 90 in L.D. Landau and E.M. Lifshitz, Fluid Mechanics (Butterworth-Heinemann 1987); E.T. Vishniac, Astrophys. J. 236, 880 (1983); J. Grun, et al., Phys. Rev. Lett., 66, 2738 (1991)
Improving solar radiation forecasts from Eta/CPTEC model using statistical post-processing
NASA Astrophysics Data System (ADS)
Guarnieri, R. A.; Pereira, E. B.; Chou, S. C.
Solar radiation forecasts are mainly demanded by the energy sector, besides other applications. Accurate short-term forecasts of solar energy resources are required for the management of co-generation systems and energy dispatch in transmission lines. Mesoscale weather forecast models usually have radiation parameterization codes, since solar radiation is the main energy source for atmospheric processes. The Eta model, running operationally at the Brazilian Center of Weather Forecast and Climate Studies (CPTEC/INPE), is a mesoscale model with 40 km horizontal resolution. This model has outputs for many meteorological variables, including solar radiation incidence on the ground. These radiation forecasts are nevertheless greatly overestimated. As an attempt to improve the forecasts of solar energy resources using the Eta model, statistical post-processing (or refining) models were used. Multiple linear regression (MLR) models were adjusted and artificial neural networks (ANN) were trained using a statistically selected group of 7 variables predicted by the Eta model, not including the Eta solar radiation forecast itself. This group of variables expresses the future weather and surface conditions. The theoretical solar radiation amount at the top of the atmosphere (TOA) was calculated and used as another input. Solar radiation measurements from pyranometers (Kipp & Zonen CM-21) installed on two ground stations of the SONDA Project were used as the targets to be simulated throughout the adjustment (training) of the models. These measurements were also used
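The MLR post-processing step described above can be sketched as an ordinary least-squares fit of measured surface radiation against model predictors plus TOA radiation (all data below are synthetic stand-ins; variable names are illustrative, not the paper's selected predictors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set: 7 model predictors plus
# TOA radiation (8 columns); the target is measured surface radiation.
n_samples = 200
X = rng.uniform(size=(n_samples, 8))
true_coef = rng.normal(size=8)
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)  # "pyranometer" data

# Fit the MLR refinement by ordinary least squares with an intercept.
A = np.column_stack([np.ones(n_samples), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Refined forecast for a new set of model outputs.
x_new = rng.uniform(size=8)
refined_forecast = coef[0] + x_new @ coef[1:]
```

The ANN variant replaces the linear map with a trained network but uses the same predictors and targets.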
Development of an infrared radiative heating model
NASA Technical Reports Server (NTRS)
Bergstrom, R. W.; Helmle, L. C.
1979-01-01
Infrared radiative transfer solution algorithms used in global circulation models were assessed. Computation techniques applicable to the Ames circulation model are identified. Transmission properties of gaseous CO2, H2O, and O3 are gathered, and a computer program is developed, using the line parameter tape and a Voigt profile subroutine, which computes the transmission of CO2, H2O, and O3. A computer code designed to compute atmospheric cooling rates was developed.
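A transmission calculation of this kind rests on the Voigt line shape; here is a minimal monochromatic sketch using SciPy's Faddeeva function (parameter names and values are illustrative, not those of the original program):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt_profile(nu, nu0, gamma_l, alpha_d):
    """Voigt line shape: Lorentzian (HWHM gamma_l) convolved with a
    Doppler Gaussian (1/e half-width alpha_d), normalized to unit area."""
    x = (nu - nu0) / alpha_d
    y = gamma_l / alpha_d
    return wofz(x + 1j * y).real / (alpha_d * np.sqrt(np.pi))

def transmittance(nu, nu0, s_line, gamma_l, alpha_d, path):
    """Beer-Lambert monochromatic transmission for a single line of
    strength s_line over absorber amount path."""
    return np.exp(-s_line * path * voigt_profile(nu, nu0, gamma_l, alpha_d))
```

A line-by-line code sums such profiles over every line on the parameter tape before exponentiating.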
Toward a new radiative-transfer-based model for remote sensing of terrestrial surface albedo.
Cui, Shengcheng; Zhen, Xiaobing; Wang, Zhen; Yang, Shizhi; Zhu, WenYue; Li, Xuebin; Huang, Honghua; Wei, Heli
2015-08-15
This Letter formulates a simple yet accurate radiative-transfer-based theoretical model to characterize the fraction of radiation reflected by terrestrial surfaces. Emphasis is placed on the concept of the inhomogeneous distribution of the diffuse sky radiation function (DSRF) and multiple interaction effects (MIE). Neglecting DSRF and MIE produces a -1.55% mean relative bias in albedo estimates. The presented model can elucidate the impact of DSRF on the surface volume scattering and geometric-optical scattering components, respectively, especially for slant illuminations with solar zenith angles (SZA) larger than 50°. Particularly striking in the comparisons between our model and ground-based observations is the level of agreement achieved, indicating that our model can effectively resolve the longstanding issue of accurately estimating albedo at extremely large SZAs and is promising for land-atmosphere interaction studies. PMID:26274674
Grant, K.E.; Taylor, K.E.; Ellis, J.S.; Wuebbles, D.J.
1987-07-01
The authors have implemented a series of state of the art radiation transport submodels in previously developed one dimensional and two dimensional chemical transport models of the troposphere and stratosphere. These submodels provide the capability of calculating accurate solar and infrared heating rates. They are a firm basis for further radiation submodel development as well as for studying interactions between radiation and model dynamics under varying conditions of clear sky, clouds, and aerosols. 37 refs., 3 figs.
NASA Technical Reports Server (NTRS)
Carlson, Leland A.; Bobskill, Glenn J.; Greendyke, Robert B.
1988-01-01
A series of detailed studies comparing various vibration dissociation coupling models, reaction systems and rates, and radiative heating models has been conducted for the nonequilibrium stagnation region of an AFE/AOTV vehicle. Atomic and molecular nonequilibrium radiation correction factors have been developed and applied to various absorption coefficient step models, and a modified vibration dissociation coupling model has been shown to yield good vibration/electronic temperature and concentration profiles. While results indicate sensitivity to the choice of vibration dissociation coupling model and to the nitrogen electron impact ionization rate, by proper combinations accurate flowfield and radiative heating results can be obtained. These results indicate that nonequilibrium effects significantly affect the flowfield and the radiative heat transfer. However, additional work is needed in ionization chemistry and absorption coefficient modeling.
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Ab Initio Modeling of Molecular Radiation
NASA Technical Reports Server (NTRS)
Jaffe, Richard; Schwenke, David
2014-01-01
Radiative emission from excited states of atoms and molecules can comprise a significant fraction of the total heat flux experienced by spacecraft during atmospheric entry at hypersonic speeds. For spacecraft with ablating heat shields, some of this radiative flux can be absorbed by molecular constituents in the boundary layer that are formed by the ablation process. Ab initio quantum mechanical calculations are carried out to predict the strengths of these emission and absorption processes. This talk will describe the methods used in these calculations using, as examples, the 4th positive emission bands of CO and the ¹Σg⁺-¹Σu⁺ absorption in C3. The results of these calculations are being used as input to NASA radiation modeling codes like NeqAir, HARA and HyperRad.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel
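The Jacobi, Gauss-Seidel, and SOR iterations discussed above can be illustrated on a generic diagonally dominant linear system (a sketch of the underlying iteration only, not of the authors' approximate-operator RT scheme):

```python
import numpy as np

def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel.
    Unlike Jacobi, each sweep reuses components already updated in the
    same sweep, which is the source of the factor-of-2 speedup."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Diagonally dominant example system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x_gs = sor_solve(A, b, omega=1.0)   # Gauss-Seidel
x_sor = sor_solve(A, b, omega=1.2)  # over-relaxed
```

In the RT setting, the "matrix" is the approximate lambda operator and the sweep order follows optical depth, but the update pattern is the same.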
NASA Astrophysics Data System (ADS)
Juste, B.; Miró, R.; Verdú, G.; Santos, A.
2014-06-01
This work presents a methodology to reconstruct a Linac high energy photon spectrum beam. The method is based on EPID scatter images generated when the incident photon beam impinges onto a plastic block. The distribution of scatter radiation produced by this scattering object placed on the external EPID surface and centered at the beam field size was measured. The scatter distribution was also simulated for a series of monoenergetic identical geometry photon beams. Monte Carlo simulations were used to predict the scattered photons for monoenergetic photon beams at 92 different locations, with 0.5 cm increments and at 8.5 cm from the centre of the scattering material. Measurements were performed with the same geometry using a 6 MeV photon beam produced by the linear accelerator. A system of linear equations was generated to combine the polyenergetic EPID measurements with the monoenergetic simulation results. Regularization techniques were applied to solve the system for the incident photon spectrum. A linear matrix system, A×S=E, was developed to describe the scattering interactions and their relationship to the primary spectrum (S). A is the monoenergetic scatter matrix determined from the Monte Carlo simulations, S is the incident photon spectrum, and E represents the scatter distribution characterized by EPID measurement. Direct matrix inversion methods produce results that are not physically consistent due to errors inherent in the system, therefore Tikhonov regularization methods were applied to address the effects of these errors and to solve the system for obtaining a consistent bremsstrahlung spectrum.
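The Tikhonov step for the linear system A×S=E can be sketched as follows (matrix sizes, the test spectrum, and the regularization parameter are illustrative, not taken from the paper):

```python
import numpy as np

def tikhonov_solve(A, e, lam):
    """Regularized solution of A @ s = e: minimizes
    ||A s - e||^2 + lam ||s||^2, i.e. s = (A^T A + lam I)^-1 A^T e."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ e)

# Illustrative ill-conditioned system: a smooth "spectrum" s_true,
# a scatter matrix with decaying columns, and noisy measurements e.
rng = np.random.default_rng(1)
A = rng.uniform(size=(92, 30)) @ np.diag(1.0 / (1.0 + np.arange(30.0)))
s_true = np.exp(-0.5 * ((np.arange(30.0) - 10.0) / 4.0) ** 2)
e = A @ s_true + 1e-3 * rng.normal(size=92)

s_reg = tikhonov_solve(A, e, lam=1e-4)  # stays finite where naive
                                        # inversion amplifies the noise
```

Choosing lam trades fidelity to the measurements against suppression of noise-driven oscillations in the recovered spectrum.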
Dunn, Nicholas J. H.; Noid, W. G.
2015-12-28
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.
A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China
Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin
2014-01-01
Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
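The underlying Hargreaves-Samani relation estimates global radiation from extraterrestrial radiation and the diurnal temperature range, Rs = k_RS · √(Tmax - Tmin) · Ra. A minimal sketch (the coefficient follows the commonly quoted FAO-56 defaults, not the new model calibrated in this paper):

```python
import math

def hargreaves_samani(ra, t_max, t_min, k_rs=0.16):
    """Estimate daily global solar radiation from extraterrestrial
    radiation ra and the diurnal temperature range (same units as ra).
    k_rs ~ 0.16 for interior sites, ~ 0.19 for coastal sites (FAO-56)."""
    return k_rs * math.sqrt(t_max - t_min) * ra

# Example: Ra = 30 MJ m-2 day-1, Tmax = 28 C, Tmin = 12 C
rs = hargreaves_samani(30.0, 28.0, 12.0)  # 0.16 * sqrt(16) * 30 = 19.2
```

The modifications evaluated in the paper (Samani, Chen, and the proposed model) adjust this functional form or make k_rs depend on the temperature range itself.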
NASA Astrophysics Data System (ADS)
Widlowski, J.-L.; Taberner, M.; Pinty, B.; Bruniquel-Pinel, V.; Disney, M.; Fernandes, R.; Gastellu-Etchegorry, J.-P.; Gobron, N.; Kuusk, A.; Lavergne, T.; Leblanc, S.; Lewis, P. E.; Martin, E.; Mõttus, M.; North, P. R. J.; Qin, W.; Robustelli, M.; Rochdi, N.; Ruiloba, R.; Soler, C.; Thompson, R.; Verhoef, W.; Verstraete, M. M.; Xie, D.
2007-05-01
The Radiation Transfer Model Intercomparison (RAMI) initiative benchmarks canopy reflectance models under well-controlled experimental conditions. Launched for the first time in 1999, this triennial community exercise encourages the systematic evaluation of canopy reflectance models on a voluntary basis. The first phase of RAMI focused on documenting the spread among radiative transfer (RT) simulations over a small set of primarily 1-D canopies. The second phase expanded the scope to include structurally complex 3-D plant architectures with and without background topography. Here sometimes significant discrepancies were noted which effectively prevented the definition of a reliable "surrogate truth," over heterogeneous vegetation canopies, against which other RT models could then be compared. The present paper documents the outcome of the third phase of RAMI, highlighting both the significant progress that has been made in terms of model agreement since RAMI-2 and the capability of/need for RT models to accurately reproduce local estimates of radiative quantities under conditions that are reminiscent of in situ measurements. Our assessment of the self-consistency and the relative and absolute performance of 3-D Monte Carlo models in RAMI-3 supports their usage in the generation of a "surrogate truth" for all RAMI test cases. This development then leads (1) to the presentation of the "RAMI Online Model Checker" (ROMC), an open-access web-based interface to evaluate RT models automatically, and (2) to a reassessment of the role, scope, and opportunities of the RAMI project in the future.
Diffusion model for lightning radiative transfer
NASA Technical Reports Server (NTRS)
Koshak, William J.; Solakiewicz, Richard J.; Phanord, Dieudonne D.; Blakeslee, Richard J.
1994-01-01
A one-speed Boltzmann transport theory, with diffusion approximations, is applied to study the radiative transfer properties of lightning in optically thick thunderclouds. Near-infrared (lambda = 0.7774 micrometers) photons associated with a prominent oxygen emission triplet in the lightning spectrum are considered. Transient and spatially complex lightning radiation sources are placed inside a rectangular parallelepiped thundercloud geometry and the effects of multiple scattering are studied. The cloud is assumed to be composed of a homogeneous collection of identical spherical water droplets, each droplet a nearly conservative, anisotropic scatterer. Conceptually, we treat the thundercloud like a nuclear reactor, with photons replaced by neutrons, and utilize standard one-speed neutron diffusion techniques common in nuclear reactor analyses. Valid analytic results for the intensity distribution (expanded in spherical harmonics) are obtained for regions sufficiently far from sources. Model estimates of the arrival-time delay and pulse width broadening of lightning signals radiated from within the cloud are determined and the results are in good agreement with both experimental data and previous Monte Carlo estimates. Additional model studies of this kind will be used to study the general information content of cloud top lightning radiation signatures.
Accurate cortical tissue classification on MRI by modeling cortical folding patterns.
Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea
2015-09-01
Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts, as well as by disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding were regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed that the proposed framework classified the white matter-gray matter (GM) interface more accurately than the other algorithms, particularly in central and occipital cortices, which generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453
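Mean-shift clustering of the kind used for per-parcel tissue classification can be sketched on 1-D intensities (a from-scratch illustration with synthetic data; the bandwidth, intensity values, and labeling rule are hypothetical, not the authors' implementation):

```python
import numpy as np

def mean_shift_1d(values, bandwidth, n_iter=50):
    """Gaussian-kernel mean shift on 1-D intensities: each point moves
    to the kernel-weighted mean of the data until it settles on a
    density mode, with no need to fix the number of clusters."""
    modes = values.astype(float).copy()
    for _ in range(n_iter):
        w = np.exp(-0.5 * ((modes[:, None] - values[None, :]) / bandwidth) ** 2)
        modes = (w @ values) / w.sum(axis=1)
    return modes

# Synthetic GM-like and WM-like intensity populations
rng = np.random.default_rng(2)
values = np.concatenate([rng.normal(60.0, 3.0, 200),
                         rng.normal(100.0, 3.0, 200)])
modes = mean_shift_1d(values, bandwidth=5.0)
labels = (modes > 80.0).astype(int)  # 0 = near the 60 mode, 1 = near 100
```

Because each parcel is clustered independently, a regional intensity offset shifts both modes together without degrading the tissue boundary.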
Ustinov, E A
2014-10-01
Commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalization of the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems. PMID:25296827
Surface electron density models for accurate ab initio molecular dynamics with electronic friction
NASA Astrophysics Data System (ADS)
Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.
2016-06-01
Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology for studying the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its applicability becomes a complicated task in situations of substantial surface atom displacements, because the LDFA requires knowledge of the bare surface electron density at each integration step. In this work, we propose three different methods of calculating on-the-fly the electron density of the distorted surface, and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface atom displacements.
Biologically based multistage modeling of radiation effects
William Hazelton; Suresh Moolgavkar; E. Georg Luebeck
2005-08-30
This past year we have made substantial progress in modeling the contribution of homeostatic regulation to low-dose radiation effects and carcinogenesis. We have worked to refine and apply our multistage carcinogenesis models to explicitly incorporate cell cycle states, simple and complex damage, checkpoint delay, slow and fast repair, differentiation, and apoptosis to study the effects of low-dose ionizing radiation in mouse intestinal crypts, as well as in other tissues. We have one paper accepted for publication in ''Advances in Space Research'', and another manuscript in preparation describing this work. I also wrote a chapter describing our combined cell-cycle and multistage carcinogenesis model that will be published in a book on stochastic carcinogenesis models edited by Wei-Yuan Tan. In addition, we organized and held a workshop on ''Biologically Based Modeling of Human Health Effects of Low dose Ionizing Radiation'', July 28-29, 2005 at Fred Hutchinson Cancer Research Center in Seattle, Washington. We had over 20 participants, including Mary Helen Barcellos-Hoff as keynote speaker, talks by most of the low-dose modelers in the DOE low-dose program, experimentalists including Les Redpath (and Mary Helen), Noelle Metting from DOE, and Tony Brooks. It appears that homeostatic regulation may be central to understanding low-dose radiation phenomena. The primary effects of ionizing radiation (IR) are cell killing, delayed cell cycling, and induction of mutations. However, homeostatic regulation causes cells that are killed or damaged by IR to eventually be replaced. Cells with an initiating mutation may have a replacement advantage, leading to clonal expansion of these initiated cells. Thus we have focused particularly on modeling effects that disturb homeostatic regulation as early steps in the carcinogenic process. There are two primary considerations that support our focus on homeostatic regulation. First, a number of epidemiologic studies using multistage
High Resolution Aerosol Modeling: Decadal Changes in Radiative Forcing
Bergmann, D J; Chuang, C C; Govindasamy, B; Cameron-Smith, P J; Rotman, D A
2005-02-01
The Atmospheric Science Division of LLNL has performed high-resolution calculations of direct sulfate forcing using a DOE-provided computer resource at NERSC. We integrated our global chemistry-aerosol model (IMPACT) with the LLNL high-resolution global climate model (horizontal resolution as high as 100 km) to examine the temporal evolution of sulfate forcing since 1950. We note that all previous assessments of sulfate forcing reported in IPCC (2001) were based on global models with coarse spatial resolutions (≈300 km or even coarser). However, the short lifetime of aerosols (≈days) results in large spatial and temporal variations of radiative forcing by sulfate. As a result, global climate models with coarse resolutions do not accurately simulate sulfate forcing on regional scales. Much finer spatial resolutions are required in order to address the effects of regional anthropogenic SO2 emissions on the global atmosphere as well as the effects of long-range transport of sulfate aerosols on regional climate forcing. By taking advantage of the tera-scale computer resources at NERSC, we simulated the historic direct sulfate forcing at much finer spatial resolutions than ever attempted before. Furthermore, we performed high-resolution chemistry simulations and saved monthly averaged oxidant fields, which will be used in subsequent simulations of sulfate aerosol formation and their radiative impact.
Flow-radiation coupling for atmospheric entries using a Hybrid Statistical Narrow Band model
NASA Astrophysics Data System (ADS)
Soucasse, Laurent; Scoggins, James B.; Rivière, Philippe; Magin, Thierry E.; Soufiani, Anouar
2016-09-01
In this study, a Hybrid Statistical Narrow Band (HSNB) model is implemented to make fast and accurate predictions of radiative transfer effects on hypersonic entry flows. The HSNB model combines a Statistical Narrow Band (SNB) model for optically thick molecular systems, a box model for optically thin molecular systems and continua, and a Line-By-Line (LBL) description of atomic radiation. Radiative transfer calculations are coupled to a 1D stagnation-line flow model under thermal and chemical nonequilibrium. Earth entry conditions corresponding to the FIRE 2 experiment, as well as Titan entry conditions corresponding to the Huygens probe, are considered in this work. Thermal nonequilibrium is described by a two temperature model, although non-Boltzmann distributions of electronic levels provided by a Quasi-Steady State model are also considered for radiative transfer. For all the studied configurations, radiative transfer effects on the flow, the plasma chemistry and the total heat flux at the wall are analyzed in detail. The HSNB model is shown to reproduce LBL results with an accuracy better than 5% and a speed up of the computational time around two orders of magnitude. Concerning molecular radiation, the HSNB model provides a significant improvement in accuracy compared to the Smeared-Rotational-Band model, especially for Titan entries dominated by optically thick CN radiation.
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Vu-Quoc, Loc
2007-07-01
We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang. An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang. An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1999) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed TFD model is accurate, and is validated against nonlinear finite element analyses involving plastic flows in both the loading and unloading conditions. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations.
Pulsar Radiation Models - Radio to High Energies
NASA Astrophysics Data System (ADS)
Venter, Christo; Harding, Alice
Rotation-powered pulsars emit over nearly 19 decades of energy. Although an all-encompassing answer as to the origin of this broad-band emission remains elusive nearly 50 years after their discovery, the theorist does have a few tools in his / her toolkit to aid investigation. Phase-averaged spectra give clues as to the emitting particles, their acceleration, environment, and the radiation mechanism. Moreover, the phase-evolution of spectra constrains the radiation energetics and environment as different parts of the magnetosphere are exposed to the observer during the pulsar's rotation. A detailed model furthermore critically depends on the specification of the emission geometry. Modeling the light curves probes this fundamental geometric assumption, which is closely tied to the posited magnetospheric structure. Studying many versions of the same system helps to constrain critical population-averaged quantities, discover population trends, and probe model performance for different regions of phase space. When coupled with population synthesis, such modeling can provide powerful discrimination between competing emission models. Polarization properties may provide complementary constraints on the magnetic field orientation and pulsar geometry. Lastly, comparison of parameters inferred from independent models for the different wavebands yields necessary crosschecks. It is indeed fortunate that the past few years have witnessed an incredible increase in number and improved characterization of rotation-powered pulsars. We will review how the enhanced quality and quantity of data are providing impetus for further model refinement.
NASA Astrophysics Data System (ADS)
Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.
2015-12-01
Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and the collocated Sun and Aureole Measurement instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both radiative transfer models, libRadtran and SMARTS, offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The observed relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and bias of 27 % and -24 %, respectively, and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.
Multi Sensor Data Integration for AN Accurate 3d Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create 3D city models. Aerial imagery produces overall decent 3D city models and is generally well suited to generating 3D models of building roofs and non-complex terrain. However, 3D models generated automatically from aerial imagery generally lack accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, such models often suffer from undulating road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each dataset's weaknesses and helps create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.
ERIC Educational Resources Information Center
Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.
2011-01-01
Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
More accurate predictions with transonic Navier-Stokes methods through improved turbulence modeling
NASA Technical Reports Server (NTRS)
Johnson, Dennis A.
1989-01-01
Significant improvements in predictive accuracy for off-design conditions are achievable through better turbulence modeling, without necessarily adding any significant complication to the numerics. One well-established fact about turbulence is that it is slow to respond to changes in the mean strain field. The 'equilibrium' algebraic turbulence models make no attempt to model this characteristic, and as a consequence they exaggerate the turbulent boundary layer's ability to produce turbulent Reynolds shear stresses in regions of adverse pressure gradient. Consequently, too little momentum loss within the boundary layer is predicted in the region of the shock wave and along the aft part of the airfoil, where the surface pressure undergoes further increases. Recently, a 'nonequilibrium' algebraic turbulence model was formulated which attempts to capture this important characteristic of turbulence. This 'nonequilibrium' algebraic model employs an ordinary differential equation to model the slow response of the turbulence to changes in local flow conditions. In its original form, there was some question as to whether this 'nonequilibrium' model performed as well as the 'equilibrium' models for weak interaction cases. However, the turbulence model has since been further improved, and it now appears to perform at least as well as the 'equilibrium' models for weak interaction cases while representing a very significant improvement for strong interaction cases. The performance of this turbulence model relative to popular 'equilibrium' models is illustrated for three airfoil test cases of the 1987 AIAA Viscous Transonic Airfoil Workshop, Reno, Nevada. A form of this 'nonequilibrium' turbulence model is currently being applied to wing flows, for which similar improvements in predictive accuracy are being realized.
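The "slow response" idea described in this abstract can be illustrated with a generic first-order lag equation. This is a minimal sketch, not the specific turbulence closure of the paper; the lag time scale, the step change in the equilibrium stress, and all numerical values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a "nonequilibrium" lag ODE: the modeled Reynolds shear
# stress sigma relaxes toward its local-equilibrium value sigma_eq on a
# finite time scale t_lag, rather than adjusting instantaneously:
#     d(sigma)/dt = (sigma_eq - sigma) / t_lag
# All numbers below are illustrative, not taken from the abstract's model.

def relax(sigma0, sigma_eq, t_lag, dt, n_steps):
    """Explicit-Euler integration of the first-order lag equation."""
    sigma = sigma0
    history = [sigma]
    for _ in range(n_steps):
        sigma += dt * (sigma_eq - sigma) / t_lag
        history.append(sigma)
    return np.array(history)

# Step change in the equilibrium stress: the modeled stress follows only
# gradually, mimicking a boundary layer entering an adverse pressure gradient.
hist = relax(sigma0=1.0, sigma_eq=0.2, t_lag=1.0, dt=0.01, n_steps=500)
```

An 'equilibrium' model corresponds to the limit t_lag → 0, where the stress jumps instantly to its local value; the finite lag is what delays the stress response in rapidly changing strain fields.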
Johnson, Timothy C.; Wellman, Dawn M.
2015-06-26
Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterning over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, give us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment, driven with spatially continuous global rasters of precipitation and climate normals, largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across
NASA Astrophysics Data System (ADS)
Toyokuni, Genti; Takenaka, Hiroshi
2012-06-01
We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since the nature of Earth material is both elastic solid and viscous fluid, we should solve stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which makes viscoelasticity difficult to treat in time-domain computations such as the FDM. However, we now have a method using so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce the multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion near the Earth's center. In addition, we propose a technique to avoid the singularity problem of the wave equation in spherical coordinates at the Earth's center. We develop a scheme to calculate wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
Modeling Early Galaxies Using Radiation Hydrodynamics
2011-01-01
This simulation uses a flux-limited diffusion solver to explore the radiation hydrodynamics of early galaxies, in particular, the ionizing radiation created by Population III stars. At the time of this rendering, the simulation has evolved to a redshift of 3.5. The simulation volume is 11.2 comoving megaparsecs, and has a uniform grid of 1024³ cells, with over 1 billion dark matter and star particles. This animation shows a combined view of the baryon density, dark matter density, radiation energy, and emissivity from this simulation. The multi-variate rendering is particularly useful because it shows both the baryonic ("normal") matter and dark matter, while the pressure and temperature variables are properties of only the baryonic matter. Visible in the gas density are "bubbles", or shells, created by the radiation feedback from young stars. Seeing the bubbles from feedback provides confirmation of the physics model implemented. Features such as these are difficult to identify algorithmically, but easily found when viewing the visualization. The simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.
Pal, Saikat; Lindsey, Derek P.; Besier, Thor F.; Beaupre, Gary S.
2013-01-01
Cartilage material properties provide important insights into joint health, and cartilage material models are used in whole-joint finite element models. Although the biphasic model representing experimental creep indentation tests is commonly used to characterize cartilage, cartilage short-term response to loading is generally not characterized using the biphasic model. The purpose of this study was to determine the short-term and equilibrium material properties of human patella cartilage using a viscoelastic model representation of creep indentation tests. We performed 24 experimental creep indentation tests from 14 human patellar specimens ranging in age from 20 to 90 years (median age 61 years). We used a finite element model to reproduce the experimental tests and determined cartilage material properties from viscoelastic and biphasic representations of cartilage. The viscoelastic model consistently provided excellent representation of the short-term and equilibrium creep displacements. We determined initial elastic modulus, equilibrium elastic modulus, and equilibrium Poisson’s ratio using the viscoelastic model. The viscoelastic model can represent the short-term and equilibrium response of cartilage and may easily be implemented in whole-joint finite element models. PMID:23027200
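The short-term versus equilibrium behavior described in this abstract can be illustrated with a toy creep curve. This is a minimal sketch with synthetic data: the single-exponential (standard-linear-solid-style) form, the log-linear fitting procedure, and all parameter values are illustrative assumptions, not the paper's finite-element identification procedure.

```python
import numpy as np

# Standard-linear-solid-style creep displacement under a constant load:
#     u(t) = u_eq - (u_eq - u0) * exp(-t / tau)
# u0   -> short-term (initial elastic) displacement
# u_eq -> equilibrium displacement
# tau  -> characteristic relaxation time (all values below are synthetic)

def creep(t, u0, u_eq, tau):
    return u_eq - (u_eq - u0) * np.exp(-t / tau)

t = np.linspace(0.0, 100.0, 201)
u = creep(t, u0=0.05, u_eq=0.12, tau=15.0)    # synthetic "measurement"

# Recover the parameters from the curve: take the first and last samples as
# the short-term and (approximate) equilibrium displacements, then estimate
# tau by a log-linear fit over the early transient.
u0_est, ueq_est = u[0], u[-1]
early = t < 30.0                              # transient portion only
y = np.log((ueq_est - u[early]) / (ueq_est - u0_est))
tau_est = -1.0 / np.polyfit(t[early], y, 1)[0]
```

In the paper's setting the same two regimes (short-term and equilibrium response) are extracted by matching a finite element model of the indentation test rather than a closed-form curve.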
Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3
NASA Astrophysics Data System (ADS)
Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.
2016-04-01
Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.
ICRCCM Phase 2: Verification and calibration of radiation codes in climate models
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1992-01-01
Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). The data collected during SPECTRE form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. This report summarizes the activities of our group during the project's third year to meet our stated objectives. The report is divided into three sections entitled: SPECTRE Activities, ICRCCM Activities, and Summary Information. The section on SPECTRE activities summarizes the field portion of the project during 1991, and the data reduction/analysis performed by the various participants. The section on ICRCCM activities summarizes our initial attempts to select data for distribution to ICRCCM participants and to compare observations with calculations, as will be done by the ICRCCM participants. The Summary Information section lists data concerning publications, presentations, graduate students supported, and post-doctoral appointments during the project.
Introductory Tools for Radiative Transfer Models
NASA Astrophysics Data System (ADS)
Feldman, D.; Kuai, L.; Natraj, V.; Yung, Y.
2006-12-01
Satellite data are currently so voluminous that, despite their unprecedented quality and potential for scientific application, only a small fraction is analyzed, due to two factors: researchers' computational constraints and the relatively small number of researchers actively utilizing the data. Ultimately it is hoped that the terabytes of unanalyzed archived data can receive scientific scrutiny, but this will require a popularization of the methods associated with the analysis. Since a large portion of the complexity is associated with the proper implementation of the radiative transfer model, it is reasonable and appropriate to make the model as accessible as possible to general audiences. Unfortunately, the algorithmic and conceptual details that are necessary for state-of-the-art analysis also tend to frustrate accessibility for those new to remote sensing. Several efforts have been made to offer web-based radiative transfer calculations, and these are useful for limited calculations, but analysis of more than a few spectra requires the utilization of home- or server-based computing resources. We present a system that is designed to allow easier access to radiative transfer models, with implementation on a home computing platform, in the hope that this system can be used and expanded upon in advanced high school and introductory college settings. This learning-by-doing process is aided through the use of several powerful tools. The first is a Wikipedia-style introduction to the salient features of radiative transfer that references the seminal works in the field and refers to more complicated calculations and algorithms sparingly. The second feature is a technical forum, commonly referred to as a tiki-wiki, that addresses technical and conceptual questions through public postings, private messages, and a ranked searching routine. Together, these tools may be able to facilitate greater interest in the field of remote sensing.
Lattice Boltzmann model for a steady radiative transfer equation.
Yi, Hong-Liang; Yao, Feng-Ju; Tan, He-Ping
2016-08-01
A complete lattice Boltzmann model (LBM) is proposed for the steady radiative transfer equation (RTE). The RTE can be regarded as a pure convection equation with a source term. To derive the expressions for the equilibrium distribution function and the relaxation time, an artificial isotropic diffusion term is introduced to form a convection-diffusion equation. When the dimensionless relaxation time has a value of 0.5, the lattice Boltzmann equation (LBE) is exactly applicable to the original steady RTE. We also perform a multiscale analysis based on the Chapman-Enskog expansion to recover the macroscopic RTE from the mesoscopic LBE. The D2Q9 model is used to solve the LBE, and the numerical results obtained by the LBM are comparable to the results obtained by other methods or analytical solutions, which demonstrates that the proposed model is highly accurate and stable in simulating multidimensional radiative transfer. In addition, we find that the convergence rate of the LBM depends on the transport properties of RTE: for diffusion-dominated RTE with a large optical thickness, the LBM shows a second-order convergence rate in space, while for convection-dominated RTE with a small optical thickness, a lower convergence rate is observed. PMID:27627417
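The stream-and-collide structure common to all such lattice Boltzmann schemes can be sketched in one dimension. This is a generic D1Q2 diffusion example, not the paper's D2Q9 radiative transfer model; the grid size, relaxation time, and initial profile are illustrative assumptions.

```python
import numpy as np

# Generic D1Q2 lattice Boltzmann sketch: two opposite velocities, equal
# weights, BGK collision with relaxation time tau, periodic streaming.
# The macroscopic field rho then obeys a diffusion equation with
# diffusivity D = cs^2 * (tau - 0.5); tau = 0.5 is the singular limit
# corresponding to zero artificial diffusion.

nx, tau, n_steps = 200, 1.0, 500
x = np.arange(nx)
rho = np.exp(-0.5 * ((x - nx / 2) / 5.0) ** 2)   # initial Gaussian pulse
mass0 = rho.sum()                                 # conserved total "density"

# Two populations: f[0] moves right (+1), f[1] moves left (-1).
f = np.stack([0.5 * rho, 0.5 * rho])

for _ in range(n_steps):
    rho = f.sum(axis=0)                           # macroscopic field
    feq = np.stack([0.5 * rho, 0.5 * rho])        # zero-velocity equilibrium
    f += (feq - f) / tau                          # BGK collision
    f[0] = np.roll(f[0], 1)                       # stream right (periodic)
    f[1] = np.roll(f[1], -1)                      # stream left (periodic)

rho_final = f.sum(axis=0)
```

The paper's model replaces the diffusive equilibrium above with one tailored so that, at the singular relaxation time, the recovered macroscopic equation is the convection-plus-source form of the steady RTE.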
NASA Astrophysics Data System (ADS)
Zakrzewski, Jakub; Delande, Dominique
2008-11-01
The quantum phase transition point between the insulator and the superfluid phase at unit filling factor of the infinite one-dimensional Bose-Hubbard model is numerically computed with high accuracy. The method uses the infinite-system version of the time evolving block decimation algorithm, here tested in a challenging case. We also provide an accurate estimate of the phase transition point at double occupancy.
Assimilation of Multiscale Radiation Products Into a Downwelling Surface Radiation Model
NASA Astrophysics Data System (ADS)
Forman, B. A.; Margulis, S. A.
2009-05-01
Accurate characterization of total downwelling radiation (i.e., downwelling longwave and downwelling shortwave) reaching the Earth's surface is important for modeling surface hydrological processes. Several satellite-based radiative flux products exist, each with its own spatial and temporal resolution and error characteristics, because each uses different methods and different remote sensing observations. A data assimilation approach can be used to obtain high-resolution estimates by merging different products with an a priori estimate in a way that extracts the most information while accounting for differences in their error characteristics. In this study, we use two commonly used data assimilation (DA) techniques - the Ensemble Kalman Filter (EnKF) and the Ensemble Kalman Smoother (EnKS) - to assess the effectiveness of generating accurate high-resolution fields conditioned on multiple products. A simple cloud-coupled model forced by a combination of geostationary and polar orbiting remote sensing products provides a high-resolution a priori ensemble estimate. The prior estimate is then conditioned on the Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) product [Pinker et al., 2003], the International Satellite Cloud Climatology Project (ISCCP) based shortwave flux product [Pinker and Laszlo, 1992], and/or the ISCCP-based longwave flux product [Gupta et al., 1992] using either the EnKF or EnKS routine. A combination of different measurement products with different DA techniques is investigated to assess DA effectiveness. When compared against ground-based measurements, preliminary results suggest a multiscale DA approach improves radiative flux estimates, effectively downscaling the relatively coarse (in space and time) measurements while simultaneously reducing the uncertainty across the ensemble. Analysis shows that the covariance structure between longwave and shortwave fluxes is limited especially in
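The EnKF analysis step used to merge a product with the prior ensemble can be sketched in its textbook perturbed-observation form. This is a generic sketch on a synthetic one-variable flux state, not the authors' implementation; all values are illustrative.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One perturbed-observation EnKF analysis step.
    X: (n, N) prior ensemble; y: (m,) observation;
    H: (m, n) observation operator; R: (m, m) observation error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pxy = A @ (H @ A).T / (N - 1)                    # state-observation covariance
    Pyy = (H @ A) @ (H @ A).T / (N - 1) + R          # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    # perturbed observations, one per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(10.0, 2.0, size=(1, 200))             # prior ensemble of one flux value
y = np.array([12.0]); H = np.eye(1); R = np.eye(1) * 0.25
Xa = enkf_update(X, y, H, R, rng)
# posterior mean is pulled toward the observation and the ensemble spread shrinks
```

The "smoother" variant (EnKS) extends the same gain computation backward over a time window; the update algebra is identical.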
Accurate analytical method for the extraction of solar cell model parameters
NASA Astrophysics Data System (ADS)
Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.
1984-05-01
Single diode solar cell model parameters are rapidly extracted from experimental data by means of the presently derived analytical expressions. The parameter values obtained have less than 5 percent error for most solar cells, as demonstrated by extracting the model parameters for two cells of differing quality and comparing them with parameters extracted by the iterative method.
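The closed-form extraction expressions themselves are not reproduced in the abstract, but the underlying single-diode model they invert is standard. Below is a sketch that solves the implicit I-V relation by bisection; all parameter values are illustrative, not the paper's data.

```python
import math

def diode_current(V, Iph, I0, Rs, Rsh, n=1.3, Vt=0.02585):
    """Solve the single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the terminal current I by bisection."""
    def f(I):
        return Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1) \
               - (V + I * Rs) / Rsh - I
    lo, hi = -1.0, Iph + 1.0     # bracket: f(lo) > 0 > f(hi) for typical cells
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# at short circuit (V = 0) the current is close to the photocurrent Iph
Isc = diode_current(0.0, Iph=3.0, I0=1e-9, Rs=0.01, Rsh=100.0)
```

The analytical method of the paper avoids this iteration entirely by deriving the five parameters (Iph, I0, Rs, Rsh, n) directly from measured I-V features.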
A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.
2006-01-01
The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements aboard the ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both environmental model and nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermo Luminescent Detector (TLD) area monitors demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate 6 degree of freedom (DOF) description of ISS trajectory and orientation.
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
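The Dice Similarity Coefficient used to report the segmentation accuracy has a simple definition, sketched here on toy binary masks (the masks are illustrative, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1   # 16 voxels
pred  = np.zeros((8, 8), dtype=int); pred[3:7, 2:6] = 1    # 16 voxels, 12 overlap
print(dice(truth, pred))   # 2*12 / (16+16) = 0.75
```

A DSC of 0.925, as reported, means the overlap is large relative to the combined mask sizes; identical masks give exactly 1.0.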
Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models
Drugowitsch, Jan
2016-01-01
We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be used to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead exploits known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to handle asymmetric boundaries or to approximate leaky accumulation. PMID:26864391
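The slow baseline the paper improves on, discrete-time Euler-Maruyama simulation of the diffusion to symmetric boundaries, can be sketched as follows. Parameters are illustrative; the paper's fast exact sampler is not reproduced here.

```python
import random

def simulate_trial(drift, bound, dt=0.001, sigma=1.0, rng=random):
    """Euler-Maruyama simulation of one Wiener diffusion trial with symmetric
    boundaries at ±bound. Returns (choice, rt): +1/-1 for upper/lower boundary."""
    x, t = 0.0, 0.0
    sd = sigma * dt ** 0.5          # noise scale per step
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x >= bound else -1), t

rng = random.Random(1)
trials = [simulate_trial(1.0, 1.0, rng=rng) for _ in range(500)]
upper = sum(1 for c, _ in trials if c == 1) / len(trials)
# with positive drift, most trials terminate at the upper boundary
```

The discretization bias the abstract mentions comes from the finite dt: the process can cross and re-cross a boundary between steps, which inflates simulated reaction times.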
D’Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo
2014-12-28
A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂g/Rc, where R̂g is the zero-density polymer radius of gyration and Rc is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
Accurate calculation of binding energies for molecular clusters - Assessment of different models
NASA Astrophysics Data System (ADS)
Friedrich, Joachim; Fiedler, Benjamin
2016-06-01
In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy for binding energies by statistical measures. The local errors of the incremental scheme are <1 kJ/mol. Since they are smaller than the basis set errors, we obtain higher total accuracy due to the applicability of larger basis sets. The final CCSD(T)/CBS benchmark values are ΔE = -278.01 kJ/mol for (H2O)10, ΔE = -221.64 kJ/mol for (HF)10, ΔE = -45.63 kJ/mol for (CH4)10, ΔE = -19.52 kJ/mol for (H2)20 and ΔE = -7.38 kJ/mol for (H2)10. Furthermore, we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we get a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.
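A common ingredient of such CCSD(T)/CBS estimates is two-point inverse-cubic extrapolation of the correlation energy over basis-set cardinal numbers (Helgaker-type). This is a generic sketch with synthetic numbers, not the paper's scheme or data:

```python
def cbs_extrapolate(e_x, e_y, x, y):
    """Two-point X^-3 extrapolation of correlation energies to the basis-set limit.
    Assumes E(X) = E_CBS + A * X**-3 for cardinal numbers x < y (e.g. 3 = TZ, 4 = QZ)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# synthetic correlation energies obeying E(X) = -1.0 + 0.5 * X**-3 exactly
e_tz = -1.0 + 0.5 * 3**-3
e_qz = -1.0 + 0.5 * 4**-3
print(cbs_extrapolate(e_tz, e_qz, 3, 4))   # ≈ -1.0, the assumed CBS limit
```

The formula follows by eliminating A between the two basis-set equations; for exactly X^-3 convergence the extrapolation is exact, as the synthetic example shows.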
conSSert: Consensus SVM Model for Accurate Prediction of Ordered Secondary Structure.
Kieslich, Chris A; Smadbeck, James; Khoury, George A; Floudas, Christodoulos A
2016-03-28
Accurate prediction of protein secondary structure remains a crucial step in most approaches to the protein-folding problem, yet the prediction of ordered secondary structure, specifically beta-strands, remains a challenge. We developed a consensus secondary structure prediction method, conSSert, which is based on support vector machines (SVM) and provides exceptional accuracy for the prediction of beta-strands with QE accuracy of over 0.82 and a Q2-EH of 0.86. conSSert uses as input probabilities for the three types of secondary structure (helix, strand, and coil) that are predicted by four top performing methods: PSSpred, PSIPRED, SPINE-X, and RAPTOR. conSSert was trained/tested using 4261 protein chains from PDBSelect25, and 8632 chains from PISCES. Further validation was performed using targets from CASP9, CASP10, and CASP11. Our data suggest that poor performance in strand prediction is likely a result of training bias and not solely due to the nonlocal nature of beta-sheet contacts. conSSert is freely available for noncommercial use as a webservice: http://ares.tamu.edu/conSSert/ . PMID:26928531
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of the metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing across a range of therapeutic footwear types is required. PMID:26708965
A new geometric-based model to accurately estimate arm and leg inertial estimates.
Wicke, Jason; Dumas, Geneviève A
2014-06-01
Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
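The slice-stack idea can be sketched in a minimal form: each slice is an ellipse with semi-axes a, b and thickness t, contributing mass ρπabt, and segment mass and centre of mass follow from the stack. For simplicity this sketch uses a uniform density (the paper's model uses a sex-specific non-uniform density function), and all dimensions are illustrative.

```python
import math

def segment_properties(slices, rho=1.05e3):
    """Mass and centre-of-mass height of a limb segment modelled as a stack of
    elliptical slices. slices: list of (a, b, t) = semi-axes and thickness in metres."""
    mass, moment, z = 0.0, 0.0, 0.0
    for a, b, t in slices:
        m = rho * math.pi * a * b * t      # mass of one elliptical slice
        zc = z + t / 2.0                   # height of the slice centre
        mass += m
        moment += m * zc
        z += t
    return mass, moment / mass             # total mass, COM height from proximal end

# illustrative forearm-like taper: 10 slices narrowing along the segment
slices = [(0.04 - 0.002 * i, 0.035 - 0.002 * i, 0.025) for i in range(10)]
mass, com = segment_properties(slices)
# COM lies below the geometric midpoint because the proximal slices are larger
```

The frontal-plane moment of inertia follows the same pattern, summing each slice's own inertia plus a parallel-axis term m*(zc - com)**2.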
Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of SW model, the concept of critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
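The classical SW angular dependence that the generalized model extends has a closed form: in reduced units (field in units of the anisotropy field H_K), the switching field at angle ψ from the easy axis is h_sw(ψ) = (cos^(2/3)ψ + sin^(2/3)ψ)^(-3/2), the familiar astroid. A sketch of that baseline (not the authors' generalized model):

```python
import math

def sw_switching_field(psi):
    """Classical Stoner-Wohlfarth switching field (in units of H_K) for a field
    applied at angle psi (radians) from the easy axis."""
    c, s = abs(math.cos(psi)), abs(math.sin(psi))
    return (c ** (2 / 3) + s ** (2 / 3)) ** -1.5

print(sw_switching_field(0.0))           # 1.0 along the easy axis
print(sw_switching_field(math.pi / 4))   # minimum value 0.5 at 45 degrees
```

Non-single-domain particles deviate from this curve, which is what motivates folding the nonuniformity effects into a modified anisotropy energy while keeping the critical-curve picture.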
NASA Astrophysics Data System (ADS)
Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu
2011-05-01
Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed using computer-aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated using stereolithography, a computer-aided manufacturing technique. After dewaxing and sintering heat treatment processes, ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. Computer-aided analysis showed that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.
An Earth longwave radiation climate model
NASA Technical Reports Server (NTRS)
Yang, S. K.
1984-01-01
An Earth outgoing longwave radiation (OLWR) climate model was constructed for radiation budget studies. Required information is provided by an empirical 100 mb water vapor mixing ratio equation and the mixing ratio interpolation scheme. Cloud top temperature is adjusted so that the calculation agrees with NOAA scanning radiometer measurements. Both clear sky and cloudy sky cases are calculated and discussed for global average, zonal average, and world-wide distributed cases. The results agree well with the satellite observations. The clear sky case shows that the OLWR field is highly modulated by water vapor, especially in the tropics. The strongest longitudinal variation occurs in the tropics and can mostly be explained by the strong water vapor gradient. Although in the zonal average case the tropics have a minimum in OLWR, the minimum is essentially contributed by a few very low flux regions, such as the Amazon, Indonesia, and the Congo.
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380
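The full Molière theory used by the paper is involved, but the widely used Highland parameterization gives the characteristic multiple-scattering angle in a few lines and is a useful sanity check on the magnitude of lateral scattering. This is the standard Highland formula for a unit-charge particle, not the paper's model; the energy and thickness below are illustrative.

```python
import math

MP = 938.272       # proton rest mass, MeV/c^2
X0_WATER = 36.08   # radiation length of water, cm

def highland_theta0(T, x, X0=X0_WATER):
    """Highland estimate of the multiple-scattering angle theta0 (radians) for a
    proton of kinetic energy T (MeV) traversing thickness x (cm) of material."""
    E = T + MP                        # total energy, MeV
    p = math.sqrt(E**2 - MP**2)       # momentum, MeV/c
    beta = p / E
    t = x / X0                        # thickness in radiation lengths
    return 13.6 / (beta * p) * math.sqrt(t) * (1 + 0.038 * math.log(t))

theta0 = highland_theta0(150.0, 5.0)  # 150 MeV proton, 5 cm of water
# of order tens of milliradians, decreasing with energy
```

Unlike Molière theory, Highland's formula gives only the Gaussian core width; the nuclear-interaction tail that the paper fits with a two-parameter function is absent.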
Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen
2016-01-01
Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations in the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. First, we estimate the angular rotations from the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection simplify into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps, not only for simulated data but also for real data from Chang'E-1, compared to the existing space resection model. PMID:27077855
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. PMID:26121186
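The abstract does not spell out the functional form, but a common Weibull-type parameterization with λ as the characteristic time and n as the shape parameter behaves as follows; the form and the parameter values here are assumptions for illustration, not taken from the paper.

```python
import math

def weibull_conversion(t, lam, n, ymax=1.0):
    """Weibull-type saccharification curve y(t) = ymax * (1 - exp(-(t/lam)**n)).
    lam is the characteristic time: at t = lam, conversion is 1 - 1/e of ymax,
    whatever the shape parameter n."""
    return ymax * (1.0 - math.exp(-((t / lam) ** n)))

lam, n = 24.0, 0.8   # illustrative: characteristic time 24 h, shape 0.8
y = weibull_conversion(lam, lam, n)
print(round(y, 3))   # 0.632 regardless of n
```

This is why λ alone can summarize overall system performance: it fixes the time scale at which a fixed fraction (about 63.2%) of the achievable conversion is reached, independently of the curve's shape.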
Berger, Perrine; Alouini, Mehdi; Bourderionnet, Jérôme; Bretenaker, Fabien; Dolfi, Daniel
2010-01-18
We developed an improved model to predict the RF behavior and the slow light properties of the SOA, valid under any experimental conditions. It takes into account the dynamic saturation of the SOA, which can be fully characterized by a simple measurement, and relies only on material fitting parameters, independent of the optical intensity and the injected current. The present model is validated by showing good agreement with experiments for small and large modulation indices. PMID:20173888
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists, caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain specific language embedded in Python. The second major barrier is sharing any scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions specific to the environment of the target machine. To solve this problem we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.
Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna
2015-01-01
Background Computational models of Achilles tendons can help understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon’s main constituents namely: water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and soft tendon behaviour at reduced strain-rates compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon
NASA Astrophysics Data System (ADS)
O'Brien, Edward P.; Morrison, Greg; Brooks, Bernard R.; Thirumalai, D.
2009-03-01
Single molecule Förster resonance energy transfer (FRET) experiments are used to infer the properties of the denatured state ensemble (DSE) of proteins. From the measured average FRET efficiency, ⟨E⟩, the distance distribution P(R) is inferred by assuming that the DSE can be described as a polymer. The single parameter in the appropriate polymer model (Gaussian chain, wormlike chain, or self-avoiding walk) for P(R) is determined by equating the calculated and measured ⟨E⟩. In order to assess the accuracy of this "standard procedure," we consider the generalized Rouse model (GRM), whose properties [⟨E⟩ and P(R)] can be analytically computed, and the Molecular Transfer Model for protein L, for which accurate simulations can be carried out as a function of guanidinium hydrochloride (GdmCl) concentration. Using the precisely computed ⟨E⟩ for the GRM and protein L, we infer P(R) using the standard procedure. We find that the mean end-to-end distance can be accurately inferred (less than 10% relative error) using ⟨E⟩ and polymer models for P(R). However, the values extracted for the radius of gyration (Rg) and the persistence length (lp) are less accurate. For protein L, the errors in the inferred properties increase as the GdmCl concentration increases for all polymer models. The relative error in the inferred Rg and lp, with respect to the exact values, can be as large as 25% at the highest GdmCl concentration. We propose a self-consistency test, requiring measurements of ⟨E⟩ by attaching dyes to different residues in the protein, to assess the validity of describing the DSE using the Gaussian model. Application of the self-consistency test to the GRM shows that even for this simple model, which exhibits an order→disorder transition, the Gaussian P(R) is inadequate. Analysis of experimental data of FRET efficiencies with dyes at several locations for the cold shock protein, and simulation results for protein L, for which accurate FRET
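The "standard procedure" can be sketched for the Gaussian-chain case: assume P(R) = 4πR²(3/2π⟨R²⟩)^(3/2) exp(-3R²/2⟨R²⟩), compute ⟨E⟩ = ∫P(R)E(R)dR with the Förster efficiency E(R) = 1/(1 + (R/R0)^6), and adjust ⟨R²⟩ until ⟨E⟩ matches the measurement. The sketch below does the forward calculation by simple trapezoidal integration; R0 and the chain dimensions are illustrative, not values from the paper.

```python
import math

def mean_fret_gaussian(r2_mean, R0, rmax=30.0, npts=3000):
    """<E> for a Gaussian-chain P(R) with mean-squared end-to-end distance r2_mean
    and Förster radius R0 (same length units), by numerical integration."""
    pref = 4.0 * math.pi * (3.0 / (2.0 * math.pi * r2_mean)) ** 1.5
    dr = rmax / npts
    total = 0.0
    for i in range(1, npts + 1):
        r = i * dr
        p = pref * r * r * math.exp(-1.5 * r * r / r2_mean)   # Gaussian chain P(R)
        e = 1.0 / (1.0 + (r / R0) ** 6)                       # Förster efficiency
        total += p * e * dr
    return total

E_compact = mean_fret_gaussian(r2_mean=16.0, R0=5.0)    # chain smaller than R0
E_expanded = mean_fret_gaussian(r2_mean=100.0, R0=5.0)  # chain larger than R0
# <E> decreases as the chain expands, e.g. with increasing denaturant
```

Inverting this map (finding the ⟨R²⟩ that reproduces a measured ⟨E⟩) is the single-parameter fit the abstract evaluates; the quoted errors in Rg and lp arise when the true P(R) is not Gaussian.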
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and the selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~1.0 nm. We show which elements are important to obtain a well calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during calibration is important to achieve high predictability under various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model in which EUV tool-specific signatures are taken into account.
Hewitt, Nicola J; Edwards, Robert J; Fritsche, Ellen; Goebel, Carsten; Aeby, Pierre; Scheel, Julia; Reisinger, Kerstin; Ouédraogo, Gladys; Duche, Daniel; Eilstein, Joan; Latil, Alain; Kenny, Julia; Moore, Claire; Kuehnl, Jochen; Barroso, Joao; Fautz, Rolf; Pfuhler, Stefan
2013-06-01
Several human skin models employing primary cells and immortalized cell lines, used as monocultures or combined to produce reconstituted 3D skin constructs, have been developed. Furthermore, these models have been included in European genotoxicity and sensitization/irritation assay validation projects. In order to help interpret data, Cosmetics Europe (formerly COLIPA) facilitated research projects that measured a variety of defined phase I and II enzyme activities and created a complete proteomic profile of xenobiotic metabolizing enzymes (XMEs) in native human skin and compared them with data obtained from a number of in vitro models of human skin. Here, we have summarized our findings on the current knowledge of the metabolic capacity of native human skin and in vitro models and made an overall assessment of the metabolic capacity from gene expression, proteomic expression, and substrate metabolism data. The known low expression and function of phase I enzymes in native whole skin were reflected in the in vitro models. Some XMEs in whole skin were not detected in in vitro models and vice versa, and some major hepatic XMEs such as cytochrome P450-monooxygenases were absent or measured only at very low levels in the skin. Conversely, despite varying mRNA and protein levels of phase II enzymes, the functional activities of glutathione S-transferases, N-acetyltransferase 1, and UDP-glucuronosyltransferases were all readily measurable in whole skin and in vitro skin models at activity levels similar to those measured in the liver. These projects have enabled a better understanding of the contribution of XMEs to toxicity endpoints. PMID:23539547
Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.
Qu, Xiaohui; Persson, Kristin A
2016-09-13
A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potentials of its solutes, potentially including both salts and redox-active molecules. PMID:27500744
Abdelnour, Farras; Voss, Henning U.; Raj, Ashish
2014-01-01
The relationship between anatomic connectivity of large-scale brain networks and their functional connectivity is of immense importance and an area of active research. Previous attempts have required complex simulations which model the dynamics of each cortical region, and explore the coupling between regions as derived by anatomic connections. While much insight is gained from these non-linear simulations, they can be computationally taxing tools for predicting functional from anatomic connectivities. Little attention has been paid to linear models. Here we show that a properly designed linear model appears to be superior to previous non-linear approaches in capturing the brain’s long-range second order correlation structure that governs the relationship between anatomic and functional connectivities. We derive a linear network of brain dynamics based on graph diffusion, whereby the diffusing quantity undergoes a random walk on a graph. We test our model using subjects who underwent diffusion MRI and resting state fMRI. The network diffusion model applied to the structural networks largely predicts the correlation structures derived from their fMRI data, to a greater extent than other approaches. The utility of the proposed approach is that it can routinely be used to infer functional correlation from anatomic connectivity. And since it is linear, anatomic connectivity can also be inferred from functional data. The success of our model confirms the linearity of ensemble average signals in the brain, and implies that their long-range correlation structure may percolate within the brain via purely mechanistic processes enacted on its structural connectivity pathways. PMID:24384152
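The graph-diffusion idea above has a compact linear-algebra core. The sketch below is an illustrative implementation, not the authors' code: it predicts a functional-connectivity-like matrix as the matrix exponential of the normalized graph Laplacian of the anatomic network, with a hypothetical diffusion depth parameter beta.

```python
import numpy as np

def network_diffusion_fc(adjacency, beta=1.0):
    # Functional connectivity predicted by graph diffusion: F = exp(-beta * L),
    # where L is the symmetric normalized Laplacian of the anatomic network
    a = np.asarray(adjacency, float)
    deg = a.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(a)) - d_inv_sqrt @ a @ d_inv_sqrt
    w, v = np.linalg.eigh(lap)                    # eigendecomposition of L
    return v @ np.diag(np.exp(-beta * w)) @ v.T   # exp(-beta * L)
```

Because the map is linear in the eigenbasis of L, the inverse direction (inferring structure from function) reduces to inverting the spectral filter, which is the practical advantage the abstract highlights.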
Fast and accurate modeling of molecular atomization energies with machine learning.
Rupp, Matthias; Tkatchenko, Alexandre; Müller, Klaus-Robert; von Lilienfeld, O Anatole
2012-02-01
We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a nonlinear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross validation over more than seven thousand organic molecules yields a mean absolute error of ∼10 kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves. PMID:22400967
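The regression described above can be sketched with a Coulomb-matrix descriptor and kernel ridge regression. This is a simplified illustration under stated assumptions: a sorted-eigenvalue variant of the Coulomb matrix (a common permutation-invariant choice) and a Gaussian kernel; hyperparameters sigma and lam are placeholders, not the paper's values.

```python
import numpy as np

def coulomb_matrix(charges, positions):
    # Coulomb-matrix descriptor from nuclear charges and positions;
    # returning sorted eigenvalues makes it invariant to atom ordering
    z = np.asarray(charges, float)
    xyz = np.asarray(positions, float)
    n = len(z)
    m = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                m[i, j] = 0.5 * z[i] ** 2.4
            else:
                m[i, j] = z[i] * z[j] / np.linalg.norm(xyz[i] - xyz[j])
    return np.sort(np.linalg.eigvalsh(m))[::-1]

def krr_train(x, y, sigma=1.0, lam=1e-6):
    # Kernel ridge regression: solve (K + lam*I) alpha = y with a Gaussian kernel
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    k = np.exp(-d ** 2 / (2 * sigma ** 2))
    return np.linalg.solve(k + lam * np.eye(len(y)), y)

def krr_predict(x_train, alpha, x_new, sigma=1.0):
    # Prediction is a kernel-weighted sum over training descriptors
    d = np.linalg.norm(x_train - x_new, axis=-1)
    return np.exp(-d ** 2 / (2 * sigma ** 2)) @ alpha
```

Trained on descriptors of molecules with DFT atomization energies as targets, this is the general shape of the model class the abstract evaluates by cross validation.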
Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
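The intrinsic detector response above was fitted with an asymmetric Gaussian. A minimal sketch of that functional form, with purely illustrative parameters (amplitude, center, and the two side-dependent widths are hypothetical, not the study's fitted values):

```python
import math

def asymmetric_gaussian(x, amp=1.0, mu=0.0, sigma_l=1.0, sigma_r=2.0):
    # Gaussian with different widths on each side of the peak, as used to
    # model an asymmetric intrinsic detector response
    s = sigma_l if x < mu else sigma_r
    return amp * math.exp(-((x - mu) ** 2) / (2.0 * s * s))
```

Fitting amp, mu, sigma_l, and sigma_r to measured line profiles gives the PSF component that the third reconstruction model incorporates.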
ERIC Educational Resources Information Center
Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.
2012-01-01
The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…
Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2013-01-01
The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
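The snapshot-to-POD step described above can be sketched with a plain SVD (the paper uses a stochastic SVD algorithm for the large snapshot matrix; the deterministic version below is a minimal stand-in, with an assumed energy-capture threshold):

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    # snapshots: (n_points, n_times) matrix, one pressure snapshot per column.
    # Returns the temporal mean, the leading POD modes capturing the requested
    # fraction of fluctuation energy, and their singular values.
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return mean, u[:, :r], s[:r]
```

Projecting new gust responses onto these modes gives the low-dimensional coefficients that the convolution integral then propagates in time for an arbitrary gust profile.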
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
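The Lorentz-Lorenz relation at the heart of the model above maps a polarizability and a number density to a refractive index. A direct sketch (units must be consistent, e.g. cm^3 for polarizability and cm^-3 for number density; the guard rejects unphysical combinations):

```python
import math

def refractive_index(alpha_cm3, number_density_cm3):
    # Lorentz-Lorenz: (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha, solved for n
    q = 4.0 * math.pi * number_density_cm3 * alpha_cm3 / 3.0
    if not 0.0 <= q < 1.0:
        raise ValueError("unphysical polarizability/density combination")
    return math.sqrt((1.0 + 2.0 * q) / (1.0 - q))
```

In the workflow described above, alpha comes from the benchmarked electronic-structure calculations extrapolated to the polymer limit, and the number density from the machine-learned packing fraction.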
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
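The first test case above, the 40-mode Lorenz 96 model, is simple enough to sketch directly. Below is one RK4 step of the standard system (forcing F = 8 is the usual chaotic regime; the ROMQG machinery itself is not reproduced here):

```python
def lorenz96_step(x, forcing=8.0, dt=0.01):
    # One RK4 step of Lorenz 96: dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F,
    # with cyclic indices; the quadratic term conserves energy, dissipation
    # comes from -x_k, exactly the structure the ROMQG framework targets.
    n = len(x)
    def rhs(state):
        return [(state[(k + 1) % n] - state[(k - 2) % n]) * state[(k - 1) % n]
                - state[k] + forcing for k in range(n)]
    k1 = rhs(x)
    k2 = rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = rhs([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt * (a + 2 * b + 2 * c + d) / 6.0
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```

The uniform state x_k = F is an exact (unstable) fixed point; perturbing it and stepping forward generates the turbulent statistics against which reduced-order predictions are calibrated.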
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-11-15
Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
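Turning the measured model-vs-machine discrepancies into probability-indexed safety buffers can be sketched as a one-sided quantile of the error distribution. This assumes normally distributed discrepancies, which may not match the study's actual estimation procedure:

```python
from statistics import NormalDist

def safety_buffer(discrepancies_cm, collision_prob=1e-3):
    # Buffer distance exceeded by the discrepancy with probability
    # collision_prob, under a normal model fitted to the measured samples
    nd = NormalDist.from_samples(discrepancies_cm)
    return nd.inv_cdf(1.0 - collision_prob)
```

Smaller target collision probabilities (0.1%, 0.01%, 0.001%) push the buffer further into the tail, which is why site-specific buffers grow as the tolerated risk shrinks.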
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the Sudano-Sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - on yields, (ii) for early warning systems and (iii) to assess future food security. Yet, the successful application of these models depends on the accuracy of their climatic drivers. In the Sudano-Sahelian zone, the quality of precipitation estimates is thus a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements offer long time series but suffer from insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as input for crop models, they determine the performance of the simulated yields; hence, SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
NASA Astrophysics Data System (ADS)
Yahja, A.; Kim, C.; Lin, Y.; Bajcsy, P.
2008-12-01
This paper addresses the problem of accurate estimation of geospatial models from a set of groundwater recharge & discharge (R&D) maps and from auxiliary remote sensing and terrestrial raster measurements. The motivation for our work is driven by the cost of field measurements, and by the limitations of currently available physics-based modeling techniques that do not include all relevant variables and allow accurate predictions only at coarse spatial scales. The goal is to improve our understanding of the underlying physical phenomena and increase the accuracy of geospatial models through a combination of remote sensing, field measurements and physics-based modeling. Our approach is to process a set of R&D maps generated from interpolated sparse field measurements using existing physics-based models, and identify the R&D map that would be the most suitable for extracting a set of rules between the auxiliary variables of interest and the R&D map labels. We implemented this approach by ranking R&D maps using information entropy and mutual information criteria, and then by deriving a set of rules using a machine learning technique, such as the decision tree method. The novelty of our work is in developing a general framework for building geospatial models with the ultimate goal of minimizing cost and maximizing model accuracy. The framework is demonstrated for groundwater R&D rate models but could be applied to other similar studies, for instance, to understanding hypoxia based on physics-based models and remotely sensed variables. Furthermore, our key contribution is in designing a ranking method for R&D maps that allows us to analyze multiple plausible R&D maps with a different number of zones, which was not possible in our earlier prototype of the framework called Spatial Pattern to Learn. We will present experimental results using example R&D and other maps from an area in Wisconsin.
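The entropy and mutual-information criteria used for ranking maps above have a compact discrete form. A minimal sketch, treating each raster as a flat list of zone labels paired cell-by-cell:

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (bits) of a discrete label map
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), with cells paired across the two maps
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))
```

Ranking candidate R&D maps by their mutual information with the auxiliary variables selects the zonation from which the rule-extraction step (e.g. a decision tree) is most likely to learn transferable rules.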
Polar firn layering in radiative transfer models
NASA Astrophysics Data System (ADS)
Linow, Stefanie; Hoerhold, Maria
2016-04-01
For many applications in the geosciences, remote sensing is the only feasible method of obtaining data from large areas with limited accessibility. This is especially true for the cryosphere, where light conditions and cloud coverage additionally limit the use of optical sensors. Here, instruments operating at microwave frequencies become important, for instance in mapping polar snow parameters such as SWE (snow water equivalent). However, the interaction between snow and microwave radiation is a complex process and still not fully understood. RT (radiative transfer) models to simulate snow-microwave interaction are available, but they require a number of input parameters such as microstructure and density, which are partly ill-constrained. The layering of snow and firn introduces an additional degree of complexity, as all snow parameters show a strong variability with depth. Many studies on RT modeling of polar firn deal with layer variability by using statistical properties derived from previous measurements, such as the standard deviations of density and microstructure, to configure model input. Here, the variabilities of microstructure parameters, such as density and particle size, are usually assumed to be independent of each other. However, in the case of the firn pack of the polar ice sheets, we observe that microstructure evolution depends on environmental parameters, such as temperature and snow deposition. Accordingly, density and microstructure evolve together within the snow and firn. Based on CT (computer tomography) microstructure measurements of Antarctic firn, we can show that: first, the variabilities of density and effective grain size are linked and can thus be implemented in the RT models as a coupled set of parameters. Second, the magnitude of layering is captured by the measured standard deviation. Based on high-resolution density measurements of an Antarctic firn core, we study the effect of firn layering at different microwave wavelengths. By means of
An Accurate In Vitro Model of the E. coli Envelope
Clifton, Luke A.; Holt, Stephen A.; Hughes, Arwel V.; Daulton, Emma L.; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R.; Webster, John R. P.; Kinane, Christian J.
2015-01-01
Abstract Gram‐negative bacteria are an increasingly serious source of antibiotic‐resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir–Blodgett and Langmuir–Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:27346898
An accurate in vitro model of the E. coli envelope.
Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H
2015-10-01
Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir-Blodgett and Langmuir-Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
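The target-cell-limited model analyzed above is a three-variable ODE system that can be integrated directly. The sketch below uses forward Euler with literature-typical influenza parameter values (illustrative, not the paper's per-patient fits) and returns the viral-load trajectory, which rises, peaks, and decays as described:

```python
def influenza_model(t_end=12.0, dt=1e-3, beta=2.7e-5, delta=4.0,
                    p=1.2e-2, c=3.0, t0=4e8, v0=7.5e-2):
    # Target-cell-limited model:
    #   T' = -beta*T*V   (target cells infected)
    #   I' =  beta*T*V - delta*I   (infected cells die at rate delta)
    #   V' =  p*I - c*V  (virions produced at rate p, cleared at rate c)
    t_cells, infected, v = t0, 0.0, v0
    traj = []
    for _ in range(int(t_end / dt)):
        d_t = -beta * t_cells * v
        d_i = beta * t_cells * v - delta * infected
        d_v = p * infected - c * v
        t_cells += d_t * dt
        infected += d_i * dt
        v += d_v * dt
        traj.append(v)
    return traj
```

On a log scale the rise and fall of `traj` are close to linear, which is exactly the structure the two-phase approximation exploits: an early growth rate set by the dominant eigenvalue of the linearized system, and a late decay set by the slowest of the clearance rates.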
Features of creation of highly accurate models of triumphal pylons for archaeological reconstruction
NASA Astrophysics Data System (ADS)
Grishkanich, A. S.; Sidorov, I. S.; Redka, D. N.
2015-12-01
Measuring operations for determining the geometric characteristics of objects in space, together with a geodetic survey of the objects on the ground, are described. In the course of the work, data were obtained on the relative positioning of the pylons in space; deviations from verticality were detected. In comparison with traditional surveying, this testing method is preferable because it yields, in a semi-automated mode, a high-accuracy CAD model of the object for subsequent analysis, which is more economically advantageous.
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana
2016-04-01
Lake morphometry refers to the physical factors (shape, size, structure, etc.) that determine the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. During recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created on a 10*10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes as well as the advantages of digital models over traditional methods.
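The grid-based depth and slope statistics mentioned above are straightforward to compute once a regular bathymetric grid exists. A minimal sketch, assuming a 10 m equidistant grid of depths in metres (the finite-difference scheme and cell size are illustrative):

```python
import math

def slope_grid(depth, cell=10.0):
    # Central-difference slope magnitude (degrees) on an equidistant depth grid;
    # border cells are left at zero for simplicity
    ny, nx = len(depth), len(depth[0])
    out = [[0.0] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            dzdy = (depth[i + 1][j] - depth[i - 1][j]) / (2.0 * cell)
            dzdx = (depth[i][j + 1] - depth[i][j - 1]) / (2.0 * cell)
            out[i][j] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

def lake_volume(depth, cell=10.0):
    # Volume from the depth grid: depth times cell area, summed over all cells
    return sum(d * cell * cell for row in depth for d in row)
```

Evaluating volume at a sequence of water levels gives the level-surface-volume curves that the abstract proposes to combine with satellite imagery.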
Mathematical model accurately predicts protein release from an affinity-based delivery system.
Vulic, Katarina; Pakulska, Malgosia M; Sonthalia, Rohit; Ramachandran, Arun; Shoichet, Molly S
2015-01-10
Affinity-based controlled release modulates the delivery of protein or small molecule therapeutics through transient dissociation/association. To understand which parameters can be used to tune release, we used a mathematical model based on simple binding kinetics. A comprehensive asymptotic analysis revealed three characteristic regimes for therapeutic release from affinity-based systems. These regimes can be controlled by diffusion or unbinding kinetics, and can exhibit release over either a single stage or two stages. This analysis fundamentally changes the way we think of controlling release from affinity-based systems and thereby explains some of the discrepancies in the literature on which parameters influence affinity-based release. The rate of protein release from affinity-based systems is determined by the balance of diffusion of the therapeutic agent through the hydrogel and the dissociation kinetics of the affinity pair. Equations for tuning protein release rate by altering the strength (KD) of the affinity interaction, the concentration of binding ligand in the system, the rate of dissociation (koff) of the complex, and the hydrogel size and geometry, are provided. We validated our model by collapsing the model simulations and the experimental data from a recently described affinity release system, to a single master curve. Importantly, this mathematical analysis can be applied to any single species affinity-based system to determine the parameters required for a desired release profile. PMID:25449806
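The binding kinetics underlying the model above reduce, at equilibrium, to a quadratic for the free (diffusible) protein fraction. A minimal sketch of that relation for a 1:1 affinity pair (concentrations in any consistent units; this is the equilibrium limit, not the full reaction-diffusion model of the paper):

```python
import math

def free_fraction(total_protein, total_ligand, kd):
    # P + L <-> PL with KD = [P][L]/[PL]; solving the mass-balance quadratic
    # for free P gives the fraction available to diffuse out of the hydrogel
    b = total_ligand + kd - total_protein
    p_free = 0.5 * (-b + math.sqrt(b * b + 4.0 * kd * total_protein))
    return p_free / total_protein
```

Weak binding (KD much larger than the ligand concentration) leaves nearly all protein free, giving fast single-stage release; tight binding with excess ligand pushes the free fraction toward zero, shifting control to the dissociation rate koff, the two regimes the asymptotic analysis distinguishes.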
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.
Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are almost useless for prediction, and they cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics
NASA Astrophysics Data System (ADS)
Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.
2014-12-01
The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources. In this way, precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, wind and photovoltaic (PV) power forecasts shall be improved by combining optimized NWP and enhanced power forecast models. The conducted work focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events like Sahara dust over Germany and the solar eclipse in 2015 are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data
Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction, and they cannot be used to bring forward the intake of drugs so that it takes effect in time to neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
NASA Astrophysics Data System (ADS)
Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua
2015-05-01
Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. The computational fluid dynamics (CFD) method has been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated at steady or pulsatile inlet conditions, respectively, using CFD based on the finite volume method. The results showed that the blood model with non-Newtonian properties decreased the area of low wall shear stress (WSS) compared with the Newtonian model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that both the inlet conditions and the blood model are important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.
Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum
NASA Astrophysics Data System (ADS)
Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.
2013-02-01
Besides the display of their findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. To that end, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made using a structured light scanner consisting of two machine vision cameras for the determination of the geometry of the object, a high-resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the application of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial software and in-house software developed to automate various steps of the procedure was used. The results of the above procedure were especially satisfactory in terms of the accuracy and quality of the model. However, the procedure proved to be time-consuming, and the use of various software packages requires the services of a specialist.
Localized Adaptive Inflation in Ensemble Data Assimilation: Application to a Radiation Belt Model
NASA Astrophysics Data System (ADS)
Koller, J.; Godinez, H. C.
2012-12-01
The Ensemble Kalman Filter (EnKF) has become an important data assimilation tool for numerical models in the geosciences. Recently, the EnKF has been applied to radiation belt models to accurately estimate Earth's radiation belt particle distribution. A particular concern in data assimilation for radiation belts is model deficiency, due to the lack of appropriate source and/or loss terms for trapped particles, which can adversely impact the solution of the assimilation. In this work we present a localized adaptive covariance inflation technique used to account for model uncertainty in the EnKF. A one-dimensional radial diffusion model for phase space density, together with observational satellite data, is used in the EnKF with the purpose of accurately estimating Earth's radiation belt particle distribution. Numerical results from identical-twin experiments, where data are generated from the same model, as well as from the assimilation of real observational data, are presented. The results show improvement in the predictive skill of the model solution due to the proper inclusion of model errors in the data assimilation.
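The core operation behind the technique described above, multiplicative covariance inflation, spreads each ensemble member away from the ensemble mean before assimilation. The paper's contribution is choosing the inflation factor adaptively and locally; this minimal sketch (hypothetical function names, illustrative 1-D ensemble) shows only the basic inflation step:

```python
import random

def inflate(ensemble, lam):
    """Multiplicative covariance inflation: move each member away from
    the ensemble mean by a factor lam (lam >= 1)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    return [mean + lam * (x - mean) for x in ensemble]

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(0)
ens = [random.gauss(0.0, 1.0) for _ in range(100)]
infl = inflate(ens, 1.5)
```

Because every deviation from the mean is scaled linearly by lam, the ensemble mean is unchanged while the sample covariance grows by lam squared, which is what compensates for underestimated model error.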
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model accurately predicts: 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules differ markedly between individuals; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform for generating novel hypotheses and refining our understanding of how small-brained insects develop a representation of space and use it to navigate complex and dynamic environments. PMID:23505353
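The iterative improvement idea referred to above can be sketched in a few lines: starting from an arbitrary visit order, the agent repeatedly proposes a route variation and retains it only when the total circuit shortens. This is a bare-bones caricature using a 2-opt-style segment reversal, with hypothetical names and illustrative coordinates; the published bee model reinforces route segments probabilistically across foraging bouts rather than accepting deterministically:

```python
import math
import random

def route_length(order, pts):
    # Length of the closed foraging circuit visiting pts in the given order.
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def improve(pts, bouts=2000, seed=1):
    """Each 'bout' proposes reversing one segment of the circuit and
    keeps the change only if the circuit gets shorter."""
    rng = random.Random(seed)
    order = list(range(len(pts)))
    rng.shuffle(order)
    best = route_length(order, pts)
    for _ in range(bouts):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        d = route_length(cand, pts)
        if d < best:
            order, best = cand, d
    return order, best

flowers = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 0.5)]
order, dist = improve(flowers)
```

For this five-flower array (all points in convex position) the only stable route under such uncrossing moves is the convex-hull circuit, illustrating how repeated local improvement converges on a near-optimal trapline.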
Evaluation of radiation partitioning models at Bushland, Texas
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop growth and soil-vegetation-atmosphere continuum energy transfer models often require estimates of net radiation components, such as photosynthetic, solar, and longwave radiation to both the canopy and soil. We evaluated the 1998 radiation partitioning model of Campbell and Norman, herein referr...
Key Issues for an Accurate Modelling of GaSb TPV Converters
NASA Astrophysics Data System (ADS)
Martín, Diego; Algora, Carlos
2003-01-01
GaSb TPV devices are commonly manufactured by Zn diffusion from the vapour phase into an n-type substrate, leading to very high doping concentrations in a narrow emitter. This fact emphasizes the need for careful modelling that includes high-doping effects in order to simulate the optoelectronic behaviour of the devices. In this work, the key parameters that have a strong influence on the performance of GaSb TPV devices are underlined, more reliable values are suggested, and our first results on the dependence of the absorption coefficient on high p-type doping concentration are presented.
NASA Astrophysics Data System (ADS)
Green, Richard N.
1980-10-01
A parameter estimation technique is presented to estimate the radiative flux density distribution over the earth from a set of radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field-of-view, horizon-to-horizon, nadir-pointing sensor with a mathematical technique to derive the radiative flux density estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. The errors resulting from the assumed directional radiation model, spatial model, and random measurement error have little effect on the global mean radiation. Zonal estimates were found to be more sensitive to the spatial model than to the directional radiation model. Results from analysing medium field-of-view measurements showed a much greater sensitivity to the directional radiation model, even on a global scale.
Multiconjugate adaptive optics applied to an anatomically accurate human eye model.
Bedggood, P A; Ashman, R; Smith, G; Metha, A B
2006-09-01
Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics. PMID:19529172
Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis
NASA Astrophysics Data System (ADS)
Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C.; Matarrese, G.; Del Guerra, A.; Bisogni, M. G.
2013-08-01
It has already been shown that the shape of the current pulse produced by a SiPM in response to an incident photon is significantly affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical time performance of the detection system, the influence of all the parasitics associated with the SiPM-FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend the validity of a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the time accuracy of the detection system, when the time pick-off of each single event is extracted by means of a leading-edge discriminator (LED) technique.
Considering mask pellicle effect for more accurate OPC model at 45nm technology node
NASA Astrophysics Data System (ADS)
Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo
2008-11-01
The 45nm technology node is the first generation to use immersion microlithography. With this brand-new lithography tool, many optical effects that could be ignored at the 90nm and 65nm nodes now have a significant impact on the pattern transfer process from design to silicon. Among these effects, one that needs attention is the impact of the mask pellicle on critical dimension variation. With hyper-NA lithography tools, the approximation that light passes through the mask pellicle vertically no longer holds, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model, and we show that, given the extremely tight critical dimension control specification of the 45nm node, incorporating the mask pellicle effect into the OPC model has become necessary.
Accurate modeling of light trapping in thin film silicon solar cells
Abouelsaood, A.A.; Ghannam, M.Y.; Poortmans, J.; Mertens, R.P.
1997-12-31
An attempt is made to assess the accuracy of the simplifying assumption, made in many models of optical confinement in thin-film silicon solar cells, of total retransmission of light inside the escape or loss cone. A closed-form expression is derived for the absorption enhancement factor as a function of the refractive index in the low-absorption limit for a thin-film cell with a flat front surface and a Lambertian back reflector. Numerical calculations are carried out to investigate similar systems with antireflection coatings, and cells with a textured front surface are investigated using a modified version of the existing ray-tracing computer simulation program TEXTURE.
NASA Astrophysics Data System (ADS)
Chien Chang, Jia-Ren; Tai, Cheng-Chi
2006-07-01
This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289 (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1 BPM (beats per minute), from 20 to 176 BPM, while the amplitude of the ECG signal can be set from 0.1 to 400 mV with a 0.1 mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
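The McSharry dynamical model referenced above couples two state variables tracing a unit-circle limit cycle with a third that accumulates Gaussian deflections for the P, Q, R, S and T events. A minimal sketch under forward-Euler integration (event parameters are the illustrative defaults from the 2003 paper; `synth_ecg` is a hypothetical name, and real generators would use a finer integrator):

```python
import math

# PQRST event parameters: angular position theta_i (rad),
# magnitude a_i, and Gaussian width b_i of each wave.
THETA = [-math.pi / 3, -math.pi / 12, 0.0, math.pi / 12, math.pi / 2]
A = [1.2, -5.0, 30.0, -7.5, 0.75]
B = [0.25, 0.1, 0.1, 0.1, 0.4]

def synth_ecg(heart_rate=60.0, dt=0.001, n_beats=2):
    """Forward-Euler integration of the three coupled ODEs; returns z(t)."""
    omega = 2.0 * math.pi * heart_rate / 60.0  # angular velocity on the limit cycle
    x, y, z = -1.0, 0.0, 0.0
    out = []
    steps = round(n_beats * 60.0 / heart_rate / dt)
    for _ in range(steps):
        alpha = 1.0 - math.hypot(x, y)      # pulls the trajectory onto the unit circle
        theta = math.atan2(y, x)
        dx = alpha * x - omega * y
        dy = alpha * y + omega * x
        dz = -z                             # baseline relaxation (z0 = 0 here)
        for ti, ai, bi in zip(THETA, A, B):
            dtheta = math.remainder(theta - ti, 2.0 * math.pi)  # wrapped to [-pi, pi]
            dz -= ai * dtheta * math.exp(-dtheta ** 2 / (2.0 * bi ** 2))
        x += dt * dx
        y += dt * dy
        z += dt * dz
        out.append(z)
    return out

ecg = synth_ecg()
```

Varying `heart_rate` changes the angular velocity, while the a_i and b_i entries play the role of the adjustable amplitude, slope, and wave-width settings the generator exposes.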
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations, in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced into the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to resolve fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results, suitable interactive graphics are also an essential tool.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Biomechanical modeling provides more accurate data for neuronavigation than rigid registration
Garlapati, Revanth Reddy; Roy, Aditi; Joldes, Grand Roman; Wittek, Adam; Mostayed, Ahmed; Doyle, Barry; Warfield, Simon Keith; Kikinis, Ron; Knuckey, Neville; Bunt, Stuart; Miller, Karol
2015-01-01
It is possible to improve neuronavigation during image-guided surgery by warping the high-quality preoperative brain images so that they correspond with the current intraoperative configuration of the brain. In this work, the accuracy of registration results obtained using comprehensive biomechanical models is compared to the accuracy of rigid registration, the technology currently available to patients. This comparison allows us to investigate whether biomechanical modeling provides good-quality image data for neuronavigation for a larger proportion of patients than rigid registration. Preoperative images for 33 neurosurgery cases were warped onto their respective intraoperative configurations using both the biomechanics-based method and rigid registration. We used a Hausdorff distance-based evaluation process that measures the difference between images to quantify the performance of both registration methods. A statistical test for difference in proportions was conducted to evaluate the null hypothesis that the proportion of patients for whom improved neuronavigation can be achieved is the same for rigid and biomechanics-based registration. The null hypothesis was confidently rejected (p-value < 10⁻⁴). Even the modified hypothesis that less than 25% of patients would benefit from the use of biomechanics-based registration was rejected at a significance level of 5% (p-value = 0.02). The biomechanics-based method proved particularly effective for cases experiencing large craniotomy-induced brain deformations. The outcome of this analysis suggests that our nonlinear biomechanics-based methods are beneficial to a large proportion of patients and can be considered for use in the operating theatre as one possible means of improving neuronavigation and surgical outcomes. PMID:24460486
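The Hausdorff distance used in the evaluation above measures how far apart two sets are: it is the larger of the two directed distances, each being the worst-case nearest-neighbour distance from one set to the other. A minimal sketch on 2-D point sets (the study applies it to image data; the point sets and names here are illustrative):

```python
def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for bx, by in b) for ax, ay in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Illustrative feature points extracted from two registered images.
edge_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
edge_b = [(0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
```

Because the metric is a worst-case measure, a single badly misregistered feature dominates the score, which is why it is a strict test of registration quality.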
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iterative method for estimating asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, has high measurement accuracy along with low computational complexity, due to the maximum-likelihood procedure implemented to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
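To illustrate subpixel position recovery from "pixel potentials", the sketch below samples a Gaussian object image onto a coarse pixel grid and recovers the centre by intensity-weighted moments. Note that this is a simpler estimator than the paper's maximum-likelihood fit of the subpixel Gaussian model; all names and values are illustrative:

```python
import math

def gaussian_image(cx, cy, sigma, size):
    """Pixel 'potentials' from a Gaussian PSF sampled at pixel centres."""
    return [[math.exp(-((x + 0.5 - cx) ** 2 + (y + 0.5 - cy) ** 2)
                      / (2.0 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def centroid(img):
    """Intensity-weighted first moments give a subpixel centre estimate."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += v * (x + 0.5)
            sy += v * (y + 0.5)
    return sx / total, sy / total

img = gaussian_image(4.3, 5.7, 1.2, 10)
est = centroid(img)  # recovers a position between pixel centres
```

Even on a 10-pixel grid, the recovered centre lands well within a tenth of a pixel of the true (4.3, 5.7) position, which is the basic reason model-based estimators can beat whole-pixel peak finding.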
A new radiation model for Baltic Sea ecosystem modelling
NASA Astrophysics Data System (ADS)
Neumann, Thomas; Siegel, Herbert; Gerth, Monika
2015-12-01
Photosynthetically available radiation (PAR) is one of the key requirements for primary production in the ocean. The ambient PAR is determined by the incoming solar radiation and by the optical properties of sea water and of the optically active water constituents along the radiation pathway. Especially in coastal waters, the optical properties are affected by terrigenous constituents like yellow substances as well as by high primary production. Numerical models for marine ecosystems account for the optical attenuation process in differing ways and levels of detail. To account for coloured dissolved organic matter (CDOM) and the shading effect of phytoplankton particles, we propose a dynamic parametrization for the Baltic Sea. Furthermore, products from biological turnover processes are included. Besides PAR and its attenuation coefficient, the model calculates the Secchi disk depth, a simple measurable parameter describing the transparency of the water column and a water quality parameter in the European Water Framework Directive. The components of the proposed optical model are partly adopted from other publications and partly derived from our own measurements in the area of investigation. The model allows a better representation of PAR, with more realistic spatial and temporal variability compared to former parametrizations. As a result, primary production changes regionally: the northern part of the Baltic Sea, in particular, shows reduced productivity due to higher CDOM concentrations. The model estimates of Secchi disk depth are now much more realistic, and simulated oxygen concentrations in the deep water of the northern Baltic Sea have improved considerably.
A Model of Radiative and Conductive Energy Transfer in Planetary Regoliths
NASA Technical Reports Server (NTRS)
Hapke, Bruce
1996-01-01
The thermal regime in planetary regoliths involves three processes: propagation of visible radiation, propagation of thermal radiation, and thermal conduction. The equations of radiative transfer and heat conduction are formulated for particulate media composed of anisotropically scattering particles. Although the equations are time dependent, only steady state problems are considered in this paper. Using the two-stream approximation, solutions are obtained for two cases: a layer of powder heated from below and an infinitely thick regolith illuminated by visible radiation. Radiative conductivity, subsurface temperature gradients, and the solid state greenhouse effect all appear intrinsically in the solutions without ad hoc additions. Although the equations are nonlinear, approximate analytic solutions that are accurate to a few percent are obtained. Analytic expressions are given for the temperature distribution, the optical and thermal radiance distributions, the hemispherical albedo, the hemispherical emissivity, and the directional emissivity. Additional applications of the new model to three problems of interest in planetary regoliths are presented by Hapke.
NASA Astrophysics Data System (ADS)
Ni-Meister, W.; Kiang, N.; Yang, W.
2007-12-01
The transmission of light through plant canopies results in vertical profiles of light intensity that affect the photosynthetic activity and gas exchange of plants, their competition for light, and the canopy energy balance. Accurate representation of the canopy light profile is therefore important for predicting ecological dynamics. This study presents a simple canopy radiative transfer scheme to characterize the impact of horizontal and vertical heterogeneity in vegetation structure on light profiles. The actual vertical foliage profile and a clumping factor, which are functions of tree geometry, size, density, and foliage density, are used to characterize the vertical and horizontal heterogeneity of the vegetation structure. The simple scheme is evaluated using ground and airborne lidar data collected in deciduous and coniferous forests, and is also compared with the more complex Geometric Optical and Radiative Transfer (GORT) model and with the two-stream scheme currently used to describe light interactions with the vegetation canopy in most GCMs. The simple modeled PAR profiles match well with the ground data, the lidar data, and the full GORT model predictions, and the scheme performs much better than the simple Beer's law used in the two-stream scheme. This scheme has the same computational cost as the scheme currently used in GCMs, but provides better estimates of photosynthesis, radiative fluxes, and surface albedo, and is thus suitable for a global vegetation dynamics model embedded in GCMs.
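The Beer's-law baseline mentioned above, extended with a clumping factor to mimic horizontal heterogeneity, can be written as PAR(L) = PAR0 · exp(−k·Ω·L) with Ω < 1 for clumped canopies. A minimal sketch (parameter values illustrative; the paper's actual scheme additionally uses the vertical foliage profile and tree geometry):

```python
import math

def par_profile(par_top, k, clumping, lai_cum):
    """Downward PAR at each canopy depth: Beer's-law decay driven by
    cumulative leaf area index (LAI), with a clumping factor < 1
    reducing the effective extinction for heterogeneous canopies."""
    return [par_top * math.exp(-k * clumping * L) for L in lai_cum]

lai = [0.0, 0.5, 1.0, 2.0, 4.0]               # cumulative LAI from canopy top
dense = par_profile(1500.0, 0.5, 1.0, lai)    # horizontally homogeneous canopy
clumped = par_profile(1500.0, 0.5, 0.7, lai)  # clumped canopy passes more light
```

Comparing the two profiles shows the effect the scheme is built to capture: for the same leaf area, a clumped canopy transmits more light to the lower canopy and the ground.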
NASA Astrophysics Data System (ADS)
Tao, Jianmin; Rappe, Andrew M.
2016-01-01
Due to its lack of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of vdW corrections. However, a vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of the higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
Diffenderfer, Eric S.; Dolney, Derek; Schaettler, Maximilian; Sanzari, Jenine K.; Mcdonough, James; Cengel, Keith A.
2014-01-01
The space radiation environment imposes increased dangers of exposure to ionizing radiation, particularly during a solar particle event (SPE). These events consist primarily of low energy protons that produce a highly inhomogeneous dose distribution. Due to this inherent dose heterogeneity, experiments designed to investigate the radiobiological effects of SPE radiation present difficulties in evaluating and interpreting dose to sensitive organs. To address this challenge, we used the Geant4 Monte Carlo simulation framework to develop dosimetry software that uses computed tomography (CT) images and provides radiation transport simulations incorporating all relevant physical interaction processes. We found that this simulation accurately predicts measured data in phantoms and can be applied to model dose in radiobiological experiments with animal models exposed to charged particle (electron and proton) beams. This study clearly demonstrates the value of Monte Carlo radiation transport methods for two critically interrelated uses: (i) determining the overall dose distribution and dose levels to specific organ systems for animal experiments with SPE-like radiation, and (ii) interpreting the effect of random and systematic variations in experimental variables (e.g. animal movement during long exposures) on the dose distributions and consequent biological effects from SPE-like radiation exposure. The software developed and validated in this study represents a critically important new tool that allows integration of computational and biological modeling for evaluating the biological outcomes of exposures to inhomogeneous SPE-like radiation dose distributions, and has potential applications for other environmental and therapeutic exposure simulations. PMID:24309720
Inferences, Risk Modeling, and Prediction of Health Effects of Ionizing Radiation.
Dainiak, Nicholas
2016-03-01
The combined expertise of radiation epidemiologists and laboratory experimentalists is required to accurately define health risks from exposure to a low/very low radiation dose. Although stochastic risk can be estimated when a known threshold dose is exceeded, risk must be inferred from data transference at sub-threshold doses. The clinician's dilemma is evident when complying with accepted medical practice that is complicated by potential long-term, adverse outcomes. By contrast, radiation protection regulators must make prudent judgments without complete knowledge of the scope and consequences of their actions. Only by combining the strengths of epidemiological and experimental laboratory approaches can accurate predictive modeling be achieved after exposure to a low/very low dose. PMID:26808880
A Radiative Transport Model for Heating Paints using High Density Plasma Arc Lamps
Sabau, Adrian S; Duty, Chad E; Dinwiddie, Ralph Barton; Nichols, Mark; Blue, Craig A; Ott, Ronald D
2009-01-01
The energy distribution and ensuing temperature evolution within paint-like systems under the influence of infrared radiation were studied. Thermal radiation effects as well as those due to heat conduction were considered. A complete set of material properties was derived and discussed. Infrared measurements were conducted to obtain experimental data for the temperature in the paint film. The heat flux of the incident radiation from the plasma arc lamp was measured using a heat flux sensor with a very short response time. The comparison between the computed and experimental results for temperature shows that models based on the spectral four-flux RTE and accurate optical properties yield accurate results for the black paint systems.
Lupaşcu, Carmen Alina; Tegolo, Domenico; Trucco, Emanuele
2013-12-01
We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy. PMID:24001930
Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S
2009-04-01
The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information (higher-order time correlations not captured by MSMs) that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
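The contrast between an MSM estimate and a direct, history-aware estimate of the MFPT can be illustrated on a state-discretized trajectory. This is a minimal sketch of the two estimator families, not the authors' estimator, which is considerably more sophisticated:

```python
import numpy as np

def mfpt_direct(traj, source, target):
    """History-aware MFPT: average the actual first-passage times from
    each entry into `source` until the next visit to `target`."""
    times, t_start = [], None
    for t, s in enumerate(traj):
        if s == source and t_start is None:
            t_start = t
        elif s == target and t_start is not None:
            times.append(t - t_start)
            t_start = None
    return float(np.mean(times))

def mfpt_msm(traj, n_states, target):
    """MSM MFPT: count lag-1 transitions, normalize to a transition
    matrix, then solve the linear system for mean hitting times."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-1], traj[1:]):
        C[a, b] += 1
    T = C / C.sum(axis=1, keepdims=True)
    keep = [i for i in range(n_states) if i != target]
    A = np.eye(len(keep)) - T[np.ix_(keep, keep)]
    m = np.linalg.solve(A, np.ones(len(keep)))
    return {s: float(m[i]) for i, s in enumerate(keep)}
```

For a genuinely Markovian trajectory the two agree; the paper's point is that for real MD data, which is not Markovian on the chosen states, the direct estimator avoids the discretization bias.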
NASA Astrophysics Data System (ADS)
Weber, Tobias K. D.; Riedel, Thomas
2015-04-01
Free water is a prerequisite to chemical reactions and biological activity in Earth's upper crust essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space in soils and sediments is in its liquid state. This is a result of the adhesive forces which reduce the water activity in small pores and at charged mineral surfaces. This water has a lower tendency to react chemically in solution, as the additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from liquid water was found to systematically increase with increasing clay content. The significance of this is that grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods of determining water content, traditionally based on bulk density or on the gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay content. Our findings have consequences for biogeochemical processes in soils; e.g., nutrients may be contained in water that is not free, which could enhance preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain-size dependency of organic matter preservation in sedimentary environments and call for a revised view of the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.
Incorporation of multiple cloud layers for ultraviolet radiation modeling studies
NASA Technical Reports Server (NTRS)
Charache, Darryl H.; Abreu, Vincent J.; Kuhn, William R.; Skinner, Wilbert R.
1994-01-01
Cloud data sets compiled from surface observations were used to develop an algorithm for incorporating multiple cloud layers into a multiple-scattering radiative transfer model. Aerosol extinction and ozone data sets were also incorporated to estimate the seasonally averaged ultraviolet (UV) flux reaching the surface of the Earth in the Detroit, Michigan, region for the years 1979-1991, corresponding to Total Ozone Mapping Spectrometer (TOMS) version 6 ozone observations. The calculated UV spectrum was convolved with an erythema action spectrum to estimate the effective biological exposure for erythema. Calculations show that decreasing the total column density of ozone by 1% leads to an increase in erythemal exposure by approximately 1.1-1.3%, in good agreement with previous studies. A comparison of the UV radiation budget at the surface between a single cloud layer method and the multiple cloud layer method presented here is discussed, along with limitations of each technique. With improved parameterization of cloud properties, and as knowledge of the biological effects of UV exposure increases, inclusion of multiple cloud layers may be important in accurately determining the biologically effective UV budget at the surface of the Earth.
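The convolution of a surface UV spectrum with an erythema action spectrum can be sketched as follows. The piecewise weighting below is the McKinlay-Diffey (CIE-style) form; the coefficients are quoted from memory and should be checked against the published standard before any real use:

```python
import numpy as np

def erythema_weight(lam_nm):
    # CIE-style erythema action spectrum (McKinlay-Diffey form).
    # Coefficients are an assumption here; verify against ISO/CIE 17166.
    lam = np.asarray(lam_nm, dtype=float)
    return np.where(lam <= 298.0, 1.0,
           np.where(lam <= 328.0, 10.0 ** (0.094 * (298.0 - lam)),
                    10.0 ** (0.015 * (140.0 - lam))))

def erythemal_dose(lam_nm, spectral_irradiance):
    """Weight a surface UV spectrum (W m^-2 nm^-1) by the action
    spectrum and integrate over wavelength (trapezoid rule)."""
    w = spectral_irradiance * erythema_weight(lam_nm)
    return float(0.5 * ((w[1:] + w[:-1]) * np.diff(lam_nm)).sum())
```

Repeating the integral with ozone-perturbed spectra is what yields the quoted ~1.1-1.3% erythemal response per 1% ozone change.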
Panagiotopoulou, O; Wilshin, S D; Rayfield, E J; Shefelbine, S J; Hutchinson, J R
2012-02-01
Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form-function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810
Future directions for LDEF ionizing radiation modeling and assessments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1993-01-01
A calculational program utilizing data from radiation dosimetry measurements aboard the Long Duration Exposure Facility (LDEF) satellite to reduce the uncertainties in current models defining the ionizing radiation environment is in progress. Most of the effort to date has been on using LDEF radiation dose measurements to evaluate models defining the geomagnetically trapped radiation, which has provided results applicable to radiation design assessments being performed for Space Station Freedom. Plans for future data comparisons, model evaluations, and assessments using additional LDEF data sets (LET spectra, induced radioactivity, and particle spectra) are discussed.
Angular radiation models for earth-atmosphere system. Volume 2: Longwave radiation
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Green, R. N.; Smith, G. L.; Wielicki, B. A.; Walker, I. J.; Taylor, V. R.; Stowe, L. L.
1989-01-01
The longwave angular radiation models that are required for analysis of satellite measurements of Earth radiation, such as those from the Earth Radiation Budget Experiment (ERBE), are presented. The models contain limb-darkening characteristics and mean fluxes. Limb-darkening characteristics are the longwave anisotropic factor and the standard deviation of the longwave radiance. Derivation of these models from the Nimbus 7 ERB (Earth Radiation Budget) data set is described. Tabulated values and computer-generated plots are included for the limb-darkening and mean-flux models.
Flavour dependent gauged radiative neutrino mass model
NASA Astrophysics Data System (ADS)
Baek, Seungwon; Okada, Hiroshi; Yagyu, Kei
2015-04-01
We propose a one-loop induced radiative neutrino mass model with an anomaly-free, flavour-dependent gauge symmetry: muon number minus tau number, U(1)_{μ-τ}. A neutrino mass matrix satisfying current experimental data can be obtained by introducing a weak-isospin singlet scalar boson that breaks the U(1)_{μ-τ} symmetry, an inert doublet scalar field, and three right-handed neutrinos in addition to the fields in the standard model. We find that a characteristic structure appears in the neutrino mass matrix: a two-zero texture form, which predicts three non-zero neutrino masses and three non-zero CP phases from five well-measured experimental inputs (two squared mass differences and three mixing angles). Furthermore, it is clarified that only the inverted mass hierarchy is allowed in our model. In a favored parameter set from the neutrino sector, the discrepancy in the muon anomalous magnetic moment between the experimental data and the standard model prediction can be explained by the additional neutral gauge boson loop contribution, with a mass of order 100 MeV and a new gauge coupling of order 10^-3.
Radiative models for the evaluation of the UV radiation at the ground.
Koepke, P
2009-12-01
The variety of radiative models for solar UV radiation is discussed. For the evaluation of measured UV radiation at the ground, the basic problem is the availability of actual values of the atmospheric parameters that influence the UV radiation. The largest uncertainties are due to clouds and aerosol, which are highly variable. In the case of tilted receivers, like the human skin for most orientations, and for conditions like a street canyon or tree shadow, additional modelling beyond classical radiative transfer in the atmosphere is necessary. PMID:19828720
NASA Technical Reports Server (NTRS)
Krizmanic, John F.
2013-01-01
We have been assessing the effects of background radiation in low-Earth orbit for the next generation of X-ray and cosmic-ray experiments, in particular for the International Space Station orbit. Outside the areas of high fluxes of trapped radiation, we have been using parameterizations developed by the Fermi team to quantify the high-energy induced background. For the low-energy background, we have been using the AE8 and AP8 SPENVIS models to determine the orbit fractions where the fluxes of trapped particles are too high to allow for useful operation of the experiment. One area we are investigating is how SPENVIS flux predictions at higher energies match the fluxes at the low-energy end of our parameterizations. I will summarize our methodology for background determination from the various sources of cosmogenic and terrestrial radiation and how these compare to SPENVIS predictions in overlapping energy ranges.
NASA Astrophysics Data System (ADS)
Xin, Cui; Di-Yu, Zhang; Gao, Chen; Ji-Gen, Chen; Si-Liang, Zeng; Fu-Ming, Guo; Yu-Jun, Yang
2016-03-01
We demonstrate that the interference minima in linear molecular harmonic spectra can be accurately predicted by a modified two-center model. Based on a systematic investigation of the interference minima in linear molecular harmonic spectra within the strong-field approximation (SFA), it is found that the locations of the harmonic minima are related not only to the distance between the two main atoms contributing to the harmonic generation, but also to the symmetry of the molecular orbital. Therefore, we modify the initial phase difference between the double wave sources in the two-center model, and predict harmonic minimum positions consistent with those simulated by the SFA. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant Nos. 11274001, 11274141, 11304116, 11247024, and 11034003), and the Jilin Provincial Research Foundation for Basic Research, China (Grant Nos. 20130101012JC and 20140101168JC).
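The standard two-center interference condition, and the symmetry-dependent phase the authors add to it, can be written schematically as follows. This is a generic reconstruction of the two-center model, not the paper's exact modified formula:

```latex
% Recollision electron momentum p, separation R between the two dominant
% emitting centres, alignment angle \theta, and an extra phase
% \Delta\varphi fixed by the orbital symmetry (0 for a symmetric
% combination of atomic orbitals, \pi for an antisymmetric one).
% Harmonic (interference) minima occur where the two emissions cancel:
\[
  p\,R\cos\theta + \Delta\varphi = (2n+1)\,\pi ,
  \qquad n = 0,\,1,\,2,\dots
\]
% Setting \Delta\varphi = 0 recovers the unmodified two-center model;
% a nonzero \Delta\varphi shifts the predicted minimum positions, which
% is the role of the modified initial phase difference in the abstract.
```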
NASA Astrophysics Data System (ADS)
Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.
2015-12-01
Few studies have concentrated on predicting the bead geometry in laser brazing with a crimping butt joint. This paper addresses the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, a GRNN model was developed and trained to decrease the prediction error that may be influenced by the sample size. The prediction accuracy was then demonstrated by comparison with other published results and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were discussed in terms of average relative error (ARE), mean square error (MSE) and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than the values (14.28% and 0.0832) predicted by the BPNN. The prediction accuracy was thus improved by at least a factor of two, and the stability also increased substantially.
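A GRNN in Specht's original formulation is a Gaussian-kernel-weighted average of the training targets, which is why it trains quickly on small samples. A minimal sketch follows; the paper's input features and spread parameter are not specified in the abstract, so the names and value below are illustrative:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Specht-style GRNN: each prediction is a normalized,
    Gaussian-weighted average of all training targets, with the
    weights set by distance to the query point."""
    X_train = np.atleast_2d(np.asarray(X_train, dtype=float))
    X_query = np.atleast_2d(np.asarray(X_query, dtype=float))
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ np.asarray(y_train, dtype=float)) / w.sum(axis=1)
```

The single smoothing parameter sigma is what such studies typically tune against a validation set.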
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from those obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in the upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
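The core POD mapping idea (train a basis on fine-resolution snapshots, then map coarse solutions to basis coefficients) can be sketched in a toy linear setting. This is a schematic reduction of PODMM with illustrative shapes, not the published implementation:

```python
import numpy as np

def train_podmm(coarse, fine, n_modes=3):
    """Toy POD mapping: `coarse` is (n_snap, n_c), `fine` is
    (n_snap, n_f).  Build a POD basis of the centered fine snapshots,
    then least-squares fit a map from coarse snapshots to POD
    coefficients."""
    c_mean, f_mean = coarse.mean(axis=0), fine.mean(axis=0)
    _, _, Vt = np.linalg.svd(fine - f_mean, full_matrices=False)
    basis = Vt[:n_modes]                     # POD modes, (n_modes, n_f)
    coeffs = (fine - f_mean) @ basis.T       # training coefficients
    M, *_ = np.linalg.lstsq(coarse - c_mean, coeffs, rcond=None)
    return c_mean, f_mean, basis, M

def downscale(coarse_new, model):
    """Approximate a fine-resolution field from a coarse one alone."""
    c_mean, f_mean, basis, M = model
    return f_mean + ((coarse_new - c_mean) @ M) @ basis
```

The real method adds an error estimator and multiple candidate ROMs; the sketch shows only the train-then-map structure that makes the O(1000) speedup possible.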
How to accurately bypass damage
Broyde, Suse; Patel, Dinshaw J.
2016-01-01
Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203
Dispersion model maps spread of Fukushima radiation
NASA Astrophysics Data System (ADS)
Schultz, Colin
2013-01-01
When water flooded the Japanese Fukushima Daiichi nuclear power plant on 11 March 2011, killing power to the plant and destroying its backup generators, the earthquake-triggered disaster resulted in a major nuclear accident, with the plant pouring radioactive material into the air and the water. Research into the effects of the radiation on humans and the environment has been ongoing, but ensuring the accuracy of these aftermath investigations requires understanding the precise concentrations, distribution patterns, and timing of the radionuclide emissions. To provide such an assessment for the marine environment, Estournel et al. used an ocean and atmosphere dispersion model to simulate the movements of radioactive cesium-137 throughout the Japanese coastal waters for 3.5 months following the earthquake.
Cavity radiation model for solar central receivers
Lipps, F.W.
1981-01-01
The Energy Laboratory of the University of Houston has developed a computer simulation program called CREAM (Cavity Radiation Exchange Analysis Model) for application to the solar central receiver system. The zone generating capability of CREAM has been used in several solar re-powering studies. CREAM contains a geometric configuration factor generator based on Nusselt's method. A formulation of Nusselt's method provides support for the FORTRAN subroutine NUSSELT. Numerical results from NUSSELT are compared to analytic values and to values from Sparrow's method. Sparrow's method is based on a double contour integral and its reduction to a single integral, which is approximated by Gaussian methods. Nusselt's method is adequate for the intended engineering applications, but Sparrow's method is found to be an order of magnitude more efficient in many situations.
Localized adaptive inflation in ensemble data assimilation for a radiation belt model
NASA Astrophysics Data System (ADS)
Godinez, H. C.; Koller, J.
2012-08-01
In this work a one-dimensional radial diffusion model for phase space density, together with observational satellite data, is used in an ensemble data assimilation with the purpose of accurately estimating Earth's radiation belt particle distribution. A particular concern in data assimilation for radiation belt models is model deficiencies, which can adversely impact the solution of the assimilation. To adequately address these deficiencies, a localized adaptive covariance inflation technique is implemented in the data assimilation to account for model uncertainty. Numerical results from identical-twin experiments, where data are generated from the same model, as well as from the assimilation of real observational data, are presented. The results show improvement in the predictive skill of the model solution due to the proper inclusion of model errors in the data assimilation.
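Multiplicative covariance inflation, and a simple innovation-based way to adapt it, can be sketched as follows. This is a generic textbook form (the paper's localized, adaptive scheme is more elaborate), and the variable names are illustrative:

```python
import numpy as np

def inflate_ensemble(ens, lam):
    """Multiplicative covariance inflation: widen each ensemble member's
    deviation from the ensemble mean by a factor lam, leaving the mean
    unchanged.  `ens` is (n_members, n_state)."""
    mean = ens.mean(axis=0)
    return mean + lam * (ens - mean)

def adaptive_lambda(innovation, sigma_b2, sigma_o2):
    """Innovation-based inflation estimate: choose lam so that the
    inflated background variance plus observation-error variance
    matches the squared observation-minus-forecast innovation,
    floored at 1 (never deflate)."""
    lam2 = (innovation ** 2 - sigma_o2) / sigma_b2
    return float(np.sqrt(max(lam2, 1.0)))
```

Inflating where innovations are persistently large is one standard way to compensate for the model deficiencies the abstract describes.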
The Chandra X-Ray Observatory Radiation Environment Model
NASA Technical Reports Server (NTRS)
Blackwell, W. C.; Minow, Joseph I.; Smith, Shawn; Swift, Wesley R.; ODell, Stephen L.; Cameron, Robert A.
2003-01-01
CRMFLX (Chandra Radiation Model of ion FluX) is an environmental risk mitigation tool for use as a decision aid in planning the operations times for Chandra's Advanced CCD Imaging Spectrometer (ACIS) detector. The accurate prediction of the proton flux environment with energies of 100 - 200 keV is needed in order to protect the ACIS detector against proton degradation. Unfortunately, protons of this energy are abundant in the region of space where Chandra must operate, and the on-board Electron, Proton, and Helium Instrument (EPHIN) does not measure proton flux levels in the required energy range. In addition to the concerns arising from the radiation belts, substorm injections of plasma from the magnetotail may increase the proton flux by orders of magnitude in this energy range. The Earth's magnetosphere is a dynamic entity, with the size and location of the magnetopause driven by the highly variable solar wind parameters (number density, velocity, and magnetic field components). Operational plans for the telescope must be made weeks in advance, decisions complicated by the variability of the environment. CRMFLX is an engineering model developed to address these problems; it provides proton flux and fluence statistics for the terrestrial outer magnetosphere, magnetosheath, and solar wind for use in scheduling ACIS operations. CRMFLX implements a number of standard models to predict the bow shock, magnetopause, and plasma sheet boundaries based on the sampling of historical solar wind data sets. Measurements from the GEOTAIL and POLAR spacecraft are used to create the proton flux database. This paper describes the recently released CRMFLX v2 implementation that includes an algorithm that propagates flux from an observation location to other regions of the magnetosphere based on convective E×B and ∇B-curvature particle drift motions in electric and magnetic fields. This technique has the advantage of more completely filling out the database and makes maximum
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be
Sethurajan, Athinthra Krishnaswamy; Krachkovskiy, Sergey A; Halalay, Ion C; Goward, Gillian R; Protas, Bartosz
2015-09-17
We used NMR imaging (MRI) combined with data analysis based on inverse modeling of the mass transport problem to determine ionic diffusion coefficients and transference numbers in electrolyte solutions of interest for Li-ion batteries. Sensitivity analyses have shown that accurate estimates of these parameters (as a function of concentration) are critical to the reliability of the predictions provided by models of porous electrodes. The inverse modeling (IM) solution was generated with an extension of the Planck-Nernst model for the transport of ionic species in electrolyte solutions. Concentration-dependent diffusion coefficients and transference numbers were derived using concentration profiles obtained from in situ (19)F MRI measurements. Material properties were reconstructed under minimal assumptions using methods of variational optimization to minimize the least-squares deviation between experimental and simulated concentration values with uncertainty of the reconstructions quantified using a Monte Carlo analysis. The diffusion coefficients obtained by pulsed field gradient NMR (PFG-NMR) fall within the 95% confidence bounds for the diffusion coefficient values obtained by the MRI+IM method. The MRI+IM method also yields the concentration dependence of the Li(+) transference number in agreement with trends obtained by electrochemical methods for similar systems and with predictions of theoretical models for concentrated electrolyte solutions, in marked contrast to the salt concentration dependence of transport numbers determined from PFG-NMR data. PMID:26247105
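The inverse-modelling step (choose transport parameters so that a forward simulation matches measured concentration profiles) can be illustrated with a 1-D, constant-coefficient toy problem. The authors fit concentration-dependent coefficients by variational optimization; this sketch just scans a candidate list, and all names are illustrative:

```python
import numpy as np

def simulate(c0, D, dt, dx, n_steps):
    """Forward model: explicit FTCS solver for 1-D diffusion
    dc/dt = D d2c/dx2 with approximate no-flux ends.
    Stable for D*dt/dx**2 <= 0.5."""
    c = c0.copy()
    for _ in range(n_steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
        lap[0] = (c[1] - c[0]) / dx ** 2
        lap[-1] = (c[-2] - c[-1]) / dx ** 2
        c = c + dt * D * lap
    return c

def fit_D(c0, c_obs, dt, dx, n_steps, candidates):
    """Inverse model by brute force: return the candidate D whose
    simulated profile best matches the observation in least squares."""
    errs = [((simulate(c0, D, dt, dx, n_steps) - c_obs) ** 2).sum()
            for D in candidates]
    return candidates[int(np.argmin(errs))]
```

Replacing the scan with gradient-based variational optimization, and D with a concentration-dependent function, recovers the structure of the paper's MRI+IM approach.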
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension making both forward modeling and inversion algorithm more efficient.
Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W; Reeher, M; Galmarini, D
2014-06-01
Purpose: An accurate leaf fluence model can be used in applications such as patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences, due to leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent the linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed into) a linear combination of the LPB, either pulse by pulse or weighted by dwell time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
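The LPB idea (treat measured no-leaf, single-leaf and multi-leaf responses as a basis, so that nonlinear neighbour effects enter as linear correction terms) resembles a cluster expansion. A sketch truncated at pair order, with illustrative names (the published method also carries triple-leaf terms and per-jaw calibration):

```python
import numpy as np

def predict_response(pattern, f0, single, double):
    """Predict the detector response to an arbitrary binary open-leaf
    pattern from basis measurements: f0 (no leaf open), single[i]
    (only leaf i open), double[i] (leaves i and i+1 open).  Adjacent
    pairs contribute an inclusion-exclusion correction, which is how
    nonlinear neighbour effects become linear terms in the basis."""
    resp = f0.copy()
    n = len(pattern)
    for i in range(n):
        if pattern[i]:
            resp += single[i] - f0
    for i in range(n - 1):
        if pattern[i] and pattern[i + 1]:
            resp += double[i] - single[i] - single[i + 1] + f0
    return resp
```

By construction the sketch reproduces the measured basis responses exactly (e.g. an isolated adjacent pair returns the corresponding double-leaf measurement), and isolated leaves simply add.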
NASA Astrophysics Data System (ADS)
McCullagh, Nuala; Jeong, Donghui; Szalay, Alexander S.
2016-01-01
Accurate modelling of non-linearities in the galaxy bispectrum, the Fourier transform of the galaxy three-point correlation function, is essential to fully exploit it as a cosmological probe. In this paper, we present numerical and theoretical challenges in modelling the non-linear bispectrum. First, we test the robustness of the matter bispectrum measured from N-body simulations using different initial conditions generators. We run a suite of N-body simulations using the Zel'dovich approximation and second-order Lagrangian perturbation theory (2LPT) at different starting redshifts, and find that transients from initial decaying modes systematically reduce the non-linearities in the matter bispectrum. To achieve 1 per cent accuracy in the matter bispectrum at z ≤ 3 on scales k < 1 h Mpc-1, 2LPT initial conditions generator with initial redshift z ≳ 100 is required. We then compare various analytical formulas and empirical fitting functions for modelling the non-linear matter bispectrum, and discuss the regimes for which each is valid. We find that the next-to-leading order (one-loop) correction from standard perturbation theory matches with N-body results on quasi-linear scales for z ≥ 1. We find that the fitting formula in Gil-Marín et al. accurately predicts the matter bispectrum for z ≤ 1 on a wide range of scales, but at higher redshifts, the fitting formula given in Scoccimarro & Couchman gives the best agreement with measurements from N-body simulations.
Image-based modeling of radiation-induced foci
NASA Astrophysics Data System (ADS)
Costes, Sylvain; Cucinotta, Francis A.; Ponomarev, Artem; Barcellos-Hoff, Mary Helen; Chen, James; Chou, William; Gascard, Philippe
In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern was further characterized by "relative DNA image measurements". This novel imaging approach showed that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while γH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that RIF within a few minutes following exposure to radiation cluster into open regions of the nucleus (i.e. euchromatin). It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair. If so, this would imply that DSB are actively transported within the nucleus, a phenomenon that has not yet been considered in modeling DNA misrepair following exposure to radiation. These results are thus critical for more accurate risk models of radiation, and we are actively working on characterizing further RIF movement in human nuclei using live cell imaging.
Geometry and mass model of ionizing radiation experiments on the LDEF satellite
NASA Technical Reports Server (NTRS)
Colborn, B. L.; Armstrong, T. W.
1992-01-01
Extensive measurements related to ionizing radiation environments and effects were made on the LDEF satellite during its mission lifetime of almost 6 years. These data, together with the opportunity they provide for evaluating predictive models and analysis methods, should allow more accurate assessments of the space radiation environment and related effects for future missions in low Earth orbit. The LDEF radiation dosimetry data is influenced to varying degrees by material shielding effects due to the dosimeter itself, nearby components and experiments, and the spacecraft structure. A geometry and mass model is generated of LDEF, incorporating sufficient detail that it can be applied in determining the influence of material shielding on ionizing radiation measurements and predictions. This model can be used as an aid in data interpretation by unfolding shielding effects from the LDEF radiation dosimeter responses. Use of the LDEF geometry/mass model, in conjunction with predictions and comparisons with LDEF dosimetry data currently underway, will also allow more definitive evaluations of current radiation models for future mission applications.
Angular radiation models for Earth-atmosphere system. Volume 1: Shortwave radiation
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Green, R. N.; Minnis, P.; Smith, G. L.; Staylor, W. F.; Wielicki, B. A.; Walker, I. J.; Young, D. F.; Taylor, V. R.; Stowe, L. L.
1988-01-01
Presented are shortwave angular radiation models which are required for analysis of satellite measurements of Earth radiation, such as those from the Earth Radiation Budget Experiment (ERBE). The models consist of both bidirectional and directional parameters. The bidirectional parameters are the anisotropic function, the standard deviation of mean radiance, and the shortwave-longwave radiance correlation coefficient. The directional parameters are mean albedo as a function of Sun zenith angle and mean albedo normalized to overhead Sun. Derivation of these models from the Nimbus 7 ERB (Earth Radiation Budget) and Geostationary Operational Environmental Satellite (GOES) data sets is described. Tabulated values and computer-generated plots are included for the bidirectional and directional models.
Radiative effects of aerosols at an urban location in southern India: Observations versus model
NASA Astrophysics Data System (ADS)
Satheesh, S. K.; Vinoj, V.; Krishna Moorthy, K.
2010-12-01
The radiative impact of aerosols is one of the largest sources of uncertainty in estimating anthropogenic climate perturbations. Here we have used independent ground-based radiometer measurements made simultaneously with comprehensive measurements of aerosol microphysical and optical properties at a highly populated urban site, Bangalore (13.02°N, 77.6°E) in southern India, during a dedicated campaign in the winter of 2004 and the summer and pre-monsoon seasons of 2005. We have also used longer-term measurements carried out at this site to present general features of aerosols over this region. The aerosol radiative impact assessments were made from direct measurements of ground-reaching irradiance as well as by incorporating measured aerosol properties into a radiative transfer model. Large discrepancies were observed between measured and modeled (using radiative transfer models, which employed measured aerosol properties) radiative impacts. It appears that the presence of elevated aerosol layers and/or an inappropriate description of the aerosol state of mixing is responsible for the discrepancies. On a monthly scale, the reduction of surface irradiance due to the presence of aerosols (estimated using radiative flux measurements) varies from 30 to 65 W m-2. The lowest values in surface radiative impact were observed during June, when there is a large reduction in aerosol loading as a consequence of monsoon rainfall. A large increase in aerosol-induced surface radiative impact was observed from winter to summer. Our investigations reiterate the inadequacy of aerosol measurements at the surface alone and the importance of accurately representing column properties (using vertical profiles) in order to assess aerosol-induced climate changes.
Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy
2014-07-01
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512
Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu
2015-01-01
Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when the repository's capacity and vertex number rose to a large degree. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
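The L1-norm problem at the heart of SSC is standard sparse coding. A minimal proximal-gradient (ISTA) solver makes the objective concrete; this is an illustrative stand-in only — the paper's contribution is a faster homotopy solver, not shown here — and all names are ours:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_ista(D, y, lam, n_iter=2000):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by ISTA.
    D: (m, n) repository (columns = training shapes, vectorized),
    y: (m,) input shape; returns sparse coefficients x."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the data term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

A homotopy method instead tracks the solution path as `lam` decreases (or as new columns arrive), updating the active set incrementally rather than re-iterating from scratch.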
A Unified Microwave Radiative Transfer Model with Jacobian for General Stratified Media
NASA Astrophysics Data System (ADS)
Tian, Miao
A unified microwave radiative transfer (UMRT) model is developed for rapid, stable and accurate level-centric calculation of the thermal radiation emitted from any geophysical medium comprised of planar layers of either densely or tenuously distributed, moderately sized spherical scatterers. The formulation includes rapid calculation of the tangent linear relationship (i.e., Jacobian) between the observed brightness temperature and any relevant radiative and geophysical layer parameters, such as the scattering and absorption coefficients, temperature, temperature lapse rate, and medium layer thickness. UMRT employs a rapid multistream scattering-based discrete ordinate eigenanalysis solution with a layer-adding algorithm stabilized by incorporating symmetrization of the discretized differential radiative transfer equations and analytical diagonalization and factorization of the resulting symmetric and positive definite matrices. It is based on the discrete ordinate tangent linear radiative transfer model of Voronovich et al. (2004), but extended to include both Mie and dense media scattering theories and employ refractive layers. Other nontrivial extensions are: 1) exact modeling of linearized temperature profiles and resulting radiation streams across medium layers, 2) compensation for refracted radiation streams using Snell's law, the Fresnel reflectivity and transmissivity coefficients, and a cubic spline interpolation matrix, and 3) seamless calculation of associated Jacobians for both sparse and dense medium parameters. Details of the UMRT Jacobian formulation are presented. The entire formulation has been programmed in Matlab and validated through both energy conservation and numerical Jacobian intercomparisons. Comparisons of the upwelling brightness temperatures over dry snow and ice from simulations and field measurements are presented and discussed.
Mathematical model of radiation effects on thrombopoiesis in rhesus macaques and humans.
Wentz, J M; Vainstein, V; Oldson, D; Gluzman-Poltorak, Z; Basile, L A; Stricklin, D
2015-10-21
A mathematical model that describes the effects of acute radiation exposure on thrombopoiesis in primates and humans is presented. Thrombopoiesis is a complex multistage dynamic process with potential differences between species. Due to known differences in cellular radiosensitivities, nadir times, and cytopenia durations, direct extrapolation from rhesus to human platelet dynamics is unrealistic. Developing mathematical models of thrombopoiesis for both humans and primates allows for the comparison of the system's response across species. Thus, data obtained in primate experiments can be extrapolated to predictions in humans. Parameter values for rhesus macaques and humans were obtained either from direct experimental measurements or through optimization procedures using dynamic data on platelet counts following radiation exposure. Model simulations accurately predict trends observed in platelet dynamics: at low radiation doses platelet counts decline after a time lag, and nadir depth is dose dependent. The models were validated using data that was not used during the parameterization process. In particular, additional experimental data was used for rhesus, and accident and platelet donor data was used for humans. The model aims to simulate the average response in rhesus and humans following irradiation. Variation in platelet dynamics due to individual variability can be modeled using Monte Carlo simulations in which parameter values are sampled from distributions. This model provides insight into the time course of the physiological effects of radiation exposure, information which could be valuable for disaster planning and survivability analysis and help in drug development of radiation medical countermeasures. PMID:26232694
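The Monte Carlo treatment of individual variability mentioned above can be illustrated with a toy two-compartment sketch. This is not the paper's model — the compartment structure, parameter names, and values below are invented for illustration:

```python
import numpy as np

def platelet_trajectory(dose, d0=2.0, k=0.15, r=0.05, days=60, dt=0.1):
    """Toy sketch: a precursor pool P (killed by `dose` Gy with survival
    exp(-dose/d0), recovering at rate r) feeds platelets C (turnover
    rate k). Counts are normalized to the pre-exposure steady state = 1.
    Illustrative stand-in, not the published parameterization."""
    P = np.exp(-dose / d0)          # instantaneous precursor kill
    C = 1.0
    traj = [C]
    for _ in range(int(days / dt)):
        P += r * (1.0 - P) * dt     # slow precursor recovery
        C += k * (P - C) * dt       # production minus clearance
        traj.append(C)
    return np.array(traj)

# Monte Carlo over parameter uncertainty: sample the radiosensitivity
# parameter d0 from a distribution to get a band of responses.
rng = np.random.default_rng(0)
runs = np.array([platelet_trajectory(4.0, d0=rng.lognormal(np.log(2.0), 0.2))
                 for _ in range(200)])
nadir_band = np.percentile(runs.min(axis=1), [5, 50, 95])
```

Even this toy reproduces the qualitative trends the abstract cites: counts decline after a time lag, and nadir depth is dose dependent.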
Predictive model of radiative neutrino masses
NASA Astrophysics Data System (ADS)
Babu, K. S.; Julio, J.
2014-03-01
We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: the hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with δCP=π; and the effective mass in neutrinoless double beta decay lies in a narrow range, mββ=(17.6-18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tanβ, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The nonstandard neutral Higgs bosons, if they are moderately heavy, would decay dominantly into μ and τ with prescribed branching ratios. Observable rates for the decays μ →eγ and τ→3μ are predicted if these scalars have masses in the range of 150-500 GeV.
Badhwar, G.D.; Konradi, A.; Cucinotta, F.A.; Braby, L.A.
1994-09-01
A new class of tissue-equivalent proportional counters has been flown on two space shuttle flights. These detectors and their associated electronics cover a lineal energy range from 0.4 to 1250 keV/μm, with a multichannel analyzer resolution of 0.1 keV/μm from 0.4 to 20 keV/μm and 5 keV/μm from 20 to 1250 keV/μm. These detectors provide the most complete dynamic range and highest resolution of any technique currently in use. On one mission, one detector was mounted in the Shuttle payload bay and another, older model in the mid-deck, thus providing information on the depth dependence of the lineal energy spectrum. A detailed comparison of the observed lineal energy and calculated LET spectra for galactic cosmic radiation shows that, although the radiation transport models provide a rather accurate description of the dose (±15%) and equivalent dose (±15%), the calculations significantly underestimate the frequency of events below about 100 keV/μm. This difference cannot be explained by the inclusion of the contribution of splash protons. The contribution of the secondary pions, kaons and electrons produced in the Shuttle shielding, if included in the radiation transport model, may explain these differences. There are also significant differences between the model predictions and observations above 1440 keV/μm, particularly for the 28.5° inclination orbit. 24 refs., 9 figs., 1 tab.
A hybrid transport-diffusion model for radiative transfer in absorbing and scattering media
Roger, M.; Caliot, C.; Crouseilles, N.; Coelho, P.J.
2014-10-15
A new multi-scale hybrid transport-diffusion model for radiative transfer is proposed in order to improve the efficiency of the calculations close to the diffusive regime, in absorbing and strongly scattering media. In this model, the radiative intensity is decomposed into a macroscopic component, calculated by the diffusion equation, and a mesoscopic component. The transport equation for the mesoscopic component corrects the estimate given by the diffusion equation and recovers the solution of the linear radiative transfer equation. In this work, results are presented for stationary and transient radiative transfer cases, in examples relevant to concentrated solar and optical tomography applications. The Monte Carlo and discrete-ordinates methods are used to solve the mesoscopic equation. It is shown that the multi-scale model improves the efficiency of the calculations when the medium is close to the diffusive regime. The proposed model is a good alternative for radiative transfer in the intermediate regime, where the macroscopic diffusion equation is not accurate enough and the radiative transfer equation requires too much computational effort.
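Schematically, the decomposition described in this abstract can be written as follows (the notation is ours and assumes a stationary, gray, isotropically scattering medium):

```latex
\begin{aligned}
I(\mathbf{x},\boldsymbol{\Omega}) &= I^{\mathrm{mac}}(\mathbf{x},\boldsymbol{\Omega})
   + I^{\mathrm{mes}}(\mathbf{x},\boldsymbol{\Omega}),\\
-\nabla\cdot\bigl(D\,\nabla\phi^{\mathrm{mac}}\bigr) + \kappa_a\,\phi^{\mathrm{mac}}
   &= 4\pi\,\kappa_a\,I_b,
   \qquad D = \frac{1}{3(\kappa_a+\sigma_s)},\\
\boldsymbol{\Omega}\cdot\nabla I^{\mathrm{mes}} + (\kappa_a+\sigma_s)\,I^{\mathrm{mes}}
   &= S^{\mathrm{mes}}(\mathbf{x},\boldsymbol{\Omega}).
\end{aligned}
```

Here \(\phi^{\mathrm{mac}}\) is the macroscopic fluence obtained from the cheap diffusion solve, and \(S^{\mathrm{mes}}\) is the residual source (emission plus in-scattering minus what the diffusion component already accounts for), so that solving the diffusion problem plus a transport correction recovers the full RTE solution.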
Effective UV radiation from model calculations and measurements
NASA Technical Reports Server (NTRS)
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
Chang, Chih-Hao (chchang@engineering.ucsb.edu); Liou, Meng-Sing (meng-sing.liou@grc.nasa.gov)
2007-07-01
In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture fine details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and an underwater explosion.
NASA Astrophysics Data System (ADS)
Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart
2013-09-01
The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
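The core of a grid/cell-based shortest path (first-arrival) scheme is Dijkstra's algorithm over a traveltime graph; the multistage irregular method described above adds interface stages and the minimax extension on top of this kernel. A minimal sketch (graph layout and names are ours):

```python
import heapq

def shortest_traveltime(graph, src, dst):
    """Dijkstra over a node graph with edge traveltimes -- the kernel of
    grid/cell-based shortest path ray tracing, which (as the abstract
    notes) finds first arrivals only. graph: {node: [(neighbor, dt), ...]}.
    Returns (traveltime, ray path as a node list)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        t, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, dt in graph.get(u, []):
            nt = t + dt
            if nt < dist.get(v, float("inf")):
                dist[v] = nt
                prev[v] = u
                heapq.heappush(pq, (nt, v))
    # Walk predecessor links back from the receiver to recover the ray
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]
```

In a multistage scheme this search is rerun per stage (e.g. source-to-interface, then interface-to-receiver), with the interface nodes seeding the next stage, which is how reflected and converted phases are tracked.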
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. The quasi-steady-state (QSS) model is presented to predict the electronic state populations of radiating gas species taking
A 3-dimensional DTI MRI-based model of GBM growth and response to radiation therapy.
Hathout, Leith; Patel, Vishal; Wen, Patrick
2016-09-01
Glioblastoma (GBM) is both the most common and the most aggressive intra-axial brain tumor, with a notoriously poor prognosis. To improve this prognosis, it is necessary to understand the dynamics of GBM growth, response to treatment and recurrence. The present study presents a mathematical diffusion-proliferation model of GBM growth and response to radiation therapy based on diffusion tensor (DTI) MRI imaging. This represents an important advance because it allows 3-dimensional tumor modeling in the anatomical context of the brain. Specifically, tumor infiltration is guided by the direction of the white matter tracts along which glioma cells infiltrate. This provides the potential to model different tumor growth patterns based on location within the brain, and to simulate the tumor's response to different radiation therapy regimens. Tumor infiltration across the corpus callosum is simulated in biologically accurate time frames. The response to radiation therapy, including changes in cell density gradients and how these compare across different radiation fractionation protocols, can be rendered. Also, the model can estimate the amount of subthreshold tumor which has extended beyond the visible MR imaging margins. When combined with the ability of being able to estimate the biological parameters of invasiveness and proliferation of a particular GBM from serial MRI scans, it is shown that the model has potential to simulate realistic tumor growth, response and recurrence patterns in individual patients. To the best of our knowledge, this is the first presentation of a DTI-based GBM growth and radiation therapy treatment model. PMID:27572745
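The diffusion-proliferation equation underlying such simulators is the Fisher-KPP model, dc/dt = ∇·(D∇c) + ρc(1 − c), with the DTI data entering through an anisotropic diffusion tensor D(x). A 1D isotropic sketch with an instantaneous radiation kill (parameters are illustrative, not fitted to any patient):

```python
import numpy as np

def gbm_growth_1d(n=200, dx=0.1, dt=0.01, steps=2000,
                  D=0.05, rho=0.2, surviving_fraction=None):
    """1D diffusion-proliferation (Fisher-KPP) sketch:
    dc/dt = D d2c/dx2 + rho * c * (1 - c), explicit Euler in time.
    Radiation is applied as an instantaneous cell kill at mid-run when
    surviving_fraction is given (e.g. from a linear-quadratic model).
    In the DTI-based model, the scalar D becomes a tensor aligned with
    the white matter tracts; that anisotropy is omitted here."""
    c = np.zeros(n)
    c[n // 2] = 0.1                      # small seed tumor
    for step in range(steps):
        lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c = c + dt * (D * lap + rho * c * (1.0 - c))
        if surviving_fraction is not None and step == steps // 2:
            c *= surviving_fraction      # instantaneous kill at treatment
    return c
```

The visible MR margin corresponds to a detection threshold on c, so the cells below threshold beyond the imaging margin are exactly the "subthreshold tumor" the abstract refers to.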
Subgrid-scale model for radiative transfer in turbulent participating media
Soucasse, L.; Rivière, Ph.; Soufiani, A. (École Centrale Paris, Grande Voie des Vignes, F-92290 Châtenay-Malabry)
2014-01-15
The simulation of turbulent flows of radiating gases, taking into account all turbulence length scales with an accurate radiation transport solver, is computationally prohibitive for high Reynolds or Rayleigh numbers. This is particularly the case when the small structures are not optically thin. We develop in this paper a radiative transfer subgrid model suitable for coupling with direct numerical simulations of turbulent radiating fluid flows. Owing to the linearity of the Radiative Transfer Equation (RTE), the emission source term is spatially filtered to define large-scale and subgrid-scale radiation intensities. The large-scale or filtered intensity is computed with a standard ray tracing method on a coarse grid, and the subgrid intensity is obtained analytically (in Fourier space) from the Fourier transform of the subgrid emission source term. A huge saving of computational time is obtained in comparison with direct ray tracing applied on the fine mesh. Model accuracy is checked for three 3D fluctuating temperature fields. The first field is stochastically generated and allows us to discuss the effects of the filtering level and of the optical thicknesses of the whole medium, of the integral length scale, and of the cutoff wavelength. The second and third cases correspond respectively to turbulent natural convection of humid air in a cubical box, and to the flow of hot combustion products inside a channel. In all cases, the achieved accuracy on radiative powers and wall fluxes is within a few percent.
NASA Astrophysics Data System (ADS)
Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.
2012-03-01
Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L 2MN) solution technique. Here, we test the abilities of MCMC and L 2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L 2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L 2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
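For an underdetermined linear system of mass-balance constraints, the L2MN solution discussed above is the pseudoinverse solution: the shortest flow vector consistent with the data. A minimal sketch (a full linear inverse ecosystem model additionally imposes inequality constraints such as non-negative flows, which are omitted here; MCMC methods instead sample the whole feasible region rather than picking one point):

```python
import numpy as np

def l2_minimum_norm(A, b):
    """L2 minimum-norm (L2MN) solution of an underdetermined linear
    inverse problem A x = b: among all flow vectors x satisfying the
    mass-balance constraints, return the one of smallest Euclidean
    norm, via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(A) @ b
```

The known bias alluded to in the abstract follows from this construction: the minimum-norm criterion pulls every flow toward zero equally, which tends to even out flows rather than reflect their true magnitudes.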
Volterra network modeling of the nonlinear finite-impulse response of the radiation belt flux
NASA Astrophysics Data System (ADS)
Taylor, M.; Daglis, I. A.; Anastasiadis, A.; Vassiliadis, D.
2011-01-01
We show how a general class of spatio-temporal nonlinear impulse-response forecast networks (Volterra networks) can be constructed from a taxonomy of nonlinear autoregressive integrated moving average with exogenous inputs (NARMAX) input-output equations, and used to model the evolution of energetic particle fluxes in the Van Allen radiation belts. We present initial results for the nonlinear response of the radiation belts to conditions a month earlier. The essential features of spatio-temporal observations are recovered, with the model echoing the results of state space models and linear finite impulse-response models, whereby the strongest coupling peak occurs in the preceding 1-2 days. It appears that such networks hold promise for the development of accurate and fully data-driven space weather modelling, monitoring and forecast tools.
LDEF geometry/mass model for radiation analyses
NASA Technical Reports Server (NTRS)
Colborn, B. L.; Armstrong, T. W.
1992-01-01
A three-dimensional geometry/mass model of LDEF is under development for ionizing radiation analyses. This model, together with ray tracing algorithms, is being programmed for use both as a stand alone code in determining three-dimensional shielding distributions at dosimetry locations and as a geometry module that can be interfaced with radiation transport codes.
General cloud cover modifier for clear sky solar radiation models
NASA Astrophysics Data System (ADS)
Myers, Daryl R.
2007-09-01
The worldwide lack of comprehensive measured solar radiation resource data for solar system design is well known. Several simple clear-sky solar radiation models for computing hourly direct, diffuse and global hemispherical solar radiation have been developed over the past 25 years. The simple model of Richard Bird, Iqbal's parameterization C, and Gueymard's REST model are popular for estimating maximum hourly solar resources. We describe a simple polynomial-in-cloud-cover (octa) modifier for these models that produces realistic time series of hourly solar radiation data representative of naturally occurring solar radiation under all sky conditions. Surface cloud cover observations (Integrated Surface Hourly Data) from the National Climatic Data Center are the only additional (hourly) input data needed to model total hemispherical solar radiation under all sky conditions. Performance was evaluated using three years of hourly solar radiation data from 31 sites in the 1961-1990 National Solar Radiation Data Base. Mean bias errors range from -10% to -20% and are clear-sky model dependent. Root mean square errors of about 40% also depend upon the particular model used, the uncertainty in the specific clear-sky model inputs, and the lack of information on cloud type and spatial distribution.
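The modifier concept, a low-order polynomial in observed cloud cover (octas) scaling a clear-sky estimate, can be sketched as follows. The coefficients here are made-up placeholders for illustration; the paper's fitted, model-specific coefficients are not reproduced.

```python
def cloud_modifier(octa, coeffs=(1.0, -0.02, -0.009)):
    """Polynomial transmittance modifier in cloud cover (0-8 octas).

    coeffs are placeholder polynomial coefficients (constant, linear,
    quadratic), NOT the fitted values from the paper.
    """
    f = sum(c * octa**k for k, c in enumerate(coeffs))
    return max(f, 0.0)                     # clamp: never negative

def all_sky_ghi(ghi_clear, octa):
    """Scale a clear-sky global irradiance (W/m^2) by the cloud modifier."""
    return ghi_clear * cloud_modifier(octa)

print(all_sky_ghi(800.0, 0))   # clear sky: unchanged, 800.0
print(all_sky_ghi(800.0, 8))   # overcast: strongly reduced
```

The clear-sky value itself would come from Bird, Iqbal C, or REST; the modifier is applied hour by hour using the observed octa value.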
Modelling of Radiation Heat Transfer in Reacting Hot Gas Flows
NASA Astrophysics Data System (ADS)
Thellmann, A.; Mundt, C.
2009-01-01
In this work the interaction between a turbulent flow with chemical reactions and radiation transport is investigated. As a first step, the state-of-the-art radiation models P1 (based on the moment method) and the Discrete Transfer Model (DTM, based on the discrete-ordinates method) are used in conjunction with the CFD code ANSYS CFX. The absorbing and emitting medium (water vapor) is modeled by the Weighted Sum of Gray Gases. For the chemical reactions the standard eddy dissipation model combined with the two-equation k-epsilon turbulence model is employed. A demonstration experiment is identified which delivers the temperature distribution, species concentrations and radiative intensity distribution in the investigated combustion enclosure. The simulation results are compared with the experiment and reveal that the P1 model predicts an unphysical location for the maximum radiation intensity. The DTM model does better but overpredicts the maximum value of the radiation intensity. This radiation sensitivity study is a first step toward identifying a suitable radiation transport and spectral model for implementation in an existing 3D Navier-Stokes code. Including radiation heat transfer, we intend to investigate the influence on the overall energy balance in a hydrogen/oxygen rocket combustion chamber.
Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.
2008-10-20
One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
Numerical modeling of elastodynamic radiation and scattering
Savic, M.; Ziolkowski, A.M.
1994-12-31
This paper presents a study of two problems: the two-dimensional distributed surface load problem, and the scattering of elastodynamic waves from fractures. The analysis is done with the aid of the finite-difference technique. If the dimensions of a surface mechanical source (vibrator or piezoelectric transducer) are not small compared to the wavelength, one should not use the point-source or plane-wave representation when modeling radiation from such sources. Here the authors demonstrate the solution of the uniformly distributed surface load problem using the finite-difference (FD) technique. The scattering of transient elastodynamic waves from a fracture whose extent is large compared with the wavelength and whose width is small compared with the wavelength is one of the classical problems in seismology and non-destructive testing (NDT). Many researchers have provided analytical solutions based on different approximations for the unknown field (displacement or particle velocity) scattered from an idealized half-plane or a strip of finite extent. Again, the authors demonstrate the full-wavefield solution using the finite-difference technique. The technique presented here is aimed at the interpretation of seismic data from hydraulic fracturing experiments.
Shock Layer Radiation Modeling and Uncertainty for Mars Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth
2012-01-01
A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5e-5 to 5e-4 kg/m3 and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates. Applying the baseline flowfield and radiation models developed in this work, the
NASA Astrophysics Data System (ADS)
Myra, Eric S.; Hawkins, Wm. Daryl
2013-03-01
The Center for Radiative Shock Hydrodynamics (CRASH) is investigating methods of improving the predictive capability of numerical simulations for radiative shock waves that are produced in Omega laser experiments. The laser is used to shock, ionize, and accelerate a beryllium foil into a xenon-filled shock tube. These shock waves, when driven above a threshold velocity of about 60 km/s, become strongly radiative and convert much of the incident energy flux into radiation. Radiative shocks have properties that are significantly different from purely hydrodynamic shocks and, in modeling this phenomenon numerically, it is important to compute radiative effects accurately. In this article, we examine approaches to modeling radiation transport by comparing two methods: (i) a computationally efficient, multigroup, flux-limited-diffusion approximation, currently in use in the CRASH radiation-hydrodynamics code, with (ii) a more accurate discrete-ordinates treatment that is offered by the radiation-transport code PDT. We present a selection of results from a growing suite of code-to-code comparison tests, showing both results for idealized problems and for those that are representative of conditions found in the CRASH experiment.
NASA Astrophysics Data System (ADS)
Yogurtcu, Osman N.; Johnson, Margaret E.
2015-08-01
The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters with ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate behavior predicted from Smoluchowski theory. Using a recently developed single-particle reaction-diffusion algorithm, which we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive, concentration-dependent rate constant for these chemical kinetics simulations which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute
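Why a single rate constant fails in 2D can be illustrated with the leading logarithmic behaviour of the diffusion-limited association rate, which keeps decaying with time instead of reaching a plateau. The expression below keeps only the leading logarithm (order-one constants inside the log are omitted), so it is qualitative; the ka/D > 0.05 bound is the one quoted in the abstract.

```python
import math

def k_2d_longtime(D, sigma, t):
    """Leading-order long-time 2D diffusion-limited rate.

    k(t) ~ 4*pi*D / ln(4*D*t / sigma**2), valid for 4*D*t >> sigma**2,
    with sigma the encounter radius. Order-one constants inside the
    logarithm are omitted, so treat the result as qualitative only.
    """
    arg = 4.0 * D * t / sigma**2
    if arg <= math.e:
        raise ValueError("t too small for the long-time asymptotic form")
    return 4.0 * math.pi * D / math.log(arg)

def single_rate_constant_ok(ka, D, threshold=0.05):
    """Regime bound from the abstract: for ka/D > threshold, a single
    rate constant cannot describe 2D association kinetics."""
    return ka / D <= threshold

# The rate decays logarithmically -- it never settles to one constant.
for t in (10.0, 100.0, 1000.0):
    print(t, k_2d_longtime(1.0, 1.0, t))
```

Because k(t) has no finite long-time limit in 2D, any single fitted constant implicitly depends on the observation window and the initial conditions, which is the point of the bound above.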
NASA Astrophysics Data System (ADS)
Barbour, San-Lian S.; Barbour, Randall L.; Koo, Ping C.; Graber, Harry L.; Chang, Jenghwa
1995-05-01
The results reported are the first to demonstrate that high-quality images of small added inclusions can be obtained from anatomically accurate models of thick tissues having arbitrary boundaries, based on the analysis of diffusely scattered light.
Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
The continued development and improvement of the viscous shock layer (VSL) nonequilibrium chemistry blunt body engineering code, the incorporation in a coupled manner of radiation models into the VSL code, and the initial development of appropriate precursor models are presented.
Computer modelling of statistical properties of SASE FEL radiation
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1997-06-01
The paper describes an approach to computer modelling of the statistical properties of radiation from a self-amplified spontaneous emission free electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of SASE FEL radiation: time and spectral field correlation functions, the distribution of fluctuations of the instantaneous radiation power, the distribution of energy in the electron bunch, the distribution of radiation energy after a monochromator installed at the FEL amplifier exit, and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility, under construction at DESY.
Howard Barker; Jason Cole
2012-05-17
Utilization of cloud-resolving models and multi-dimensional radiative transfer models to investigate the importance of 3D radiation effects on the numerical simulation of cloud fields and their properties.
Xiao, Zheng; Dunn, Elizabeth; Singh, Kanal; Khan, Imran S.; Yannone, Steven M.; Cowan, Morton J.
2009-01-01
Two Artemis-deficient (mArt-/-) mouse models, independently generated on 129/SvJ backgrounds have the expected T-B-NK+SCID phenotype. However, they fail to mimic the human disease due to CD4+ T-cell leakiness. Moreover, immune reconstitution in these leaky mouse models following hematopoietic stem cell transplantation (HSCT) is more easily achieved than that seen in Artemis-deficient humans. To develop a more clinically relevant animal model we backcrossed the mArt-/- mutation onto the C57Bl/6 (B6) background (99.9%) resulting in virtually no CD4+ T-cell leakiness compared to 129/SvJ mArt-/- mice (0.3±0.25% vs 19.5±15.1%, p<0.001). The non-leaky mouse was also uniquely resistant to engraftment using allogeneic mismatched HSC, comparable to what is seen with human Artemis deficiency. The genetic background also influenced Artemis-associated radiation sensitivity with differing degrees of x-ray hypersensitivity evident in 129/SvJ and B6 backgrounds with both the mArt-/- and mArt-/+ genotypes. Our results indicate that immunogenic and DNA repair phenotypes associated with Artemis deficiency are significantly altered by genetic background, which has important implications for SCID diagnosis and treatment. Moreover, the B6 mArt-/- mouse is a more accurate model for the human disease, and a more appropriate system for studying human Artemis-deficiency and for developing improved transplant and gene therapy regimens for the treatment of SCID children. PMID:19135937
Measurement and modeling of external radiation during 1984 from LAMPF atmospheric emissions
Bowen, B.M.; Olsen, W.A.; Van Etten, D.; Chen, I.
1986-07-01
An array of three portable, pressurized ionization chambers (PICs) measured short-term external radiation levels produced by air activation products from the Los Alamos Meson Physics Facility (LAMPF). The monitoring was at the closest offsite location, 700-900 m north and northeast of the source, and across a large, deep canyon. A Gaussian-type atmospheric dispersion model, using onsite meteorological and stack release data, was tested during the study. Monitoring results indicate that a persistent, local up-valley wind during the evening and early morning hours is largely responsible for causing the highest radiation levels to the northeast and north-northeast of LAMPF. Comparison of predicted and measured daily external radiation levels indicates a high degree of correlation. The model also gives accurate estimates of measured concentrations over longer periods of time.
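A minimal ground-reflecting Gaussian plume of the kind tested in the study can be sketched as below. The dispersion parameters sigma_y and sigma_z must be supplied for the receptor's downwind distance (e.g. from Pasquill-Gifford stability curves); the complex-terrain and up-valley-wind effects discussed above are outside this simple form.

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration.

    Q        : source strength (e.g. Ci/s for activation products)
    u        : mean wind speed (m/s)
    y, z     : crosswind and vertical receptor coordinates (m)
    H        : effective release height (m)
    sigma_y, sigma_z : dispersion parameters (m) evaluated at the
                       receptor's downwind distance
    """
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # image source
    return Q * lateral * vertical / (2.0 * math.pi * u * sigma_y * sigma_z)
```

External dose at a fence-line monitor then follows from the time-integrated concentration of each activation product, summed over hourly meteorological conditions.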
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-12-01
Development of a physically accurate and computationally efficient photon migration model for turbid media is crucial for optical computed tomography, such as diffuse optical tomography. To this end, this paper constructs a space-time coupling model of the radiative transport equation (RTE) with the photon diffusion equation. In the coupling model, the space-time regime of photon migration is divided into the ballistic and diffusive regimes, with interaction between the two regimes, to improve the accuracy of the results and the efficiency of computation. The coupling model provides an accurate description of photon migration in various turbid media over a wide range of optical properties, and reduces computational loads compared with a full calculation of the RTE.
NASA Astrophysics Data System (ADS)
Stewart, Kristin J.
In this work we developed two new devices that aim to improve the accuracy of relative and reference dosimetry for radiation therapy: a guarded liquid ionization chamber (GLIC) and an electron sealed water (ESW) calorimeter. With the GLIC we aimed to develop a perturbation-free energy-independent detector with high spatial resolution for relative dosimetry. We achieved sufficient stability for short-term measurements using the GLIC-03, which has a sensitive volume of approximately 2 mm3. We evaluated ion recombination in pulsed photon beams using a theoretical model and also determined a new empirical method to correct for relative differences in general recombination which could be used in cases where the theoretical model was not applicable. The energy dependence of the GLIC-03 was 1.1% between 6 and 18 MV photon beams. Measurements in the build-up region of an 18 MV beam indicated that this detector produces minimal perturbation to the radiation field and confirmed the validity of the empirical recombination correction. The ESW calorimeter was designed to directly measure absorbed dose to water in clinical electron beams. We obtained reproducible measurements for 6 to 20 MeV beams. We determined corrections for perturbations to the radiation field caused by the glass calorimeter vessel and for conductive heat transfer due to the dose gradient and non-water materials. The overall uncertainty on the ESW calorimeter dose was 0.5% for the 9 to 20 MeV beams and 1.0% for 6 MeV, showing for the first time that the development of a water-calorimeter-based standard for electron beams over a wide range of energies is feasible. Comparison between measurements with the ESW calorimeter and the NRC photon beam standard calorimeter in a 6 MeV beam revealed a discrepancy of 0.7+/-0.2% which is still under investigation. Absorbed-dose beam quality conversion factors in electron beams were measured using the ESW calorimeter for the Exradin A12 and PTW Roos ionization chambers
NASA Astrophysics Data System (ADS)
Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.
2011-12-01
The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.
Implementing Badhwar-O'Neill Galactic Cosmic Ray Model for the Analysis of Space Radiation Exposure
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; O'Neill, Patrick M.; Slaba, Tony C.
2014-01-01
For the analysis of radiation risks to astronauts and for planning exploratory space missions, an accurate energy spectrum of galactic cosmic radiation (GCR) is necessary. Characterization of the ionizing radiation environment is challenging because the interplanetary plasma and radiation fields are modulated by solar disturbances, and the radiation doses received by astronauts in interplanetary space are likewise influenced. The Badhwar-O'Neill 2011 (BO11) model of the GCR environment, in which solar modulation is represented by the GCR deceleration potential theta, was derived by utilizing GCR measurements from balloons, satellites, and the newer NASA Advanced Composition Explorer (ACE). In the BO11 model, the solar modulation level is derived from the mean international sunspot numbers with a time delay, calibrated against flight instrument measurements to produce a better fit to GCR flux data during solar minima. GCR fluxes provided by the BO11 model were compared with various spacecraft measurements at 1 AU, and further comparisons were made with tissue-equivalent proportional counter measurements in low Earth orbit using the high-charge and energy transport (HZETRN) code and various GCR models. For comparison of absorbed dose and dose equivalent calculations with measurements by the Radiation Assessment Detector (RAD) at Gale crater on Mars, the intensities and energies of GCR entering the heliosphere were calculated using the BO11 model, which accounts for time-dependent attenuation of the local interstellar spectrum of each element. The BO11 model, which emphasizes the last 24 solar minima, showed relatively good agreement with the RAD data for the first 200 sols, but performed less well near the solar maximum of solar cycle 24 due to subtleties in the changing heliospheric conditions. By performing an error analysis of the BO11 model and optimizing to reduce overall uncertainty, the resultant BO13 model
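The chain the abstract describes, a sunspot-driven modulation level attenuating a local interstellar spectrum (LIS), can be sketched with the standard force-field approximation. Both the linear sunspot-to-potential mapping and the power-law LIS below are placeholder assumptions for illustration, not the calibrated BO11/BO13 fits.

```python
import numpy as np

def modulation_potential(ssn, lag_months=12, phi_quiet=400.0, slope=4.0):
    """Illustrative modulation potential (MV) from a time-delayed monthly
    sunspot-number series. The lag and the linear mapping are placeholder
    assumptions, not the calibrated BO11 relation."""
    ssn = np.asarray(ssn, dtype=float)
    delayed = np.roll(ssn, lag_months)
    delayed[:lag_months] = ssn[0]          # pad the start of the record
    return phi_quiet + slope * delayed

def force_field_flux(E_kin, phi, E0=938.0, lis=None):
    """Force-field approximation for protons (Z = A = 1).

    E_kin : kinetic energy (MeV); phi : modulation potential (MV ~ MeV
    for protons); E0 : proton rest energy (MeV).
    The default power-law LIS is a toy placeholder, not the BO11 LIS.
    """
    if lis is None:
        lis = lambda E: 1.0e7 * E**-2.7
    E_is = E_kin + phi                      # energy back in interstellar space
    return lis(E_is) * E_kin * (E_kin + 2.0 * E0) / (E_is * (E_is + 2.0 * E0))
```

Higher solar activity raises phi, which shifts the spectrum and suppresses the low-energy flux reaching 1 AU; that suppression is what the deceleration potential theta encodes in the BO11 framework.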
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. The results of model-model comparisons are also presented from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy of our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
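The substrate/active-layer bookkeeping and the balancedness property discussed above can be illustrated with a toy update for the active-layer grainsize fractions (two fractions, fixed active-layer thickness). This sketch is not the path-conservative scheme of the paper; it only shows the principle that, under degradation, the active layer must entrain sediment at the substrate composition so that fractions stay in [0, 1].

```python
import numpy as np

def hirano_step(Fa, Fsub, dz, La):
    """One bookkeeping update of active-layer fractions Fa.

    Fa   : active-layer grainsize fractions (sum to 1)
    Fsub : substrate fractions at the interface (sum to 1)
    dz   : bed-level change over the step (dz < 0: degradation), |dz| < La
    La   : active-layer thickness
    """
    if dz < 0.0:
        # degradation: substrate material at composition Fsub is entrained
        Fa = Fa + (-dz / La) * (Fsub - Fa)
    # aggradation (dz >= 0) deposits at the active-layer composition itself,
    # which leaves Fa unchanged in this minimal sketch
    Fa = np.clip(Fa, 0.0, 1.0)
    return Fa / Fa.sum()

# Sustained degradation drives the active layer toward the substrate mix,
# without ever producing fractions outside [0, 1].
Fa, Fsub = np.array([0.5, 0.5]), np.array([0.2, 0.8])
for _ in range(100):
    Fa = hirano_step(Fa, Fsub, -0.1, 1.0)
```

Using any composition other than Fsub for the entrained flux is what produces the non-physical oscillations the paper's balancedness condition is designed to prevent.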
An accurate locally active memristor model for S-type negative differential resistance in NbOx
NASA Astrophysics Data System (ADS)
Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Vandenberghe, Ken; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R.
2016-01-01
A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or "S-type," negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a "selector," is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.
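The thermal-feedback origin of the S-type NDR can be reproduced with a minimal compact model: a thermally activated resistance heated by its own Joule dissipation. All parameter values below are illustrative placeholders, not the fitted NbOx values of the paper; sweeping the drive current then shows the voltage fold-back that defines S-type NDR.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def iv_point(I, R0=100.0, Ea=0.25, R_th=2e5, T_amb=300.0,
             damping=0.2, n_iter=300):
    """Steady state of a thermally activated element at drive current I.

    Solves the self-consistent balance (damped fixed-point iteration):
        T = T_amb + R_th * I**2 * R(T),   R(T) = R0 * exp(Ea / (K_B * T))
    All parameters are illustrative placeholders, not fitted NbOx data.
    Returns (V, T).
    """
    T = T_amb
    for _ in range(n_iter):
        R = R0 * math.exp(Ea / (K_B * T))
        T_new = T_amb + R_th * I * I * R
        T = (1.0 - damping) * T + damping * T_new
    R = R0 * math.exp(Ea / (K_B * T))
    return I * R, T

# Sweeping current reveals the S-shape: V first rises, then folds back (NDR)
# as self-heating collapses the activated resistance.
for I in (1e-5, 3e-5, 1e-4, 3e-4):
    V, T = iv_point(I)
    print(f"I = {I:.0e} A  ->  V = {V:.2f} V, T = {T:.0f} K")
```

Current control is the natural sweep variable here because, in the fold-back region, a voltage-controlled solve has multiple solutions (the hallmark of S-type behaviour).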
NASA Astrophysics Data System (ADS)
Reppert, Mike; Naibo, Virginia; Jankowiak, Ryszard
2010-07-01
Accurate lineshape functions for modeling fluorescence line narrowing (FLN) difference spectra (ΔFLN spectra) in the low-fluence limit are derived and examined in terms of the physical interpretation of various contributions, including photoproduct absorption and emission. While in agreement with the earlier results of Jaaniso [Proc. Est. Acad. Sci., Phys., Math. 34, 277 (1985)] and Fünfschilling et al. [J. Lumin. 36, 85 (1986)], the derived formulas differ substantially from functions used recently [e.g., M. Rätsep et al., Chem. Phys. Lett. 479, 140 (2009)] to model ΔFLN spectra. In contrast to traditional FLN spectra, it is demonstrated that for most physically reasonable parameters, the ΔFLN spectrum reduces simply to the single-site fluorescence lineshape function. These results imply that direct measurement of a bulk-averaged single-site fluorescence lineshape function can be accomplished with no complicated extraction process or knowledge of any additional parameters such as site distribution function shape and width. We argue that previous analysis of ΔFLN spectra obtained for many photosynthetic complexes led to strong artificial lowering of apparent electron-phonon coupling strength, especially on the high-energy side of the pigment site distribution function.
Filizola, Marta
2009-01-01
For years conventional drug design at G-protein coupled receptors (GPCRs) has mainly focused on the inhibition of a single receptor at a usually well-defined ligand-binding site. The recent discovery of more and more physiologically relevant GPCR dimers/oligomers suggests that selectively targeting these complexes or designing small molecules that inhibit receptor-receptor interactions might provide new opportunities for novel drug discovery. To uncover the fundamental mechanisms and dynamics governing GPCR dimerization/oligomerization, it is crucial to understand the dynamic process of receptor-receptor association, and to identify regions that are suitable for selective drug binding. This minireview highlights current progress in the development of increasingly accurate dynamic molecular models of GPCR oligomers based on structural, biochemical, and biophysical information that has recently appeared in the literature. In view of this new information, there has never been a more exciting time for computational research into GPCRs than at present. Information-driven modern molecular models of GPCR complexes are expected to efficiently guide the rational design of GPCR oligomer-specific drugs, possibly allowing researchers to reach for the high-hanging fruits in GPCR drug discovery, i.e. more potent and selective drugs for efficient therapeutic interventions. PMID:19465029
Długosz, Maciej; Antosiewicz, Jan M
2015-07-01
Proper treatment of hydrodynamic interactions is of importance in evaluation of rigid-body mobility tensors of biomolecules in Stokes flow and in simulations of their folding and solution conformation, as well as in simulations of the translational and rotational dynamics of either flexible or rigid molecules in biological systems at low Reynolds numbers. With macromolecules conveniently modeled in calculations or in dynamic simulations as ensembles of spherical frictional elements, various approximations to hydrodynamic interactions, such as the two-body, far-field Rotne-Prager approach, are commonly used, either without concern or as a compromise between accuracy and numerical complexity. Strikingly, even though the analytical Rotne-Prager approach fails to describe (in both the qualitative and quantitative sense) mobilities in the simplest system consisting of two spheres when the distance between their surfaces is of the order of their size, it is commonly applied to model hydrodynamic effects in macromolecular systems. Here, we closely investigate hydrodynamic effects in two- and three-body systems, consisting of bead-shell molecular models, using either the analytical Rotne-Prager approach or an accurate numerical scheme that correctly accounts for the many-body character of hydrodynamic interactions and their short-range behavior. We analyze mobilities and the translational and rotational velocities of bodies resulting from direct forces acting on them. We show that, with a sufficient number of frictional elements in hydrodynamic models of interacting bodies, the far-field approximation is able to provide a description of hydrodynamic effects that is in reasonable qualitative as well as quantitative agreement with the description resulting from the application of the virtually exact numerical scheme, even for small separations between bodies. PMID:26068580
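The two-body, far-field Rotne-Prager approximation discussed in this abstract has a simple closed form; the sketch below is a minimal NumPy rendering of it for two equal, non-overlapping spheres (the function names and the equal-radius restriction are our simplifications, not the authors' code):

```python
import numpy as np

def self_mobility(a, eta):
    """Stokes translational mobility of an isolated sphere of radius a
    in a fluid of viscosity eta: I / (6*pi*eta*a)."""
    return np.eye(3) / (6.0 * np.pi * eta * a)

def rpy_cross_mobility(r_vec, a, eta):
    """Rotne-Prager(-Yamakawa) cross-mobility block coupling two equal,
    non-overlapping spheres (|r_vec| >= 2a). Illustrative sketch of the
    far-field pairwise approximation the paper examines."""
    r = np.linalg.norm(r_vec)
    rr = np.outer(r_vec, r_vec) / r**2   # unit-vector dyad r_hat r_hat
    I = np.eye(3)
    pref = 1.0 / (8.0 * np.pi * eta * r)
    return pref * ((1.0 + 2.0 * a**2 / (3.0 * r**2)) * I
                   + (1.0 - 2.0 * a**2 / r**2) * rr)
```

At large separations the cross block reduces to the Oseen tensor, (I + r̂r̂)/(8πηr); at close separations it remains positive definite, which is the pairwise approximation's main virtue even where (as the paper shows) its accuracy degrades.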
NASA Astrophysics Data System (ADS)
Gritsyk, P. A.; Somov, B. V.
2016-08-01
The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is lower than the observed one by several times. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ˜5 × 1010 erg cm-2 s-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ˜5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.
Radiation exposure modeling and project schedule visualization
Jaquish, W.R.; Enderlin, V.R.
1995-10-01
This paper discusses two applications using IGRIP (Interactive Graphical Robot Instruction Program) to assist environmental remediation efforts at the Department of Energy (DOE) Hanford Site. In the first application, IGRIP is used to calculate the estimated radiation exposure to workers conducting tasks in radiation environments. In the second, IGRIP is used as a configuration management tool to detect interferences between equipment and personnel work areas for multiple projects occurring simultaneously in one area. Both of these applications have the capability to reduce environmental remediation costs by reducing personnel radiation exposure and by providing a method to effectively manage multiple projects in a single facility.
Linear radiation model for phase of thermal emission spectroscopy
NASA Astrophysics Data System (ADS)
Bennett, Ted D.; Yu, Fengling
2005-11-01
A linear radiation model is developed that overcomes the analytical complexity in phase of thermal emission spectroscopy. It is shown that the linear radiation model can result in a simple algebraic relation between the phase of thermal emission and four coating properties, enabling these properties to be determined by nonlinear regression analysis of experimental measurements. Suitability of the linear radiation model to various measurement conditions is explored, and the model is applied to the phase of thermal emission measurements performed on a thermal barrier coating.
Guidelines for effective radiation transport for cable SGEMP modeling.
Drumm, Clifton Russell; Fan, Wesley C.; Turner, C. David
2014-07-01
This report describes experiences gained in performing radiation transport computations with the SCEPTRE radiation transport code for System Generated ElectroMagnetic Pulse (SGEMP) applications. SCEPTRE is a complex code requiring a fairly sophisticated user to run it effectively, so this report provides guidance for analysts interested in performing these types of calculations. One challenge in modeling coupled photon/electron transport for SGEMP is to provide a spatial mesh that is sufficiently resolved to accurately model surface charge emission and charge deposition near material interfaces. The method most commonly used to date to compute cable SGEMP typically requires a sub-micron mesh size near material interfaces, which may be difficult for meshing software to provide for complex geometries. We present here an alternative method for computing cable SGEMP that appears to substantially relax this requirement. The report also investigates the effect of refining the energy mesh and increasing the order of the angular approximation to provide some guidance on determining reasonable parameters for the energy/angular approximation needed for x-ray environments. Conclusions for γ-ray environments may be quite different and will be treated in a subsequent report. In the course of the energy-mesh refinement studies, a bug in the cross-section generation software was discovered that may cause underprediction of the result by as much as an order of magnitude for the test problem studied here, when the electron energy group widths are much smaller than those for the photons. Results will be presented and compared using cross sections generated before and after the fix. We also describe adjoint modeling, which provides sensitivity of the total charge drive to the source energy and angle of incidence, which is quite useful for comparing the effect of changing the source environment and for determining the most stressing angle of incidence and
Sundaramurthy, Aravind; Alai, Aaron; Ganpule, Shailesh; Holmberg, Aaron; Plougonven, Erwan; Chandra, Namas
2012-09-01
Blast waves generated by improvised explosive devices (IEDs) cause traumatic brain injury (TBI) in soldiers and civilians. In vivo animal models that use shock tubes are extensively used in laboratories to simulate field conditions, to identify mechanisms of injury, and to develop injury thresholds. In this article, we place rats in different locations along the length of the shock tube (i.e., inside, outside, and near the exit), to examine the role of animal placement location (APL) in the biomechanical load experienced by the animal. We found that the biomechanical load on the brain and internal organs in the thoracic cavity (lungs and heart) varied significantly depending on the APL. When the specimen is positioned outside, organs in the thoracic cavity experience a higher pressure for a longer duration, in contrast to APL inside the shock tube. This in turn will possibly alter the injury type, severity, and lethality. We found that the optimal APL is where the Friedlander waveform is first formed inside the shock tube. Once the optimal APL was determined, the effect of the incident blast intensity on the surface and intracranial pressure was measured and analyzed. Noticeably, surface and intracranial pressure increases linearly with the incident peak overpressures, though surface pressures are significantly higher than the other two. Further, we developed and validated an anatomically accurate finite element model of the rat head. With this model, we determined that the main pathway of pressure transmission to the brain was through the skull and not through the snout; however, the snout plays a secondary role in diffracting the incoming blast wave towards the skull. PMID:22620716
Canopy radiation transmission for an energy balance snowmelt model
NASA Astrophysics Data System (ADS)
Mahat, Vinod; Tarboton, David G.
2012-01-01
To better estimate the radiation energy within and beneath the forest canopy for energy balance snowmelt models, a two-stream radiation transfer model that explicitly accounts for canopy scattering, absorption, and reflection was developed. Upward and downward radiation streams, represented by two differential equations using a single-path assumption, were solved analytically to approximate the radiation transmitted through or reflected by the canopy with multiple scattering. This approximation results in an exponential decrease of radiation intensity with canopy depth, similar to Beer's law for a deep canopy. The solution for a finite canopy is obtained by applying recursive superposition of this two-stream, single-path deep-canopy solution. This solution enhances the capability for modeling energy balance processes of the snowpack in forested environments, which is important when quantifying the sensitivity of hydrologic response to input changes using physically based modeling. The radiation model was included in a distributed energy balance snowmelt model and the results compared with observations made in three different vegetation classes (open, coniferous forest, deciduous forest) at a forest study area in the Rocky Mountains in Utah, USA. The model was able to capture the sensitivity of beneath-canopy net radiation and snowmelt to vegetation class consistent with observations and achieved satisfactory predictions of snowmelt from forested areas from parsimonious, practically available information. The model is simple enough to be applied in a spatially distributed way, while still relatively rigorously and explicitly representing variability in canopy properties in the simulation of snowmelt over a watershed.
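The exponential, Beer's-law-like decrease of radiation with canopy depth mentioned in this abstract can be sketched in one line. This is a single-path, direct-beam simplification with assumed parameter names; the paper's two-stream solution additionally treats scattering and reflection:

```python
import math

def canopy_transmissivity(lai, k=0.5, cos_zenith=1.0):
    """Beer's-law direct-beam transmissivity through a canopy with
    leaf area index `lai`, extinction coefficient `k`, and solar
    zenith angle entering via `cos_zenith`: T = exp(-k*L/cos(theta)).
    Illustrative sketch only; parameter values are assumptions."""
    return math.exp(-k * lai / cos_zenith)
```

For example, a deeper canopy (larger leaf area index) always transmits less: `canopy_transmissivity(4.0) < canopy_transmissivity(2.0)`.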
Modeling Clinical Radiation Responses in the IMRT Era
NASA Astrophysics Data System (ADS)
Schwartz, J. L.; Murray, D.; Stewart, R. D.; Phillips, M. H.
2014-03-01
The purpose of this review is to highlight the critical issues of radiobiological models, particularly as they apply to clinical radiation therapy. Developing models of radiation responses has a long history that continues to the present time. Many different models have been proposed, but in the field of radiation oncology, the linear-quadratic (LQ) model has had the most impact on the design of treatment protocols. Questions have been raised as to the value of the LQ model, given that the biological assumptions underlying it have been challenged by molecular analyses of cell and tissue responses to radiation. There are also questions as to the use of the LQ model for hypofractionation, especially for high-dose treatments using a single fraction. While the LQ model might overestimate the effects of large radiation dose fractions, there is insufficient information to fully justify the adoption of alternative models. However, there is increasing evidence in the literature that non-targeted and other indirect effects of radiation sometimes produce substantial deviations from LQ-like dose-response curves. As preclinical and clinical hypofractionation studies accumulate, new or refined dose-response models that incorporate high-dose/fraction non-targeted and indirect effects may be required, but for now the LQ model remains a simple, useful tool to guide the design of treatment protocols.
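For reference, the LQ model itself is compact. A minimal sketch of the cell-survival fraction and the associated biologically effective dose (BED) used when comparing fractionation schemes (parameter values in the example are illustrative, not clinical recommendations):

```python
import math

def lq_survival(dose, alpha, beta):
    """Linear-quadratic cell-survival fraction:
    S(D) = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose + beta * dose**2))

def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose for n fractions of d Gy:
    BED = n*d*(1 + d/(alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)
```

With an assumed alpha/beta ratio of 10 Gy, a conventional 30 x 2 Gy course gives BED = 60 x (1 + 2/10) = 72 Gy.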
A radiation-derived temperature-index snow routine for the GSSHA hydrologic model
NASA Astrophysics Data System (ADS)
Follum, Michael L.; Downer, Charles W.; Niemann, Jeffrey D.; Roylance, Spencer M.; Vuyovich, Carrie M.
2015-10-01
Accurate estimation of snowpack is vital in many parts of the world for both water management and flood prediction. Temperature-index (TI) snowmelt models are commonly used for this purpose due to their simplicity and low data requirements. Although TI models work well within lumped watershed models, their reliance on air temperature (and potentially an assumed lapse rate) as the only external driver of snowmelt limits their ability to accurately simulate the spatial distribution of snowpack and thus the timing of snowmelt. This limitation significantly reduces the utility of the TI approach in distributed hydrologic models because spatial variability within the watershed, including snowpack and snowmelt, is usually the primary reason for selecting a distributed model. In this paper, a new radiation-derived temperature index (RTI) approach is presented that uses a spatially-varying proxy temperature in place of air temperature within the TI model of the fully-distributed Gridded Surface Subsurface Hydrologic Analysis (GSSHA) watershed model. The RTI is derived from a radiation balance and includes spatial heterogeneity in both shortwave and longwave radiation. Thus, the RTI accounts for more local variation in the available energy than air temperature alone. The RTI model in GSSHA is tested at the Senator Beck basin in southwestern Colorado where observations for snow water equivalent (SWE) and LandSat-derived images of snow cover area (SCA) are available. The TI and RTI approaches produce similar SWE estimates at two non-forested and relatively flat sites with SWE observations. However, the two models can produce very different SWE values at sites with forests or topographic slopes, which leads to significant differences in the basin-wide SWE values of the two models. Furthermore, the RTI model provides better basin-wide SCA estimates than the TI model in 75% of the LandSat images analyzed.
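The classic TI (degree-day) melt rule that both models in this abstract build on can be sketched in a few lines. The melt factor and threshold below are illustrative; in the paper's RTI variant, the air temperature is replaced by a radiation-derived proxy temperature:

```python
def ti_melt(temp_c, melt_factor=3.0, base_temp_c=0.0):
    """Temperature-index (degree-day) melt rate, in mm/day:
    M = C_m * max(T - T_b, 0), where C_m is the melt factor
    (mm/day/degC) and T_b the base temperature. The values of
    C_m and T_b here are illustrative assumptions."""
    return melt_factor * max(temp_c - base_temp_c, 0.0)
```

Below the base temperature no melt occurs; above it, melt scales linearly with the temperature excess, which is exactly why a spatially uniform air temperature limits the distributed skill that the RTI proxy temperature restores.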
NASA Astrophysics Data System (ADS)
Tripathy, Madhumita; Raman, Mini; Chauhan, Prakash
2015-10-01
Photosynthetically available radiation (PAR) is an important variable for radiation budget and for marine and terrestrial ecosystem models. PAR was estimated from OCEANSAT-1 Ocean Color Monitor (OCM) data using two different methods under both clear and cloudy sky conditions. In the first approach, aerosol optical depth (AOD) and cloud optical depth (COD) were estimated from OCEANSAT-1 OCM TOA (top-of-atmosphere) radiance data on a pixel-by-pixel basis, and PAR was estimated from extraterrestrial solar flux for fifteen spectral bands using a radiative transfer model. The second approach used TOA radiances measured by OCM in the PAR spectral range to compute PAR; this approach also included surface albedo and cloud albedo as inputs. Comparison of OCEANSAT-1 OCM PAR at noon with in situ measured PAR shows that the root mean square difference was 5.82% for method I and 7.24% for method II on daily time scales. Results indicate that the methodology adopted to estimate PAR from OCEANSAT-1 OCM can produce reasonably accurate PAR estimates over the tropical Indian Ocean region. This approach can be extended to OCEANSAT-2 OCM and future OCEANSAT-3 OCM data for operational estimation of PAR for regional marine ecosystem applications.
NASA Astrophysics Data System (ADS)
Jolivet, L.; Cohen, M.; Ruas, A.
2015-08-01
Landscape influences fauna movement at different levels, from habitat selection to the choice of movement direction. Our goal is to provide a development frame in order to test simulation functions for animal movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and those that hinder them. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, the literature, and observation datasets. Second, we analysed descriptions of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and the individual's behaviour. We tested two functions that consider space differently: one takes into account the geometry and the types of landscape elements, while the other, a cost function, sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry-accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.
Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.
2008-07-01
Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
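A hedged sketch of the kind of sequence-derived descriptors such an SVM consumes, covering composition, charge, and hydrophobicity as named in the abstract. These are generic illustrative properties (Kyte-Doolittle hydropathy, nominal charge counting), not the paper's exact 35- or 12-variable descriptor sets:

```python
# Kyte-Doolittle hydropathy scale (standard published values)
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def peptide_features(seq):
    """Simple descriptor vector for a peptide: length, mean hydropathy,
    nominal charge (basic K/R/H minus acidic D/E residue counts), and
    fractional amino-acid composition. An illustrative stand-in for the
    descriptor space fed to an SVM classifier of proteotypic peptides."""
    n = len(seq)
    comp = {aa: seq.count(aa) / n for aa in KD}   # fractions sum to 1
    hydropathy = sum(KD[aa] for aa in seq) / n
    charge = (sum(seq.count(aa) for aa in "KRH")
              - sum(seq.count(aa) for aa in "DE"))
    return {"length": n, "hydropathy": hydropathy, "charge": charge, **comp}
```

Vectors of this form, one per peptide, would then be used to train and score a binary SVM (observed vs. unobserved in the AMT database).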
Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A
2010-01-01
Studies in the field of wound healing have utilized a variety of different housekeeping genes for reverse transcription-quantitative polymerase chain reaction (RT-qPCR) analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expression in wound healing, we examined all articles published in Wound Repair and Regeneration that cited RT-qPCR during the period of January/February 2008 until July/August 2009. We determined that ACTβ, GAPDH, 18S, and β2M were the most frequently used housekeeping genes in human, mouse, and pig studies. We also investigated nine commonly used housekeeping genes that are not generally used in wound healing models: GUS, TBP, RPLP2, ATP5B, SDHA, UBC, CANX, CYC1, and YWHAZ. We observed that wounded and unwounded tissues have contrasting housekeeping gene expression stability. The results demonstrate that commonly used housekeeping genes must be validated as accurate normalizing genes for each individual experimental condition. PMID:20731795
Econometric model for age- and population-dependent radiation exposures
Sandquist, G.M.; Slaughter, D.M.; Rogers, V.C.
1991-01-01
The economic impact associated with ionizing radiation exposures in a given human population depends on numerous factors, including the individual's mean economic status as a function of age, the age distribution of the population, the future life expectancy at each age, and the latency period for the occurrence of radiation-induced health effects. A simple mathematical model has been developed that provides an analytical methodology for estimating the societal econometrics associated with radiation exposures, so that radiation-induced health effects can be assessed and compared for economic evaluation.
Treatment of cloud radiative effects in general circulation models
Wang, W.C.; Dudek, M.P.; Liang, X.Z.; Ding, M.
1996-04-01
We participate in the Atmospheric Radiation Measurement (ARM) program with two objectives: (1) to improve the general circulation model (GCM) cloud/radiation treatment, with a focus on cloud vertical overlap and layer cloud optical properties, and (2) to study the effects of cloud/radiation-climate interaction on GCM climate simulations. This report summarizes the project progress since the Fourth ARM Science Team meeting, February 28-March 4, 1994, in Charleston, South Carolina.
Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot
2016-01-01
Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
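The role the AMG preconditioner plays inside the iterative Krylov solver can be illustrated with a generic preconditioned conjugate-gradient loop. In the sketch below a plain Jacobi (inverse-diagonal) preconditioner stands in for algebraic multigrid, which the paper adopts for its far better scalability; the solver structure is the same either way:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric positive
    definite matrix A. `M_inv_diag` applies the preconditioner: here
    just the inverse diagonal of A (Jacobi), a toy stand-in for AMG."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()                  # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

A better preconditioner (AMG instead of Jacobi) reduces the iteration count dramatically for elasticity-type systems, which is the scalability bottleneck the study targets.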
Survey of current situation in radiation belt modeling
NASA Technical Reports Server (NTRS)
Fung, Shing F.
2004-01-01
The study of Earth's radiation belts is one of the oldest subjects in space physics. Despite the tremendous progress made in the last four decades, we still lack a complete understanding of the radiation belts in terms of their configurations, dynamics, and detailed physical accounts of their sources and sinks. The static nature of early empirical trapped radiation models, for example the NASA AP-8 and AE-8 models, renders those models inappropriate for predicting short-term radiation belt behaviors associated with geomagnetic storms and substorms. Due to incomplete data coverage, these models are also inaccurate at low altitudes (e.g., <1000 km) where many robotic and human space flights occur. The availability of radiation data from modern space missions and advancements in physical modeling and data management techniques have now allowed the development of new empirical and physical radiation belt models. In this paper, we will review the status of modern radiation belt modeling. Published by Elsevier Ltd on behalf of COSPAR.
Empirical and theoretical models of terrestrial trapped radiation
Panasyuk, M.I.
1996-07-01
A survey of current Skobeltsyn Institute of Nuclear Physics, Moscow State University (INP MSU) empirical and theoretical models of particles (electrons, protons, and heavier ions) of the Earth's radiation belts developed to date is presented. Results of intercomparison of the different models as well as comparison with experimental data are reported. Aspects of further development of radiation condition modelling in near-Earth space are discussed. © 1996 American Institute of Physics.
1994-12-31
We are using a hierarchy of numerical models of cirrus and stratus clouds and radiative transfer to improve the reliability of general circulation models. Our detailed cloud microphysical model includes all of the physical processes believed to control the life cycle of liquid and ice clouds in the troposphere. In our one-dimensional cirrus studies, we find that the ice crystal number and size in cirrus clouds are not very sensitive to the number of condensation nuclei present. We have compared our three-dimensional mesoscale simulations of cirrus clouds with radar, lidar, satellite, and other observations of water vapor and cloud fields and find that the model accurately predicts the characteristics of a cirrus cloud system. The model results reproduce several features detected by remote sensing (lidar and radar) measurements, including the appearance of the high cirrus cloud at about 15 UTC and the thickening of the cloud at 20 UTC. We have developed a new parameterization for the production of ice crystals based on the detailed one-dimensional cloud model, and are presently testing the parameterization in three-dimensional simulations of the FIRE-II November 26 case study. We have analyzed NWS radiosonde humidity data from FIRE and ARM and found errors, biases, and uncertainties in the conversion of the sensed resistance to humidity.
Modeling of Plasma Conditions and Spectral Properties of Radiation-Heated Matter
NASA Astrophysics Data System (ADS)
Golovkin, Igor; Macfarlane, Joseph; Golovkina, Viktoriya; Nagayama, Taisuke; Bailey, James; Rochau, Gregory
2015-11-01
Opacity experiments at the Z facility provide important data for benchmarking opacity models and atomic data. The ability to accurately interpret the data obtained in these experiments increases confidence in opacity calculations for a variety of astrophysical and laboratory problems. In the experiments, the Z dynamic hohlraum radiation source is used to both heat and backlight material samples. We will present the latest improvements to the simulation codes developed at Prism and how they affect the analysis of the experimental data. In particular, we will discuss the angle-dependent radiation boundary condition recently implemented in the radiation-hydrodynamics code HELIOS. This improved modeling capability can potentially be important for studying the behavior of plasmas driven by radiation sources that cannot be adequately described as either purely directional or Lambertian. We will also discuss atomic kinetics in radiatively heated samples and the possibility of its deviation from LTE. The effect of such deviation on both the hydrodynamic evolution and the radiative properties of these plasmas will be addressed.
NASA Space Radiation Program Integrative Risk Model Toolkit
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Hu, Shaowen; Plante, Ianik; Ponomarev, Artem L.; Sandridge, Chris
2015-01-01
NASA Space Radiation Program Element scientists have been actively involved in development of an integrative risk model toolkit that includes models for acute radiation risk and organ dose projection (ARRBOD), NASA space radiation cancer risk projection (NSCR), hemocyte dose estimation (HemoDose), the GCR event-based risk model code (GERMcode), relativistic ion tracks (RITRACKS), NASA radiation track image (NASARTI), and the On-Line Tool for the Assessment of Radiation in Space (OLTARIS). This session will introduce the components of the risk toolkit with the opportunity for hands-on demonstrations. Brief descriptions of each tool are: ARRBOD, for organ dose projection and acute radiation risk calculation from exposure to a solar particle event; NSCR, for projection of cancer risk from exposure to space radiation; HemoDose, for retrospective dose estimation using multi-type blood cell counts; GERMcode, for basic physical and biophysical properties of an ion beam, and biophysical and radiobiological properties of beam transport to the target in the NASA Space Radiation Laboratory beam line; RITRACKS, for simulation of heavy ion and delta-ray track structure, radiation chemistry, DNA structure, and DNA damage at the molecular scale; NASARTI, for modeling the effects of space radiation on human cells and tissue by incorporating a physical model of tracks, cell nucleus, and DNA damage foci with image segmentation for automated counting; and OLTARIS, an integrated tool set utilizing HZETRN (High Charge and Energy Transport) intended to help scientists and engineers study the effects of space radiation on shielding materials, electronics, and biological systems.
Mui, K W; Wong, L T; Chung, L Y
2009-11-01
Atmospheric visibility impairment has gained increasing concern, as it is associated with the presence of a number of aerosols as well as common air pollutants and produces unfavorable conditions for observation, dispersion, and transportation. This study analyzed atmospheric visibility data measured in urban and suburban Hong Kong (two selected stations) with respect to time-matched mass concentrations of common air pollutants, including nitrogen dioxide (NO2), nitrogen monoxide (NO), respirable suspended particulates (PM10), sulfur dioxide (SO2), and carbon monoxide (CO), and meteorological parameters including air temperature, relative humidity, and wind speed. No significant difference in atmospheric visibility was found between the two measurement locations (p ≥ 0.6, t test), and good atmospheric visibility was observed more frequently in summer and autumn than in winter and spring (p < 0.01, t test). It was also found that atmospheric visibility increased with temperature but decreased with the concentrations of SO2, CO, PM10, NO, and NO2. The results showed that atmospheric visibility was season dependent and had significant correlations with temperature, the mass concentrations of PM10 and NO2, and the air pollution index API (correlation coefficients |R| ≥ 0.7, p ≤ 0.0001, t test). Mathematical expressions catering to the seasonal variations of atmospheric visibility were thus proposed. By comparison, the proposed visibility prediction models were more accurate than some existing regional models. In addition to improving visibility prediction accuracy, this study should be useful for understanding the context of low atmospheric visibility, exploring possible remedial measures, and evaluating the impact of air pollution and atmospheric visibility impairment in this region. PMID:18951139
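The season-dependent expressions described above amount to regression fits of visibility against temperature and pollutant concentrations. A minimal least-squares sketch is shown below; the data, coefficient values, and variable choices are synthetic assumptions for illustration only, not the paper's fitted expressions.

```python
import numpy as np

# Hypothetical stand-in data: visibility in km, temperature in deg C,
# PM10 and NO2 in ug/m^3. The assumed "true" coefficients mimic the signs
# reported above: visibility rises with temperature, falls with pollutants.
rng = np.random.default_rng(42)
n = 100
temp = rng.uniform(10, 35, n)
pm10 = rng.uniform(20, 150, n)
no2 = rng.uniform(10, 120, n)
visibility = 20 + 0.2 * temp - 0.08 * pm10 - 0.05 * no2 + rng.normal(0, 0.5, n)

# Ordinary least squares for: visibility = b0 + b1*T + b2*PM10 + b3*NO2
X = np.column_stack([np.ones(n), temp, pm10, no2])
coef, *_ = np.linalg.lstsq(X, visibility, rcond=None)
# coef[1] > 0 (temperature), coef[2] < 0 (PM10), coef[3] < 0 (NO2)
```

A seasonal model of the kind proposed in the study would fit a separate coefficient set for each season rather than one pooled regression.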
NASA Technical Reports Server (NTRS)
Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen
2005-01-01
Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared Sounder (CrIS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and the Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, super fast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the principal component (PC) scores of these quantities, which leads to significant savings in computational time. The parameterization of the PCRTM is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for super fast one-dimensional physical retrievals and for large-volume radiance data assimilation in Numerical Weather Prediction (NWP). The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM performs monochromatic radiative transfer calculations and is able to include multiple scattering to account for clouds and aerosols.
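The core idea, compressing a high-resolution spectrum into a handful of PC scores, can be illustrated with plain numpy. This is only a sketch of PCA compression on synthetic spectra, not the actual PCRTM parameterization; every array, size, and mode shape below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spectra, n_channels, n_modes = 200, 1000, 5
x = np.linspace(0.0, 1.0, n_channels)
# Hypothetical training spectra: random mixtures of a few smooth modes
# (stand-ins for the physical variability of atmospheric spectra).
modes = np.array([np.sin(2 * np.pi * (k + 1) * x) for k in range(n_modes)])
weights = rng.standard_normal((n_spectra, n_modes))
training = weights @ modes + 0.001 * rng.standard_normal((n_spectra, n_channels))

# Build the PC basis from the training set.
mean = training.mean(axis=0)
_, _, vt = np.linalg.svd(training - mean, full_matrices=False)
n_pcs = 10                      # a handful of scores replace 1000 channels
basis = vt[:n_pcs]              # shape (n_pcs, n_channels)

def compress(spectrum):
    """Spectrum -> PC scores (the quantity a PCRTM-like model predicts)."""
    return basis @ (spectrum - mean)

def reconstruct(scores):
    """PC scores -> full channel spectrum."""
    return mean + basis.T @ scores

spectrum = training[0]
scores = compress(spectrum)
rms = np.sqrt(np.mean((reconstruct(scores) - spectrum) ** 2))
# rms sits at the noise floor: 10 scores carry nearly all the information
```

Predicting 10 scores instead of 1000 channel radiances is where the claimed computational savings come from.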
Models of Jovian decametric radiation. [astronomical models of decametric waves
NASA Technical Reports Server (NTRS)
Smith, R. A.
1975-01-01
A critical review is presented of theoretical models of Jovian decametric radiation, with particular emphasis on the Io-modulated emission. The problem is divided into three broad aspects: (1) the mechanism coupling Io's orbital motion to the inner exosphere, (2) the consequent instability mechanism by which electromagnetic waves are amplified, and (3) the subsequent propagation of the waves in the source region and the Jovian plasmasphere. At present there exists no comprehensive theory that treats all of these aspects quantitatively within a single framework. Acceleration of particles by plasma sheaths near Io is proposed as an explanation for the coupling mechanism, while most of the properties of the emission may be explained in the context of cyclotron instability of a highly anisotropic distribution of streaming particles.
Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.
2016-01-01
Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT, though, MR signal intensities do not provide a direct correlate for PET photon attenuation correction (AC), and inaccurate radiotracer standardized uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone compartment. Methods: We directly compared SUV estimation for MR-based AC methods against reference CT AC in 16 patients undergoing same-day, single 18FDG dose PET/CT and PET/MR for suspected neurodegeneration. Three Dixon-based MR AC methods were compared to CT: standard Dixon 4-compartment segmentation alone, Dixon with a superimposed model-based bone compartment, and Dixon with a superimposed bone compartment and a linear attenuation correction optimized specifically for brain tissue. The brain was segmented using a 3D T1-weighted volumetric MR sequence, and SUV estimates were compared to CT AC for the whole image, the whole brain, and 91 FreeSurfer-based regions of interest. Results: Modifying the linear AC value specifically for brain and superimposing a model-based bone compartment reduced the whole-brain SUV estimation bias of Dixon-based PET/MR AC by 95% compared to reference CT AC (P < 0.05), resulting in a residual -0.3% whole-brain mean SUV bias. Further, regional analysis demonstrated only 3 frontal lobe regions with SUV estimation bias of 5% or greater (P < 0.05). These biases appeared to correlate with high individual variability in frontal bone thickness and pneumatization. Conclusion: Bone compartment and linear AC modifications result in a highly accurate MR AC method in subjects with suspected neurodegeneration. This prototype MR AC solution appears equivalent to other recently proposed solutions, and does not require additional MR sequences and scan time. These
Freezable Radiator Model Correlation Improvements and Fluids Study
NASA Technical Reports Server (NTRS)
Lillibridge, Sean; Navarro, Moses
2011-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator effectively scales its total heat rejection as a function of the thermal environment and the flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements, which result from both the spacecraft's surroundings and different heat rejection requirements during different mission phases. However, freezing and thawing (recovering) a radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. To attempt to improve this, tests were conducted in 2009 to determine whether the behavior of a simple stagnating radiator could be predicted or emulated in a Thermal Desktop (trademark) numerical model. A 50-50 mixture of DowFrost HD and water was used as the working fluid. Efforts to scale this model to a full-scale design, as well as efforts to characterize various thermal control fluids at low temperatures, are also discussed. Previous testing and modeling efforts showed that freezable radiators could be operated as intended and could be fairly, if not perfectly, predicted by numerical models. This paper documents the improvements made to the numerical model and the outcomes of fluid studies deemed necessary before further radiator testing.
Radiative seesaw in left-right symmetric model
Gu Peihong; Sarkar, Utpal
2008-10-01
Radiative origins for the neutrino masses exist in the conventional left-right symmetric models with the usual bidoublet and triplet Higgs scalars. These radiative contributions can dominate over the tree-level seesaw and can explain the observed neutrino masses.
Radiation transport modeling using extended quadrature method of moments
NASA Astrophysics Data System (ADS)
Vikas, V.; Hauck, C. D.; Wang, Z. J.; Fox, R. O.
2013-08-01
The radiative transfer equation describes the propagation of radiation through a material medium. While it provides a highly accurate description of the radiation field, the large phase space on which the equation is defined makes it numerically challenging. As a consequence, significant effort has gone into the development of accurate approximation methods. Recently, an extended quadrature method of moments (EQMOM) has been developed to solve univariate population balance equations, which also have a large phase space and thus face similar computational challenges. The distinct advantage of the EQMOM approach over other moment methods is that it generates moment equations that are consistent with a positive phase space density and has a moment inversion algorithm that is fast and efficient. The goal of the current paper is to present the EQMOM method in the context of radiation transport, to discuss advantages and disadvantages, and to demonstrate its performance on a set of standard one-dimensional benchmark problems that encompass optically thin, thick, and transition regimes. Special attention is given in the implementation to the issue of realizability—that is, consistency with a positive phase space density. Numerical results in one dimension are promising and lay the foundation for extending the same framework to multiple dimensions.
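A central ingredient noted above is realizability: a moment vector must be consistent with some non-negative phase space density. For moments of a density on the real line, the classical check uses Hankel determinants; the sketch below illustrates only that check, not the EQMOM inversion algorithm itself, and the function is an assumption for illustration rather than code from the paper.

```python
import numpy as np

def is_realizable(m):
    """Check whether moments m = [m0, m1, ..., m2n] can arise from a
    non-negative density on the real line, via Hankel determinants:
    realizability requires det(H_k) >= 0 for H_k[i, j] = m[i + j]."""
    m = np.asarray(m, dtype=float)
    n = (len(m) - 1) // 2
    for k in range(n + 1):
        H = np.array([[m[i + j] for j in range(k + 1)] for i in range(k + 1)])
        if np.linalg.det(H) < 0:
            return False
    return True

# Moments of a unit Gaussian (1, 0, 1, 0, 3): realizable.
print(is_realizable([1, 0, 1, 0, 3]))
# m0*m2 < m1^2 violates the Cauchy-Schwarz bound: not realizable.
print(is_realizable([1.0, 2.0, 1.0]))
```

A moment method that evolves the moments directly can drift outside this realizable set; keeping the numerics inside it is exactly the "special attention" the abstract refers to.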
Stewart, P.C.
1992-09-01
This paper describes the incorporation of the Harshvardhan et al. (1987) radiation parameterization into the Naval Research Laboratory Limited Area Dynamical Weather Prediction Model. A comparison between model runs with the radiation scheme and runs without it was made to examine three mesoscale phenomena along the west coast of the United States during the period 0000 UTC 02 May 1990 - 1200 UTC 03 May 1990: the land and sea breeze, the southerly surge, and the Catalina eddy. In general, the updated model with the radiation parameterization yielded a more accurate simulation of the layer temperatures, geopotential heights, cloud cover, and radiative processes, as verified against synoptic, mesoscale, and satellite observations. Subsequently, the updated model also forecast a more realistic diurnal evolution of the sea and land breeze, the southerly surge, and the Catalina eddy.
Future directions for LDEF ionizing radiation modeling and assessments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1992-01-01
Data from the ionizing radiation dosimetry aboard LDEF provide a unique opportunity for assessing the accuracy of current space radiation models and for identifying needed improvements for future mission applications. Details are given of the LDEF data available for radiation model evaluations. The status of model comparisons with LDEF data is given, along with future directions of planned modeling efforts and data comparison assessments. The modeling methodology being used to help ensure that the LDEF ionizing radiation results can address ionizing radiation issues for future missions is outlined. In general, the LDEF radiation modeling has emphasized quick-look predictions using simplified methods to make comparisons with absorbed dose and induced radioactivity measurements. Modeling and LDEF data comparisons related to linear energy transfer (LET) spectra are important for several reasons, which are outlined. The planned modeling and LDEF data comparisons for LET spectra are discussed, including the components of the LET spectra due to different environment sources, the contributions from different production mechanisms, and spectra in plastic detectors versus silicon.
An approximate local thermodynamic nonequilibrium radiation model for air
NASA Technical Reports Server (NTRS)
Gally, Thomas A.; Carlson, Leland A.
1992-01-01
A radiatively coupled viscous shock layer analysis program that includes chemical and thermal nonequilibrium is used to calculate stagnation point flow profiles for typical aeroassisted orbital transfer vehicle conditions. Two methods of predicting local thermodynamic nonequilibrium radiation effects are used as first- and second-order approximations to this phenomenon. Tabulated results for both nitrogen and air freestreams are given, with temperature, species, and radiation profiles for some air conditions. Results are shown for two bodies, 45- and 60-degree hyperboloids, at 12 km/sec and 80 km altitude. The presented results constitute an advancement in the engineering modeling of radiating nonequilibrium reentry flows.
Modeling of gamma-radiation impact on transmission characteristics of optical glasses
NASA Astrophysics Data System (ADS)
Gusarov, Andrei I.; Doyle, Dominic B.
2002-01-01
Optical systems operating in space must maintain their performance over long mission times. Glasses are well known to darken upon exposure to radiation, owing to the formation of color centers, resulting in performance degradation. Despite increasing demand, an approach allowing accurate prediction of the end-of-life performance characteristics of space optical instruments has not yet been developed. We propose here a phenomenological methodology that should help solve this problem. In our model, functional dependencies describing defect generation and annealing are derived from mathematical models, taking into account dose levels and irradiation times relevant to space missions. Numerical values for the parameters come from experimental data. Our experimental methodology is also described. We apply this model to analyze results obtained for BK7 glass subjected to Co-60 gamma radiation.
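The abstract does not spell out the generation and annealing dependencies; a common minimal assumption for such phenomenological models is first-order kinetics, dN/dt = g*Ddot - N/tau, which under a constant dose rate gives a saturating exponential for the defect density. All parameter values below are hypothetical stand-ins, not measured BK7 data.

```python
import math

def color_center_density(t, dose_rate, g, tau):
    """Defect density N(t) under a constant dose rate, assuming
    first-order kinetics dN/dt = g * dose_rate - N / tau, N(0) = 0.
    Closed form: N(t) = g * dose_rate * tau * (1 - exp(-t / tau))."""
    return g * dose_rate * tau * (1.0 - math.exp(-t / tau))

# Illustrative (made-up) parameters: generation efficiency g, annealing
# time constant tau, and a mission-relevant low dose rate.
g = 1.0e12        # defects per Gy (hypothetical)
tau = 1.0e6       # annealing time constant, s (hypothetical)
dose_rate = 1e-4  # Gy/s

saturation = g * dose_rate * tau                       # long-time equilibrium
n_mid = color_center_density(tau, dose_rate, g, tau)   # ~63% of saturation
```

Induced absorption then follows by multiplying N(t) by a defect absorption cross section, which is how darkening would map onto end-of-life transmittance.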
Modeling Hematopoiesis and Responses to Radiation Countermeasures in a Bone Marrow-on-a-Chip.
Torisawa, Yu-Suke; Mammoto, Tadanori; Jiang, Elisabeth; Jiang, Amanda; Mammoto, Akiko; Watters, Alexander L; Bahinski, Anthony; Ingber, Donald E
2016-05-01
Studies on hematopoiesis currently rely on animal models because in vitro culture methods do not accurately recapitulate complex bone marrow physiology. We recently described a bone marrow-on-a-chip microfluidic device that enables the culture of living hematopoietic bone marrow and mimics radiation toxicity in vitro. In the present study, we used this microdevice to demonstrate continuous blood cell production in vitro and to model bone marrow responses to potential radiation countermeasure drugs. The device maintained mouse hematopoietic stem and progenitor cells in normal proportions for at least 2 weeks in culture. Increases in the numbers of leukocytes and red blood cells released into the microfluidic circulation could also be detected over time, and addition of erythropoietin induced a significant increase in erythrocyte production. Exposure of the bone marrow chip to gamma radiation resulted in reduced leukocyte production, and treatment of the chips with two potential therapeutics, granulocyte colony-stimulating factor or bactericidal/permeability-increasing protein (BPI), induced significant increases in the numbers of hematopoietic stem cells and myeloid cells in the fluidic outflow. In contrast, BPI was not found to have any effect when analyzed using static marrow cultures, even though it has previously been shown to accelerate recovery from radiation-induced toxicity in vivo. These findings demonstrate the potential value of the bone marrow-on-a-chip for modeling blood cell production, monitoring responses to hematopoiesis-modulating drugs, and testing radiation countermeasures in vitro. PMID:26993746
Dall'Ora, M.; Botticella, M. T.; Della Valle, M.; Pumo, M. L.; Zampieri, L.; Tomasella, L.; Cappellaro, E.; Benetti, S.; Pignata, G.; Bufano, F.; Bayless, A. J.; Pritchard, T. A.; Taubenberger, S.; Benitez, S.; Kotak, R.; Inserra, C.; Fraser, M.; Elias-Rosa, N.; Haislip, J. B.; Harutyunyan, A.; and others
2014-06-01
We present an extensive optical and near-infrared photometric and spectroscopic campaign of the Type IIP supernova SN 2012aw. The data set densely covers the evolution of SN 2012aw from shortly after the explosion through the end of the photospheric phase, with two additional photometric observations collected during the nebular phase to fit the radioactive tail and estimate the ^56Ni mass. Also included in our analysis are the previously published Swift UV data, providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our data set, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass M_env ~ 20 M_sun, progenitor radius R ~ 3 x 10^13 cm (~430 R_sun), explosion energy E ~ 1.5 foe, and initial ^56Ni mass ~0.06 M_sun. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and may suggest a progenitor mass higher than the observational limit of 16.5 ± 1.5 M_sun for Type IIP events.
A Physical Model of Electron Radiation Belts of Saturn
NASA Astrophysics Data System (ADS)
Lorenzato, L.; Sicard-Piet, A.; Bourdarie, S.
2012-04-01
Radiation belts cause irreversible damage to on-board instrument materials. That is why, for two decades, ONERA has pursued studies of the radiation belts of magnetized planets. First, in the 1990s, the development of a physical model named Salammbô produced a model of the radiation belts of the Earth. Then, over several years, analysis of the magnetosphere of Jupiter and in-situ data (Pioneer, Voyager, Galileo) allowed a physical model of the radiation belts of Jupiter to be built. Building on the Cassini era and all the information collected, this study adapts the Salammbô jovian radiation belt model to the Saturn environment. Indeed, some physical processes present in the kronian magnetosphere are similar to those present in the magnetosphere of Jupiter (radial diffusion; interaction of energetic electrons with rings, moons, and the atmosphere; synchrotron emission). However, some physical processes have to be added to the kronian model (compared to the jovian model) because of the particularities of the magnetosphere of Saturn: interaction of energetic electrons with neutral particles from Enceladus, and wave-particle interaction. This last physical process has been studied in detail through analysis of Cassini/RPWS (Radio and Plasma Wave Science) data. The major importance of wave-particle interaction is now well established for the radiation belts of the Earth, but it is important to investigate its role in the case of Saturn. The importance of each physical process has therefore been studied, and analysis of Cassini MIMI-LEMMS and CAPS data allowed a model boundary condition (at L = 6) to be built. Finally, the results of this study lead to a kronian electron radiation belt model including radial diffusion, interactions of energetic electrons with rings, moons, and neutral particles, and wave-particle interaction (interactions of electrons with atmospheric particles and synchrotron emission are too weak to be taken into account in this model). Then, to
Preliminary results of a three-dimensional radiative transfer model
O'Hirok, W.
1995-09-01
Clouds act as the primary modulator of the Earth's radiation at the top of the atmosphere, within the atmospheric column, and at the Earth's surface. They interact with both shortwave and longwave radiation, but it is primarily in the shortwave where most of the uncertainty lies, because of the difficulties in treating scattered solar radiation. To ease the computational physics of cloud-radiative interactions, radiative transfer models portray clouds as plane-parallel homogeneous entities. Unfortunately, clouds are far from homogeneous, and large differences between measurement and theory point to a strong need to understand and model cloud macrophysical properties. In an attempt to better comprehend the role of cloud morphology in the 3-dimensional radiation field, a Monte Carlo model has been developed. This model can simulate broadband shortwave radiation fluxes while incorporating all of the major atmospheric constituents. The model is used to investigate the cloud absorption anomaly, in which cloud absorption measurements exceed theoretical estimates, and to examine the efficacy of ERBE measurements and cloud field experiments. 3 figs.
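The Monte Carlo approach described above can be reduced to a toy version for a single homogeneous plane-parallel slab. The sketch below assumes isotropic scattering and a gray (broadband-averaged) single-scattering albedo; a realistic model like the one in the abstract adds 3D cloud geometry, phase functions, and gaseous absorption.

```python
import math
import random

def slab_monte_carlo(tau_total, omega0, n_photons=20000, seed=1):
    """Count transmitted, reflected, and absorbed photons for a
    plane-parallel slab of optical depth tau_total, single-scattering
    albedo omega0, and isotropic scattering."""
    rng = random.Random(seed)
    transmitted = reflected = absorbed = 0
    for _ in range(n_photons):
        tau = 0.0       # optical depth coordinate: 0 = top, tau_total = bottom
        mu = -1.0       # direction cosine; start heading straight down
        while True:
            step = -math.log(rng.random())   # sampled optical path length
            tau -= mu * step                 # mu < 0 moves downward (tau grows)
            if tau < 0.0:
                reflected += 1               # escaped out the top
                break
            if tau > tau_total:
                transmitted += 1             # escaped out the bottom
                break
            if rng.random() > omega0:
                absorbed += 1                # interaction was an absorption
                break
            mu = 2.0 * rng.random() - 1.0    # isotropic re-emission
    return transmitted, reflected, absorbed

t, r, a = slab_monte_carlo(tau_total=1.0, omega0=0.9)
```

Tallying escape versus absorption over many photons is exactly how such a model estimates broadband fluxes, and replacing the 1D optical depth with a 3D cloud field is what separates it from the plane-parallel idealization criticized above.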
MODELING ACUTE EXPOSURE TO SOLAR RADIATION
One of the major technical challenges in calculating solar flux on the human form has been the complexity of the surface geometry (i.e., the surface normal vis-à-vis the incident radiation). The American Cancer Society reports that over 80% of skin cancers occur on the face, he...
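The geometric core of the problem, projecting the incident beam onto each surface facet's normal, reduces to a dot product per facet. The function and numbers below are an illustrative assumption (a single flat facet, direct beam only, no diffuse sky or shading terms).

```python
import math

def surface_irradiance(e_direct, surface_normal, sun_direction):
    """Direct-beam irradiance on a flat facet: E = E0 * max(0, n . s),
    where n is the unit surface normal and s the unit vector toward the Sun.
    Facets facing away from the Sun receive zero direct beam."""
    dot = sum(n * s for n, s in zip(surface_normal, sun_direction))
    return e_direct * max(0.0, dot)

e0 = 1000.0                        # W/m^2, assumed direct-beam irradiance
up = (0.0, 0.0, 1.0)               # horizontal facet
sun_45 = (0.0, math.sin(math.radians(45)), math.cos(math.radians(45)))

flat = surface_irradiance(e0, up, sun_45)                   # ~707 W/m^2
shaded = surface_irradiance(e0, (0.0, 0.0, -1.0), sun_45)   # faces away -> 0
```

Modeling the human form then amounts to meshing the body into many such facets, each with its own normal, and summing the per-facet doses.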
Improving the Salammbo code modelling and using it to better predict radiation belts dynamics
NASA Astrophysics Data System (ADS)
Maget, Vincent; Sicard-Piet, Angelica; Grimald, Sandrine Rochel; Boscher, Daniel
2016-07-01
In the framework of the FP7-SPACESTORM project, one objective is to improve the reliability of the model-based predictions of radiation belt dynamics (first developed during the FP7-SPACECAST project). To this end, we have analyzed and improved the way the simulations using the ONERA Salammbô code are performed, especially by: - better controlling the driving parameters of the simulation; - improving the initialization of the simulation in order to be more accurate at most energies for L values between 4 and 6; - improving the physics of the model. For the first point, a statistical analysis of the accuracy of the Kp index has been conducted. For the second point, we based our method on a long-duration simulation in order to extract typical radiation belt states depending on the solar wind stress and geomagnetic activity. For the last point, we have first improved separately the