Accurate spectral modeling for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Gupta, S. K.
1977-01-01
Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of the 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths and are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The range of validity of the other models and correlations is discussed.
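The line-by-line calculation described above can be sketched as a direct quadrature over a dense wavenumber grid: superpose all lines, apply Beer's law, and integrate the resulting absorption across the band. The Lorentz line positions, strengths, and widths below are illustrative placeholders, not the CO or CO2 parameters used in the paper.

```python
import numpy as np

def lorentz_k(nu, nu0, S, gamma):
    # Monochromatic absorption coefficient of one Lorentz line:
    # k(nu) = S * gamma / (pi * ((nu - nu0)^2 + gamma^2))
    return S * gamma / (np.pi * ((nu - nu0) ** 2 + gamma ** 2))

def band_absorptance(lines, u, nu_grid):
    # Line-by-line integration: sum all lines at every grid point,
    # apply the Beer-Lambert law, then integrate (1 - T) over the band.
    k = sum(lorentz_k(nu_grid, nu0, S, g) for nu0, S, g in lines)
    transmittance = np.exp(-k * u)
    absorptance = np.sum(1.0 - transmittance) * (nu_grid[1] - nu_grid[0])
    return transmittance, absorptance

# Three hypothetical lines near the 4.7 micron CO band (illustrative only)
lines = [(2140.0, 1.0, 0.08), (2143.0, 2.0, 0.08), (2146.0, 0.5, 0.08)]
nu = np.linspace(2130.0, 2156.0, 20001)
T, A = band_absorptance(lines, 0.5, nu)
```

Doubling the path length u increases the band absorptance less than linearly once the strong line cores saturate, which is exactly the regime where the approximate band models compared in the paper start to diverge from one another.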
3ARM: A Fast, Accurate Radiative Transfer Model for Use in Climate Models
NASA Technical Reports Server (NTRS)
Bergstrom, R. W.; Kinne, S.; Sokolik, I. N.; Toon, O. B.; Mlawer, E. J.; Clough, S. A.; Ackerman, T. P.; Mather, J.
1996-01-01
A new radiative transfer model combining the efforts of three groups of researchers is discussed. The model accurately computes radiative transfer in inhomogeneous absorbing, scattering, and emitting atmospheres. As an illustration of the model, results are shown for the effects of dust on thermal radiation.
NASA Astrophysics Data System (ADS)
Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.
2013-10-01
Recent observations of Saturn's stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; a warm "beacon" associated with the powerful 2010 storm. Those signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, particular emphasis was placed on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and a spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing, and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities, and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of temperature fields obtained with this new radiative equilibrium model to those inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (20mbar
modeled temperature is 5-10 K too low compared to measurements. This suggests that processes other than radiative heating/cooling by trace
Fu, Q.; Sun, W.B.; Yang, P.
1998-09-01
An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (Dge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is approximately 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
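The functional form of such a parameterization, bulk optical properties expressed as simple functions of ice water content (IWC) and generalized effective size Dge, can be sketched as below. The coefficients are illustrative placeholders, not the published band-dependent fits of Fu and co-workers.

```python
def ir_ice_optics(iwc, d_ge,
                  ext=(-2.9e-5, 2.52),     # placeholder fit coefficients
                  ssa=(0.80, -1.0e-3),
                  asym=(0.75, 1.0e-3)):
    # Bulk IR optical properties of a cirrus layer as functions of
    # IWC (g m^-3) and generalized effective size D_ge (micron).
    beta_ext = iwc * (ext[0] + ext[1] / d_ge)   # extinction coefficient
    omega = ssa[0] + ssa[1] * d_ge              # single-scattering albedo
    g = asym[0] + asym[1] * d_ge                # asymmetry factor
    return beta_ext, omega, g

# Illustrative cirrus layer: IWC = 0.01 g m^-3, D_ge = 50 micron
beta, omega, g = ir_ice_optics(0.01, 50.0)
```

The key design choice, shared by the scheme above and the published one, is that extinction scales linearly with IWC while the size dependence enters through inverse powers of Dge, so larger crystals absorb and extinguish less per unit ice mass.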
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models, and line-by-line RT models are performed for spectral radiances, spectral fluxes, and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region
NASA Astrophysics Data System (ADS)
Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.
2016-04-01
Accurate radiative transfer (RT) calculations are necessary for many earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order of magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we have extended the PCA method for RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that RT runtimes are shorter by factors between 10 and 100, while root-mean-square errors are of order 0.01%.
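The PCA idea described in the two records above, run the expensive solver only at a few EOF-perturbed optical states and correct the cheap solver everywhere else, can be sketched minimally as follows. The "exact" and "fast" solvers here are toy stand-ins (the real method pairs line-by-line with two-stream models over binned optical-property profiles), and the log-space correction is a first-order version of the published scheme.

```python
import numpy as np

def pca_rt(optical_props, exact_rt, fast_rt, n_eofs=1):
    # optical_props: (n_spectral, n_params) binned inherent optical properties.
    # The expensive solver runs only at the bin mean and at +/- one EOF;
    # everywhere else the fast solver is corrected by a factor linear in
    # the principal-component scores (in log space).
    X = np.log(optical_props)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eofs = Vt[:n_eofs]
    scores = (X - mean) @ eofs.T

    def log_ratio(p):
        return np.log(exact_rt(p) / fast_rt(p))

    r0 = log_ratio(np.exp(mean))
    grad = np.array([0.5 * (log_ratio(np.exp(mean + e)) - log_ratio(np.exp(mean - e)))
                     for e in eofs])
    I_fast = np.array([fast_rt(p) for p in optical_props])
    return I_fast * np.exp(r0 + scores @ grad)

# Toy stand-ins: the "exact" model has extra physics the fast model lacks
tau = np.linspace(0.2, 2.0, 50).reshape(-1, 1)
exact = lambda p: np.exp(-1.05 * p[0]) * (1.0 + 0.1 * p[0])
fast = lambda p: np.exp(-p[0])
approx = pca_rt(tau, exact, fast)
```

With three calls to the expensive model instead of fifty, the corrected fast model tracks the exact one to a few percent in this toy setup, the same cost/accuracy trade the papers report at much larger scale.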
Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.
2008-01-01
Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.
NASA Astrophysics Data System (ADS)
Garland, Ryan; Irwin, Patrick Gerard Joseph
2016-10-01
Exoplanetary and brown dwarf atmospheres are extremely diverse environments ranging over many different temperatures, pressures, and compositions. In order to model the spectra produced by these objects, a commonplace approach in exoplanetary science is to use cross-sections of individual gases to quickly calculate the atmospheric opacities. However, when combining multiple gases with non-monochromatic absorption coefficients, the multiplication property of transmission does not hold. The resulting spectra are hence unreliable. Extensive work was carried out on Solar System radiative transfer models to find an efficient alternative to line-by-line calculations of opacity which was more accurate than combining cross-sections, resulting in many band models and the correlated-k method. Here we illustrate the effect of using cross-sections to model typical brown dwarf and exoplanetary atmospheres (e.g. HD189733b), and compare them to the spectra calculated using the correlated-k method. We verify our correlated-k method using a line-by-line model. For the same objects, we also present the effects of pressure broadening on the resulting spectra. Considering both the method of calculation (i.e. cross-section or correlated-k) and the treatment of pressure broadening, we show that the differences in the spectra are immediately obvious and highly significant. Entire spectral features can appear or disappear, changing the morphology of the spectra. For the inspected brown dwarfs, these spectral features can vary by up to three orders of magnitude in luminosity. For our exoplanets, the transit depth can vary by up to 1%. We conclude that each effect would change the retrieved system parameters (i.e. temperature and abundances) considerably.
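The failure of the multiplication property mentioned above can be demonstrated in a few lines: for gases with structured (non-grey) absorption, the band average of the product of monochromatic transmissions is not the product of the band-averaged transmissions. The step-function spectra below are contrived purely for illustration.

```python
import numpy as np

# Two hypothetical gases absorbing in opposite halves of one spectral band
nu = np.linspace(0.0, 1.0, 1001)
k_a = np.where(nu < 0.5, 5.0, 0.0)
k_b = np.where(nu >= 0.5, 5.0, 0.0)

T_a = np.exp(-k_a).mean()              # band-averaged transmission, gas A alone
T_b = np.exp(-k_b).mean()
T_true = np.exp(-(k_a + k_b)).mean()   # correct: average the monochromatic product
T_wrong = T_a * T_b                    # cross-section shortcut: multiply the averages
```

Here the shortcut overestimates the band transmission by tens of times, enough to make a spectral feature vanish, which is the kind of error the correlated-k method avoids by preserving the spectral correlation of opacities within each band.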
NASA Astrophysics Data System (ADS)
Qin, J.; Kamalabadi, F.; Makela, J. J.; Meier, R. J.
2015-12-01
Remote sensing of the nighttime OI 135.6-nm emission represents the primary means of quantifying the F-region ionospheric state from optical measurements. Despite its pervasive use for studying aeronomical processes, the interpretation of these emissions as a proxy for ionospheric state remains ambiguous in that the relative contributions of radiative recombination and mutual neutralization to the production and, especially, the effects of scattering and absorption on the transport of the 135.6-nm emissions have not been fully quantified. Moreover, an inversion algorithm, which is robust to varying ionospheric structures under different geophysical conditions, is yet to be developed for statistically optimal characterization of the ionospheric state. In this work, as part of the NASA ICON mission, we develop a comprehensive radiative transfer model from first principles to investigate the production and transport of the nighttime 135.6-nm emissions. The forward modeling investigation indicates that under certain conditions mutual neutralization can contribute up to ~38% to the 135.6-nm emissions. Moreover, resonant scattering and pure absorption can reduce the brightness observed in the limb direction by ~40% while enhancing the brightness in the nadir direction by ~25%. Further analysis shows that without properly addressing these effects in the inversion process, the peak electron density in the F-region ionosphere (NmF2) can be overestimated by up to ~24%. To address these issues, an inversion algorithm that properly accounts for the above-mentioned effects is proposed for accurate quantification of the ionospheric state using satellite measurements. The ill-posedness due to the intrinsic presence of noise in real data is handled by incorporating regularizations that enforce either global smoothness or piecewise smoothness of the solution. Application to model-generated data with different signal-to-noise ratios shows that the algorithm has achieved
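The regularized inversion step described above can be sketched as Tikhonov least squares with a second-difference penalty enforcing global smoothness. The forward operator and "electron density" profile below are synthetic stand-ins, not the ICON 135.6-nm forward model.

```python
import numpy as np

def regularized_inversion(A, y, lam):
    # x = argmin ||A x - y||^2 + lam ||D x||^2, with D a second-difference
    # operator: larger lam enforces a smoother retrieved profile.
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ y)

# Synthetic limb-style problem: a smooth profile seen through a smoothing kernel
rng = np.random.default_rng(0)
z = np.arange(40)
x_true = np.exp(-((z - 20.0) / 6.0) ** 2)            # stand-in density profile
A = np.exp(-((z[:, None] - z[None, :]) / 3.0) ** 2)  # stand-in forward operator
y = A @ x_true + 0.05 * rng.standard_normal(40)      # noisy measurements
x_reg = regularized_inversion(A, y, lam=1e-2)
x_unreg = regularized_inversion(A, y, lam=1e-12)
```

Because the smoothing kernel is severely ill-conditioned, the nearly unregularized solution amplifies the noise catastrophically, while the smoothness-penalized one stays close to the true profile, the motivation for the regularizations the abstract mentions.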
ERIC Educational Resources Information Center
James, W. G. G.
1970-01-01
Discusses the historical development of both the wave and the corpuscular photon model of light. Suggests that students should be informed that the two models are complementary and that each model successfully describes a wide range of radiation phenomena. Cites 19 references which might be of interest to physics teachers and students. (LC)
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model, as well as the experiments required to model them.
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
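The kind of performance model described above, predict execution time from a grid description and its partition, then use the predictions to choose mappings, can be sketched for a 1-D grid as follows. The cost constants are illustrative, not taken from the paper.

```python
def predict_time(weights, cuts, t_cell=1.0, t_sync=0.5):
    # Bulk-synchronous cost model: per-step time is the slowest processor's
    # computation plus a fixed synchronization cost per partition boundary.
    bounds = [0] + list(cuts) + [len(weights)]
    loads = [sum(weights[a:b]) for a, b in zip(bounds, bounds[1:])]
    return t_cell * max(loads) + t_sync * (len(loads) - 1)

# Irregular workload: the model predicts (correctly) that the naive midpoint
# split is slower than a load-balanced split of the same grid.
work = [1.0] * 10 + [3.0] * 10
t_naive = predict_time(work, cuts=[10])
t_balanced = predict_time(work, cuts=[13])
```

Evaluating such a model over candidate cut positions is cheap, so remapping decisions can be made at load time even when the workload distribution is only known then, which is the situation the abstract describes.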
Universality: Accurate Checks in Dyson's Hierarchical Model
NASA Astrophysics Data System (ADS)
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
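The linear-fitting estimate of the leading exponent can be sketched as a log-log regression of the susceptibility against the reduced coupling; the synthetic power law below is purely illustrative and uses the quoted γ only as an input.

```python
import numpy as np

def leading_exponent(beta, chi, beta_c):
    # Near criticality chi ~ C (beta_c - beta)^(-gamma), so a linear fit of
    # log(chi) against log(beta_c - beta) has slope -gamma.
    slope, _intercept = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
    return -slope

# Synthetic data: pure leading singularity, no subleading correction
beta_c = 1.0
beta = np.linspace(0.90, 0.999, 200)
chi = 2.7 * (beta_c - beta) ** -1.29914073
gamma_fit = leading_exponent(beta, chi, beta_c)
```

With a subleading correction term proportional to (βc-β)^Δ present, the simple fit drifts, which is why independent estimates from the eigenvalues of the linearized renormalization group transformation are a useful cross-check.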
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China can significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Sciences, Inc. Research by the US Air Force, Navy, and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric, and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.
Przybylski, D.; Shelyag, S.; Cally, P. S.
2015-07-01
We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave.
GORRAM: Introducing accurate operational-speed radiative transfer Monte Carlo solvers
NASA Astrophysics Data System (ADS)
Buras-Schnell, Robert; Schnell, Franziska; Buras, Allan
2016-06-01
We present a new approach for solving the radiative transfer equation in horizontally homogeneous atmospheres. The motivation was to develop a fast yet accurate radiative transfer solver to be used in operational retrieval algorithms for next-generation meteorological satellites. The core component is the program GORRAM (Generator Of Really Rapid Accurate Monte-Carlo) which generates solvers individually optimized for the intended task. These solvers consist of a Monte Carlo model capable of path recycling and a representative set of photon paths. The latter is generated using the simulated annealing technique. GORRAM automatically takes advantage of limitations on the variability of the atmosphere. Due to this optimization the number of photon paths necessary for accurate results can be reduced by several orders of magnitude. For the shown example of a forward model intended for an aerosol satellite retrieval, comparison with an exact yet slow solver shows that a precision of better than 1% can be achieved with only 36 photons. The computational time is at least an order of magnitude faster than any other type of radiative transfer solver. Only the lookup-table approach often used in satellite retrievals is faster, but it suffers from limited accuracy. This makes GORRAM-generated solvers an eligible candidate as forward model in operational-speed retrieval algorithms and data assimilation applications. GORRAM also has the potential to create fast solvers of other integrable equations.
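Path recycling, the trick that lets a small stored set of photon paths serve many atmospheric states, can be sketched in one dimension: trace free paths once in a reference atmosphere, then re-weight the stored paths for new optical properties instead of re-tracing. Everything below is a toy direct-transmittance estimator, not the GORRAM algorithm.

```python
import numpy as np

def trace_paths(n_photons, tau_ref, rng):
    # One-off tracing step: exponential free paths in a reference
    # atmosphere of optical depth tau_ref per unit length.
    return rng.exponential(scale=1.0 / tau_ref, size=n_photons)

def recycled_transmittance(paths, tau_ref, tau):
    # Re-weight the stored paths for a new optical depth via the
    # likelihood ratio of the two exponential path densities.
    w = (tau / tau_ref) * np.exp(-(tau - tau_ref) * paths)
    return float(np.mean(w * (paths > 1.0)))  # fraction crossing a unit slab

rng = np.random.default_rng(42)
paths = trace_paths(200_000, tau_ref=1.0, rng=rng)
estimate = recycled_transmittance(paths, tau_ref=1.0, tau=0.8)  # ~ exp(-0.8)
```

Changing tau costs only a re-weighting pass over the stored paths, so one traced ensemble answers many retrieval states; the price is increased variance when the new atmosphere differs strongly from the reference, which is why a representative path set matters.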
Modeling the Space Radiation Environment
NASA Technical Reports Server (NTRS)
Xapsos, Michael A.
2006-01-01
There has been a renaissance of interest in space radiation environment modeling. This has been fueled by the growing need to replace the longtime standard AP-8 and AE-8 trapped particle models, the interplanetary exploration initiative, modern satellite instrumentation that has led to unprecedented measurement accuracy, and the pervasive use of commercial off-the-shelf (COTS) microelectronics that require more accurate predictive capabilities. The objective of this viewgraph presentation was to provide basic understanding of the components of the space radiation environment and their variations, review traditional radiation effects application models, and present recent developments.
Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay
2012-07-01
Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes, such as hypofractionated breast therapy, have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against measurements by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS XiO® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Dose accuracy of up to 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.
ACCURATE TEMPERATURE MEASUREMENTS IN A NATURALLY-ASPIRATED RADIATION SHIELD
Kurzeja, R.
2009-09-09
Experiments and calculations were conducted with a 0.13 mm fine wire thermocouple within a naturally-aspirated Gill radiation shield to assess and improve the accuracy of air temperature measurements without the use of mechanical aspiration, wind speed or radiation measurements. It was found that this thermocouple measured the air temperature with root-mean-square errors of 0.35 K within the Gill shield without correction. A linear temperature correction was evaluated based on the difference between the interior plate and thermocouple temperatures. This correction was found to be relatively insensitive to shield design and yielded an error of 0.16 K for combined day and night observations. The correction was reliable in the daytime when the wind speed usually exceeds 1 m s⁻¹ but occasionally performed poorly at night during very light winds. Inspection of the standard deviation in the thermocouple wire temperature identified these periods but did not unambiguously locate the most serious events. However, estimates of sensor accuracy during these periods are complicated by the much larger sampling volume of the mechanically-aspirated sensor compared with the naturally-aspirated sensor and the presence of significant near-surface temperature gradients. The root-mean-square errors therefore are upper limits to the aspiration error since they include intrinsic sensor differences and intermittent volume sampling differences.
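The linear correction evaluated in the study amounts to regressing the thermocouple error on the plate-thermocouple temperature difference against an aspirated reference. The slope, intercept, and noise level in the synthetic data below are illustrative, not the field-calibrated values.

```python
import numpy as np

def fit_correction(t_tc, t_plate, t_ref):
    # Fit T_air ~ T_tc + a + b * (T_plate - T_tc) against an aspirated reference.
    b, a = np.polyfit(t_plate - t_tc, t_ref - t_tc, 1)
    return a, b

def apply_correction(t_tc, t_plate, a, b):
    return t_tc + a + b * (t_plate - t_tc)

# Synthetic calibration data (illustrative radiation-error model)
rng = np.random.default_rng(1)
t_tc = 290.0 + 5.0 * rng.random(200)          # raw thermocouple readings (K)
diff = 2.0 * rng.random(200)                  # plate minus thermocouple (K)
t_ref = t_tc + 0.10 + 0.30 * diff + 0.02 * rng.standard_normal(200)
a, b = fit_correction(t_tc, t_tc + diff, t_ref)
t_corr = apply_correction(t_tc, t_tc + diff, a, b)
```

Once fitted, the correction needs only the two shield-internal temperatures, no wind speed or radiation measurement, which is the practical appeal reported in the abstract.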
A quick accurate model of nozzle backflow
NASA Technical Reports Server (NTRS)
Kuharski, R. A.
1991-01-01
Backflow from nozzles is a major source of contamination on spacecraft. If the craft contains any exposed high voltages, the neutral density produced by the nozzles in the vicinity of the craft needs to be known in order to assess the possibility of Paschen breakdown or the probability of sheath ionization around a region of the craft that collects electrons from the plasma. A model for backflow has been developed for incorporation into the Environment-Power System Analysis Tool (EPSAT) which quickly estimates both the magnitude of the backflow and the species makeup of the flow. By combining the backflow model with the Simons (1972) model for continuum flow it is possible to quickly estimate the density of each species from a nozzle at any position in space. The model requires only a few physical parameters of the nozzle and the gas as inputs and is therefore ideal for engineering applications.
Accurate Drawbead Modeling in Stamping Simulations
NASA Astrophysics Data System (ADS)
Sester, M.; Burchitz, I.; Saenz de Argandona, E.; Estalayo, F.; Carleer, B.
2016-08-01
An adaptive line bead model that continually updates according to the changing conditions during the forming process has been developed. In these calculations, the adaptive line bead's geometry is treated as a 3D object where relevant phenomena like hardening curve, yield surface, through thickness stress effects and contact description are incorporated. The effectiveness of the adaptive drawbead model will be illustrated by an industrial example.
Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
RRTMGP: A fast and accurate radiation code for the next decade
NASA Astrophysics Data System (ADS)
Mlawer, E. J.; Pincus, R.; Wehe, A.; Delamere, J.
2015-12-01
Atmospheric radiative processes are key drivers of the Earth's climate and must be accurately represented in global circulation models (GCMs) to allow faithful simulations of the planet's past, present, and future. The radiation code RRTMG is widely utilized by global modeling centers for both climate and weather predictions, but it has become increasingly out-of-date. The code's structure is not well suited for the current generation of computer architectures and its stored absorption coefficients are not consistent with the most recent spectroscopic information. We are developing a new broadband radiation code for the current generation of computational architectures. This code, called RRTMGP, will be a completely restructured and modern version of RRTMG. The new code preserves the strengths of the existing RRTMG parameterization, especially the high accuracy of the k-distribution treatment of absorption by gases, but the entire code is being rewritten to provide highly efficient computation across a range of architectures. Our redesign includes refactoring the code into discrete kernels corresponding to fundamental computational elements (e.g. gas optics), optimizing the code for operating on multiple columns in parallel, simplifying the subroutine interface, revisiting the existing gas optics interpolation scheme to reduce branching, and adding flexibility with respect to run-time choices of streams, need for consideration of scattering, aerosol and cloud optics, etc. The result of the proposed development will be a single, well-supported and well-validated code amenable to optimization across a wide range of platforms. Our main emphasis is on highly parallel platforms including Graphical Processing Units (GPUs) and Many-Integrated-Core processors (MICs), which experience shows can accelerate broadband radiation calculations by as much as a factor of fifty. RRTMGP will provide highly efficient and accurate radiative flux calculations for coupled global
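The k-distribution treatment whose accuracy RRTMGP preserves can be sketched as: sort the monochromatic absorption coefficients within a band into a cumulative distribution k(g), then integrate the transmission over g with a handful of quadrature points instead of thousands of spectral points. The single-Lorentzian band below is illustrative, not an RRTMG band.

```python
import numpy as np

def kdist_transmittance(k_spectrum, u, n_g=16):
    # Sort k into its cumulative distribution k(g), then evaluate
    # T = integral_0^1 exp(-k(g) u) dg with n_g Gauss-Legendre points.
    k_sorted = np.sort(k_spectrum)
    g = (np.arange(k_sorted.size) + 0.5) / k_sorted.size
    x, w = np.polynomial.legendre.leggauss(n_g)
    k_g = np.interp(0.5 * (x + 1.0), g, k_sorted)   # map nodes to g in [0, 1]
    return float(0.5 * np.sum(w * np.exp(-k_g * u)))

# Illustrative band: one Lorentzian feature sampled line-by-line
nu = np.linspace(0.0, 1.0, 5001)
k = 1.0 / (1.0 + ((nu - 0.5) * 10.0) ** 2)
T_lbl = float(np.mean(np.exp(-2.0 * k)))   # reference: full spectral average
T_kd = kdist_transmittance(k, u=2.0)       # 16 quadrature points vs 5001
```

Because k(g) is smooth and monotone even when k(nu) is wildly oscillatory, a small fixed number of g-points reproduces the band transmission closely, which is why the method is cheap enough for GCMs while staying accurate.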
An articulated statistical shape model for accurate hip joint segmentation.
Kainmueller, Dagmar; Lamecker, Hans; Zachow, Stefan; Hege, Hans-Christian
2009-01-01
In this paper we propose a framework for fully automatic, robust and accurate segmentation of the human pelvis and proximal femur in CT data. We propose a composite statistical shape model of femur and pelvis with a flexible hip joint, for which we extend the common definition of statistical shape models as well as the common strategy for their adaptation. We do not analyze the joint flexibility statistically, but model it explicitly by rotational parameters describing the bend in a ball-and-socket joint. A leave-one-out evaluation on 50 CT volumes shows that image-driven adaptation of our composite shape model robustly produces accurate segmentations of both proximal femur and pelvis. As a second contribution, we evaluate a fine-grained multi-object segmentation method based on graph optimization. It relies on accurate initializations of femur and pelvis, which our composite shape model can generate. Simultaneous optimization of both femur and pelvis yields more accurate results than separate optimizations of each structure. Shape model adaptation and graph-based optimization are embedded in a fully automatic framework. PMID:19964159
Saturn Radiation (SATRAD) Model
NASA Technical Reports Server (NTRS)
Garrett, H. B.; Ratliff, J. M.; Evans, R. W.
2005-01-01
The Saturnian radiation belts have not received as much attention as the Jovian radiation belts because they are not nearly as intense: the famous Saturnian particle rings tend to deplete the belts near where their peak would occur. As a result, there has not been a systematic development of engineering models of the Saturnian radiation environment for mission design. A primary exception is that of Divine (1990). That study used published data from several charged particle experiments aboard the Pioneer 11, Voyager 1, and Voyager 2 spacecraft during their flybys at Saturn to generate numerical models for the electron and proton radiation belts between 2.3 and 13 Saturn radii. The Divine Saturn radiation model described the electron distributions at energies between 0.04 and 10 MeV and the proton distributions at energies between 0.14 and 80 MeV. The model was intended to predict particle intensity, flux, and fluence for the Cassini orbiter. Divine carried out hand calculations using the model but never formally developed a computer program that could be used for general mission analyses. This report seeks to fill that void by formally developing a FORTRAN version of the model that can be used as a computer design tool for missions to Saturn that require estimates of the radiation environment around the planet. The results of that effort and the program listings are presented here along with comparisons with the original estimates carried out by Divine. In addition, Pioneer and Voyager data were scanned in from the original references and compared with the FORTRAN model's predictions. The results were statistically analyzed in a manner consistent with Divine's approach to provide estimates of the ability of the model to reproduce the original data. Results of a formal review of the model by a panel of experts are also presented. Their recommendations for further tests, analyses, and extensions to the model are discussed.
Shumway, R.W.
1987-10-01
The ATHENA computer program has many features that make it desirable as a space reactor evaluation tool. One of the missing features was a surface-to-surface thermal radiation model. A model was developed that allows any of the regular ATHENA heat slabs to radiate to any other heat slab. The view factors and surface emissivities must be specified by the user. To verify that the model was properly accounting for radiant energy transfer, two different types of test calculations were performed. Both calculations gave excellent results. The updates have been used on both the INEL CDC-176 and the Livermore Cray. 7 refs., 2 figs., 6 tabs.
NASA Astrophysics Data System (ADS)
Smirnova, Olga
Biologically motivated mathematical models describing the dynamics of the major hematopoietic lineages (the thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems) in acutely and chronically irradiated humans are developed. These models are implemented as systems of nonlinear differential equations whose variables and constant parameters have clear biological meaning. It is shown that the developed models are capable of reproducing clinical data on the dynamics of these systems in humans exposed to acute radiation as a result of incidents and accidents, as well as in humans exposed to low-level chronic radiation. Moreover, the averaged value of the "lethal" dose rates of chronic irradiation evaluated within the models of these four major hematopoietic lineages coincides with the real minimal dose rate of lethal chronic irradiation. The demonstrated ability of the models of the human thrombocytopoietic, lymphocytopoietic, granulocytopoietic, and erythropoietic systems to predict the dynamical response of these systems to acute and chronic irradiation over wide ranges of doses and dose rates implies that these mathematical models form a universal tool for investigating and predicting the dynamics of the major human hematopoietic lineages under a wide variety of irradiation scenarios. In particular, these models could be applied to radiation risk assessment for the health of astronauts exposed to space radiation during long-term space missions, such as voyages to Mars or lunar colonies, as well as for the health of people exposed to acute or chronic irradiation due to environmental radiological events.
Methods for accurate homology modeling by global optimization.
Joo, Keehyoung; Lee, Jinwoo; Lee, Jooyoung
2012-01-01
High-accuracy protein modeling from sequence information is an important step toward revealing the sequence-structure-function relationship of proteins, and it is becoming increasingly useful for practical purposes such as drug discovery and protein design. We have developed a protocol for protein structure prediction that can generate highly accurate protein models in terms of backbone structure, side-chain orientation, hydrogen bonding, and binding sites of ligands. To obtain accurate protein models, we have combined a powerful global optimization method with traditional homology modeling procedures such as multiple sequence alignment, chain building, and side-chain remodeling. We have built a series of specific score functions for these steps and optimized them by utilizing conformational space annealing, one of the most successful combinatorial optimization algorithms currently available.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model into the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant-volume limitations to the muscles and constant-geometry limitations to the tendons.
Modeling the radiation pattern of LEDs.
Moreno, Ivan; Sun, Ching-Cherng
2008-02-01
Light-emitting diodes (LEDs) come in many varieties and with a wide range of radiation patterns. We propose a general, simple but accurate analytic representation for the radiation pattern of the light emitted from an LED. To accurately render both the angular intensity distribution and the irradiance spatial pattern, a simple phenomenological model takes into account the emitting surfaces (chip, chip array, or phosphor surface), and the light redirected by both the reflecting cup and the encapsulating lens. Mathematically, the pattern is described as the sum of a maximum of two or three Gaussian or cosine-power functions. The resulting equation is widely applicable for any kind of LED of practical interest. We accurately model a wide variety of radiation patterns from several world-class manufacturers.
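As an illustration of the functional form described above (a sum of Gaussian and cosine-power terms), here is a minimal numerical sketch in Python; the coefficients are hypothetical placeholders, not fitted values for any real LED:

```python
import numpy as np

def led_intensity(theta_deg):
    """Relative radiant intensity vs. viewing angle (degrees).

    One cosine-power term models the main lobe; one Gaussian term stands
    in for light redirected by the reflecting cup. All coefficients are
    illustrative, not fitted manufacturer data.
    """
    theta = np.radians(theta_deg)
    main_lobe = 0.8 * np.cos(theta) ** 2.5
    cup_lobe = 0.2 * np.exp(-(((theta_deg - 30.0) / 12.0) ** 2))
    return main_lobe + cup_lobe
```

A real fit would adjust the amplitudes, center angles, widths, and cosine exponents per device; the point of the model is that two or three such terms suffice for most LEDs of practical interest.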
Status of LDEF radiation modeling
NASA Technical Reports Server (NTRS)
Watts, John W.; Armstrong, T. W.; Colborn, B. L.
1995-01-01
The current status of model prediction and comparison with LDEF radiation dosimetry measurements is summarized with emphasis on major results obtained in evaluating the uncertainties of present radiation environment model. The consistency of results and conclusions obtained from model comparison with different sets of LDEF radiation data (dose, activation, fluence, LET spectra) is discussed. Examples where LDEF radiation data and modeling results can be utilized to provide improved radiation assessments for planned LEO missions (e.g., Space Station) are given.
Accurate localization of optic radiation during neurosurgery in an interventional MRI suite.
Daga, Pankaj; Winston, Gavin; Modat, Marc; White, Mark; Mancini, Laura; Cardoso, M Jorge; Symms, Mark; Stretton, Jason; McEvoy, Andrew W; Thornton, John; Micallef, Caroline; Yousry, Tarek; Hawkes, David J; Duncan, John S; Ourselin, Sebastien
2012-04-01
Accurate localization of the optic radiation is key to improving the surgical outcome for patients undergoing anterior temporal lobe resection for the treatment of refractory focal epilepsy. Current commercial interventional magnetic resonance imaging (MRI) scanners are capable of performing anatomical and diffusion weighted imaging and are used for guidance during various neurosurgical procedures. We present an interventional imaging workflow that can accurately localize the optic radiation during surgery. The workflow is driven by a near real-time multichannel nonrigid image registration algorithm that uses both anatomical and fractional anisotropy pre- and intra-operative images. The proposed workflow is implemented on graphical processing units and we perform a warping of the pre-operatively parcellated optic radiation to the intra-operative space in under 3 min making the proposed algorithm suitable for use under the stringent time constraints of neurosurgical procedures. The method was validated using both a numerical phantom and clinical data using pre- and post-operative images from patients who had undergone surgery for treatment of refractory focal epilepsy and shows strong correlation between the observed post-operative visual field deficit and the predicted damage to the optic radiation. We also validate the algorithm using interventional MRI datasets from a small cohort of patients. This work could be of significant utility in image guided interventions and facilitate effective surgical treatments.
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
An accurate temperature correction model for thermocouple hygrometers.
Savage, M J; Cass, A; de Jager, J M
1982-02-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.
On the importance of having accurate data for astrophysical modelling
NASA Astrophysics Data System (ADS)
Lique, Francois
2016-06-01
The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation at wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.
Accurate method of modeling cluster scaling relations in modified gravity
NASA Astrophysics Data System (ADS)
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications for constraining gravity using cluster surveys.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
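The coarse-to-fine idea described above (a cheap model prunes the solution space, an expensive model refines the survivors) can be sketched generically. This is a simplified illustration under assumed names, not the authors' branch-and-bound implementation:

```python
def two_step_search(candidates, coarse_score, fine_score, keep=3):
    """Rank every candidate with a cheap coarse model, then re-rank
    only the top `keep` survivors with the expensive fine model and
    return the best of those."""
    survivors = sorted(candidates, key=coarse_score, reverse=True)[:keep]
    return max(survivors, key=fine_score)
```

The speed-up comes from invoking the fine model only `keep` times instead of once per candidate; accuracy is preserved as long as the coarse model reliably keeps the true optimum among the survivors.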
The dynamic radiation environment assimilation model (DREAM)
Reeves, Geoffrey D; Koller, Josef; Tokar, Robert L; Chen, Yue; Henderson, Michael G; Friedel, Reiner H
2010-01-01
The Dynamic Radiation Environment Assimilation Model (DREAM) is a 3-year effort sponsored by the US Department of Energy to provide global, retrospective, or real-time specification of the natural and potential nuclear radiation environments. The DREAM model uses Kalman filtering techniques that combine the strengths of new physical models of the radiation belts with electron observations from long-term satellite systems such as GPS and geosynchronous systems. DREAM includes a physics model for the production and long-term evolution of artificial radiation belts from high altitude nuclear explosions. DREAM has been validated against satellites in arbitrary orbits and consistently produces more accurate results than existing models. Tools for user-specific applications and graphical displays are in beta testing and a real-time version of DREAM has been in continuous operation since November 2009.
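The Kalman-filter blending of model forecast and satellite observation that DREAM relies on can be illustrated in its simplest scalar form. This is the textbook update step, not DREAM's actual multi-dimensional implementation:

```python
def kalman_update(x_prior, p_prior, z, r):
    """Blend a model forecast x_prior (error variance p_prior) with an
    observation z (error variance r); return the updated estimate and
    its reduced error variance."""
    k = p_prior / (p_prior + r)       # Kalman gain: weight given to the observation
    x_post = x_prior + k * (z - x_prior)
    p_post = (1.0 - k) * p_prior
    return x_post, p_post
```

With equal forecast and observation variances the gain is 0.5, so the analysis falls halfway between model and data; a more trusted model (smaller p_prior) pulls the result toward the forecast.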
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Application of Improved Radiation Modeling to General Circulation Models
Michael J Iacono
2011-04-07
This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.
Chewing simulation with a physically accurate deformable model.
Pascale, Andra Maria; Ruge, Sebastian; Hauth, Steffen; Kordaß, Bernd; Linsen, Lars
2015-01-01
Nowadays, CAD/CAM software is being used to compute the optimal shape and position of a new tooth model meant for a patient. With this possible future application in mind, we present in this article an independent and stand-alone interactive application that simulates the human chewing process and the deformation it produces in the food substrate. Chewing motion sensors are used to produce an accurate representation of the jaw movement. The substrate is represented by a deformable elastic model based on the finite linear elements method, which preserves physical accuracy. Collision detection based on spatial partitioning is used to calculate the forces acting on the deformable model. Based on the calculated information, geometry elements are added to the scene to enhance the information available to the user. The goal of the simulation is to present a complete scene to the dentist, highlighting the points where the teeth came into contact with the substrate and reporting how much force acted at each of these points, thereby indicating whether a tooth is being used incorrectly in the mastication process. Real-time interactivity is desired and achieved within limits, depending on the complexity of the employed geometric models. The presented simulation is a first step towards the overall project goal of interactively optimizing tooth position and shape under the investigation of a virtual chewing process using real patient data (Fig 1). PMID:26389135
Accurate, low-cost 3D-models of gullies
NASA Astrophysics Data System (ADS)
Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine
2015-04-01
Soil erosion is a widespread problem in arid and semi-arid areas, and its most severe form is gully erosion. Gullies often cut into agricultural farmland and can render whole areas completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in southern Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded several series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus, since nearby pixels of a blurry image tend to have similar color values. We therefore used a MATLAB script that compares the derivatives of the images: the higher the sum of the derivatives, the sharper an image of similar objects. MATLAB subdivides the video into image intervals, and from each interval the image with the highest sum is selected. For example, a 20 min video at 25 fps equals 30,000 single images; the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. MeshLab was then used to build a surface from it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with older recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
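The per-interval sharpest-frame selection described above can be sketched in Python; the original MATLAB script is not reproduced in the abstract, so this is an assumed reconstruction of its logic, with invented function names:

```python
import numpy as np

def sharpness(img):
    """Sum of absolute finite differences over the image.

    Blurry frames have similar neighboring pixel values, so their
    difference sums are small; a higher score means a sharper frame.
    """
    gx = np.abs(np.diff(img, axis=1)).sum()
    gy = np.abs(np.diff(img, axis=0)).sum()
    return gx + gy

def select_sharpest(frames, interval):
    """From each consecutive interval of frames, keep only the sharpest."""
    selected = []
    for start in range(0, len(frames), interval):
        chunk = frames[start:start + interval]
        selected.append(max(chunk, key=sharpness))
    return selected
```

For a 20 min video (30,000 frames), an interval of 20 frames yields the 1500 selected images mentioned above.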
Towards Accurate Molecular Modeling of Plastic Bonded Explosives
NASA Astrophysics Data System (ADS)
Chantawansri, T. L.; Andzelm, J.; Taylor, D.; Byrd, E.; Rice, B.
2010-03-01
There is substantial interest in identifying the controlling factors that influence the susceptibility of polymer bonded explosives (PBXs) to accidental initiation. Numerous molecular dynamics (MD) simulations of PBXs using the COMPASS force field have been reported in recent years, where the validity of the force field in modeling the solid EM fill has been judged solely on its ability to reproduce lattice parameters, which is an insufficient metric. Performance of the COMPASS force field in modeling EMs and the polymeric binder has been assessed by calculating structural, thermal, and mechanical properties, where only fair agreement with experimental data is obtained. We performed MD simulations using the COMPASS force field for the polymer binder hydroxyl-terminated polybutadiene and five EMs: cyclotrimethylenetrinitramine, 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane, 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane, 2,4,6-trinitro-1,3,5-benzenetriamine, and pentaerythritol tetranitrate. Predicted EM crystallographic and molecular structural parameters, as well as calculated properties for the binder, will be compared with experimental results for different simulation conditions. We also present novel simulation protocols that improve agreement between experimental and computational results, thus leading to more accurate modeling of PBXs.
Towards accurate observation and modelling of Antarctic glacial isostatic adjustment
NASA Astrophysics Data System (ADS)
King, M.
2012-04-01
The response of the solid Earth to glacial mass changes, known as glacial isostatic adjustment (GIA), has received renewed attention in the past decade thanks to the Gravity Recovery and Climate Experiment (GRACE) satellite mission. GRACE measures Earth's gravity field every 30 days, but cannot partition surface mass changes, such as present-day cryospheric or hydrological change, from changes within the solid Earth, notably due to GIA. If GIA cannot be accurately modelled in a particular region, the accuracy of GRACE estimates of ice mass balance for that region is compromised. This lecture will focus on Antarctica, where models of GIA are hugely uncertain due to weak constraints on ice loading history and Earth structure. In recent years, however, there has been a step change in our ability to measure GIA uplift with the Global Positioning System (GPS), including widespread deployments of permanent GPS receivers as part of the International Polar Year (IPY) POLENET project. I will focus in particular on the Antarctic GPS velocity field and the confounding effect of elastic rebound due to present-day ice mass changes, and then describe the construction and calibration of a new Antarctic GIA model for application to GRACE data, as well as highlighting areas where further critical developments are required.
An accurate and simple quantum model for liquid water.
Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A
2006-11-14
The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions, starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single-molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found to be in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics simulations.
Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.
Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong
2015-09-01
Accuracy, validity, and the lack of information relating dental root to jaw are key problems in tooth arrangement technology. This paper describes a newly developed virtual, personalized, and accurate tooth arrangement system based on complete information about the dental root and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, reference planes and lines are defined from anatomical reference points. The matching mathematical model of the teeth pattern and the principle of the specific pose transformation of a rigid body are fully utilized. The positional relationship between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system can arrange abnormal teeth very well and is sufficiently flexible. The positional relationship between root and jaw is handled favorably. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.
Paganetti, H; Jiang, H; Lee, S Y; Kooy, H M
2004-07-01
Monte Carlo dosimetry calculations are an essential tool in radiation therapy. To take full advantage of this tool, the beam delivery system has to be simulated in detail and the initial beam parameters have to be known accurately. The modeling of the beam delivery system itself opens various areas where Monte Carlo calculations prove extremely helpful, such as for design and commissioning of a therapy facility as well as for quality assurance verification. The gantry treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH) were modeled in detail using the GEANT4.5.2 Monte Carlo code. For this purpose, various novel solutions for simulating irregularly shaped objects in the beam path, such as contoured scatterers, patient apertures, and patient compensators, were found. The four-dimensional (in time and space) simulation of moving parts, such as the modulator wheel, was implemented. Further, the appropriate physics models and cross sections for proton therapy applications were defined. We present comparisons between measured data and simulations. These show that by modeling the treatment nozzle with millimeter accuracy, it is possible to reproduce measured dose distributions with an accuracy in range and modulation width, in the case of a spread-out Bragg peak (SOBP), of better than 1 mm. The excellent agreement demonstrates that the simulations can even be used to generate beam data for commissioning treatment planning systems. The Monte Carlo nozzle model was used to study mechanical optimization in terms of scattered radiation and secondary radiation in the design of the nozzles. We present simulations on the neutron background. Further, the Monte Carlo calculations supported commissioning efforts in understanding the sensitivity of beam characteristics and how these influence the dose delivered. We present the sensitivity of dose distributions in water with respect to various beam parameters and geometrical misalignments.
Accurate Accumulation of Dose for Improved Understanding of Radiation Effects in Normal Tissue
Jaffray, David A.; Lindsay, Patricia E.; Brock, Kristy K.; Deasy, Joseph O.; Tome, W.A.
2010-03-01
The actual distribution of radiation dose accumulated in normal tissues over the complete course of radiation therapy is, in general, poorly quantified. Differences in the patient anatomy between planning and treatment can occur gradually (e.g., tumor regression, resolution of edema) or relatively rapidly (e.g., bladder filling, breathing motion), and these undermine the accuracy of the planned dose distribution. Current efforts to maximize the therapeutic ratio require models that relate the true accumulated dose to clinical outcome. The needed accuracy can only be achieved through the development of robust methods that track the accumulation of dose within the various tissues of the body. Specific needs include the development of segmentation methods, tissue-mapping algorithms, uncertainty estimation, optimal schedules for image-based monitoring, and informatics tools to support subsequent analysis. These developments will not only improve radiation outcomes modeling but will also address the technical demands of the adaptive radiotherapy paradigm. Over the next 5 years, academia and industry need to bring these tools into the hands of the clinician and the clinical scientist.
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, we propose here a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
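For intuition on why modified Bessel functions appear in such models: in a circular membrane with distributed heat loss, the radial temperature deviation obeys θ'' + θ'/r − θ/L² = 0, a modified Bessel equation of order zero whose bounded solution is I₀(r/L). The following self-contained Python check of that fact is our own illustration under this standard fin/membrane assumption, not the authors' model:

```python
def bessel_i0(x, terms=30):
    """Modified Bessel function I0(x) via its power series
    sum_k (x^2/4)^k / (k!)^2, which converges quickly for moderate x."""
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= (x * x / 4.0) / (k * k)
        total += term
    return total

def radial_residual(r, L=1.0, h=1e-4):
    """Finite-difference residual of theta'' + theta'/r - theta/L^2
    for theta(r) = I0(r/L); it should be ~0 for all r > 0."""
    f = lambda s: bessel_i0(s / L)
    d1 = (f(r + h) - f(r - h)) / (2.0 * h)
    d2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / (h * h)
    return d2 + d1 / r - f(r) / (L * L)
```

The characteristic length L lumps the membrane conductance and the loss coefficient; the full model in the paper adds boundary matching, Joule heating, and radiation terms on top of this homogeneous solution.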
Kouznetsov, Alexei; Tambasco, Mauro
2011-03-15
Purpose: To develop and validate a fast and accurate method that uses computed tomography (CT) voxel data to estimate absorbed radiation dose at a point of interest (POI) or series of POIs from a kilovoltage (kV) imaging procedure. Methods: The authors developed an approach that computes absorbed radiation dose at a POI by numerically evaluating the linear Boltzmann transport equation (LBTE) using a combination of deterministic and Monte Carlo (MC) techniques. This hybrid approach accounts for material heterogeneity with a level of accuracy comparable to the general MC algorithms. Also, the dose at a POI is computed within seconds using the Intel Core i7 CPU 920 2.67 GHz quad core architecture, and the calculations are performed using CT voxel data, making the method flexible and feasible for clinical applications. To validate the method, the authors constructed and acquired a CT scan of a heterogeneous block phantom consisting of a succession of slab densities: tissue (1.29 cm), bone (2.42 cm), lung (4.84 cm), bone (1.37 cm), and tissue (4.84 cm). Using the hybrid transport method, the authors computed the absorbed doses at a set of points along the central axis and x direction of the phantom for an isotropic 125 kVp photon spectral point source located along the central axis 92.7 cm above the phantom surface. The accuracy of the results was compared to those computed with MCNP, which was cross-validated with EGSnrc, and served as the benchmark for validation. Results: The error in the depth dose ranged from -1.45% to +1.39% with a mean and standard deviation of -0.12% and 0.66%, respectively. The error in the x profile ranged from -1.3% to +0.9%, with a mean and standard deviation of -0.3% and 0.5%, respectively. The number of photons required to achieve these results was 1×10^6. Conclusions: The voxel-based hybrid method evaluates the LBTE rapidly and accurately to estimate the absorbed x-ray dose at any POI or series of POIs from a kV imaging procedure.
Infrared radiation models for atmospheric methane
NASA Technical Reports Server (NTRS)
Cess, R. D.; Kratz, D. P.; Caldwell, J.; Kim, S. J.
1986-01-01
Mutually consistent line-by-line, narrow-band and broad-band infrared radiation models are presented for methane, a potentially important anthropogenic trace gas within the atmosphere. Comparisons of the modeled band absorptances with existing laboratory data produce the best agreement when, within the band models, spurious band intensities are used which are consistent with the respective laboratory data sets, but which are not consistent with current knowledge concerning the intensity of the infrared fundamental band of methane. This emphasizes the need for improved laboratory band absorptance measurements. Since, when applied to atmospheric radiation calculations, the line-by-line model does not require the use of scaling approximations, the mutual consistency of the band models provides a means of appraising the accuracy of scaling procedures. It is shown that Curtis-Godson narrow-band and Chan-Tien broad-band scaling provide accurate means of accounting for atmospheric temperature and pressure variations.
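For reference, the Curtis-Godson narrow-band scaling mentioned above replaces an inhomogeneous atmospheric path by an equivalent homogeneous one, weighting pressure and temperature by the absorber amount. In its standard textbook form (our notation, not reproduced from this report):

```latex
u = \int \rho_a \, ds, \qquad
\bar{p} = \frac{1}{u}\int p \, du, \qquad
\bar{T} = \frac{1}{u}\int T \, du ,
```

so that narrow-band transmittances tabulated for homogeneous conditions can be evaluated at $(\bar{p}, \bar{T}, u)$ for a slant atmospheric path.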
RRTM: A rapid radiative transfer model
Mlawer, E.J.; Taubman, S.J.; Clough, S.A.
1996-04-01
A rapid radiative transfer model (RRTM) for the calculation of longwave clear-sky fluxes and cooling rates has been developed. The model, which uses the correlated-k method, is both accurate and computationally fast. The foundation for RRTM is the line-by-line radiative transfer model (LBLRTM), from which the relevant k-distributions are obtained. LBLRTM, which has been extensively validated against spectral observations, e.g., from the high-resolution sounder and the Atmospheric Emitted Radiance Interferometer, is used to validate the flux and cooling rate results from RRTM. Validations of RRTM's results have been performed for the tropical, midlatitude summer, and midlatitude winter atmospheres, as well as for the four Intercomparison of Radiation Codes in Climate Models (ICRCCM) cases from the Spectral Radiance Experiment (SPECTRE). Details of some of these validations are presented below. RRTM has the identical atmospheric input module as LBLRTM, facilitating intercomparisons with LBLRTM and application of the model at the Atmospheric Radiation Measurement Cloud and Radiation Testbed sites.
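The correlated-k idea that RRTM relies on can be illustrated in a few lines: within a band, the mean transmittance depends only on the probability distribution of the absorption coefficient k, not on its spectral ordering, so the rapidly varying spectrum can be replaced by a short sorted list of representative k values. A toy Python sketch of this (our illustration, not RRTM's algorithm; it assumes the spectral grid size is a multiple of the number of g bins):

```python
import math

def k_distribution(k_values, n_g=10):
    """Sort the spectral absorption coefficients and average them in
    n_g equal-probability bins of the cumulative variable g."""
    ks = sorted(k_values)
    step = len(ks) // n_g
    return [sum(ks[j * step:(j + 1) * step]) / step for j in range(n_g)]

def band_transmittance(k_values, u, n_g=10):
    """Band-mean transmittance for absorber amount u, computed both
    line-by-line and from the k-distribution; the two agree closely
    because exp(-k u) only depends on the distribution of k."""
    lbl = sum(math.exp(-k * u) for k in k_values) / len(k_values)
    ckd = sum(math.exp(-k * u) for k in k_distribution(k_values, n_g)) / n_g
    return lbl, ckd
```

The payoff is that a flux integral over thousands of spectral points collapses to a quadrature over ~10 g points per band, which is where the "rapid" in RRTM comes from.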
Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.
Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M
2014-12-01
Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed a Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely, the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization currents obtained with simulations were compared against experimental measurements; further tests were carried out, such as comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with discrepancies lower than 4% for all the tested parameters. This shows that accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The resulting Monte Carlo model constitutes a powerful tool for first-instance determination of new calibration factors for non-standard radionuclides and for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regard to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the container configuration.
Accurate patient dosimetry of kilovoltage cone-beam CT in radiation therapy
Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.
2008-03-15
The increased utilization of x-ray imaging in image-guided radiotherapy has dramatically improved radiation treatment and the lives of cancer patients. Daily imaging procedures, such as cone-beam computed tomography (CBCT), for patient setup may significantly increase the dose to the patient's normal tissues. This study investigates the dosimetry from a kilovoltage (kV) CBCT for real patient geometries. Monte Carlo simulations were used to study the kV beams from a Varian on-board imager integrated into the Trilogy accelerator. The Monte Carlo calculated results were benchmarked against measurements and good agreement was obtained. The authors developed a novel method to calibrate Monte Carlo simulated beams with measurements using an ionization chamber in which the air-kerma calibration factors are obtained from an Accredited Dosimetry Calibration Laboratory. The authors have introduced a new Monte Carlo calibration factor, f_MCcal, which is determined from the calibration procedure. The accuracy of the new method was validated by experiment. Once a Monte Carlo simulated beam has been calibrated, the simulated beam can be used to accurately predict absolute dose distributions in the irradiated media. Using this method the authors calculated dose distributions to patient anatomies from a typical CBCT acquisition for different treatment sites, such as head and neck, lung, and pelvis. Their results have shown that, from a typical head and neck CBCT, doses to soft tissues, such as eye, spinal cord, and brain, can be up to 8, 6, and 5 cGy, respectively. The dose to bone, due to the photoelectric effect, can be as much as 25 cGy, about three times the dose to the soft tissue. The study provides detailed information on the additional doses to the normal tissues of a patient from a typical kV CBCT acquisition. The methodology of the Monte Carlo beam calibration developed and introduced in this study allows the user to calculate both relative and absolute dose distributions.
Applying an accurate spherical model to gamma-ray burst afterglow observations
NASA Astrophysics Data System (ADS)
Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.
2013-05-01
We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r^-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.
Models in biology: 'accurate descriptions of our pathetic thinking'.
Gunawardena, Jeremy
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as 'predictive', in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
Clarifying types of uncertainty: when are models accurate, and uncertainties small?
Cox, Louis Anthony (Tony)
2011-10-01
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.
Accurate calculation of conductive conductances in complex geometries for spacecraft thermal models
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel
2016-02-01
The thermal subsystems of spacecraft and payloads are always designed with the help of thermal mathematical models. In the case of the Thermal Lumped Parameter (TLP) method, the non-linear system of equations that is created is solved to calculate the temperature distribution and the heat flow between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods for the determination of conductive conductances exist, but they present some limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of these two new methods.
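To make the TLP setting concrete: at steady state each free node i satisfies Σ_j G_ij (T_j − T_i) + Q_i = 0, which becomes a linear system once the conductances G_ij are known. A minimal Python sketch of such a network solve — our own illustration of where the conductances feed in, not one of the paper's methods:

```python
def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting, enough for
    the small dense systems of a lumped-parameter model."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                for j in range(c, n + 1):
                    M[r][j] -= f * M[c][j]
    return [M[i][n] / M[i][i] for i in range(n)]

def tlp_steady_state(nodes, conductances, loads, fixed):
    """Steady-state temperatures of a thermal lumped-parameter network.

    conductances: dict {(i, j): G_ij} of linear conductances [W/K]
    loads:        dict {node: Q_i} of dissipated power [W]
    fixed:        dict {node: temperature} boundary nodes
    """
    free = [n for n in nodes if n not in fixed]
    idx = {n: k for k, n in enumerate(free)}
    A = [[0.0] * len(free) for _ in free]
    b = [-loads.get(n, 0.0) for n in free]
    for (i, j), g in conductances.items():
        for a, c in ((i, j), (j, i)):
            if a in idx:
                A[idx[a]][idx[a]] -= g
                if c in idx:
                    A[idx[a]][idx[c]] += g
                else:
                    b[idx[a]] -= g * fixed[c]  # known boundary term to RHS
    T = solve(A, b)
    return {**fixed, **{n: T[idx[n]] for n in free}}
```

The accuracy of any such solve is bounded by the quality of the G_ij values, which is exactly the quantity the Extended Far Field and Mid-Section methods aim to compute for complex geometries.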
Analytical modeling of the steady radiative shock
NASA Astrophysics Data System (ADS)
Boireau, L.; Bouquet, S.; Michaut, C.; Clique, C.
2006-06-01
In a paper dated 2000 [1], a fully analytical theory of the radiative shock was presented. This early model had been used to design [2] radiative shock experiments at the Laboratory for the Use of Intense Lasers (LULI) [3-5]. It became obvious from numerical simulations [6, 7] that this model had to be improved in order to accurately recover experiments. In this communication, we present a new theory in which the ionization rates in the unshocked material, Z̄_1, and in the shocked material, Z̄_2 ≠ Z̄_1, are included. Associated changes in excitation energy are also taken into account. We study the influence of these effects on the compression and temperature in the shocked medium.
Accurate Model Selection of Relaxed Molecular Clocks in Bayesian Phylogenetics
Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J.; Suchard, Marc A.; Lemey, Philippe
2013-01-01
Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike’s information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets. PMID:23090976
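Stepping-stone sampling estimates the marginal likelihood by bridging from the prior (β = 0) to the posterior (β = 1) through a ladder of power posteriors p_β(θ) ∝ prior(θ)·L(θ)^β. The following toy Python sketch uses a discrete parameter so each power posterior can be sampled exactly; it is our illustration of the estimator itself, not the BEAST/phylogenetics implementation:

```python
import math
import random

def stepping_stone_logZ(prior, lik, betas, n=20000, rng=None):
    """Stepping-stone estimate of the log marginal likelihood log Z.

    prior, lik: dicts over a discrete parameter space (prior normalized)
    betas: increasing ladder from 0.0 to 1.0
    Each ratio Z_{b1}/Z_{b0} = E_{p_b0}[L^(b1-b0)] is estimated by
    Monte Carlo; their product telescopes to Z = Z_1 / Z_0.
    """
    rng = rng or random.Random(0)
    states = list(prior)
    log_z = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # exact sampling from the power posterior p_{b0} by enumeration
        weights = [prior[s] * lik[s] ** b0 for s in states]
        draws = rng.choices(states, weights=weights, k=n)
        log_z += math.log(sum(lik[s] ** (b1 - b0) for s in draws) / n)
    return log_z
```

In real phylogenetic applications the exact sampling step is replaced by an MCMC chain run at each β, which is why the method is costly but far more reliable than the harmonic mean estimator.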
Towards an Accurate Performance Modeling of Parallel Sparse Factorization
Grigori, Laura; Li, Xiaoye S.
2006-05-26
We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
Validation of the Poisson Stochastic Radiative Transfer Model
NASA Technical Reports Server (NTRS)
Zhuravleva, Tatiana; Marshak, Alexander
2004-01-01
A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it was shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.
Accurate Low-mass Stellar Models of KOI-126
NASA Astrophysics Data System (ADS)
Feiden, Gregory A.; Chaboyer, Brian; Dotter, Aaron
2011-10-01
The recent discovery by Carter et al. of an eclipsing hierarchical triple system with two low-mass stars in a close orbit (KOI-126) appeared to reinforce the evidence that theoretical stellar evolution models are not able to reproduce the observational mass-radius relation for low-mass stars. We present a set of stellar models for the three stars in the KOI-126 system that show excellent agreement with the observed radii. This agreement appears to be due to the equation of state implemented by our code. A significant dispersion in the observed mass-radius relation for fully convective stars is demonstrated, indicative of the influence of physics currently not incorporated in standard stellar evolution models. We also predict apsidal motion constants for the two M dwarf companions. These values should be observationally determined to within 1% by the end of the Kepler mission.
Inflation model building with an accurate measure of e-folding
NASA Astrophysics Data System (ADS)
Chongchitnan, Sirichai
2016-08-01
It has become standard practice to take the logarithmic growth of the scale factor as a measure of the amount of inflation, despite the well-known fact that this is only an approximation to the true amount of inflation required to solve the horizon and flatness problems. The aim of this work is to show how this approximation can be completely avoided using an alternative framework for inflation model building. We show that using the inverse comoving Hubble radius, ℋ = aH, as the key dynamical parameter, the correct number of e-foldings arises naturally as a measure of inflation. As an application, we present an interesting model in which the entire inflationary dynamics can be solved analytically and exactly, and, in special cases, reduces to the familiar class of power-law models.
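The distinction at stake can be written down directly (our notation, not necessarily the paper's):

```latex
N \equiv \ln\frac{a_{\mathrm{end}}}{a}
\qquad \text{vs.} \qquad
\tilde{N} \equiv \ln\frac{(aH)_{\mathrm{end}}}{aH} .
```

Since the horizon and flatness problems are statements about the shrinkage of the comoving Hubble radius $(aH)^{-1}$, it is $\tilde{N}$ that the problems actually constrain; the conventional $N$ agrees with it only while $H$ is nearly constant, which is the approximation the abstract refers to.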
Magnetic field models of nine CP stars from "accurate" measurements
NASA Astrophysics Data System (ADS)
Glagolevskij, Yu. V.
2013-01-01
The dipole models of magnetic fields in nine CP stars are constructed based on measurements of metal lines taken from the literature and performed by the LSD method with an accuracy of 10-80 G. The model parameters are compared with the parameters obtained for the same stars from hydrogen line measurements. For six out of nine stars the same type of structure was obtained. Some parameters, such as the field strength at the poles B_p and the average surface magnetic field B_s, differ considerably in some stars due to differences in the amplitudes of the phase dependences B_e(Φ) and B_s(Φ) obtained by different authors. It is noted that a significant increase in the measurement accuracy has little effect on the modelling of the large-scale structures of the field. By contrast, it is more important to construct the shape of the phase dependence from a fairly large number of field measurements, evenly distributed over the rotation period phases. It is concluded that the Zeeman component measurement methods have a strong effect on the shape of the phase dependence, and that measurements of the magnetic field based on the lines of hydrogen are preferable for modelling the large-scale structures of the field.
NASA Technical Reports Server (NTRS)
Horwitz, James L.
1992-01-01
The purpose of this work was to assist with the development of analytical techniques for the interpretation of infrared observations. We have done the following: (1) helped to develop models for continuum absorption calculations for water vapor in the far infrared spectral region; (2) worked on models for pressure-induced absorption for O2 and N2 and their comparison with available observations; and (3) developed preliminary studies of non-local thermal equilibrium effects in the upper stratosphere and mesosphere for infrared gases. These new techniques were employed for analysis of balloon-borne far infrared data by a group at the Harvard-Smithsonian Center for Astrophysics. The empirical continuum absorption model for water vapor in the far infrared spectral region and the pressure-induced N2 absorption model were found to give satisfactory results in the retrieval of the mixing ratios of a number of stratospheric trace constituents from balloon-borne far infrared observations.
Accurate first principles model potentials for intermolecular interactions.
Gordon, Mark S; Smith, Quentin A; Xu, Peng; Slipchenko, Lyudmila V
2013-01-01
The general effective fragment potential (EFP) method provides model potentials, derived from first principles with no empirically fitted parameters, for any molecule. The EFP method has been interfaced with most currently used ab initio single-reference and multireference quantum mechanics (QM) methods, ranging from Hartree-Fock and coupled cluster theory to multireference perturbation theory. The most recent innovations in the EFP model have been to make the computationally expensive charge transfer term much more efficient and to interface the general EFP dispersion and exchange repulsion interactions with QM methods. Following a summary of the method and its implementation in generally available computer programs, these most recent developments are discussed.
Simulation model accurately estimates total dietary iodine intake.
Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C
2009-07-01
One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children (<5%) were at risk of intakes that were too low. In the scenario of a potential future situation using lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods will contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.
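A minimal sketch of the deterministic/probabilistic hybrid idea the abstract describes (all intake values, the 60% iodized-salt usage rate, and the iodization level below are hypothetical placeholders, not the Dutch survey data used in the paper):

```python
import random

def simulate_iodine_intake(n_persons=10000, seed=1):
    # Hybrid scheme in the spirit of the model above: a per-person food
    # component plus a probabilistic discretionary iodized-salt component.
    # All numbers are hypothetical placeholders (micrograms, grams).
    rng = random.Random(seed)
    intakes = []
    for _ in range(n_persons):
        food_iodine_ug = rng.uniform(60.0, 120.0)  # iodine from processed foods
        uses_iodized_salt = rng.random() < 0.6     # uncertain discretionary use
        salt_g = rng.uniform(1.0, 4.0)             # daily added kitchen salt
        salt_iodine_ug = salt_g * 25.0 if uses_iodized_salt else 0.0
        intakes.append(food_iodine_ug + salt_iodine_ug)
    return intakes

intakes = simulate_iodine_intake()
frac_low = sum(i < 70.0 for i in intakes) / len(intakes)  # share below a cutoff
```

Rerunning the simulation under a different legislation scenario amounts to changing the salt iodization level or the set of foods allowed to contain iodized salt, which is how scenario comparisons like those in the paper are framed.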
Mouse models for radiation-induced cancers.
Rivina, Leena; Davoren, Michael J; Schiestl, Robert H
2016-09-01
Potential ionising radiation exposure scenarios are varied, but all bring risks beyond the simple issues of short-term survival. Whether accidentally exposed to a single, whole-body dose in an act of terrorism or purposefully exposed to fractionated doses as part of a therapeutic regimen, radiation exposure carries the consequence of elevated cancer risk. The long-term impact of both intentional and unintentional exposure could potentially be mitigated by treatments specifically developed to limit the mutations and precancerous replication that ensue in the wake of irradiation. The development of such agents would undoubtedly require a substantial degree of in vitro testing, but in order to accurately recapitulate the complex process of radiation-induced carcinogenesis, well-understood animal models are necessary. Inbred strains of the laboratory mouse, Mus musculus, present the most logical choice due to the high number of molecular and physiological similarities they share with humans. Their small size, high rate of breeding and fully sequenced genome further increase their value for use in cancer research. This chapter will review relevant M. musculus inbred and F1 hybrid models of radiation-induced myeloid leukemia, thymic lymphoma, breast and lung cancers. Methods of cancer induction and associated molecular pathologies will also be described for each model. PMID:27209205
Accurate numerical solutions for elastic-plastic models. [LMFBR
Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.
1980-03-01
The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
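The elastic-predictor/radial-return idea the paper evaluates can be sketched in its simplest one-dimensional analogue (linear isotropic hardening; the material constants below are arbitrary illustrative values, and the full plane-stress case adds the pi-plane geometry the paper analyzes):

```python
def radial_return_1d(sigma, alpha, d_eps, E=200e3, H=10e3, sigma_y=250.0):
    # One step of elastic-predictor / radial-return mapping, 1D analogue of
    # the von Mises isotropic-hardening case. sigma: stress, alpha:
    # accumulated plastic strain, d_eps: total strain increment.
    sigma_trial = sigma + E * d_eps                  # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * alpha)     # yield function
    if f <= 0.0:
        return sigma_trial, alpha                    # purely elastic step
    d_gamma = f / (E + H)                            # plastic multiplier
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    # return the trial stress radially onto the updated yield surface
    return sigma_trial - sign * E * d_gamma, alpha + d_gamma

s, a = radial_return_1d(0.0, 0.0, 0.002)
# By construction |s| lands exactly on the updated surface: |s| = sigma_y + H*a
```

In 1D the return is exact; the errors the paper maps with pi-plane angle and radius contours arise only in multiaxial stress states, where the return direction matters.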
NASA Astrophysics Data System (ADS)
Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.
2015-12-01
We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
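The flavor of the approach, tunable parameters controlling how one- and two-halo contributions combine, can be sketched with a smoothed-transition blend; the toy term shapes and the alpha value below are invented for illustration, not the paper's calibrated ingredients:

```python
def smoothed_halo_power(d2_two_halo, d2_one_halo, alpha=0.7):
    # Smoothed one-/two-halo blend: with alpha = 1 this is the plain sum;
    # other alpha values reshape the transition between the two regimes.
    return (d2_two_halo ** alpha + d2_one_halo ** alpha) ** (1.0 / alpha)

# Toy dimensionless spectra: the two-halo term dominates at small k,
# the one-halo term at large k (shapes invented for illustration).
ks = [0.01 * 1.3 ** i for i in range(30)]
d2 = [smoothed_halo_power(k ** 1.5 / (1.0 + k ** 3), 0.05 * k ** 3 / (1.0 + k))
      for k in ks]
```

Treating such blending exponents as free parameters fit to N-body data, and later as nuisance parameters marginalized over feedback uncertainty, is the general strategy the abstract describes.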
Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael
2014-05-01
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics, which reflect deficits in the employed force models. Following the proper analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies in a dusk-dawn orbit at an altitude of approximately 510 km above ground. In this constellation, the Sun illuminates the satellite almost constantly, which causes strong across-track accelerations in the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of
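The direct SRP term mentioned above can be sketched in its simplest "cannonball" form, with a single reflectivity coefficient and cross-section, far cruder than the per-surface macro model described in the abstract; the numbers in the example are hypothetical, not TerraSAR-X values:

```python
# Cannonball solar radiation pressure: a = -C_R * (A/m) * P_sun * s_hat,
# where s_hat is the unit vector from spacecraft toward the Sun and
# P_sun ~ 4.56e-6 N/m^2 is the radiation pressure at 1 AU.
def srp_acceleration(c_r, area_m2, mass_kg, sun_unit_vec, p_sun=4.56e-6):
    scale = c_r * (area_m2 / mass_kg) * p_sun
    return tuple(-scale * s for s in sun_unit_vec)   # pushes away from the Sun

accel = srp_acceleration(1.3, 10.0, 1200.0, (1.0, 0.0, 0.0))  # m/s^2
```

A macro model replaces the single (C_R, A) pair with a sum over flat plates, each with its own area, orientation, and specular/diffuse reflectivity, which is what resolves the across-track systematics discussed above.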
Development of a New Model for Accurate Prediction of Cloud Water Deposition on Vegetation
NASA Astrophysics Data System (ADS)
Katata, G.; Nagai, H.; Wrzesinsky, T.; Klemm, O.; Eugster, W.; Burkard, R.
2006-12-01
Scarcity of water resources in arid and semi-arid areas is of great concern in the light of population growth and food shortages. Several experiments focusing on cloud (fog) water deposition on the land surface suggest that cloud water plays an important role in the water resources of such regions. A one-dimensional vegetation model including the process of cloud water deposition on vegetation has been developed to better predict this deposition. New schemes to calculate the capture efficiency of leaves, the cloud droplet size distribution, and the gravitational flux of cloud water were incorporated in the model. Model calculations were compared with data acquired at the Norway spruce forest at the Waldstein site, Germany. High performance of the model was confirmed by comparisons of calculated net radiation, sensible and latent heat, and cloud water fluxes over the forest with measurements. The present model provided a better prediction of measured turbulent and gravitational fluxes of cloud water over the canopy than the Lovett model, a commonly used cloud water deposition model. Detailed calculations of evapotranspiration and of turbulent exchange of heat and water vapor within the canopy are necessary for accurate prediction of cloud water deposition. Numerical experiments to examine the dependence of cloud water deposition on vegetation species (coniferous and broad-leaved trees, flat and cylindrical grasses) and structures (Leaf Area Index (LAI) and canopy height) were performed using the presented model. The results indicate that differences in leaf shape and size have a large impact on cloud water deposition. Cloud water deposition also varies with the growth of vegetation and seasonal change of LAI. We found that coniferous trees whose height and LAI are 24 m and 2.0 m2m-2, respectively, produce the largest amount of cloud water deposition in all combinations of vegetation species and structures in the
O`Brien, E.; Lissauer, D.; McCorkle, S.; Polychronakos, V.; Takai, H.; Chi, C.Y.; Nagamiya, S.; Sippach, W.; Toy, M.; Wang, D.; Wang, Y.F.; Wiggins, C.; Willis, W.; Cherniatin, V.; Dolgoshein, B.; Bennett, M.; Chikanian, A.; Kumar, S.; Mitchell, J.T.; Pope, K.
1991-12-31
We describe the results of a test run involving a Transition Radiation Detector that can both distinguish electrons from pions with momenta greater than 0.7 GeV/c and simultaneously track particles passing through the detector. The particle identification is accomplished through a combination of the detection of Transition Radiation from the electron and the differences in electron and pion energy loss (dE/dx) in the detector. The dE/dx particle separation is most efficient below 2 GeV/c, while particle ID utilizing Transition Radiation is effective above 1.5 GeV/c. Combined, the electron-pion separation is better than 5 × 10². The single-wire, track-position resolution for the TRD is approximately 230 μm.
Accurate Modeling of the Terrestrial Gamma-Ray Background for Homeland Security Applications
Sandness, Gerald A.; Schweppe, John E.; Hensley, Walter K.; Borgardt, James D.; Mitchell, Allison L.
2009-10-24
The Pacific Northwest National Laboratory has developed computer models to simulate the use of radiation portal monitors to screen vehicles and cargo for the presence of illicit radioactive material. The gamma radiation emitted by the vehicles or cargo containers must often be measured in the presence of a relatively large gamma-ray background mainly due to the presence of potassium, uranium, and thorium (and progeny isotopes) in the soil and surrounding building materials. This large background is often a significant limit to the detection sensitivity for items of interest and must be modeled accurately for analyzing homeland security situations. Calculations of the expected gamma-ray emission from a disk of soil and asphalt were made using the Monte Carlo transport code MCNP and were compared to measurements made at a seaport with a high-purity germanium detector. Analysis revealed that the energy spectrum of the measured background could not be reproduced unless the model included gamma rays coming from the ground out to distances of at least 300 m. The contribution from beyond about 50 m was primarily due to gamma rays that scattered in the air before entering the detectors rather than passing directly from the ground to the detectors. These skyshine gamma rays contribute tens of percent to the total gamma-ray spectrum, primarily at energies below a few hundred keV. The techniques that were developed to efficiently calculate the contributions from a large soil disk and a large air volume in a Monte Carlo simulation are described and the implications of skyshine in portal monitoring applications are discussed.
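A back-of-envelope version of the ground-disk geometry discussed above, counting only direct, unscattered paths with a simple exponential air attenuation (the attenuation coefficient and detector height are placeholder values; the skyshine component itself requires full Monte Carlo transport as in MCNP):

```python
import math

def ground_flux_fraction(r_split, r_max, mu_air=0.008, h=1.0, n=5000):
    # Direct (unscattered) detector flux from a uniform emitting ground disk:
    # dPhi ~ exp(-mu_air * s) * r dr / s^2,  s = sqrt(r^2 + h^2).
    # Constant factors (2*pi, source strength) cancel in the fraction.
    def integrand(r):
        s = math.hypot(r, h)
        return math.exp(-mu_air * s) * r / (s * s)
    def integrate(a, b):  # midpoint rule
        dr = (b - a) / n
        return sum(integrand(a + (i + 0.5) * dr) for i in range(n)) * dr
    near = integrate(0.0, r_split)
    far = integrate(r_split, r_max)
    return far / (near + far)

frac_beyond_50m = ground_flux_fraction(50.0, 300.0)
```

Because air attenuation suppresses the direct flux from distant annuli so strongly, the measured contribution from beyond ~50 m must arrive via scattering in air, which is the skyshine effect the paper quantifies.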
Accurate tumor localization and tracking in radiation therapy using wireless body sensor networks.
Pourhomayoun, Mohammad; Jin, Zhanpeng; Fowler, Mark
2014-07-01
Radiation therapy is an effective method to combat cancerous tumors by killing the malignant cells or controlling their growth. Knowing the exact position of the tumor is a critical prerequisite in radiation therapy. Since the position of the tumor changes during the process of radiation therapy due to the patient's movements and respiration, a real-time tumor tracking method is highly desirable in order to deliver a sufficient dose of radiation to the tumor region without damaging the surrounding healthy tissues. In this paper, we develop a novel tumor positioning method based on spatial sparsity. We estimate the position by processing the received signals from only one implantable RF transmitter. The proposed method uses fewer sensors than common magnetic-transponder-based approaches. The performance of the proposed method is evaluated in two different cases: (1) when the tissue configuration is perfectly determined (acquired beforehand by MRI or CT) and (2) when there are some uncertainties about the tissue boundaries. The results demonstrate the high accuracy and performance of the proposed method, even when the tissue boundaries are imperfectly known. PMID:24832352
Radiation Belt Analysis and Modeling
NASA Astrophysics Data System (ADS)
Bass, J. N.; Dasgupta, U.; Hein, C. A.; Griffin, J. M.; Reynolds, D. S.
1995-04-01
Efforts have been conducted in the modeling of radiation belts and cosmic radiation, principally in connection with the CRRES mission. Statistical studies of solar particle events have been conducted in a search for predictors of the occurrence of geomagnetic storms. Certain spectral and temporal properties of protons and electrons were found to correlate with the occurrence of storms. Comparative studies of solar proton fluxes observed at locations inside (using CRRES and GOES-7) and outside (using IMP-8) the inner magnetosphere were performed in an attempt to measure the penetration of solar protons to various L shells as functions of time during a proton event and the subsequent magnetic storm. The failure to observe large increases in proton fluxes at the sudden commencement of the great magnetic storm of March 1991 indicates a magnetospheric process was involved. An attempt was made to model the acceleration of radiation belt protons by magnetospheric compression during this event. The access of helium into the inner magnetosphere was studied during this event. Modeling of instrument contamination and dosage was performed to enhance interpretation of measurements by the Proton Telescope and the Space Radiation Dosimeter. Support software packages developed include a science summary database, a data processing system for the microelectronics package, and software to analyze measurements by the Low Energy Plasma Analyzer to produce a three-dimensional plasma distribution function.
An Improved Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focussing on the question of absorption of solar radiation by gases and aerosols.
Optimum satellite orbits for accurate measurement of the earth's radiation budget, summary
NASA Technical Reports Server (NTRS)
Campbell, G. G.; Vonderhaar, T. H.
1978-01-01
The optimum set of orbit inclinations for the measurement of the earth radiation budget from spatially integrating sensor systems was estimated for two- and three-satellite systems. For two satellites, the best inclinations were 80 deg and 50 deg; for three, 80 deg, 60 deg, and 50 deg. These were chosen on the basis of a simulation of flat-plate and spherical detectors flying over a daily varying earth radiation field as measured by the Nimbus 3 medium resolution scanners. A diurnal oscillation was also included in the emitted flux and albedo to give a source field as realistic as possible. Twenty-three satellites with different inclinations and equator crossings were simulated, allowing the results of thousands of multisatellite sets to be intercompared. All were circular orbits of radius 7178 kilometers.
NASA Astrophysics Data System (ADS)
Oh, K.; Han, M.; Kim, K.; Heo, Y.; Moon, C.; Park, S.; Nam, S.
2016-02-01
For quality assurance in radiation therapy, several types of dosimeters are used, such as ionization chambers, radiographic films, thermo-luminescent dosimeters (TLDs), and semiconductor dosimeters. Among them, semiconductor dosimeters are particularly useful as in vivo dosimeters or in high dose-gradient areas such as the penumbra region because they are more sensitive and smaller in size than typical dosimeters. In this study, we developed and evaluated Cadmium Telluride (CdTe) dosimeters, one of the most promising semiconductor dosimeters due to their high quantum efficiency and charge collection efficiency. CdTe dosimeters come in single-crystal and polycrystalline forms, depending upon the fabrication process. Both types are commercially available, but only the polycrystalline form is suitable for radiation dosimeters, since it is less affected by volumetric effects and energy dependence. To develop and evaluate polycrystalline CdTe dosimeters, polycrystalline CdTe films were prepared by thermal evaporation. After that, a thin CdTeO3 oxide layer was deposited on top of the CdTe film by RF sputtering to improve charge carrier transport properties and to reduce leakage current. The CdTeO3 layer, which acts as a passivation layer, also helps the dosimeter reduce sensitivity changes caused by radiation damage with repeated use. Finally, the top and bottom electrodes, In/Ti and Pt, were used to form a Schottky contact. Subsequently, the electrical properties under high-energy photon beams from a linear accelerator (LINAC), such as response coincidence, dose linearity, dose rate dependence, reproducibility, and percentage depth dose, were measured to evaluate the polycrystalline CdTe dosimeters. In addition, we compared the experimental data of the dosimeter fabricated in this study with those of the silicon diode dosimeter and thimble ionization chamber, which are widely used in routine dosimetry systems and dose measurements for radiation
Accurate and fast stray radiation calculation based on improved backward ray tracing.
Yang, Liu; XiaoQiang, An; Qian, Wang
2013-02-01
An improved method of backward ray tracing is proposed according to the theory of geometrical optics and thermal radiation heat transfer. The accuracy is substantially improved compared with traditional backward ray tracing because ray orders and weight factors are taken into account and the process is designed as sequential, recurring steps to trace and calculate stray light of different orders. Meanwhile, it requires very little computation compared with forward ray tracing because irrelevant surfaces and rays are excluded from the tracing. The effectiveness was verified in the stray radiation analysis for a cryogenic infrared (IR) imaging system, as the results coincided with the actual stray radiation irradiance distributions in the real images. The computation amount was compared with that of forward ray tracing in the narcissus calculation for another cryogenic IR imaging system; it was found that, to produce a result of the same accuracy, the improved backward ray tracing requires less computation than forward ray tracing by at least two orders of magnitude.
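The order-and-weight bookkeeping described above can be sketched with a toy recursive backward trace over an abstract chain of surfaces (the scene representation and all numbers are invented for illustration; a real implementation traces geometric rays through the optical system):

```python
def traced_radiance(surface_id, scene, order, weight, w_min=1e-4):
    # scene[surface_id] = (emitted_radiance, reflectance, next_surface_or_None)
    # Each recursion level is one higher stray-light order; rays whose
    # cumulative weight factor falls below w_min are culled early.
    if order == 0 or weight < w_min:
        return 0.0
    emitted, reflectance, nxt = scene[surface_id]
    radiance = weight * emitted                      # contribution at this order
    if nxt is not None:                              # follow the ray one bounce back
        radiance += traced_radiance(nxt, scene, order - 1, weight * reflectance)
    return radiance

# Invented 3-surface chain: detector-facing baffle -> housing -> hot source.
scene = {0: (0.0, 0.5, 1), 1: (10.0, 0.2, 2), 2: (100.0, 0.0, None)}
total = traced_radiance(0, scene, 3, 1.0)            # ~15.0
```

Starting from the detector keeps the trace restricted to surfaces that can actually contribute, which is the source of the large computational saving over forward tracing.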
Radiation dosimetry and biophysical models of space radiation effects
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wu, Honglu; Shavers, Mark R.; George, Kerry
2003-01-01
Estimating the biological risks from space radiation remains a difficult problem because of the many radiation types including protons, heavy ions, and secondary neutrons, and the absence of epidemiology data for these radiation types. Developing useful biophysical parameters or models that relate energy deposition by space particles to the probabilities of biological outcomes is a complex problem. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra. In contrast to conventional dosimetric methods, models of radiation track structure provide descriptions of energy deposition events in biomolecules, cells, or tissues, which can be used to develop biophysical models of radiation risks. In this paper, we address the biophysical description of heavy particle tracks in the context of the interpretation of both space radiation dosimetry and radiobiology data, which may provide insights into new approaches to these problems.
Radiation dosimetry and biophysical models of space radiation effects.
Cucinotta, Francis A; Wu, Honglu; Shavers, Mark R; George, Kerry
2003-06-01
Estimating the biological risks from space radiation remains a difficult problem because of the many radiation types including protons, heavy ions, and secondary neutrons, and the absence of epidemiology data for these radiation types. Developing useful biophysical parameters or models that relate energy deposition by space particles to the probabilities of biological outcomes is a complex problem. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra. In contrast to conventional dosimetric methods, models of radiation track structure provide descriptions of energy deposition events in biomolecules, cells, or tissues, which can be used to develop biophysical models of radiation risks. In this paper, we address the biophysical description of heavy particle tracks in the context of the interpretation of both space radiation dosimetry and radiobiology data, which may provide insights into new approaches to these problems. PMID:12959127
EXAMPLES OF RADIATION SHIELDING MODELS
Willison, J
2006-07-27
The attached pictures are examples of shielding models used by WSMS. The models were used in shielding evaluations for Tank 50 pump replacement. They show the relative location of shielding to radiation sources for pumps and pipes. None of the calculations that were associated with these models involved UCNI. The last page contains two pictures from a shielding calculation for the saltstone area. The upper picture is a conceptual drawing. The lower picture is an image copied from the website of a supplier for the project.
Slot Region Radiation Environment Models
NASA Astrophysics Data System (ADS)
Sandberg, Ingmar; Daglis, Ioannis; Heynderickx, Daniel; Evans, Hugh; Nieminen, Petteri
2013-04-01
Herein we present the main characteristics and first results of the Slot Region Radiation Environment Models (SRREMs) project. The statistical models developed in SRREMs aim to address the variability of trapped electron and proton fluxes in the region between the inner and the outer electron radiation belt. The energetic charged particle fluxes in the slot region are highly dynamic and are known to vary by several orders of magnitude on both short and long timescales. During quiet times, the particle fluxes are much lower than those found at the peaks of the inner and outer belts, and the region is considered benign. During geospace magnetic storms, though, this region can fill with energetic particles as the peak of the outer belt is pushed Earthwards and the fluxes can increase drastically. There has been a renewed interest in the potential operation of commercial satellites in orbits that are at least partially contained within the slot region. Hence, there is a need to improve the current radiation belt models, most of which do not model the extreme variability of the slot region and instead provide long-term averages between the better-known low and medium Earth orbits (LEO and MEO). The statistical models developed in the SRREMs project are based on the analysis of a large volume of available data and on the construction of a virtual database of slot region particle fluxes. The analysis that we have followed retains the long-term temporal, spatial and spectral variations in electron and proton fluxes as well as the short-term enhancement events at altitudes and inclinations relevant for satellites in the slot region. A large number of datasets have been used for the construction, evaluation and inter-calibration of the SRREMs virtual dataset. Special emphasis has been given to the use and analysis of ESA Standard Radiation Environment Monitor (SREM) data from the units on-board PROBA-1, INTEGRAL, and GIOVE-B due to the sufficient spatial and long temporal
Models for infrared atmospheric radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.
1976-01-01
Line and band models for infrared spectral absorption are discussed. Radiative transmittance and integrated absorptance of Lorentz, Doppler, and Voigt line profiles were compared for a range of parameters. It was found that, for intermediate path lengths, the combined Lorentz-Doppler (Voigt) profile is essential in calculating the atmospheric transmittance. Narrow band model relations for absorptance were used to develop exact formulations for total absorption by four wide band models. Several continuous correlations for the absorption of a wide band model were compared with the numerical solutions of the wide band models. By employing the line-by-line and quasi-random band model formulations, computational procedures were developed for evaluating transmittance and upwelling atmospheric radiance. Homogeneous path transmittances were calculated for selected bands of CO, CO2, and N2O and compared with experimental measurements. The upwelling radiance and signal change in the wave number interval of the CO fundamental band were also calculated.
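The Voigt profile mentioned above is the convolution of a Doppler (Gaussian) and a Lorentz shape; a brute-force stdlib sketch (grid limits and line widths below are arbitrary illustrative values, and production codes use the Faddeeva function instead) makes the relationship concrete:

```python
import math

def lorentz(x, gamma):
    # Lorentz (pressure-broadened) line shape, half-width gamma
    return gamma / (math.pi * (x * x + gamma * gamma))

def doppler(x, alpha):
    # Doppler (Gaussian) line shape, 1/e half-width alpha
    return math.exp(-x * x / (alpha * alpha)) / (alpha * math.sqrt(math.pi))

def voigt(x, alpha, gamma, t_max=10.0, n=4001):
    # Voigt profile: direct numerical convolution of the two shapes above
    dt = 2.0 * t_max / (n - 1)
    return sum(doppler(t, alpha) * lorentz(x - t, gamma)
               for t in (-t_max + i * dt for i in range(n))) * dt

peak_ratio = voigt(0.0, 0.2, 0.5) / lorentz(0.0, 0.5)  # < 1: convolution lowers the peak
```

At low pressure the Doppler term dominates, at high pressure the Lorentz term does; the intermediate regime is exactly where the abstract finds the combined profile indispensable.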
How accurate is image guided radiation therapy (IGRT) delivered with a micro-irradiator?
Oldham, M; Newton, J; Rankine, L; Adamovics, J; Kirsch, D; Das, S
2013-01-01
There is significant interest in delivering precisely targeted small-volume radiation treatments, in the pre-clinical setting, to study dose-volume relationships with tumor control and normal tissue damage. In this work we investigate the IGRT targeting accuracy of the XRad225Cx system from Precision X-Ray using high resolution 3D dosimetry techniques. Initial results revealed a significant targeting error of about 2.4 mm. This error was reduced to within 0.5 mm after the IGRT hardware and software had been recalibrated. The facility for 3D dosimetry was essential to gain a comprehensive understanding of the targeting error in 3D. PMID:24454521
Space shuttle main engine plume radiation model
NASA Technical Reports Server (NTRS)
Reardon, J. E.; Lee, Y. C.
1978-01-01
The methods used to predict the thermal radiation received by the space shuttle from the plumes of its main engines are described. Radiation to representative surface locations was predicted using the NASA gaseous plume radiation (GASRAD) program. The plume model is used with the radiative view factor (RAVFAC) program to predict sea-level radiation at specified body points. The GASRAD program is described along with the predictions. The RAVFAC model is also discussed.
A Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
2000-01-01
This paper describes a radiative transfer model developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. We use a newly developed k-distribution model for both the thermal and solar parts of the spectrum. We employ a generalized two-stream approximation for the scattering by aerosol and clouds. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. We perform several calculations focussing primarily on the question of absorption of solar radiation by gases and aerosols. We estimate the accuracy of the k-distribution to be approx. 1 W/sq m for the gaseous absorption in the solar spectrum. We estimate the accuracy of the two-stream method to be 3-12 W/sq m for the downward solar flux and 1-5 W/sq m for the upward solar flux at the top of atmosphere depending on the optical depth of the aerosol layer. We also show that the effect of ignoring aerosol absorption on the downward solar flux at the surface is 50 W/sq m for the TARFOX aerosol for an optical depth of 0.5 and 150 W/sq m for a highly absorbing mineral aerosol. Thus, we conclude that the uncertainty introduced by the aerosol solar radiative properties (and merely assuming some "representative" model) can be considerably larger than the error introduced by the use of a two-stream method.
Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle
2016-04-01
In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap up to 10 nm difference (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.
Accurate mask model implementation in OPC model for 14nm nodes and beyond
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle
2015-10-01
In a previous work [1], we demonstrated that the current OPC model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. Indeed, as depicted in figure 1, an extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason, an accurate mask model has been calibrated for a 14nm logic gate level. A model with a total RMS of 1.38nm at mask level was obtained. 2D structures such as line-end shortening and corner rounding were well predicted using SEM pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular, as depicted in figure 2.
NASA Astrophysics Data System (ADS)
Skowron, A.; Lee, D. S.; De León, R. R.
2013-08-01
Aviation emissions of NOx result in the formation of tropospheric ozone (warming) and the destruction of a small amount of methane (cooling), positive and negative radiative forcing effects. In addition, the reduction of methane results in a small long-term reduction in tropospheric ozone (cooling) and a long-term reduction in stratospheric water vapour (cooling) from reduced oxidation of methane, both negative radiative forcing impacts. Taking all these radiative effects together, aircraft NOx is still thought to result in a positive (warming) radiative effect under constant emissions assumptions. Previously, comparative modelling studies have focussed on the variability between models, using the same emissions database. In this study, we instead quantify the variability and uncertainty arising from different estimations of present-day aircraft NOx emissions. Six different aircraft NOx emissions inventories were used in the global chemical transport model MOZART v3. The inventories were normalized to give the same global emission of NOx in order to remove one element of uncertainty. Emissions differed in the normalized cases by 23% at cruise altitudes (283-200 hPa, where the bulk of emission occurs globally). However, the resultant short-term ozone chemical perturbation varied by 15% between the different inventories. Once all the effects that give rise to positive and negative radiative impacts were accounted for, the variability of net radiative forcing impacts was 94%. Using these radiative effects to formulate a net aviation NOx Global Warming Potential (GWP) for a 100-year time horizon resulted in GWPs ranging from 60 to 4, over an order of magnitude. It is concluded that the detailed placement of emissions at chemically sensitive cruise altitudes strongly affects the assessment of the total radiative impact, introducing a hitherto unidentified large fraction of the uncertainty of impacts between different modelling assessments.
MONA: An accurate two-phase well flow model based on phase slippage
Asheim, H.
1984-10-01
In two-phase flow, holdup and pressure loss are related to interfacial slippage. A model based on the slippage concept has been developed and tested using production well data from Forties and the Ekofisk area, and flowline data from Prudhoe Bay. The resulting model proved considerably more accurate than the standard models used for comparison.
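The abstract does not give MONA's actual slip correlation, so the sketch below uses a generic drift-flux (Zuber-Findlay) slip relation, with assumed coefficients, to illustrate how interfacial slippage translates into liquid holdup:

```python
def liquid_holdup(v_sg, v_sl, c0=1.2, v_drift=0.35):
    """Liquid holdup from a drift-flux slip relation (Zuber-Findlay form):
    actual gas velocity u_g = c0 * (v_sg + v_sl) + v_drift, so the gas void
    fraction is v_sg / u_g and the liquid holdup is its complement.
    c0 and v_drift (m/s) are illustrative values, not MONA's parameters."""
    u_m = v_sg + v_sl              # mixture superficial velocity
    u_g = c0 * u_m + v_drift       # gas-phase velocity with slippage
    void = v_sg / u_g              # gas void fraction
    return 1.0 - void
```

With slippage the gas travels faster than the mixture, which lowers the void fraction and raises the liquid holdup relative to the no-slip (homogeneous) case.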
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
NASA Astrophysics Data System (ADS)
Murphy, Kyle R.; Mann, Ian R.; Rae, I. Jonathan; Sibeck, David G.; Watt, Clare E. J.
2016-08-01
Wave-particle interactions play a crucial role in energetic particle dynamics in the Earth's radiation belts. However, the relative importance of different wave modes in these dynamics is poorly understood. Typically, it is assessed during geomagnetic storms in advanced radiation belt simulations using empirical wave models statistically averaged as a function of geomagnetic activity. However, statistical averages poorly characterize extreme events such as geomagnetic storms, in that storm-time ultralow frequency wave power is typically larger than that derived over a solar cycle, and Kp is a poor proxy for storm-time wave power.
Collins, William; Iacono, Michael J.; Delamere, Jennifer S.; Mlawer, Eli J.; Shephard, Mark W.; Clough, Shepard A.; Collins, William D.
2008-04-01
A primary component of the observed, recent climate change is the radiative forcing from increased concentrations of long-lived greenhouse gases (LLGHGs). Effective simulation of anthropogenic climate change by general circulation models (GCMs) is strongly dependent on the accurate representation of radiative processes associated with water vapor, ozone and LLGHGs. In the context of the increasing application of the Atmospheric and Environmental Research, Inc. (AER) radiation models within the GCM community, their capability to calculate longwave and shortwave radiative forcing for clear sky scenarios previously examined by the radiative transfer model intercomparison project (RTMIP) is presented. Forcing calculations with the AER line-by-line (LBL) models are very consistent with the RTMIP line-by-line results in the longwave and shortwave. The AER broadband models, in all but one case, calculate longwave forcings within a range of -0.20 to 0.23 W m^-2 of LBL calculations and shortwave forcings within a range of -0.16 to 0.38 W m^-2 of LBL results. These models also perform well at the surface, which RTMIP identified as a level at which GCM radiation models have particular difficulty reproducing LBL fluxes. Heating profile perturbations calculated by the broadband models generally reproduce high-resolution calculations within a few hundredths K d^-1 in the troposphere and within 0.15 K d^-1 in the peak stratospheric heating near 1 hPa. In most cases, the AER broadband models provide radiative forcing results that are in closer agreement with high-resolution calculations than the GCM radiation codes examined by RTMIP, which supports the application of the AER models to climate change research.
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-01
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
Getting a Picture that Is Both Accurate and Stable: Situation Models and Epistemic Validation
ERIC Educational Resources Information Center
Schroeder, Sascha; Richter, Tobias; Hoever, Inga
2008-01-01
Text comprehension entails the construction of a situation model that prepares individuals for situated action. In order to meet this function, situation model representations are required to be both accurate and stable. We propose a framework according to which comprehenders rely on epistemic validation to prevent inaccurate information from…
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean
NASA Astrophysics Data System (ADS)
Phalippou, L.; Demeestere, F.
2011-12-01
The SAR mode of SIRAL-2 on board CryoSat-2 has been designed to measure primarily sea ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over ocean, with a focus on the forward modelling of the power waveforms. The accuracies of geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximation, which helps minimise geophysically dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique was not used in the field of conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern, range impulse response, azimuth impulse response
Estimating solar radiation for plant simulation models
Hodges, T.; French, V.; Leduc, S.
1985-01-01
Five algorithms producing daily solar radiation surrogates from daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of the accuracy of daily solar radiation estimates and in terms of response when used in a plant growth simulation model (CERES-Wheat). Requirements for the accuracy of solar radiation in plant growth simulation models are discussed. One algorithm is recommended as best suited for use in these models when neither measured nor satellite-estimated solar radiation values are available.
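The five algorithms are not named in the abstract. A widely used temperature-based surrogate of the same family is the Hargreaves-Samani form, sketched here with an assumed empirical coefficient (this is an illustration of the approach, not one of the paper's algorithms):

```python
import math

def solar_radiation_estimate(tmax_c, tmin_c, ra_mj, k_rs=0.17):
    """Hargreaves-Samani-style surrogate: daily solar radiation (MJ m-2 day-1)
    from the diurnal temperature range and extraterrestrial radiation ra_mj.
    k_rs is an assumed empirical coefficient (typical interior-location value)."""
    dtr = max(tmax_c - tmin_c, 0.0)     # diurnal temperature range, deg C
    return k_rs * math.sqrt(dtr) * ra_mj
```

The idea behind such surrogates is that clear days (large temperature range) receive more radiation than overcast days, so the range acts as a proxy for atmospheric transmissivity.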
Ultraviolet radiation therapy and UVR dose models
Grimes, David Robert
2015-01-15
Ultraviolet radiation (UVR) has been an effective treatment for a number of chronic skin disorders, and its ability to alleviate these conditions has been well documented. Although nonionizing, exposure to ultraviolet (UV) radiation is still damaging to deoxyribonucleic acid integrity, and has a number of unpleasant side effects ranging from erythema (sunburn) to carcinogenesis. As the conditions treated with this therapy tend to be chronic, exposures are repeated and can be high, increasing the lifetime probability of an adverse event or mutagenic effect. Despite the potential detrimental effects, quantitative ultraviolet dosimetry for phototherapy is an underdeveloped area and better dosimetry would allow clinicians to maximize biological effect whilst minimizing the repercussions of overexposure. This review gives a history and insight into the current state of UVR phototherapy, including an overview of biological effects of UVR, a discussion of UVR production, illness treated by this modality, cabin design and the clinical implementation of phototherapy, as well as clinical dose estimation techniques. Several dose models for ultraviolet phototherapy are also examined, and the need for an accurate computational dose estimation method in ultraviolet phototherapy is discussed.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
High fidelity chemistry and radiation modeling for oxy-combustion scenarios
NASA Astrophysics Data System (ADS)
Abdul Sater, Hassan A.
To account for the thermal and chemical effects associated with the high CO2 concentrations in an oxy-combustion atmosphere, several refined gas-phase chemistry and radiative property models have been formulated for laminar to highly turbulent systems. This thesis examines the accuracy of several chemistry and radiative property models employed in computational fluid dynamic (CFD) simulations of laminar to transitional oxy-methane diffusion flames by comparing their predictions against experimental data. The literature on chemistry and radiation modeling in oxy-combustion atmospheres has considered turbulent systems, where predictions are impacted by the interplay and accuracies of the turbulence, radiation, and chemistry models; by considering a laminar system, we minimize the impact of turbulence and the uncertainties associated with turbulence models. In the first section of this thesis, an assessment and validation of gray and non-gray formulations of a recently proposed weighted-sum-of-gray-gases model in oxy-combustion scenarios was undertaken. Predictions of gas and wall temperatures and of flame lengths were in good agreement with experimental measurements. The temperature and flame length predictions were not sensitive to the radiative property model employed. However, there were significant variations between the gray and non-gray model radiant fraction predictions, with the variations in general increasing with decreasing Reynolds number, possibly attributable to shorter flames and steeper temperature gradients. The results of this section confirm that non-gray model predictions of radiative heat fluxes are more accurate than gray model predictions, especially at steeper temperature gradients. In the second section, the accuracies of three gas-phase chemistry models were assessed by comparing their predictions against experimental measurements of temperature, species concentrations, and flame lengths. The chemistry was modeled employing the Eddy
Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young
2015-07-01
This article presents an accurate finite-difference time domain (FDTD) dispersive modelling method suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersion relations. To obtain accurate QCRF coefficients, we use an analytical approach and particle swarm optimisation (PSO) simultaneously. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples illustrate the validity of the proposed FDTD dispersion model.
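As a hedged illustration of the QCRF idea (the paper's PSO-weighted scheme is not reproduced here), the sketch below fits QCRF coefficients to a synthetic first-order Debye permittivity with a plain complex least-squares solve; all material values are invented for the example:

```python
import numpy as np

# Sample permittivity: a first-order Debye medium, which the QCRF form
#   eps(s) = (a0 + a1*s + a2*s^2) / (1 + b1*s + b2*s^2),  s = j*w/w0,
# can represent exactly. eps_inf, d_eps, and tau are illustrative values.
eps_inf, d_eps, tau = 2.0, 3.0, 1e-9
w0 = 1.0 / tau                                  # frequency scaling for conditioning
w = 2.0 * np.pi * np.logspace(6, 10, 200)
s = 1j * w / w0                                 # normalized Laplace variable
eps = eps_inf + d_eps / (1.0 + 1j * w * tau)

# eps * (1 + b1*s + b2*s^2) = a0 + a1*s + a2*s^2 is linear in the unknowns,
# so the coefficients follow from a single complex least-squares solve.
A = np.column_stack([np.ones_like(s), s, s**2, -eps * s, -eps * s**2])
coef = np.linalg.lstsq(A, eps, rcond=1e-8)[0]   # rcond trims the near-null direction
a0, a1, a2, b1, b2 = coef
eps_fit = (a0 + a1 * s + a2 * s**2) / (1.0 + b1 * s + b2 * s**2)
max_rel_err = float(np.max(np.abs(eps_fit - eps) / np.abs(eps)))
```

Frequency normalization (dividing by w0) keeps the design matrix well conditioned; in the paper this role is played by the PSO-adjusted weighting function.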
Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers
NASA Astrophysics Data System (ADS)
Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas
2016-10-01
A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired using the developed model with experimental results shows good agreement.
Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers
Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas
2016-01-01
A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired using the developed model with experimental results shows good agreement. PMID:27713496
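The full model couples steady-state rate equations with the generalized nonlinear Schrödinger equation. The sketch below shows only the standard split-step Fourier treatment of the dimensionless, gain-free NLSE core, checked against a fundamental soliton; grid and step sizes are arbitrary choices:

```python
import numpy as np

# Split-step Fourier integration of the dimensionless NLSE
#   du/dz = (i/2) d^2u/dt^2 + i |u|^2 u
# A fundamental soliton u(0, t) = sech(t) should propagate with its
# envelope |u| unchanged (only the phase evolves).
nt, t_max, dz, nsteps = 1024, 20.0, 1e-3, 1000
t = np.linspace(-t_max, t_max, nt, endpoint=False)
w = 2.0 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])
u = 1.0 / np.cosh(t)

lin = np.exp(-0.5j * w**2 * dz)                 # exact dispersion step in Fourier space
for _ in range(nsteps):
    u = np.fft.ifft(lin * np.fft.fft(u))        # linear (dispersion) half of the step
    u = u * np.exp(1j * np.abs(u)**2 * dz)      # Kerr nonlinearity over dz
peak = float(np.max(np.abs(u)))                 # should stay near 1.0
```

A real amplifier model would additionally apply a spatially and spectrally resolved gain factor per step, obtained from the rate equations.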
Treatment of Solar and Thermal Radiation in Global Climate Models
NASA Astrophysics Data System (ADS)
Lacis, A. A.; Oinas, V.
2015-12-01
It is the interaction of solar and thermal radiation with the climate system constituents that determines the prevailing climate on Earth. The principal radiative constituents of the climate system are clouds, aerosols, greenhouse gases, and the ground surface. Accurate rendering of their interaction with the incident solar radiation and the outgoing thermal radiation is required if a climate model is to be capable of simulating and predicting the complex changes that take place in the terrestrial climate system. In the GISS climate model, these radiative tasks are accomplished with a GCM radiation model that utilizes the correlated k-distribution treatment that closely matches Line-by-Line accuracy (Lacis and Oinas, 1991) for the gaseous absorbers, and an adaptation of the doubling/adding method (Lacis and Hansen, 1974) to compute multiple scattering by clouds and aerosols. The radiative parameters used to model the spectral dependence of solar and longwave radiation (UV to microwave) are derived from Mie scattering and T-matrix calculations covering the broad range of particle sizes and compositions encountered in the climate system. Cloud treatment also incorporates an empirical representation of sub-grid inhomogeneity and space-time variability of cloud optical properties (derived from ISCCP data) that utilizes a Monte Carlo-based re-scaling parameterization of the cloud plane-parallel radiative parameters (Cairns et al, 2001). The longwave calculations compute correlated k-distribution radiances at three quadrature points (without scattering), and include the effects of cloud scattering in parameterized form for the outgoing and downwelling LW fluxes. For hygroscopic aerosols (e.g., sulfates, nitrates, sea salt), the effects of changing relative humidity on particle size and refractive index are explicitly taken into account. In this way, the GISS GCM radiation model calculates the SW and LW radiative fluxes, and the corresponding radiative heating and cooling rates in
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particles vary in size from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult: aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment called the exponential-sum, or k-distribution, approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_nu over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
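A minimal sketch of the exponential-sum (k-distribution) idea, with an invented line spectrum: sorting the absorption coefficient into its cumulative distribution g(k) lets a coarse quadrature in g reproduce the band-averaged line-by-line transmittance.

```python
import numpy as np

# Synthetic absorption-coefficient spectrum: a few Lorentz lines across a band
# (centers, strengths, and widths are arbitrary illustrative values).
nu = np.linspace(0.0, 10.0, 20001)                 # wavenumber grid
k = np.zeros_like(nu)
for center, strength in [(2.0, 1.0), (5.0, 3.0), (7.5, 0.5)]:
    k += strength * (0.1 / np.pi) / ((nu - center)**2 + 0.1**2)

u = 2.0                                            # absorber amount along the path
t_lbl = float(np.mean(np.exp(-k * u)))             # line-by-line band transmittance

# Exponential sum: sort k into its cumulative distribution g(k), then
# evaluate the band transmittance with a coarse midpoint quadrature in g.
k_sorted = np.sort(k)
g = (np.arange(k.size) + 0.5) / k.size
edges = np.linspace(0.0, 1.0, 33)
g_mid = 0.5 * (edges[:-1] + edges[1:])             # 32 quadrature nodes in g
k_quad = np.interp(g_mid, g, k_sorted)
t_esum = float(np.mean(np.exp(-k_quad * u)))       # close to t_lbl at a fraction of the cost
```

Because k(g) is a smooth, monotone rearrangement of the jagged spectrum, a few dozen "gray" terms replace tens of thousands of spectral points, which is exactly what makes the approach attractive for dusty multiple-scattering calculations.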
Built-in templates speed up process for making accurate models
NASA Technical Reports Server (NTRS)
1964-01-01
From accurate scale drawings of a model, photographic negatives of the cross sections are printed on thin sheets of aluminum. These cross-section images are cut out and mounted, and mahogany blocks placed between them. The wood can be worked down using the aluminum as a built-in template.
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight reflected by the illuminated Earth surface in the visible and on the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
A Computational Model of Cellular Response to Modulated Radiation Fields
McMahon, Stephen J.; Butterworth, Karl T.; McGarry, Conor K.; Trainor, Colman; O'Sullivan, Joe M.; Hounsell, Alan R.; Prise, Kevin M.
2012-09-01
Purpose: To develop a model to describe the response of cell populations to spatially modulated radiation exposures of relevance to advanced radiotherapies. Materials and Methods: A Monte Carlo model of cellular radiation response was developed. This model incorporated damage from both direct radiation and intercellular communication including bystander signaling. The predictions of this model were compared to previously measured survival curves for a normal human fibroblast line (AGO1522) and prostate tumor cells (DU145) exposed to spatially modulated fields. Results: The model was found to be able to accurately reproduce cell survival both in populations which were directly exposed to radiation and those which were outside the primary treatment field. The model predicts that the bystander effect makes a significant contribution to cell killing even in uniformly irradiated cells. The bystander effect contribution varies strongly with dose, falling from a high of 80% at low doses to 25% and 50% at 4 Gy for AGO1522 and DU145 cells, respectively. This was verified using the inducible nitric oxide synthase inhibitor aminoguanidine to inhibit the bystander effect in cells exposed to different doses, which showed significantly larger reductions in cell killing at lower doses. Conclusions: The model presented in this work accurately reproduces cell survival following modulated radiation exposures, both in and out of the primary treatment field, by incorporating a bystander component. In addition, the model suggests that the bystander effect is responsible for a significant portion of cell killing in uniformly irradiated cells, 50% and 70% at doses of 2 Gy in AGO1522 and DU145 cells, respectively. This description is a significant departure from accepted radiobiological models and may have a significant impact on optimization of treatment planning approaches if proven to be applicable in vivo.
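The paper's Monte Carlo formulation is not reproduced in the abstract. The sketch below is a toy direct-plus-bystander survival model with invented linear-quadratic and signaling parameters, intended only to illustrate how a bystander term can be folded into population survival under a modulated dose map:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (hypothetical) parameters: linear-quadratic direct killing plus
# a saturable bystander component. None of these values come from the paper.
ALPHA, BETA = 0.15, 0.05            # Gy^-1 and Gy^-2, assumed
BYSTANDER_MAX, D_HALF = 0.30, 0.5   # max bystander kill probability; half-saturation dose (Gy)

def surviving_fraction(dose_map, n_cells=20000):
    """Monte Carlo survival under a (possibly spatially modulated) dose map:
    each cell draws a dose from the map, survives the direct LQ insult, and
    then survives a bystander insult driven by the population-mean signal."""
    doses = rng.choice(dose_map, size=n_cells)
    p_direct = np.exp(-ALPHA * doses - BETA * doses**2)
    signal = float(np.mean(dose_map))                   # crude population-level signal
    p_bystander = 1.0 - BYSTANDER_MAX * signal / (D_HALF + signal)
    survived = rng.random(n_cells) < p_direct * p_bystander
    return float(survived.mean())

sf_uniform = surviving_fraction(np.full(100, 2.0))                        # uniform 2 Gy field
sf_modulated = surviving_fraction(np.r_[np.zeros(50), np.full(50, 4.0)])  # half-shielded field
```

Note how the bystander term kills cells even where the local dose is zero, which is the qualitative behavior the paper reports for out-of-field cell populations.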
Predictive models of radiative neutrino masses
NASA Astrophysics Data System (ADS)
Julio, J.
2016-06-01
We discuss two models of radiative neutrino mass generation. The first model features one-loop Zee model with Z4 symmetry. The second model is the two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.
Radiation dose modeling using IGRIP and Deneb/ERGO
Vickers, D.S.; Davis, K.R.; Breazeal, N.L.; Watson, R.A.; Ford, M.S.
1995-12-31
The Radiological Environment Modeling System (REMS) quantifies dose to humans in radiation environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO (Ergonomics) simulation software products. These commercially available products are augmented with custom C code to provide the radiation exposure information to and collect the radiation dose information from the workcell simulations. The emphasis of this paper is on the IGRIP and Deneb/ERGO parts of REMS, since these represent the extension to existing capabilities developed by the authors. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these database files to compute and accumulate dose to human devices (Deneb's ERGO human) during simulated operations around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. REMS was developed because the proposed reduction in the yearly radiation exposure limit will preclude or require changes in many of the manual operations currently being utilized in the Weapons Complex. This is particularly relevant in the area of dismantlement activities at the Pantex Plant in Amarillo, TX. Therefore, a capability was needed to quantify the dose associated with certain manual processes so that the benefits of automation could be identified and understood.
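As a hedged sketch of the dose-accumulation idea (REMS itself drives it from IGRIP workcell simulations and a transport-code database), the snippet below accumulates dose along a sequence of waypoints near an unshielded point source using the inverse-square law; the function name and constants are illustrative:

```python
import math

def accumulated_dose(waypoints, source_xyz, gamma_const, dwell_s):
    """Total dose along a worker's path near an unshielded point source,
    using the inverse-square law: dose rate = gamma_const / r^2.
    gamma_const lumps source activity and the dose-rate constant."""
    total = 0.0
    sx, sy, sz = source_xyz
    for x, y, z in waypoints:
        r2 = (x - sx)**2 + (y - sy)**2 + (z - sz)**2
        total += gamma_const / r2 * dwell_s   # dose accrued while dwelling here
    return total
```

Doubling the stand-off distance quarters the accumulated dose, which is the kind of timing-and-distance trade-off the REMS simulations quantify (with shielding and realistic source terms added).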
Near-Earth Space Radiation Models
NASA Technical Reports Server (NTRS)
Xapsos, Michael A.; O'Neill, Patrick M.; O'Brien, T. Paul
2012-01-01
A review of models of the near-Earth space radiation environment is presented, including recent developments in trapped proton and electron, galactic cosmic ray, and solar particle event models geared toward spacecraft electronics applications.
Development of modified cable models to simulate accurate neuronal active behaviors.
Elbasiouny, Sherif M
2014-12-01
In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted.
NASA Astrophysics Data System (ADS)
Lazzeroni, Marta; Brahme, Anders
2015-09-01
In the present study we develop a new technique for the production of clean quasi-monochromatic 11C positron emitter beams for accurate radiation therapy and PET-CT dose delivery imaging and treatment verification. The 11C ion beam is produced by projectile fragmentation using a primary 12C ion beam. The practical elimination of the energy spread of the secondary 11C fragments and other beam contaminating fragments is described. Monte Carlo calculation with the SHIELD-HIT10+ code and analytical methods for the transport of the ions in matter are used in the analysis. Production yields, as well as energy, velocity and magnetic rigidity distributions of the fragments generated in a cylindrical target are scored as a function of the depth within 1 cm thick slices for an optimal target consisting of a fixed 20 cm section of liquid hydrogen followed by a variable thickness section of polyethylene. The wide energy and magnetic rigidity spread of the 11C ion beam can be reduced to values around 1% by using a variable monochromatizing wedge-shaped degrader in the beam line. Finally, magnetic rigidity and particle species selection, as well as discrimination of the particle velocity through a combined Time of Flight and Radio Frequency-driven Velocity filter purify the beam from similar magnetic rigidity contaminating fragments (mainly 7Be and 3He fragments). A beam purity of about 99% is expected by the combined method.
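The magnetic-rigidity selection mentioned above rests on the standard relation Bρ = pc/(qc); a fully stripped ion's rigidity follows directly from its mass number, charge, and kinetic energy per nucleon. The sketch below is illustrative only — the 200 MeV/u energy is an assumed example value, not a figure from the paper.

```python
import math

U_MEV = 931.494  # atomic mass unit, MeV/c^2

def rigidity_tm(a_nucleons, z_charge, kinetic_mev_per_u):
    """Magnetic rigidity (T*m) of a fully stripped ion:
    B*rho = pc[MeV] / (299.792 * Z), with pc from the relativistic
    energy-momentum relation per nucleon."""
    e_total = U_MEV + kinetic_mev_per_u           # total energy per nucleon
    pc_per_u = math.sqrt(e_total ** 2 - U_MEV ** 2)  # momentum per nucleon
    return a_nucleons * pc_per_u / (299.792 * z_charge)

print(rigidity_tm(11, 6, 200.0))  # 11C at an assumed 200 MeV/u
```

Fragments such as 7Be or 3He at different velocities can land near the same rigidity, which is why the paper adds velocity discrimination on top of magnetic selection.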
Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments
Kuruganti, Phani Teja
2007-01-01
As network-centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper, an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban-area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.
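For contrast with the site-specific simulation the abstract advocates, the simplest empirical baseline is free-space (Friis) path loss, which depends only on distance and frequency and ignores clutter entirely — exactly the limitation being addressed. A minimal sketch:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis) in dB: 20*log10(4*pi*d*f/c).
    A clutter-blind empirical baseline, not a site-specific prediction."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

print(fspl_db(100.0, 2.4e9))  # ~80 dB at 100 m, 2.4 GHz
```

Every doubling of distance adds about 6 dB here regardless of terrain, which is why such models fail in highly cluttered environments.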
NASA Astrophysics Data System (ADS)
Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent
2013-11-01
The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.
Bai, O; Nakamura, M; Kanda, M; Nagamine, T; Shibasaki, H
2001-11-01
This study introduces a method for accurate identification of the waveform of the evoked potentials by decomposing the component responses. The decomposition was achieved by zero-pole modeling of the evoked potentials in the discrete cosine transform (DCT) domain. It was found that the DCT coefficients of a component response in the evoked potentials could be modeled sufficiently by a second order transfer function in the DCT domain. The decomposition of the component responses was approached by using partial expansion of the estimated model for the evoked potentials, and the effectiveness of the decomposition method was evaluated both qualitatively and quantitatively. Because of the overlap of the different component responses, the proposed method enables an accurate identification of the evoked potentials, which is useful for clinical and neurophysiological investigations.
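The decomposition above operates on the DCT coefficients of the recorded potential. The transform step itself (not the authors' zero-pole estimator) can be written explicitly; the sketch below implements an unnormalized DCT-II and its inverse from the defining cosine sums.

```python
import numpy as np

def dct2(x):
    """Unnormalized DCT-II: X_k = sum_n x_n * cos(pi*(n+0.5)*k/N)."""
    n_pts = len(x)
    n = np.arange(n_pts)
    k = n[:, None]
    return np.cos(np.pi * (n + 0.5) * k / n_pts) @ x

def idct2(coeffs):
    """Matching inverse (a scaled DCT-III):
    x_n = X_0/N + (2/N) * sum_{k>=1} X_k * cos(pi*(n+0.5)*k/N)."""
    n_pts = len(coeffs)
    n = np.arange(n_pts)
    k = np.arange(1, n_pts)
    basis = np.cos(np.pi * (n[:, None] + 0.5) * k[None, :] / n_pts)
    return coeffs[0] / n_pts + (2.0 / n_pts) * basis @ coeffs[1:]
```

In the paper's approach, each component response corresponds to a second-order rational model fitted in this coefficient domain; partial-fraction expansion of that model then separates the overlapping components.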
A rapid radiative transfer model for reflection of solar radiation
NASA Technical Reports Server (NTRS)
Xiang, X.; Smith, E. A.; Justus, C. G.
1994-01-01
A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands times faster than that of the more precise model when its stream resolution is set to generate precise calculations.
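The delta function transformation the abstract refers to is, in its standard delta-Eddington form, a similarity scaling of the optical properties that moves the forward-scattering peak into the direct beam. A sketch of that scaling (with the usual forward-peak fraction f = g², after Joseph, Wiscombe & Weinman 1976 — the paper's exact variant may differ):

```python
def delta_eddington_scale(tau, omega, g):
    """Delta-Eddington similarity scaling of optical depth (tau),
    single-scattering albedo (omega) and asymmetry parameter (g),
    using forward-peak fraction f = g**2."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = g / (1.0 + g)
    return tau_s, omega_s, g_s
```

The scaled medium is less anisotropic and optically thinner, which is what lets low-order (two-stream/Eddington) solutions reproduce irradiances accurately.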
Method for modeling radiative transport in luminescent particulate media.
Hughes, Michael D; Borca-Tasciuc, Diana-Andra; Kaminski, Deborah A
2016-04-20
Modeling radiative transport in luminescent particulate media is important to a variety of applications, from biomedical imaging to solar power harvesting. When absorption and scattering from individual particles must be considered, the description of radiative transport is not straightforward. For large particles and interparticle spacing, geometrical optics can be employed. However, this approach requires accurate knowledge of several particle properties, such as index of refraction and absorption coefficient, along with particle geometry and positioning. Because the determination of these variables is often nontrivial, we developed an approach for modeling radiative transport in such media, which combines two simple experiments with Monte Carlo simulations to determine the particle extinction coefficient (Γ) and the probability of absorption of light by a particle (P_A). The method is validated on samples consisting of luminescent phosphor powder dispersed in a silicone matrix. PMID:27140095
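The two parameters Γ and P_A are exactly what a photon-tracking Monte Carlo needs: exponential free paths set by Γ, and an absorption test with probability P_A at each interaction. A deliberately simplified sketch (scattering is taken as purely forward here, which is not the paper's full model, so the analytic answer exp(-Γ·P_A·L) is recoverable):

```python
import math, random

def transmitted_fraction(gamma, p_abs, length, n_photons=20000, seed=42):
    """Toy Monte Carlo: fraction of photons crossing a slab of the given
    thickness, with exponential free paths (extinction coefficient gamma)
    and absorption probability p_abs per interaction."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_photons):
        z = 0.0
        while True:
            z += -math.log(1.0 - rng.random()) / gamma  # exponential free path
            if z >= length:
                survived += 1
                break
            if rng.random() < p_abs:
                break  # absorbed (in a phosphor, possibly re-emitted)
            # else: scattered; this toy keeps the direction unchanged
    return survived / n_photons
```

With isotropic or Henyey-Greenstein scattering directions added, the same loop becomes a full 3-D transport simulation of the kind the paper calibrates against experiment.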
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other. PMID:19229307
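The central difficulty the paper quantifies — noise in the integrated velocity signal accumulating diffusively — can be seen in a toy 1-D version: with independent Gaussian noise on each velocity sample, the RMS position error grows as the square root of elapsed time. This sketch is an illustration of that scaling, not the paper's network model.

```python
import math, random

def rms_position_error(n_steps, noise_sd, n_trials=4000, seed=1):
    """RMS error of a dead-reckoned 1-D position when every integrated
    velocity sample carries independent Gaussian noise of SD noise_sd."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        err = sum(rng.gauss(0.0, noise_sd) for _ in range(n_steps))
        total += err * err
    return math.sqrt(total / n_trials)
```

Quadrupling the integration time should roughly double the RMS error; in the attractor network, larger network size and lower intrinsic noise push the usable integration window toward the 10-100 m / 1-10 min bounds quoted above.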
Highly physical penumbra solar radiation pressure modeling with atmospheric effects
NASA Astrophysics Data System (ADS)
Robertson, Robert; Flury, Jakob; Bandikova, Tamara; Schilling, Manuel
2015-10-01
We present a new method for highly physical solar radiation pressure (SRP) modeling in Earth's penumbra. The fundamental geometry and approach mirrors past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. However, we aim to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects are tabulated to significantly reduce computational cost. We present new, more efficient and accurate approaches to modeling atmospheric effects which allow us to consider the high spatial and temporal variability in lower atmospheric conditions. Modeled penumbra SRP accelerations for the Gravity Recovery and Climate Experiment (GRACE) satellites are compared to the sub-nm/s2 precision GRACE accelerometer data. Comparisons to accelerometer data and a traditional penumbra SRP model illustrate the improved accuracy which our methods provide. Sensitivity analyses illustrate the significance of various atmospheric parameters and modeled effects on penumbra SRP. While this model is more complex than a traditional penumbra SRP model, we demonstrate its utility and propose that a highly physical model which considers atmospheric effects should be the basis for any simplified approach to penumbra SRP modeling.
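For scale, the full-sun SRP acceleration on a GRACE-class satellite follows from the cannonball relation a = ν·C_R·(A/m)·Φ/c, where ν is the shadow factor the penumbra model computes ray-by-ray. The area, mass, and C_R values below are assumed typical numbers, not values from the paper.

```python
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU (assumed)
C_LIGHT = 299_792_458.0  # m/s

def srp_accel(area_m2, mass_kg, c_r=1.3, shadow=1.0):
    """Cannonball SRP acceleration a = nu * C_R * (A/m) * Phi / c,
    where nu in [0, 1] is the penumbra shadow factor."""
    return shadow * c_r * (area_m2 / mass_kg) * SOLAR_FLUX / C_LIGHT

# GRACE-like values (assumed): ~1 m^2 cross-section, ~487 kg
a_full_sun = srp_accel(1.0, 487.0)
```

The result is on the order of 10 nm/s², so resolving how ν ramps from 1 to 0 through the penumbra — including atmospheric refraction and extinction — is well within reach of the sub-nm/s² GRACE accelerometers used for validation.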
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to predict tree budburst and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperatures results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
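A one-phase model of the kind described reduces to a thermal-time (growing-degree-day) rule: budburst occurs on the first day the accumulated forcing exceeds a critical value. A minimal sketch, with the base temperature and threshold as assumed example parameters:

```python
def budburst_day(daily_mean_temp, t_base=5.0, f_crit=100.0):
    """One-phase (thermal-time) phenology model: budburst on the first
    day the cumulative forcing sum of max(T - t_base, 0) reaches f_crit."""
    forcing = 0.0
    for day, temp in enumerate(daily_mean_temp, start=1):
        forcing += max(temp - t_base, 0.0)
        if forcing >= f_crit:
            return day
    return None  # threshold never reached
```

A two-phase model prepends an analogous chilling-unit accumulation that must complete (endodormancy break) before this forcing sum is allowed to start — which is exactly the stage warming winters can compromise.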
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z
2016-09-01
The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models.
Accurate and efficient halo-based galaxy clustering modelling with simulations
NASA Astrophysics Data System (ADS)
Zheng, Zheng; Guo, Hong
2016-06-01
Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
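At the heart of any 2PCF model prediction is a pair-separation histogram over the tabulated (or mock) galaxy positions. The brute-force sketch below shows only that DD counting step — the simplest ingredient of an estimator such as Peebles-Hauser DD/RR − 1 — not the paper's tabulation scheme, and it scales as O(N²), whereas production codes use trees or grids.

```python
import itertools, math

def pair_counts(points, bin_edges):
    """Histogram of pairwise separations (the DD term of a two-point
    correlation function estimator), by brute-force enumeration."""
    counts = [0] * (len(bin_edges) - 1)
    for p, q in itertools.combinations(points, 2):
        r = math.dist(p, q)
        for i in range(len(counts)):
            if bin_edges[i] <= r < bin_edges[i + 1]:
                counts[i] += 1
                break
    return counts
```

The paper's speedup comes from precomputing everything about the halo pairs once, so that varying the occupation parameters only reweights these counts instead of repopulating and recounting.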
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
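The "well-established beam theory" baseline the abstract contrasts against is the Euler-Bernoulli result k = E·w·t³/(4L³) for a rectangular cantilever loaded at its tip; the plate-theory model corrects this for three-dimensional and Poisson effects. A sketch with assumed typical silicon-cantilever dimensions:

```python
def beam_spring_constant(e_mod, width, thickness, length):
    """Euler-Bernoulli beam-theory spring constant of a rectangular
    cantilever with a tip point load: k = E * w * t**3 / (4 * L**3)."""
    return e_mod * width * thickness ** 3 / (4.0 * length ** 3)

# Assumed typical silicon cantilever: E = 169 GPa, 30 um wide,
# 2 um thick, 200 um long
k = beam_spring_constant(169e9, 30e-6, 2e-6, 200e-6)
```

The strong t³/L³ dependence is why small fabrication tolerances make calibration necessary in the first place, and why an accurate analytic reference model is valuable.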
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2016-10-01
The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of the latter, with, however, a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios, as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results point to the urgent need for massive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707
5D model for accurate representation and visualization of dynamic cardiac structures
NASA Astrophysics Data System (ADS)
Lin, Wei-te; Robb, Richard A.
2000-05-01
Accurate cardiac modeling is challenging due to the intricate structure and complex contraction patterns of myocardial tissues. Fast imaging techniques can provide 4D structural information acquired as a sequence of 3D images throughout the cardiac cycle. To model the beating heart, we created a physics-based surface model that deforms between successive time points in the cardiac cycle. 3D images of canine hearts were acquired during one complete cardiac cycle using the DSR and the EBCT. The left ventricle of the first time point is reconstructed as a triangular mesh. A mass-spring physics-based deformable model, which can expand and shrink with local contraction and stretching forces distributed in an anatomically accurate simulation of cardiac motion, is applied to the initial mesh and allows the initial mesh to deform to fit the left ventricle in successive time increments of the sequence. The resulting 4D model can be interactively transformed and displayed with associated regional electrical activity mapped onto anatomic surfaces, producing a 5D model, which faithfully exhibits regional cardiac contraction and relaxation patterns over the entire heart. The model faithfully represents structural changes throughout the cardiac cycle. Such models provide the framework for minimizing the number of time points required to usefully depict regional motion of the myocardium and allow quantitative assessment of regional myocardial motion. The electrical activation mapping provides spatial and temporal correlation within the cardiac cycle. In procedures such as intra-cardiac catheter ablation, visualization of the dynamic model can be used to accurately localize the foci of myocardial arrhythmias and guide positioning of catheters for optimal ablation.
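The elementary update behind a mass-spring deformable mesh like the one described is a damped Hooke's-law force integrated per vertex. The sketch below relaxes a single spring with one end fixed — a one-node stand-in for the per-vertex loop of a full mesh solver, with all parameters chosen only for illustration.

```python
def relax_spring(k=10.0, mass=1.0, damping=2.0, rest_len=1.0,
                 start_len=1.5, dt=0.01, n_steps=5000):
    """Semi-implicit Euler relaxation of one damped spring with one end
    fixed at the origin: the basic per-vertex update of a mass-spring
    deformable mesh."""
    x, v = start_len, 0.0
    for _ in range(n_steps):
        force = -k * (x - rest_len) - damping * v  # Hooke + damping
        v += dt * force / mass                     # update velocity first
        x += dt * v                                # then position
    return x
```

In the cardiac model, the rest lengths (or applied contraction/stretching forces) are driven toward each successive image frame, so the mesh settles onto the left-ventricle surface at each time point.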
Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid
2016-01-01
A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have derived analytic equations for the steady-state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied for the case of small genome length N, as well as in cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
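For readers unfamiliar with the model: the Eigen quasispecies steady state is the leading eigenvector of the mutation-selection matrix W = Q·diag(f), where Q holds genotype-to-genotype mutation probabilities and f the fitnesses. A small direct-numerics sketch for binary genomes (the brute-force reference such analytic O(1/N) results are compared against; it scales as 2^N, which is exactly why analytics matter for large N):

```python
import numpy as np

def eigen_steady_state(fitness, mut_rate):
    """Steady state of the Eigen model on binary genomes of length
    log2(len(fitness)): normalized leading eigenvector of Q @ diag(f),
    with Q built from the per-site mutation probability mut_rate."""
    n_types = len(fitness)
    length = int(np.log2(n_types))  # genome length; n_types = 2**length
    idx = np.arange(n_types)
    # Hamming distance between all genotype pairs via XOR popcount
    ham = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])
    q_mat = mut_rate ** ham * (1.0 - mut_rate) ** (length - ham)
    w = q_mat @ np.diag(fitness)
    vals, vecs = np.linalg.eig(w)
    lead = np.argmax(vals.real)
    dist = np.abs(vecs[:, lead].real)  # Perron vector is sign-definite
    return dist / dist.sum()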
Radiative accelerations for evolutionary model calculations
Richer, J.; Michaud, G.; Rogers, F.; Iglesias, C.; Turcotte, S.; LeBlanc, F.
1998-01-01
Monochromatic opacities from the OPAL database have been used to calculate radiative accelerations for the 21 included chemical species. The 10^4 frequencies used are sufficient to calculate the radiative accelerations of many elements for T > 10^5 K, using frequency sampling. This temperature limit is higher for less abundant elements. As the abundances of Fe, He, or O are varied, the radiative acceleration of other elements changes, since abundant elements modify the frequency dependence of the radiative flux and the Rosseland opacity. Accurate radiative accelerations for a given element can only be obtained by allowing the abundances of the species that contribute most to the Rosseland opacity to vary during the evolution and recalculating the radiative accelerations and the Rosseland opacity during the evolution. There are physical phenomena that cannot be included in the calculations if one uses only the OPAL data. For instance, one should correct for the momentum given to the electron in a photoionization. Such effects are evaluated using atomic data from the Opacity Project, and correction factors are given. © 1998 The American Astronomical Society
Dean, J; Welsh, L; Gulliford, S; Harrington, K; Nutting, C
2014-06-01
Purpose: The significant morbidity caused by radiation-induced acute oral mucositis means that studies aiming to elucidate dose-response relationships in this tissue are a high priority. However, there is currently no standardized method for delineating the mucosal structures within the oral cavity. This report describes the development of a methodology to delineate the oral mucosa accurately on CT scans in a semi-automated manner. Methods: An oral mucosa atlas for automated segmentation was constructed using the RayStation Atlas-Based Segmentation (ABS) module. A radiation oncologist manually delineated the full surface of the oral mucosa on a planning CT scan of a patient receiving radiotherapy (RT) to the head and neck region. A 3 mm fixed annulus was added to incorporate the mucosal wall thickness. This structure was saved as an atlas template. ABS followed by model-based segmentation was performed on four further patients sequentially, adding each patient to the atlas. Manual editing of the automatically segmented structure was performed. A dose comparison between these contours and previously used oral cavity volume contours was performed. Results: The new approach was successful in delineating the mucosa, as assessed by an experienced radiation oncologist, when applied to a new series of patients receiving head and neck RT. Reductions in the mean doses obtained with the new delineation approach, compared with the previously used technique, were demonstrated for all patients (median: 36.0%, range: 25.6%–39.6%) and were of a magnitude that might be expected to be clinically significant. Differences in the maximum dose that might reasonably be expected to be clinically significant were observed for two patients. Conclusion: The method developed provides a means of obtaining the dose distribution delivered to the oral mucosa more accurately than has previously been achieved. This will enable the acquisition of high quality dosimetric data for use in
A biokinetic model for zinc for use in radiation protection
Leggett, Richard Wayne
2012-01-01
The physiology of the essential trace element zinc has been studied extensively in human subjects using kinetic analysis of time-dependent measurements of administered zinc tracers. A number of biokinetic models describing zinc exchange between plasma and tissues and loss of systemic zinc in excreta have been developed from the derived data. More rudimentary biokinetic models for zinc have been developed to estimate radiation doses from internally deposited radioisotopes of zinc. The latter models are designed to provide broadly accurate estimates of cumulative decays of zinc radioisotopes in tissues and are not intended as realistic descriptions of the directions of movement of zinc in the body. This paper reviews biokinetic data for zinc and proposes a physiologically meaningful biokinetic model for systemic zinc for use in radiation protection. The proposed model bears some resemblance to zinc models developed in physiological studies but depicts a finer division of systemic zinc and is based on a broader spectrum of data than previous models. The proposed model and current radiation protection model for zinc yield broadly similar estimates of effective dose from internally deposited radioisotopes of zinc but substantially different dose estimates for several individual tissues, particularly the liver.
Guide to Modeling Earth's Trapped Radiation Environment
NASA Technical Reports Server (NTRS)
Garrett, H.
1999-01-01
The report will close with a detailed discussion of the current status of modeling of the radiation environment and recommend a long range plan for enhancing capabilities in this important environmental area.
NASA Astrophysics Data System (ADS)
Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.
2016-06-01
We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 10 h Mpc-1, and we present theoretically motivated extensions to cover non-minimally coupled scalar fields, massive neutrinos and Vainshtein screened modified gravity models that result in few per cent accurate power spectra for k < 10 h Mpc-1. For chameleon screened models, we achieve only 10 per cent accuracy for the same range of scales. Finally, we use our halo model to investigate degeneracies between different extensions to the standard cosmological model, finding that the impact of baryonic feedback on the non-linear matter power spectrum can be considered independently of modified gravity or massive neutrino extensions. In contrast, considering the impact of modified gravity and massive neutrinos independently results in biased estimates of power at the level of 5 per cent at scales k > 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.
Radiatively driven convection in marine stratocumulus clouds: Numerical modeling
Norris, P.M.; Rogers, D.P.
1994-12-31
The entrainment of warm, dry air from above the inversion into a stratocumulus deck may play an important role in the dissipation of the cloud. A quantitative understanding of radiatively induced convection at cloud top is necessary in order to produce accurate entrainment rates and predictions of the diurnal evolution of a cloud layer. A three-dimensional numerical model is used to study such convection. The model has been used extensively to study Rayleigh-Bénard convection in an approximate geophysical setting. Here the authors model an idealized, non-sheared, nocturnal marine boundary layer to investigate the development of convection generated by cloud radiative cooling. Cloud forcing rather than surface forcing is investigated.
NASA Astrophysics Data System (ADS)
Qiuyang, He; Yue, Xu; Feifei, Zhao
2013-10-01
An accurate and complete circuit simulation model for single-photon avalanche diodes (SPADs) is presented. The derived model is able not only to simulate the static DC and dynamic AC behaviors of an SPAD operating in Geiger mode, but also to emulate the second breakdown and forward-bias behaviors. In particular, it considers important statistical effects, such as the dark-counting and after-pulsing phenomena. The developed model is implemented in the Verilog-A description language and can be run directly in commercial simulators such as Cadence Spectre. The Spectre simulation results show very good agreement with experimental results reported in the open literature. This model offers high simulation accuracy and a very fast simulation rate.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man
2014-04-15
According to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
NASA Astrophysics Data System (ADS)
Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man
2014-04-01
According to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
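The hybrid GA + GD training strategy described above can be illustrated on a toy problem. The sketch below fits a small Mexican-hat wavelet network to a synthetic nonlinear curve: a crude GA-like population search (truncation selection plus Gaussian mutation) supplies the initial weights, and a numerical gradient descent with step-size halving refines them. The network size, data, and GA operators are illustrative assumptions, not the paper's SRM model:

```python
import numpy as np

rng = np.random.default_rng(0)
N_UNITS = 5                                   # wavelet neurons

def mexican_hat(t):
    return (1.0 - t**2) * np.exp(-t**2 / 2)   # wavelet activation

def wnn(params, x):
    # params = [output weights | scales | translations]
    w, a, b = np.split(params, 3)
    out = np.zeros_like(x)
    for i in range(N_UNITS):
        out += w[i] * mexican_hat((x - b[i]) / (np.abs(a[i]) + 1e-3))
    return out

def loss(params, x, y):
    return float(np.mean((wnn(params, x) - y) ** 2))

# Toy stand-in for a measured nonlinear characteristic.
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) * np.exp(-x**2)

# Stage 1: GA-like global search for good initial weights.
pop = rng.normal(size=(40, 3 * N_UNITS))
for gen in range(60):
    fitness = np.array([loss(p, x, y) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]            # selection
    children = (parents[rng.integers(0, 10, 30)]
                + 0.1 * rng.normal(size=(30, 3 * N_UNITS)))  # mutation
    pop = np.vstack([parents, children])               # elitism
best = min(pop, key=lambda p: loss(p, x, y))

# Stage 2: local refinement by gradient descent (numerical gradient,
# with step halving so the loss never increases).
def num_grad(p, eps=1e-5):
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p); d[i] = eps
        g[i] = (loss(p + d, x, y) - loss(p - d, x, y)) / (2 * eps)
    return g

theta, lr = best.copy(), 0.05
for step in range(300):
    trial = theta - lr * num_grad(theta)
    if loss(trial, x, y) < loss(theta, x, y):
        theta = trial
    else:
        lr *= 0.5
```

The division of labor matches the abstract: the stochastic stage explores the weight space globally, and the deterministic stage converges quickly from the initialization it supplies.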
Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL
NASA Astrophysics Data System (ADS)
Ciambur, B. C.
2015-09-01
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the "eccentric anomaly." This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented in the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks "Isofit" and "Cmodel." The new tools are demonstrated here with application to five galaxies, chosen as representative case studies for several areas where this technique makes it possible to gain new scientific insight, specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher-order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.
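The key coordinate change can be demonstrated in a few lines. The sketch below maps the polar angle of a point on an ellipse to its eccentric anomaly E, and builds a quasi-elliptical ("boxy") isophote by adding a fourth-order Fourier perturbation in E, loosely following the ISOFIT parameterization; the semi-axes and B4 amplitude are illustrative assumptions:

```python
import numpy as np

def eccentric_anomaly(theta, a, b):
    """Eccentric anomaly E of the point on the ellipse
    (x, y) = (a cos E, b sin E) seen at polar angle theta."""
    return np.arctan2(a * np.sin(theta), b * np.cos(theta))

def quasi_elliptical_isophote(a=10.0, b=6.0, B4=-0.05, n=360):
    """Ellipse plus a 4th-order Fourier perturbation expressed in the
    eccentric anomaly E; B4 < 0 gives boxy, B4 > 0 disky isophotes."""
    E = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    radial = 1.0 + B4 * np.cos(4.0 * E)      # harmonic perturbation
    return a * np.cos(E) * radial, b * np.sin(E) * radial

x, y = quasi_elliptical_isophote()
```

Expressing the harmonic in E rather than in the polar angle keeps the perturbation amplitude meaningful even for very flattened ellipses, which is the motivation for the coordinate transformation described in the abstract.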
A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em
2010-05-19
Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary.
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of the materials involved in the lead zirconate titanate (PZT)-structure interaction of the EMI technique, to investigate their effect on the acquired admittance signatures. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty of determining the damping-related coefficients and the stiffness of the bonding layer. In this study, by using the hysteretic damping model in place of the Rayleigh damping used by most researchers in this field, together with an updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
NASA Technical Reports Server (NTRS)
Tsay, Si-Chee; Ji, Q. Jack
2011-01-01
Earth's climate is driven primarily by solar radiation. As summarized in various IPCC reports, the global average radiative forcing for different agents and mechanisms, such as aerosols or CO2 doubling, is in the range of a few W/sq m. However, when solar irradiance is measured by broadband radiometers, such as the fleet of Eppley Precision Solar Pyranometers (PSP) and equivalent instrumentation employed worldwide, the measurement uncertainty is larger than 2% (e.g., WMO specification of pyranometers, 2008). Thus, out of the approx. 184 W/sq m (approx. 263 W/sq m if cloud-free) surface solar insolation (Trenberth et al. 2009), the measurement uncertainty is greater than +/-3.6 W/sq m, overwhelming the climate change signals. To discern these signals, less than 1% measurement uncertainty is required, and this is currently achievable only by means of a newly developed methodology employing a modified PSP-like pyranometer and an updated calibration equation that accounts for its thermal effects (Li and Tsay, 2010). In this talk, we will show that some auxiliary measurements, such as those from a collocated pyrgeometer or air temperature sensors, can help correct historical datasets. Additionally, we will demonstrate that a pyrheliometer is not free of the thermal effect; therefore, compared to a high-cost yet still not thermal-effect-free "direct + diffuse" approach to measuring surface solar irradiance, our new method is more economical and more likely to be suitable for correcting a wide variety of historical datasets. Modeling simulations will be presented showing that a corrected solar irradiance measurement has a significant impact on aerosol forcing, and thus plays an important role in climate studies.
Band models and correlations for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.
1975-01-01
Absorption of infrared radiation by various line and band models is briefly reviewed. Narrow band model relations for absorptance are used to develop 'exact' formulations for total absorption by four wide band models. Application of a wide band model to a particular gas depends largely upon the spectroscopic characteristics of the absorbing-emitting molecule. Seven continuous correlations for the absorption of a wide band model are presented, and each of these is compared with the exact (numerical) solutions of the wide band models. Comparison of these results indicates the validity of a correlation for a particular radiative transfer application. In radiative transfer analyses, the use of continuous correlations for total band absorptance provides flexibility in various mathematical operations.
An Earth radiation budget climate model
NASA Technical Reports Server (NTRS)
Bartman, Fred L.
1988-01-01
A 2-D Earth Radiation Budget Climate Model has been constructed from an OLWR (Outgoing Longwave Radiation) model and an Earth albedo model. Each of these models uses the same cloud cover climatology modified by a factor GLCLC which adjusts the global annual average cloud cover. The two models are linked by a set of equations which relate the cloud albedos to the cloud top temperatures of the OLWR model. These equations are derived from simultaneous narrow band satellite measurements of cloud top temperature and albedo. Initial results include global annual average values of albedo and latitude/longitude radiation for 45 percent and 57 percent global annual average cloud cover and two different forms of the cloud albedo-cloud top temperature equations.
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.
Mason, Philip E; Wernersson, Erik; Jungwirth, Pavel
2012-07-19
The carbonate ion plays a central role in the biochemical formation of the shells of aquatic life, which is an important path for carbon dioxide sequestration. Given the vital role of carbonate in this and other contexts, it is imperative to develop accurate models for such a high-charge-density ion. As a divalent ion, carbonate has a strong polarizing effect on surrounding water molecules. This raises the question of whether it is possible to describe such systems accurately without including polarization. It has recently been suggested that the lack of electronic polarization in nonpolarizable water models can be effectively compensated by introducing an electronic dielectric continuum, which is, with respect to the forces between atoms, equivalent to rescaling the ionic charges. Given how widely nonpolarizable models are used to model electrolyte solutions, establishing the experimental validity of this suggestion is imperative. Here, we examine a stringent test for such models: a comparison of the difference between the neutron scattering structure factors of K2CO3 and KNO3 solutions with that predicted by molecular dynamics simulations for various models of the same systems. We compare standard nonpolarizable simulations in SPC/E water to analogous simulations with effective ion charges, as well as simulations in explicitly polarizable POL3 water (which, however, has only about half the experimental polarizability). It is found that the simulation with rescaled charges is in very good agreement with the experimental data, significantly better than the nonpolarizable simulation and even better than the explicitly polarizable POL3 model.
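The charge-rescaling idea mentioned above (the electronic continuum correction) reduces to a one-line formula: embedding the ions in an electronic dielectric continuum of permittivity ε_el is, for the interatomic forces, equivalent to scaling each ionic charge by 1/√ε_el. A minimal sketch, assuming the commonly quoted ε_el ≈ 1.78 for water (≈ n², with refractive index n ≈ 1.33):

```python
import math

EPS_EL_WATER = 1.78   # electronic (high-frequency) dielectric constant
                      # of water: eps_el ~ n**2 with n ~ 1.33

def ecc_charge(q, eps_el=EPS_EL_WATER):
    """Effective ionic charge in the electronic continuum correction:
    forces between charges q embedded in an electronic dielectric
    continuum match those of charges q / sqrt(eps_el) in vacuum."""
    return q / math.sqrt(eps_el)

q_carbonate = ecc_charge(-2.0)   # CO3(2-): about -1.50 e
q_potassium = ecc_charge(+1.0)   # K(+)   : about +0.75 e
```

In practice this means running an otherwise standard nonpolarizable simulation with the scaled charges, which is why the correction is so cheap compared with explicitly polarizable models such as POL3.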
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied Ft values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new Ft values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
NASA Astrophysics Data System (ADS)
Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.
2012-11-01
A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.
Double cluster heads model for secure and accurate data fusion in wireless sensor networks.
Fu, Jun-Song; Liu, Yun
2015-01-19
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Unlike traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds a threshold preset by the users, the cluster heads are added to a blacklist and must be re-elected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and remove compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performs very well in terms of data fusion security and accuracy.
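The DCHM decision logic described in the abstract can be expressed compactly: both heads fuse the member readings independently, and the base station compares the two results via a dissimilarity coefficient. The fusion rule (median), dissimilarity definition, and threshold below are illustrative assumptions; the paper leaves these choices to the system designer:

```python
import statistics

def fuse(readings):
    """Fusion performed independently by each cluster head; a median
    is used here as a simple, outlier-resistant fusion algorithm."""
    return statistics.median(readings)

def dissimilarity(r1, r2):
    """Dissimilarity coefficient between two fusion results,
    here a normalised absolute difference."""
    return abs(r1 - r2) / max(abs(r1), abs(r2), 1e-12)

def base_station_accepts(result_a, result_b, threshold=0.1):
    """Base-station check: if the coefficient exceeds the preset
    threshold, both heads are blacklisted and the cluster re-elects."""
    return dissimilarity(result_a, result_b) <= threshold

members = [20.1, 19.8, 20.3, 20.0, 19.9]       # member-node readings

# Honest pair: both heads fuse the same member data.
honest = base_station_accepts(fuse(members), fuse(members))
# Compromised head: one head reports a forged fusion result.
forged = base_station_accepts(fuse(members), 35.0)
```

A rejected pair would, per the model, trigger re-election of both cluster heads and a feedback update to the reputation and trust system.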
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate, and yields better reconstruction quality, than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection areas into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computing the system matrix. Per iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising image quality.
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1992-01-01
The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for the CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
Nuclear model calculations and their role in space radiation research
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Cucinotta, F. A.; Heilbronn, L. H.
2002-01-01
Proper assessments of spacecraft shielding requirements and concomitant estimates of risk to spacecraft crews from energetic space radiation require accurate, quantitative methods of characterizing the compositional changes in these radiation fields as they pass through thick absorbers. These quantitative methods are also needed for characterizing accelerator beams used in space radiobiology studies. Because of the impracticality/impossibility of measuring these altered radiation fields inside critical internal body organs of biological test specimens and humans, computational methods rather than direct measurements must be used. Since composition changes in the fields arise from nuclear interaction processes (elastic, inelastic and breakup), knowledge of the appropriate cross sections and spectra must be available. Experiments alone cannot provide the necessary cross section and secondary particle (neutron and charged particle) spectral data because of the large number of nuclear species and wide range of energies involved in space radiation research. Hence, nuclear models are needed. In this paper current methods of predicting total and absorption cross sections and secondary particle (neutrons and ions) yields and spectra for space radiation protection analyses are reviewed. Model shortcomings are discussed and future needs presented. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
The NIAID Radiation Countermeasures Program Business Model
Hafer, Nathaniel; Maidment, Bert W.
2010-01-01
The National Institute of Allergy and Infectious Diseases (NIAID) Radiation/Nuclear Medical Countermeasures Development Program has developed an integrated approach to providing the resources and expertise required for the research, discovery, and development of radiation/nuclear medical countermeasures (MCMs). These resources and services lower the opportunity costs and reduce the barriers to entry for companies interested in working in this area and accelerate translational progress by providing goal-oriented stewardship of promising projects. In many ways, the radiation countermeasures program functions as a “virtual pharmaceutical firm,” coordinating the early and mid-stage development of a wide array of radiation/nuclear MCMs. This commentary describes the radiation countermeasures program and discusses a novel business model that has facilitated product development partnerships between the federal government and academic investigators and biopharmaceutical companies. PMID:21142762
Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Gu, Lizhi
2015-09-01
Existing methods for multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. These methods have shortcomings, such as a large amount of calculation, complex processes, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle for the datum point cluster, an algorithm that forms the coupling point cluster by removing singular points, and the "spatially parallel coupling" principle based on a non-uniform B-spline for each spiral surface. The orientation and quantitative relationships of the datum and coupling point clusters in Euclidean space are determined accurately and expressed digitally, and the surfaces are coupled and coalesced through their coupling point clusters in the Pro/E environment, realizing digitally accurate modeling of the spatially parallel coupling body with a multi-spiral surface. Smoothing and fairing are applied to the end section of a three-blade end-milling cutter using the spatially parallel coupling principle, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in essentially solving the problems of considerable modeling errors in computer graphics and
Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges
2014-04-01
Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. During in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE led to compressive deformations of the insert which are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation
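To make the point about Archard's law concrete, the sketch below uses its pressure form, dh/ds = k_w · p, where h is local linear wear depth, p contact pressure, and k_w a dimensional wear factor. All numerical values are invented for illustration and are not taken from the study; the point is only that creep enlarging the contact area lowers the pressure, and hence the local wear rate, at fixed load.

```python
def wear_depth_increment(k_w, pressure_pa, ds_m):
    """Local wear-depth increment (m) for one sliding increment,
    per Archard's law in pressure form: dh = k_w * p * ds."""
    return k_w * pressure_pa * ds_m

# Fixed joint load; creep enlarges the contact area over time, so the
# contact pressure falls even though the load does not. Numbers assumed.
load_n = 1500.0        # joint load (assumed)
k_w = 1.0e-13          # wear factor, m^2/N (assumed order of magnitude)
ds_m = 0.02            # sliding distance per gait cycle (assumed)

area_rigid_m2 = 50e-6  # elastic-only contact area (assumed)
area_creep_m2 = 80e-6  # larger contact area after creep (assumed)

h_rigid = wear_depth_increment(k_w, load_n / area_rigid_m2, ds_m)
h_creep = wear_depth_increment(k_w, load_n / area_creep_m2, ds_m)
# creep lowers peak pressure, so the local wear-depth increment is smaller
```

This is the mechanism behind the abstract's conclusion: a time-invariant Archard implementation that ignores creep overestimates peak contact pressure and therefore mispredicts the local wear rate distribution.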
Radiation Environment Modeling for Spacecraft Design: New Model Developments
NASA Technical Reports Server (NTRS)
Barth, Janet; Xapsos, Mike; Lauenstein, Jean-Marie; Ladbury, Ray
2006-01-01
A viewgraph presentation on various new space radiation environment models for spacecraft design is described. The topics include: 1) The Space Radiation Environment; 2) Effects of Space Environments on Systems; 3) Space Radiation Environment Model Use During Space Mission Development and Operations; 4) Space Radiation Hazards for Humans; 5) "Standard" Space Radiation Environment Models; 6) Concerns about Standard Models; 7) Inadequacies of Current Models; 8) Development of New Models; 9) New Model Developments: Proton Belt Models; 10) Coverage of New Proton Models; 11) Comparison of TPM-1, PSB97, AP-8; 12) New Model Developments: Electron Belt Models; 13) Coverage of New Electron Models; 14) Comparison of "Worst Case" POLE, CRESELE, and FLUMIC Models with the AE-8 Model; 15) New Model Developments: Galactic Cosmic Ray Model; 16) Comparison of NASA, MSU, CIT Models with ACE Instrument Data; 17) New Model Developments: Solar Proton Model; 18) Comparison of ESP, JPL91, King/Stassinopoulos, and PSYCHIC Models; 19) New Model Developments: Solar Heavy Ion Model; 20) Comparison of CREME96 to CREDO Measurements During 2000 and 2002; 21) PSYCHIC Heavy Ion Model; 22) Model Standardization; 23) Working Group Meeting on New Standard Radiation Belt and Space Plasma Models; and 24) Summary.
Detailed modeling analysis for soot formation and radiation in microgravity gas jet diffusion flames
NASA Technical Reports Server (NTRS)
Ku, Jerry C.; Tong, Li; Greenberg, Paul S.
1995-01-01
Radiation heat transfer in combustion systems has been receiving increasing interest. In the case of hydrocarbon fuels, a significant portion of the radiation comes from soot particles, justifying the need for a detailed soot formation model and radiation transfer calculations. For laminar gas jet diffusion flames, results from this project (4/1/91-8/22/95) and another NASA study show that flame shape, soot concentration, and radiation heat fluxes are substantially different under microgravity conditions. Our emphasis is on including detailed soot transport models and a detailed solution for radiation heat transfer, and on coupling them with the flame structure calculations. In this paper, we will discuss the following three specific areas: (1) Comparing two existing soot formation models, and identifying possible improvements; (2) A simple yet reasonably accurate approach to calculating total radiative properties and/or fluxes over the spectral range; and (3) Investigating the convergence of iterations between the flame structure solver and the radiation heat transfer solver.
Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary, rendering the previous models inaccurate. In the course of this research it has been demonstrated that, using the simulation code MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters, including dispersion, on-axis interaction impedance and attenuation, have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal-coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model, including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various
Modeling Impaired Hippocampal Neurogenesis after Radiation Exposure.
Cacao, Eliedonna; Cucinotta, Francis A
2016-03-01
Radiation impairment of neurogenesis in the hippocampal dentate gyrus is one of several factors associated with cognitive detriments after treatment of brain cancers in children and adults with radiation therapy. Mouse models have been used to study radiation-induced changes in neurogenesis, however the models are limited in the number of doses, dose fractions, age and time after exposure conditions that have been studied. The purpose of this study is to develop a novel predictive mathematical model of radiation-induced changes to neurogenesis using a system of nonlinear ordinary differential equations (ODEs) to represent the time, age and dose-dependent changes to several cell populations participating in neurogenesis as reported in mouse experiments exposed to low-LET radiation. We considered four compartments to model hippocampal neurogenesis and, consequently, the effects of radiation treatment in altering neurogenesis: (1) neural stem cells (NSCs), (2) neuronal progenitor cells or neuroblasts (NB), (3) immature neurons (ImN) and (4) glioblasts (GB). Because neurogenesis is decreasing with increasing mouse age, a description of the age-related dynamics of hippocampal neurogenesis is considered in the model, which is shown to be an important factor in comparisons to experimental data. A key feature of the model is the description of negative feedback regulation on early and late neuronal proliferation after radiation exposure. The model is augmented with parametric descriptions of the dose and time after irradiation dependences of activation of microglial cells and a possible shift of NSC proliferation from neurogenesis to gliogenesis reported at higher doses (∼10 Gy). Predictions for dose-fractionation regimes and for different mouse ages, and prospects for future work are then discussed.
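The four-compartment ODE structure described above (NSC → NB → ImN, with a parallel glioblast branch) can be sketched with SciPy as below. Every rate constant, the logistic NSC self-renewal term, and the neurogenesis fraction `f_neuro` are invented placeholders, not the paper's fitted parameters; the paper's radiation effect would enter by perturbing these rates (e.g. shifting `f_neuro` toward gliogenesis at high doses).

```python
import numpy as np
from scipy.integrate import solve_ivp

def neurogenesis_rhs(t, y, p):
    """Toy 4-compartment neurogenesis dynamics: NSC, NB, ImN, GB."""
    nsc, nb, imn, gb = y
    d_nsc = p["r_nsc"] * nsc * (1.0 - nsc / p["K"]) - p["a1"] * nsc
    d_nb  = p["a1"] * nsc * p["f_neuro"] - (p["a2"] + p["d_nb"]) * nb
    d_imn = p["a2"] * nb - p["d_imn"] * imn
    d_gb  = p["a1"] * nsc * (1.0 - p["f_neuro"]) - p["d_gb"] * gb
    return [d_nsc, d_nb, d_imn, d_gb]

# all values assumed for illustration (units: 1/day, cells)
params = {"r_nsc": 0.1, "K": 1000.0, "a1": 0.05, "a2": 0.3,
          "d_nb": 0.1, "d_imn": 0.05, "d_gb": 0.02, "f_neuro": 0.8}

sol = solve_ivp(neurogenesis_rhs, (0.0, 100.0),
                [800.0, 50.0, 20.0, 10.0],  # initial populations (assumed)
                args=(params,), dense_output=True)
```

Under these placeholder rates the populations settle toward a steady state; a dose-dependent negative feedback on proliferation, as in the paper, would be added as additional state variables or time-dependent rate modifiers.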
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Jenny, Patrick
2013-08-01
Different simulation methods are applicable to study turbulent mixing. When applying probability density function (PDF) methods, turbulent transport and chemical reactions appear in closed form, which is not the case in second-moment closure (RANS) methods. Moreover, PDF methods provide the entire joint velocity-scalar PDF instead of a limited set of moments. In PDF methods, however, a mixing model is required to account for molecular diffusion. In joint velocity-scalar PDF methods, mixing models should also account for the joint velocity-scalar statistics, which is often underappreciated in applications. The interaction by exchange with the conditional mean (IECM) model accounts for these joint statistics, but requires velocity-conditional scalar means that are expensive to compute in spatially three-dimensional settings. In this work, two alternative mixing models are presented that provide more accurate PDF predictions at reduced computational cost compared to the IECM model, since no conditional moments have to be computed. All models are tested for different mixing benchmark cases and their computational efficiencies are inspected thoroughly. The benchmark cases involve statistically homogeneous and inhomogeneous settings dealing with three streams that are characterized by two passive scalars. The inhomogeneous case clearly illustrates the importance of accounting for joint velocity-scalar statistics in the mixing model. Failure to do so leads to significant errors in the resulting scalar means, variances and other statistics.
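The IECM relaxation mentioned above, dφ*/dt = -(φ* - ⟨φ|V⟩)/τ, can be sketched for notional particles as below. Binning the velocity to estimate the conditional mean ⟨φ|V⟩ is one simple estimator (and illustrates why these conditional moments are costly in 3D); the initial condition, time step, and mixing time scale are all illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)                 # one velocity component per particle
phi = np.where(u > 0.0, 1.0, 0.0)      # scalar initially correlated with u
phi += 0.05 * rng.normal(size=n)       # small scatter about each state
mean0 = phi.mean()

dt, tau = 0.01, 0.5                    # time step and mixing time (assumed)
bins = np.digitize(u, np.linspace(-3.0, 3.0, 25))
for _ in range(100):
    # estimate the velocity-conditional mean <phi|V> bin by bin
    cond_mean = np.empty_like(phi)
    for b in np.unique(bins):
        sel = bins == b
        cond_mean[sel] = phi[sel].mean()
    phi += -(phi - cond_mean) / tau * dt   # IECM relaxation step
```

Relaxing toward the velocity-conditional mean (rather than the global mean, as in the simpler IEM model) shrinks the scatter within each velocity class while preserving the mean scalar and its correlation with velocity.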
Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.
Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit
2015-05-01
A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m(2)) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m(2)) and 4 temperatures (10-30 °C), yielding an accuracy of ± 15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full-scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model herein validated also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of only requiring one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step for refining feasibility assessments of algae biotechnologies.
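A common way to couple light and temperature in productivity models of this kind is a separable form P(I, T) = P_max · f(I) · g(T). The sketch below uses a saturating light response and the cardinal temperature model with inflexion (CTMI) for g(T); these functional forms and all constants are assumptions for illustration, not the parameterization validated in the study.

```python
def light_response(i_wm2, k_i=100.0):
    """Saturating (Monod-type) light response; K_I assumed, W/m^2."""
    return i_wm2 / (i_wm2 + k_i)

def temperature_response(t_c, t_min=2.0, t_opt=30.0, t_max=42.0):
    """CTMI temperature response, normalized to 1 at t_opt.
    Cardinal temperatures are assumed values for Chlorella vulgaris."""
    if t_c <= t_min or t_c >= t_max:
        return 0.0
    num = (t_c - t_max) * (t_c - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (t_c - t_opt)
          - (t_opt - t_max) * (t_opt + t_min - 2.0 * t_c))
    return num / den

def productivity(i_wm2, t_c, p_max=1.0):
    """Relative productivity; p_max sets the scale (assumed)."""
    return p_max * light_response(i_wm2) * temperature_response(t_c)
```

A structure like this captures the abstract's key point: ignoring g(T) and fitting light alone would mispredict productivity whenever the culture drifts away from its optimal temperature.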
NASA Astrophysics Data System (ADS)
Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.
2015-04-01
We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples; the Milky Way thin and thick disc, and a discy dwarf galaxy.
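The building block of these models is the Miyamoto-Nagai potential, Φ(R, z) = -G M / sqrt(R² + (a + sqrt(z² + b²))²), and the approach above sums three such components. The component parameters below are purely illustrative; the actual (M_i, a_i, b) combinations for a given scalelength and thickness come from the paper's fitting tables (note that some published three-component fits include a negative-mass term, which the sketch permits).

```python
import math

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def miyamoto_nagai(r_kpc, z_kpc, m_sun, a_kpc, b_kpc):
    """Miyamoto-Nagai disc potential at cylindrical (R, z)."""
    s = a_kpc + math.sqrt(z_kpc ** 2 + b_kpc ** 2)
    return -G * m_sun / math.sqrt(r_kpc ** 2 + s ** 2)

def three_mn_disc(r_kpc, z_kpc, components):
    """Sum of three MN discs; `components` is [(M, a, b), ...].
    Fully analytic and differentiable everywhere, like each term."""
    return sum(miyamoto_nagai(r_kpc, z_kpc, m, a, b)
               for m, a, b in components)

# illustrative components only (not a fitted model from the paper)
discs = [(5.0e10, 3.0, 0.3), (-2.0e10, 5.5, 0.3), (1.0e10, 1.5, 0.3)]
phi_solar = three_mn_disc(8.0, 0.0, discs)   # potential near R ~ 8 kpc
```

Because each term is analytic, forces follow from closed-form derivatives, which is what makes this representation attractive for N-body and tidal-field applications compared with numerically integrating a double-exponential disc.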
The JPL Uranian Radiation Model (UMOD)
NASA Technical Reports Server (NTRS)
Garrett, Henry; Martinez-Sierra, Luz Maria; Evans, Robin
2015-01-01
The objective of this study is the development of a comprehensive radiation model (UMOD) of the Uranian environment for JPL mission planning. The ultimate goal is to provide a description of the high-energy electron and proton environments and the magnetic field at Uranus that can be used for engineering design. Currently, no such model exists at JPL. A preliminary electron radiation model employing Voyager 2 data was developed by Selesnick and Stone in 1991. The JPL Uranian Radiation Model extends that analysis, which modeled electrons between 0.7 MeV and 2.5 MeV based on the Voyager Cosmic Ray Subsystem electron telescope, down to an energy of 0.022 MeV for electrons and from 0.028 MeV to 3.5 MeV for protons. These latter energy ranges are based on measurements by the Applied Physics Laboratory Low Energy Charged Particle Detector on Voyager 2. As in previous JPL radiation models, the form of the Uranian model is based on magnetic field coordinates and requires a conversion from spacecraft coordinates to Uranian-centered magnetic "B-L" coordinates. Two magnetic field models have been developed for Uranus: 1) a simple "offset, tilted dipole" (OTD), and 2) a complex, multi-pole expansion model ("Q3"). A review of the existing data on Uranus and a search of the NASA Planetary Data System (PDS) were completed to obtain the latest, up-to-date descriptions of the Uranian high-energy particle environment. These data were fit in terms of the Q3 B-L coordinates to extend and update the original Selesnick and Stone electron model in energy and to develop the companion proton flux model. The flux predictions of the new model were used to estimate the total ionizing dose for the Voyager 2 flyby, and a movie illustrating the complex radiation belt variations was produced to document the uses of the model for planning purposes.
Radiation budget measurement/model interface
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.
1983-01-01
This final report includes research results from the period February 1981 through November 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.
NASA Technical Reports Server (NTRS)
Kopasakis, George
2014-01-01
The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development, such as flutter and inlet shock position control. The approach models atmospheric turbulence in its natural fractional-order form, which provides more accuracy compared to traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the fractional-order atmospheric turbulence modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.
O'Connor, James P B; Boult, Jessica K R; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff J M; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P
2016-02-15
There is a clinical need for noninvasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning, and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here, we demonstrate that OE-MRI signals are accurate, precise, and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia noninvasively and is immediately translatable to the clinic.
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2016-01-01
In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that in different scenarios the region of highest intensity, and thus the greatest heating, can shift from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
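The abstract does not give implementation details; one common way to obtain a diffraction-limited Gaussian focus in a ray-based Monte Carlo launch step is to aim each photon at an independently sampled point in the focal plane, drawn from the Gaussian focal-spot distribution, instead of aiming every ray at a single geometric point. A minimal sketch under that assumption (function and parameter names are illustrative, not the authors' code):

```python
import numpy as np

def launch_gaussian_photons(n, w0, wavelength, z_lens):
    """Sample photon launch positions and directions so that, in the absence
    of scattering, the beam converges to a Gaussian focal spot of 1/e^2
    radius w0 rather than to a geometric point (hypothetical helper)."""
    rng = np.random.default_rng(0)
    # Beam radius at the launch plane, a distance z_lens from the focus,
    # from the standard Gaussian-beam propagation law.
    z_R = np.pi * w0**2 / wavelength               # Rayleigh range
    w_lens = w0 * np.sqrt(1.0 + (z_lens / z_R)**2)
    # A Gaussian intensity profile exp(-2 r^2 / w^2) has sigma = w/2 per axis.
    xy0 = rng.normal(0.0, w_lens / 2.0, size=(n, 2))
    # Each photon aims at its own sampled point in the focal plane, drawn
    # from the focal-spot distribution; this reproduces the w0 spot size.
    xy_focus = rng.normal(0.0, w0 / 2.0, size=(n, 2))
    d = np.column_stack([xy_focus - xy0, np.full(n, z_lens)])
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # unit direction vectors
    return xy0, d
```

Propagating these rays ballistically to the focal plane yields a spot whose transverse standard deviation is w0/2, i.e. the prescribed diffraction-limited waist, while the launch-plane distribution matches the expanded beam.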
Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.
2016-01-01
The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1997-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, J. A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Bornefalk, Hans; Persson, Mats; Danielsson, Mats
2015-03-01
Material basis decomposition in the sinogram domain requires accurate knowledge of the forward model in spectral computed tomography (CT). Misspecifications beyond a certain limit will result in biased estimates and make quantum-limited quantitative CT (where statistical noise dominates) difficult. We present a method whereby users can determine the degree of misspecification error allowed in a spectral CT forward model while still keeping quantification errors limited by the inherent statistical uncertainty. For a particular silicon-detector-based spectral CT system, we conclude that threshold determination is the most critical factor and that the bin edges need to be known to within 0.15 keV in order to be able to perform quantum-limited material basis decomposition. The method as such is general to all multibin systems.
The simplest models of radiative neutrino mass
NASA Astrophysics Data System (ADS)
Law, Sandy S. C.; McDonald, Kristian L.
2014-04-01
The complexity of radiative neutrino-mass models can be judged by: (i) whether they require the imposition of ad hoc symmetries, (ii) the number of new multiplets they introduce and (iii) the number of arbitrary parameters that appear. Considering models that do not employ new symmetries, the simplest models have two new multiplets and a minimal number of new parameters. With this in mind, we search for the simplest models of radiative neutrino mass. We are led to two models, containing a real scalar triplet and a charged scalar doublet (respectively), in addition to the charged singlet scalar considered by Zee [h+ (1, 1, 2)]. These models are essentially simplified versions of the Zee model and appear to be the simplest models of radiative neutrino mass. However, despite successfully generating nonzero masses, present-day data is sufficient to rule these simple models out. The Zee and Zee-Babu models therefore remain as the simplest viable models. Moving beyond the minimal cases, we find a new model of two-loop masses that employs the charged doublet Φ (1, 2, 3) and the doubly-charged scalar k++ (1, 1, 4). This is the sole remaining model that employs only three new noncolored multiplets.
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
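The surrogate construction described above (a reduced basis plus parametric fits of the projection coefficients) can be illustrated on a toy one-parameter waveform family; the training data here are synthetic stand-ins, not numerical-relativity waveforms, and every name is illustrative:

```python
import numpy as np

# Toy reduced-order surrogate: an SVD basis compresses training waveforms,
# and each projection coefficient is fit as a smooth function of the
# parameter q (standing in for, e.g., the mass ratio).
t = np.linspace(0.0, 1.0, 200)
qs = np.linspace(1.0, 10.0, 25)                      # training parameters
train = np.array([q * np.sin(2 * np.pi * t) + 0.1 * q**2 * np.cos(2 * np.pi * t)
                  for q in qs])                      # synthetic rank-2 family

U, s, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:2]                                       # reduced basis (2 modes)
coeff = train @ basis.T                              # projection coefficients
fits = [np.polynomial.chebyshev.Chebyshev.fit(qs, coeff[:, k], 4)
        for k in range(basis.shape[0])]              # coefficient vs parameter

def surrogate(q):
    """Evaluate the surrogate waveform at a new parameter value q."""
    return sum(f(q) * b for f, b in zip(fits, basis))
```

Evaluating `surrogate(q)` costs only a few polynomial evaluations and a small linear combination, which is why surrogates of this type run in milliseconds where the underlying solver needs supercomputing time.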
Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach.
Saa, Pedro A; Nielsen, Lars K
2016-01-01
Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
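The framework above uses likelihood-free Approximate Bayesian Computation; its simplest variant, ABC rejection sampling, can be sketched on a stand-in one-parameter "kinetic model" (the model, prior, and tolerance below are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k):
    """Stand-in 'kinetic model': noisy steady-state observable for rate k."""
    return k / (1.0 + k) + rng.normal(0.0, 0.01, size=5)

observed = simulate(2.0)              # pretend these are experimental data

# ABC rejection: draw parameters from the prior and keep those whose
# simulated data fall within tolerance eps of the observations; the
# likelihood itself is never evaluated.
accepted = []
for _ in range(20000):
    k = rng.uniform(0.1, 10.0)        # prior
    if np.linalg.norm(simulate(k) - observed) < 0.05:   # eps
        accepted.append(k)
posterior = np.array(accepted)
```

The accepted samples approximate the posterior over the rate constant; in the paper's setting the same idea is applied with thermodynamic feasibility constraints on the sampled parameter sets.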
String Fragmentation Model in Space Radiation Problems
NASA Technical Reports Server (NTRS)
Tang, Alfred; Johnson, Eloise (Editor); Norbury, John W.; Tripathi, R. K.
2002-01-01
String fragmentation models such as the Lund Model fit experimental particle production cross sections very well in the high-energy limit. This paper gives an introduction of the massless relativistic string in the Lund Model and shows how it can be modified with a simple assumption to produce formulas for meson production cross sections for space radiation research. The results of the string model are compared with inclusive pion production data from proton-proton collision experiments.
NASA Astrophysics Data System (ADS)
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widespread and well established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation for both shielding and material activation, in approximate or idealized geometry setups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluations of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron, including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and compared with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
Are Quasi-Steady-State Approximated Models Suitable for Quantifying Intrinsic Noise Accurately?
Sengupta, Dola; Kar, Sandip
2015-01-01
Large gene regulatory networks (GRN) are often modeled with the quasi-steady-state approximation (QSSA) to reduce the huge computational time required for intrinsic noise quantification using the Gillespie stochastic simulation algorithm (SSA). However, the question remains whether a stochastic QSSA model measures intrinsic noise as accurately as an SSA performed on a detailed mechanistic model. To address this issue, we have constructed mechanistic and QSSA models for a few frequently observed GRNs exhibiting switching behavior and performed stochastic simulations with them. Our results strongly suggest that the performance of a stochastic QSSA model, in comparison to an SSA performed on a mechanistic model, critically relies on the absolute values of the mRNA and protein half-lives involved in the corresponding GRN. The accuracy achieved by the stochastic QSSA model depends on the level of bursting frequency generated by the absolute value of the half-life of the mRNA, the protein, or both. For the GRNs considered, the stochastic QSSA quantifies the intrinsic noise at the protein level with greater accuracy and for larger combinations of mRNA and protein half-life values, whereas at the mRNA level a satisfactory accuracy can only be reached for limited combinations of absolute half-life values. Further, we clearly demonstrate that the abundance levels of mRNA and protein hardly matter for such a comparison between QSSA and mechanistic models. Based on our findings, we conclude that a QSSA model can be a good choice for evaluating intrinsic noise for other GRNs as well, provided we make a rational choice based on experimental half-life values available in the literature. PMID:26327626
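The mechanistic-model SSA referred to above can be sketched for the simplest two-stage gene expression network; the rate constants are illustrative, and a QSSA version would eliminate the fast mRNA variable and replace it with burst statistics:

```python
import numpy as np

def ssa_gene_expression(t_end, k_m=10.0, g_m=1.0, k_p=5.0, g_p=0.1, seed=0):
    """Gillespie SSA for the full mechanistic model
    DNA -> mRNA -> protein, with first-order decay of both species."""
    rng = np.random.default_rng(seed)
    t, m, p = 0.0, 0, 0
    ts, ps = [], []
    while t < t_end:
        r1, r2, r3, r4 = k_m, g_m * m, k_p * m, g_p * p   # propensities
        total = r1 + r2 + r3 + r4
        t += rng.exponential(1.0 / total)   # time to next reaction
        u = rng.uniform(0.0, total)         # which reaction fires
        if u < r1:             m += 1       # transcription
        elif u < r1 + r2:      m -= 1       # mRNA decay
        elif u < r1 + r2 + r3: p += 1       # translation
        else:                  p -= 1       # protein decay
        ts.append(t)
        ps.append(p)
    return np.array(ts), np.array(ps)
```

With these rates the stationary protein mean is k_m/g_m × k_p/g_p = 500, and the short mRNA half-life relative to the protein's produces the bursty (super-Poissonian) protein noise that the QSSA comparison in the abstract hinges on.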
Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.
Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek
2016-02-01
Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-'one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
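The dead-end elimination step that the Fitmunk algorithm builds on can be sketched in its simplest (Goldstein-style) form; the energy tables below are placeholders, and Fitmunk's actual energy function additionally includes the electron-density term described in the abstract:

```python
import numpy as np

def dee_eliminate(E_self, E_pair):
    """One sweep of simple dead-end elimination over side-chain rotamers.
    E_self[i][r]   : self energy of rotamer r at residue position i.
    E_pair[(i, j)] : matrix of pairwise energies, rows = rotamers at i,
                     columns = rotamers at j (stored for both orderings).
    Rotamer r at i is eliminated if some alternative t beats it even in the
    worst case: E(r) - E(t) + sum_j min_s [E(r,s) - E(t,s)] > 0."""
    n = len(E_self)
    alive = [np.ones(len(e), dtype=bool) for e in E_self]
    for i in range(n):
        for r in range(len(E_self[i])):
            for t in range(len(E_self[i])):
                if t == r or not alive[i][t]:
                    continue
                gap = E_self[i][r] - E_self[i][t]
                for j in range(n):
                    if j != i:
                        gap += (E_pair[(i, j)][r] - E_pair[(i, j)][t]).min()
                if gap > 0:        # r can never beat t in any context
                    alive[i][r] = False
                    break
    return alive
```

Repeating such sweeps until no rotamer is eliminated shrinks the combinatorial search space so that the remaining conformations can be enumerated against the electron density.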
Infrared radiation models for atmospheric ozone
NASA Technical Reports Server (NTRS)
Kratz, David P.; Cess, Robert D.
1988-01-01
A hierarchy of line-by-line, narrow-band, and broadband infrared radiation models is discussed for ozone, a radiatively important atmospheric trace gas. It is shown that the narrow-band (Malkmus) model is in near-precise agreement with the line-by-line model, thus providing a means of testing narrow-band Curtis-Godson scaling, and it is found that this scaling procedure leads to errors in atmospheric fluxes of up to 10 percent; this is a direct consequence of the altitude dependence of the ozone mixing ratio. Somewhat greater flux errors arise with use of the broadband model, due both to the lesser accuracy of the broadband scaling procedure and to inherent errors within the broadband model, despite the fact that this model has been tuned to the line-by-line model.
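For reference, the Malkmus narrow-band model mentioned above gives the band-mean transmittance in closed form. One common formulation is sketched below; the symbol names and parameter values are chosen for illustration and should be checked against the paper's own definitions:

```python
import math

def malkmus_transmittance(S, B, u):
    """Band-mean transmittance of a Malkmus statistical narrow-band model:
        tau = exp(-(pi*B/2) * (sqrt(1 + 4*S*u/(pi*B)) - 1)),
    with S the mean line-strength-to-spacing ratio, B the mean
    line-width-to-spacing ratio, and u the absorber amount along the path.
    Recovers the weak-line limit exp(-S*u) for small S*u and the
    square-root (strong-line) regime for large S*u."""
    x = 4.0 * S * u / (math.pi * B)
    return math.exp(-(math.pi * B / 2.0) * (math.sqrt(1.0 + x) - 1.0))
```

The two limiting regimes follow directly from expanding the square root, which is what makes the model a convenient bridge between line-by-line results and broadband scaling tests of the kind performed in the paper.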
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.
Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish
2016-04-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
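The linear-nonlinear model described above can be sketched on synthetic data. All numbers here are illustrative; the paper estimates the low-dimensional subspace with principal components analysis, while this one-dimensional toy uses the spike-triggered average, which recovers the same leading direction for Gaussian stimuli:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-nonlinear response model: the cell is sensitive to a fixed
# weight vector over the electrodes (its electrical receptive field, ERF),
# and spike probability is a sigmoid of the stimulus projection.
n_elec, n_trials = 8, 5000
w_true = np.zeros(n_elec)
w_true[:3] = [1.0, 0.6, 0.3]                      # cell's true ERF
stim = rng.normal(0.0, 1.0, (n_trials, n_elec))   # random stimulation patterns
p_spike = 1.0 / (1.0 + np.exp(-(stim @ w_true - 1.0)))
spikes = rng.uniform(size=n_trials) < p_spike     # Bernoulli spike outcomes

# Recover the ERF direction from the stimuli that elicited spikes.
erf_hat = stim[spikes].mean(axis=0)               # spike-triggered average
erf_hat /= np.linalg.norm(erf_hat)
```

The recovered direction identifies the electrodes to which the cell is most sensitive, which is the quantity the abstract proposes to exploit when distributing stimulation power across electrodes.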
Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model
NASA Astrophysics Data System (ADS)
Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.
2007-05-01
Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
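As a concrete illustration of what such an analytical SAE approximation can look like, here is a hedged sketch of a widely used functional form (in the style of Tong-Lin model potentials) — NOT the specific fits developed in this work; the screening parameters a2..a6 and the default values are hypothetical.

```python
import math

# Hedged sketch: a common analytical SAE potential form (Tong-Lin style),
# NOT the fits from this work. a2..a6 are hypothetical screening
# parameters; Zc is the net charge of the residual ion; a1 is fixed so
# that the full nuclear charge Z is seen as r -> 0.
def sae_potential(r, Z, Zc=1.0, a2=1.0, a3=0.5, a4=1.0, a5=0.1, a6=0.5):
    a1 = Z - Zc - a5   # enforces V(r) ~ -Z/r near the nucleus
    return -(Zc + a1 * math.exp(-a2 * r)
             + a3 * r * math.exp(-a4 * r)
             + a5 * math.exp(-a6 * r)) / r
```

The two limits are easy to check: r*V(r) tends to -Z near the origin, while far from the nucleus V(r) approaches the pure Coulomb tail -Zc/r of the residual ion.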
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. Moreover, the problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method, using the solution of the "forward problem", is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Threshold models in radiation carcinogenesis
Hoel, D.G.; Li, P.
1998-09-01
Cancer incidence and mortality data from the atomic bomb survivors cohort have been analyzed to allow for the possibility of a threshold dose response. The same dose-response models as used in the original papers were fit to the data. The estimated cancer incidence from the fitted models over-predicted the observed cancer incidence in the lowest exposure group. This is consistent with a threshold or nonlinear dose response at low doses. Thresholds were added to the dose-response models, and the range of possible thresholds is shown for both solid tumor cancers and the different leukemia types. This analysis suggests that the A-bomb cancer incidence data agree more with a threshold or nonlinear dose-response model than with a purely linear model, although the linear model is statistically equivalent. This observation is not found with the mortality data. For both the incidence data and the mortality data, the addition of a threshold term significantly improves the fit to the linear or linear-quadratic dose response for total leukemias and also for the leukemia subtypes of ALL, AML, and CML.
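The basic idea of adding a threshold to a linear dose-response model can be sketched in a few lines — this is a hedged toy version, not the paper's actual fits: excess risk is zero below a threshold d0 and linear above it, and d0 is profiled over a grid (for each fixed d0 the slope beta has a closed-form least-squares solution). The data below are synthetic.

```python
import numpy as np

# Toy linear excess-risk model with an optional dose threshold d0.
def excess_risk(dose, beta, d0):
    return beta * np.maximum(0.0, dose - d0)

def fit_threshold(dose, risk, d0_grid):
    """Profile over candidate thresholds; for each d0 the slope is a
    one-parameter least-squares fit."""
    best = None
    for d0 in d0_grid:
        x = np.maximum(0.0, dose - d0)
        xx = float(x @ x)
        beta = float(x @ risk) / xx if xx > 0 else 0.0
        sse = float(np.sum((risk - beta * x) ** 2))
        if best is None or sse < best[2]:
            best = (d0, beta, sse)
    return best  # (threshold, slope, sum of squared errors)

dose = np.array([0.0, 0.05, 0.2, 0.5, 1.0, 2.0])   # Gy, synthetic
risk = excess_risk(dose, beta=0.4, d0=0.1)          # noiseless toy data
d0_hat, beta_hat, sse = fit_threshold(dose, risk, np.linspace(0.0, 0.5, 51))
```

On this noiseless toy data the profile recovers the generating threshold and slope exactly; with real incidence data the same profiling yields the range of statistically admissible thresholds discussed in the abstract.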
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1990-01-01
Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). SPECTRE will establish an absolute standard against which to compare models, and will aim to remove the "hidden variables" (unknown humidities, aerosols, etc.) which radiation modelers have invoked to excuse disagreements with observation. The data to be collected during SPECTRE will form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. Overall, the project is proceeding much as had been anticipated in the original proposal. The most significant accomplishments to date include the completion of the analysis of the original ICRCCM calculations, the completion of the initial sensitivity analysis of the radiation calculations for the effects of uncertainties in the measurement of water vapor and temperature, and the acquisition and testing of the inexpensive spectrometers for use in the field experiment. The sensitivity analysis and the spectrometer tests have given us much more confidence that the field experiment will yield the quality of data necessary to make significant tests of, and improvements to, radiative transfer models used in climate studies.
Radiative transfer model: matrix operator method.
Liu, Q; Ruprecht, E
1996-07-20
A radiative transfer model, the matrix operator method, is discussed here. The matrix operator method is applied to a plane-parallel atmosphere within three spectral ranges: the visible, the infrared, and the microwave. For a homogeneous layer with spherical scattering, the radiative transfer equation can be solved analytically. The vertically inhomogeneous atmosphere can be subdivided into a set of homogeneous layers. The solution of the radiative transfer equation for the vertically inhomogeneous atmosphere is obtained recurrently from the analytical solutions for the subdivided layers. As an example of the application of the matrix operator method, the effects of cirrus and stratocumulus clouds on the net radiation at the surface and at the top of the atmosphere are investigated. The relationship between polarization in the microwave range and rain rates is also studied. Copies of the FORTRAN program and its documentation are available on diskette.
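The recurrence at the heart of the matrix operator (adding) method can be sketched compactly. This is a hedged one-stream illustration assuming symmetric layers, with the reflection/transmission operators reduced to 1x1 matrices; the discretized multi-angle case uses the same formulas with full matrices.

```python
import numpy as np

# Combine two symmetric homogeneous layers: (I - R1 R2)^-1 sums the
# geometric series of interreflections between the layers.
def add_layers(R1, T1, R2, T2):
    I = np.eye(R1.shape[0])
    M = np.linalg.inv(I - R1 @ R2)   # all orders of interreflection
    T = T2 @ M @ T1                  # transmitted through both layers
    R = R1 + T1 @ R2 @ M @ T1        # reflected, incl. multiple bounces
    return R, T

# 1-stream (scalar) sanity check: combining two lossless layers must
# again be lossless, R + T = 1.
R1 = np.array([[0.3]]); T1 = np.array([[0.7]])
R2 = np.array([[0.5]]); T2 = np.array([[0.5]])
R, T = add_layers(R1, T1, R2, T2)
```

Applying `add_layers` repeatedly to a stack of homogeneous sub-layers is exactly the recurrent construction of the inhomogeneous-atmosphere solution described above.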
Atmospheric radiation model for water surfaces
NASA Technical Reports Server (NTRS)
Turner, R. E.; Gaskill, D. W.; Lierzer, J. R.
1982-01-01
An atmospheric correction model was extended to account for various atmospheric radiation components in remotely sensed data. Components such as the atmospheric path radiance which results from singly scattered sky radiation specularly reflected by the water surface are considered. A component referred to as the virtual Sun path radiance, i.e., the singly scattered path radiance resulting from solar radiation specularly reflected by the water surface, is also considered. These atmospheric radiation components are coded into a computer program for the analysis of multispectral remote sensor data over the Great Lakes of the United States. The user must know certain parameters, such as the visibility or spectral optical thickness of the atmosphere and the geometry of the sensor with respect to the Sun and the target elements under investigation.
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases makes it possible to take decisions before the symptoms occur, such as the intake of drugs to avoid the symptoms, or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications, or the time response of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782
Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?
Searcy, Christopher A; Shaffer, H Bradley
2016-04-01
Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071
Lito, Patrícia F; Magalhães, Ana L; Gomes, José R B; Silva, Carlos M
2013-05-17
In this work, a new model is presented for accurate calculation of binary diffusivities (D12) of solutes infinitely diluted in gas, liquid, and supercritical solvents. It is based on a Lennard-Jones (LJ) model, and contains two parameters: the molecular diameter of the solvent and a diffusion activation energy. The model is universal since it is applicable to polar, weakly polar, and non-polar solutes and/or solvents, over wide ranges of temperature and density. Its validation was accomplished with the largest database ever compiled, namely 487 systems with 8293 points in total, covering polar (180 systems/2335 points) and non-polar or weakly polar (307 systems/5958 points) mixtures, for which the average errors were 2.65% and 2.97%, respectively. With regard to the physical states of the systems, the average deviations achieved were 1.56% for gaseous (73 systems/1036 points), 2.90% for supercritical (173 systems/4398 points), and 2.92% for liquid (241 systems/2859 points) mixtures. Furthermore, the model exhibited excellent prediction ability. Ten expressions from the literature were adopted for comparison, but provided worse results or were not applicable to polar systems. A spreadsheet for D12 calculation is provided online for users in the Supplementary Data.
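The "average errors" quoted above are, as is customary for D12 correlations, average absolute relative deviations between calculated and experimental diffusivities — that interpretation, and the numbers below, are illustrative assumptions, not data from the paper.

```python
# Average absolute relative deviation (AARD), the usual error metric for
# diffusivity correlations. Values below are hypothetical.
def aard_percent(calc, exp):
    return 100.0 * sum(abs((c - e) / e) for c, e in zip(calc, exp)) / len(exp)

d12_exp  = [1.00e-9, 2.50e-9, 4.00e-9]   # m^2/s, hypothetical measurements
d12_calc = [1.03e-9, 2.45e-9, 4.08e-9]   # hypothetical model output
```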
An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion
NASA Astrophysics Data System (ADS)
Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.
2014-11-01
Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations, which is essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.
The use of sparse CT datasets for auto-generating accurate FE models of the femur and pelvis.
Shim, Vickie B; Pitto, Rocco P; Streicher, Robert M; Hunter, Peter J; Anderson, Iain A
2007-01-01
The finite element (FE) method, when coupled with computed tomography (CT), is a powerful tool in orthopaedic biomechanics. However, substantial data are required for patient-specific modelling. Here we present a new method for generating a FE model with a minimum amount of patient data. Our method uses high-order cubic Hermite basis functions for mesh generation and least-squares fits the mesh to the dataset. We have tested our method on seven patient data sets obtained from CT-assisted osteodensitometry of the proximal femur. Using only 12 CT slices we generated smooth and accurate meshes of the proximal femur with a geometric root mean square (RMS) error of less than 1 mm and peak errors less than 8 mm. To model the complex geometry of the pelvis we developed a hybrid method which supplements sparse patient data with data from the Visible Human data set. We tested this method on three patient data sets, generating FE meshes of the pelvis using only 10 CT slices with an overall RMS error of less than 3 mm. Although we have peak errors of about 12 mm in these meshes, they occur relatively far from the region of interest (the acetabulum) and will have minimal effect on the performance of the model. Considering that linear meshes usually require about 70-100 pelvic CT slices (in axial mode) to generate FE models, our method has brought a significant data reduction to the automatic mesh generation step. The method, which is fully automated except for a semi-automatic bone/tissue boundary extraction step, will bring the benefits of FE methods to the clinical environment with much reduced radiation risk and data requirements.
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
NASA Astrophysics Data System (ADS)
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model allows for providing ionospheric TEC maps with high spatial and temporal resolutions - 0.2x0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals to calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The obtained post-fit residuals in the case of UWM maps are lower by one order of magnitude compared to IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
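Thin plate splines are radial basis functions with kernel phi(r) = r^2 log r plus a linear polynomial, which is what realizes the bending-energy variational problem described above. A hedged sketch of TPS interpolation of scattered TEC values — not the UWM-rt1 implementation; the station positions and TEC values are synthetic:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# 80 scattered "station" pierce points (lon, lat) over Europe-like extents
lonlat = rng.uniform([0.0, 45.0], [30.0, 60.0], size=(80, 2))
# smooth synthetic TEC field in TECU (here exactly linear, for checking)
tec = 10.0 + 0.2 * lonlat[:, 0] + 0.1 * lonlat[:, 1]

# smoothing=0 gives exact interpolation at the data points;
# smoothing>0 trades fidelity for smoothness, as in the TPS functional
tps = RBFInterpolator(lonlat, tec, kernel="thin_plate_spline", smoothing=0.0)

grid = np.array([[15.0, 52.5], [5.0, 47.0]])   # map query points
tec_grid = tps(grid)
```

With `smoothing=0` the spline reproduces the input TEC at every station, and because TPS contains a degree-1 polynomial it recovers this (exactly linear) toy field everywhere.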
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
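The lattice vibrational free energy underlying the quasi-harmonic calculations mentioned above has a standard per-mode harmonic form; this sketch implements it for a handful of hypothetical phonon frequencies (the frequencies and the kJ/mol unit choice are illustrative, not values from the X23 benchmark).

```python
import math

HC_OVER_K_CM_K = 1.438777    # hc/k in cm*K: converts cm^-1 to kelvin
K_B_KJ_MOL_K = 0.0083145     # Boltzmann constant in kJ/(mol*K)

def vib_free_energy(freqs_cm1, T):
    """Harmonic Helmholtz vibrational free energy in kJ/mol:
    per mode, F = k*theta/2 + k*T*ln(1 - exp(-theta/T)),
    with theta the mode temperature hbar*omega/k."""
    f = 0.0
    for w in freqs_cm1:
        theta = HC_OVER_K_CM_K * w
        f += K_B_KJ_MOL_K * 0.5 * theta            # zero-point term
        if T > 0.0:
            f += K_B_KJ_MOL_K * T * math.log(1.0 - math.exp(-theta / T))
    return f

modes = [50.0, 120.0, 900.0]   # cm^-1, hypothetical lattice modes
```

In a quasi-harmonic workflow this F(T) is evaluated at each candidate cell volume (with phonon-dispersion sampling) and added to the lattice energy before minimizing over volume; at T = 0 it reduces to the zero-point energy.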
NASA Technical Reports Server (NTRS)
Livne, Eli
1989-01-01
A method is presented for generating mode shapes for model order reduction in a way that leads to accurate calculation of eigenvalue derivatives and eigenvalues for a class of control augmented structures. The method is based on treating degrees of freedom where control forces act or masses are changed in a manner analogous to that used for boundary degrees of freedom in component mode synthesis. It is especially suited for structures controlled by a small number of actuators and/or tuned by a small number of concentrated masses whose positions are predetermined. A control augmented multispan beam with closely spaced natural frequencies is used for numerical experimentation. A comparison with reduced-order eigenvalue sensitivity calculations based on the normal modes of the structure shows that the method presented produces significant improvements in accuracy.
Development of a new Global RAdiation Belt model: GRAB
NASA Astrophysics Data System (ADS)
Sicard-Piet, Angelica; Lazaro, Didier; Maget, Vincent; Rolland, Guy; Ecoffet, Robert; Bourdarie, Sébastien; Boscher, Daniel; Standarovski, Denis
2016-07-01
The well-known AP8 and AE8 NASA models are commonly used in industry to specify the radiation belt environment. Unfortunately, there are some limitations in the use of these models, first due to the covered energy range, but also because in some regions of space there are discrepancies between the predicted average values and the measurements. Therefore, our aim is to develop a radiation belt model covering a large region of space and energy, from LEO altitudes to GEO and above, and from plasma to relativistic particles. The aim for the first version is to correct the AP8 and AE8 models where they are deficient or not defined. At geostationary orbit, we developed the IGE-2006 electron model ten years ago, which was proven to be more accurate than AE8 and is commonly used in industry, covering a broad energy range from 1 keV to 5 MeV. Since then, a proton model for geostationary orbit was also developed for material applications, followed by the OZONE model covering a narrower energy range but the whole outer electron belt, and a SLOT model to assess average electron values for 2
An Accurately Stable Thermo-Hydro-Mechanical Model for Geo-Environmental Simulations
NASA Astrophysics Data System (ADS)
Gambolati, G.; Castelletto, N.; Ferronato, M.
2011-12-01
In real-world applications involving complex 3D heterogeneous domains, the use of advanced numerical algorithms is of paramount importance to stably, accurately, and efficiently solve the coupled system of partial differential equations governing the mass and energy balance in deformable porous media. The present communication discusses a novel coupled 3D numerical model based on a suitable combination of Finite Elements (FEs), Mixed FEs (MFEs), and Finite Volumes (FVs), developed with the aim of stabilizing the numerical solution. Elemental pressures and temperatures, nodal displacements, and face-normal Darcy and Fourier fluxes are the selected primary variables. Such an approach provides an element-wise conservative velocity field, with both pore pressure and stress having the same order of approximation, and allows for the accurate prediction of sharp temperature convective fronts. In particular, the flow-deformation problem is addressed jointly by FEs and MFEs and is coupled to the heat transfer equation using an ad hoc time-splitting technique that separates the temperature evolution in time into two partial differential equations, accounting for the convective and the diffusive contributions, respectively. The convective part is addressed by a FV scheme, which proves effective in treating sharp convective fronts, while the diffusive part is solved by a MFE formulation. A staggered technique is then implemented for the global solution of the coupled thermo-hydro-mechanical problem, solving iteratively the flow-deformation and the heat transport problems at each time step. Finally, the model is successfully tested in realistic applications dealing with geothermal energy extraction and injection.
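The convective/diffusive time splitting described above can be illustrated in one dimension. This is a hedged stand-in for the paper's FV/MFE scheme: each step advances temperature by an upwind finite-volume convection substep followed by an explicit diffusion substep, and on a periodic grid both substeps conserve the total heat content.

```python
import numpy as np

def step(T, v, D, dx, dt):
    """One split step: upwind FV convection (assumes v > 0), then
    explicit central-difference diffusion, both periodic."""
    inflow = v * np.roll(T, 1)                       # flux entering each cell
    T = T - dt / dx * (v * T - inflow)               # convective substep
    T = T + D * dt / dx**2 * (np.roll(T, 1) - 2.0 * T + np.roll(T, -1))
    return T                                         # diffusive substep

n, dx, dt = 100, 1.0, 0.2                            # CFL: v*dt/dx = 0.2,
T = np.zeros(n); T[45:55] = 1.0                      # D*dt/dx^2 = 0.1
mass0 = T.sum()
for _ in range(200):
    T = step(T, v=1.0, D=0.5, dx=dx, dt=dt)
```

Under these CFL numbers both substeps are monotone, so the sharp front is advected and smeared without oscillations, and the discrete conservation mirrors the element-wise conservative fluxes emphasized in the abstract.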
NASA Astrophysics Data System (ADS)
Nielsen, Jens; d'Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-01
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
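A minimal example of the kind of simulation described: a hedged sketch, not Zacros — a rejection-free (Gillespie-type) KMC for adsorption/desorption on a 1D periodic lattice, where attractive first nearest-neighbor interactions eps (in kT units) lower desorption rates. This is exactly the simplest pairwise-additive adlayer-energetics model whose limitations the abstract discusses; all rate constants are hypothetical.

```python
import math
import random

def kmc(n_sites=60, n_events=3000, k_ads=1.0, k_des=2.0, eps=0.5, seed=7):
    rng = random.Random(seed)
    occ = [0] * n_sites
    t = 0.0
    for _ in range(n_events):
        # build the rate list: adsorption on empty sites, NN-modified
        # desorption from occupied sites
        rates = []
        for i in range(n_sites):
            if occ[i] == 0:
                rates.append(k_ads)
            else:
                nn = occ[i - 1] + occ[(i + 1) % n_sites]
                rates.append(k_des * math.exp(-eps * nn))
        total = sum(rates)
        r, acc = rng.random() * total, 0.0
        for i, ri in enumerate(rates):   # pick event proportionally to rate
            acc += ri
            if acc >= r:
                occ[i] ^= 1              # flip site: adsorb or desorb
                break
        t += -math.log(rng.random()) / total   # exponential waiting time
    return sum(occ) / n_sites, t

coverage, t_final = kmc()
```

Replacing the `nn`-based rate with a full cluster-expansion energy evaluation (long-range pairs, triplets, ...) is precisely the generalization the abstract argues is needed for accurate rates.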
The S-model: A highly accurate MOST model for CAD
NASA Astrophysics Data System (ADS)
Satter, J. H.
1986-09-01
A new MOST model which combines simplicity and a logical structure with high accuracy, with errors of only 0.5-4.5%, is presented. The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation, as well as the influence of the intrinsic source and drain series resistances. The decrease of the drain current due to substrate bias is incorporated too. The model is primarily intended for digital applications. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described, and a new cluster parameter is introduced which is responsible for the high accuracy of the model. The total number of parameters is 7. A still simpler β expression is derived, which is suitable for only one value of the substrate bias and contains only three parameters, while maintaining the accuracy. The way in which the parameters are determined is readily suited to automatic measurement. A simple linear regression procedure, programmed in the computer which controls the measurements, produces the parameter values.
Modeling of Radiative Transfer in Protostellar Disks
NASA Technical Reports Server (NTRS)
VonAllmen, Paul; Turner, Neal
2007-01-01
This program implements a spectral-line radiative transfer tool for interpreting Spitzer Space Telescope observations by matching them with models of protostellar disks, for improved understanding of planet and star formation. The Spitzer Space Telescope detects gas-phase molecules in the infrared spectra of protostellar disks, with spectral lines carrying information on the chemical composition of the material from which planets form. Input to the software includes chemical models developed at JPL. The products are synthetic images and spectra for comparison with Spitzer measurements. Radiative transfer in a protostellar disk is primarily affected by absorption and emission processes in the dust and in molecular gases such as H2, CO, and HCO. The magnitude of the optical absorption and emission is determined by the population of the electronic, vibrational, and rotational energy levels. The population of the molecular levels is in turn determined by the intensity of the radiation field. Therefore, the intensity of the radiation field and the population of the molecular levels are interdependent quantities. To meet the computational challenges of solving for the coupled radiation field and level populations in disks having wide ranges of optical depths and spatial scales, the tool runs in parallel on the JPL Dell Cluster supercomputer, using C++ and Fortran with the Message Passing Interface (MPI). Because this software has been developed on a distributed computing platform, the modeling of systems previously beyond the reach of available computational resources is possible.
Helium Reionization Simulations. I. Modeling Quasars as Radiation Sources
NASA Astrophysics Data System (ADS)
La Plante, Paul; Trac, Hy
2016-09-01
We introduce a new project to understand helium reionization using fully coupled N-body, hydrodynamics, and radiative transfer simulations. This project aims to capture correctly the thermal history of the intergalactic medium as a result of reionization and make predictions about the Lyα forest and baryon temperature-density relation. The dominant sources of radiation for this transition are quasars, so modeling the source population accurately is very important for making reliable predictions. In this first paper, we present a new method for populating dark matter halos with quasars. Our set of quasar models includes two different light curves, a lightbulb (simple on/off) and symmetric exponential model, and luminosity-dependent quasar lifetimes. Our method self-consistently reproduces an input quasar luminosity function given a halo catalog from an N-body simulation, and propagates quasars through the merger history of halo hosts. After calibrating quasar clustering using measurements from the Baryon Oscillation Spectroscopic Survey, we find that the characteristic mass of quasar hosts is {M}h˜ 2.5× {10}12 {h}-1 {M}⊙ for the lightbulb model, and {M}h˜ 2.3× {10}12 {h}-1 {M}⊙ for the exponential model. In the latter model, the peak quasar luminosity for a given halo mass is larger than that in the former, typically by a factor of 1.5-2. The effective lifetime for quasars in the lightbulb model is 59 Myr, and in the exponential case, the effective time constant is about 15 Myr. We include semi-analytic calculations of helium reionization, and discuss how to include these quasars as sources of ionizing radiation for full hydrodynamics with radiative transfer simulations in order to study helium reionization.
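The two light-curve models described above are simple enough to write down directly. In this sketch the effective lifetime is defined as integrated luminosity over peak luminosity, which gives t_on for the lightbulb and 2*tau for the symmetric exponential; the values tau = 15 Myr and t_on = 59 Myr come from the abstract, while the grid and normalization are illustrative.

```python
import math

def lightbulb(t, L0, t_on):
    """Constant luminosity L0 while the quasar is on."""
    return L0 if 0.0 <= t <= t_on else 0.0

def sym_exponential(t, L_peak, t0, tau):
    """Symmetric exponential rise and decay around the peak time t0."""
    return L_peak * math.exp(-abs(t - t0) / tau)

def effective_lifetime(lum, t_grid):
    """Integrated luminosity divided by peak luminosity."""
    dt = t_grid[1] - t_grid[0]
    vals = [lum(t) for t in t_grid]
    return sum(vals) * dt / max(vals)

t_grid = [i * 0.01 for i in range(-10000, 10001)]   # Myr, wide window
t_eff_exp = effective_lifetime(lambda t: sym_exponential(t, 1.0, 0.0, 15.0), t_grid)
t_eff_lb = effective_lifetime(lambda t: lightbulb(t, 1.0, 59.0), t_grid)
```

Numerically, the exponential model's effective lifetime comes out close to 2*tau = 30 Myr and the lightbulb's close to its 59 Myr on-time, consistent with the figures quoted in the abstract.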
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have found little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
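As a rough illustration of the ingredients the abstract lists for RGLM (bootstrap aggregation, random feature subspaces, forward variable selection), here is a minimal numpy sketch; it uses ordinary least squares in place of a full GLM and is not the randomGLM package:

```python
import numpy as np

def forward_select(X, y, max_feats):
    """Greedy forward selection: repeatedly add the feature that most reduces RSS."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(min(max_feats, len(remaining))):
        best, best_rss = None, np.inf
        for j in remaining:
            A = np.column_stack([np.ones(len(y)), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)
        remaining.remove(best)
    return selected

def rglm_fit_predict(X, y, X_new, n_bags=20, subspace=0.5, max_feats=3, seed=0):
    """Average forward-selected linear models fit on bootstrap samples and random feature subsets."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    preds = []
    for _ in range(n_bags):
        rows = rng.integers(0, n, n)                                    # bootstrap sample
        feats = rng.choice(p, max(1, int(subspace * p)), replace=False)  # random subspace
        cols = feats[forward_select(X[rows][:, feats], y[rows], max_feats)]
        A = np.column_stack([np.ones(n), X[rows][:, cols]])
        beta, *_ = np.linalg.lstsq(A, y[rows], rcond=None)
        A_new = np.column_stack([np.ones(len(X_new)), X_new[:, cols]])
        preds.append(A_new @ beta)
    return np.mean(preds, axis=0)                                        # ensemble average
```

Averaging many unstable, randomized forward-selected models is what trades the single model's interpretability for the ensemble's accuracy; the variable importance and "thinned" ensemble features of RGLM are not reproduced here.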
A first-order radiative transfer model for microwave radiometry of forest canopies at L-band
Technology Transfer Automated Retrieval System (TEKTRAN)
In this study, a first-order radiative transfer (RT) model is developed to more accurately account for vegetation canopy scattering by modifying the basic radiative transfer model (the zero-order RT solution). In order to optimally utilize microwave radiometric data in soil moisture (SM) retrievals ...
Dynamic model of Earth's radiation belts
NASA Astrophysics Data System (ADS)
Matsumoto, Haruhisa; Koshiishi, Hideki; Goka, Tateo; Obara, Takahiro
The radiation belts are the regions in which energetic charged particles are trapped by Earth's magnetic field. It is well known that energetic particle fluxes vary during geomagnetic disturbances and that the relativistic electron population of the outer radiation belt changes with solar wind speed. Many studies have examined these flux variations, but the mechanisms behind them are not yet understood in detail. We have developed a new dynamic model of energetic particles trapped in the radiation belts based on data from the MDS-1 spacecraft. The model reproduces the dynamics of the radiation belts using running averages of the magnetic activity index (Ap) and of the solar wind speed. It covers the energy ranges 0.4-2 MeV for electrons, 0.9-210 MeV for protons, and 6-140 MeV for helium ions, and is valid from low altitudes (approximately 500 km) to geosynchronous orbit altitude. We discuss the advantages of the new model and present comparisons between MDS-1 data and model predictions.
Radiative Torques: Analytical Model And Basic Properties
NASA Astrophysics Data System (ADS)
Hoang, Thiem; Lazarian, A.
2007-05-01
We attempt to get physical insight into grain alignment processes by studying basic properties of radiative torques (RATs). For this purpose we consider a simple toy model of a helical grain that reproduces well the basic features of RATs. The model grain consists of a reflecting spheroidal body with a reflecting mirror attached at an angle to it. Being very simple, the model allows an analytical description of the RATs that act upon it. We show a good correspondence between the RATs obtained for this model and those of irregular grains calculated by DDSCAT. Our analysis of the role of different torque components in grain alignment reveals that one of the three RAT components does not affect the alignment, but induces only grain precession. The other two components provide a generic alignment with grain long axes perpendicular to the radiation direction, if the radiation dominates the grain precession, and perpendicular to the magnetic field otherwise. The latter coincides with the famous predictions of the Davis-Greenstein process, but our model does not invoke paramagnetic relaxation. In addition, we find that a substantial fraction of grains subjected to RATs becomes aligned with low angular momentum, which indicates that most grains in the diffuse interstellar medium do not rotate fast, i.e., they rotate with thermal or even sub-thermal velocities. For radiation-dominated environments, we find that the alignment can take place on a time scale much shorter than the time of gaseous damping of grain rotation.
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work shows that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradients substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in preliminary design, closed-form expressions for these derivatives can be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effects of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for both the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for computing the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
Modeling influences of topography on incoming solar radiation from satellite remote sensing data
NASA Astrophysics Data System (ADS)
Zoran, Maria
2007-08-01
Solar radiation is the primary source of energy that drives earth system processes, such as weather patterns and rates of primary production by green plants. Accurate solar irradiance data are necessary for the radiative forcing of the climate system assessment as well as for efficient planning and operation of solar energy systems. Topography is a major factor that determines the amount of solar radiation reaching any particular location on the Earth's surface. Its variability in elevation, surface orientation (slope and aspect), and shadows is subject to quantitative modeling, based on radiative transfer models (RTM) using atmospheric parameter information retrieved from the MODIS satellites. This paper focuses on the description of a solar radiation model to describe spatial and temporal patterns of daily radiation based on topography and daily temperature regimes with a specific analysis for Dobruja area, Romania.
Jovian S emission: Model of radiation source
NASA Astrophysics Data System (ADS)
Ryabov, B. P.
1994-04-01
A physical model of the radiation source and an excitation mechanism have been suggested for the S component in Jupiter's sporadic radio emission. The model provides a unique explanation for most of the interrelated phenomena observed, allowing a consistent interpretation of the emission cone structure, behavior of the integrated radio spectrum, occurrence probability of S bursts, location and size of the radiation source, and fine structure of the dynamic spectra. The mechanism responsible for the S bursts is also discussed in connection with the L type emission. Relations are traced between parameters of the radio emission and geometry of the Io flux tube. Fluctuations in the current amplitude through the tube are estimated, along with the refractive index value and mass density of the plasma near the radiation source.
Franck, Christopher T; Koffarnus, Mikhail N; House, Leanna L; Bickel, Warren K
2015-01-01
The study of delay discounting, or valuation of future rewards as a function of delay, has contributed to understanding the behavioral economics of addiction. Accurate characterization of discounting can be furthered by statistical model selection given that many functions have been proposed to measure future valuation of rewards. The present study provides a convenient Bayesian model selection algorithm that selects the most probable discounting model among a set of candidate models chosen by the researcher. The approach assigns the most probable model for each individual subject. Importantly, effective delay 50 (ED50) functions as a suitable unifying measure that is computable for and comparable between a number of popular functions, including both one- and two-parameter models. The combined model selection/ED50 approach is illustrated using empirical discounting data collected from a sample of 111 undergraduate students with models proposed by Laibson (1997); Mazur (1987); Myerson & Green (1995); Rachlin (2006); and Samuelson (1937). Computer simulation suggests that the proposed Bayesian model selection approach outperforms the single model approach when data truly arise from multiple models. When a single model underlies all participant data, the simulation suggests that the proposed approach fares no worse than the single model approach.
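A minimal sketch of the kind of workflow described: fit two of the cited one-parameter discounting functions (Mazur's hyperbolic and Samuelson's exponential), pick the better model via BIC as a stand-in for the full Bayesian posterior, and report ED50, the delay at which a reward loses half its value (1/k for the hyperbolic model, ln 2/k for the exponential). The grid-search fitter is an illustrative simplification:

```python
import numpy as np

mazur = lambda D, k: 1.0 / (1.0 + k * D)   # hyperbolic (Mazur 1987)
samuelson = lambda D, k: np.exp(-k * D)    # exponential (Samuelson 1937)

def fit_k(model, delays, values, ks=np.logspace(-4, 1, 400)):
    """Least-squares fit of a one-parameter discount function V(D; k) by grid search."""
    sse = [np.sum((values - model(delays, k)) ** 2) for k in ks]
    i = int(np.argmin(sse))
    return ks[i], sse[i]

def bic(sse, n, n_params=1):
    """Bayesian information criterion for a Gaussian-error least-squares fit."""
    return n * np.log(sse / n) + n_params * np.log(n)

def select_model(delays, values):
    """Return (model name, fitted k, ED50) for the lower-BIC model."""
    fits = {}
    for name, model in [("mazur", mazur), ("samuelson", samuelson)]:
        k, sse = fit_k(model, delays, values)
        fits[name] = (k, bic(sse, len(values)))
    name = min(fits, key=lambda m: fits[m][1])
    k = fits[name][0]
    ed50 = 1.0 / k if name == "mazur" else np.log(2.0) / k
    return name, k, ed50
```

ED50 is what makes the comparison across functional forms possible: it is defined for each candidate model, so subjects assigned different best-fitting models can still be compared on a common scale.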
Shuttle Spacesuit (Radiation) Model Development
NASA Technical Reports Server (NTRS)
Anderson, Brooke M.; Nealy, J. E.; Qualls, G. D.; Staritz, P. J.; Wilson, J. W.; Kim, M.-H. Y.; Cucinotta, F. A.; Atwell, W.; DeAngelis, G.; Ware, J.
2001-01-01
A detailed spacesuit computational model is being developed at the Langley Research Center for exposure evaluation studies. The details of the construction of the spacesuit are critical to an estimate of exposures and for assessing the health risk to the astronaut during extravehicular activity (EVA). Fine detail of the basic fabric structure, helmet, and backpack is required to assure a valid evaluation. The exposure fields within the Computerized Anatomical Male (CAM) and Female (CAF) are evaluated at 148 and 156 points, respectively, to determine the dose fluctuations within critical organs. Exposure evaluations for ambient environments will be given and potential implications for geomagnetic storm conditions discussed.
Status of Galileo interim radiation electron model
NASA Technical Reports Server (NTRS)
Garrett, H. B.; Jun, I.; Ratliff, J. M.; Evans, R. W.; Clough, G. A.; McEntire, R. W.
2003-01-01
Measurements of the high energy, omni-directional electron environment by the Galileo spacecraft Energetic Particle Detector (EDP) were used to develop a new model of Jupiter's trapped electron radiation in the jovian equatorial plane for the range 8 to 16 Jupiter radii.
Some analytical models of radiating collapsing spheres
Herrera, L.; Di Prisco, A.; Ospino, J.
2006-08-15
We present some analytical solutions to the Einstein equations describing radiating collapsing spheres in the diffusion approximation. The solutions allow for modeling physically reasonable situations. The temperature is calculated for each solution using a hyperbolic transport equation, which makes it possible to exhibit the influence of relaxational effects on the dynamics of the system.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today, such as the common actuator disk concept, are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations to gain insight into the turbine's near wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).
Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert
2015-08-01
The active sites of mammalian purple acid phosphatases (PAPs) contain a dinuclear iron center in two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)), of which the heterovalent form is the active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, catalytically competent model systems are believed to require two sites with different coordination geometries, to stabilize the heterovalent active form, together with hydrogen-bond donors, to enable fixation of the substrate and release of the product. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent measurements, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255
Accurate assessment of mass, models and resolution by small-angle scattering
Rambo, Robert P.; Tainer, John A.
2013-01-01
Modern small-angle scattering (SAS) experiments with X-rays or neutrons provide a comprehensive, resolution-limited observation of the thermodynamic state. However, methods for evaluating mass and for validating SAS-based models and resolution have been inadequate. Here, we define the volume-of-correlation, Vc: a SAS invariant derived from the scattered intensities that is specific to the structural state of the particle, yet independent of concentration and of the requirements of a compact, folded particle. We show that Vc defines a ratio, Qr, that determines the molecular mass of proteins or RNA ranging from 10 to 1,000 kDa. Furthermore, we propose a statistically robust method for assessing model-data agreement (χ2free) akin to cross-validation. Our approach prevents over-fitting of the SAS data and can be used with a newly defined metric, Rsas, for quantitative evaluation of resolution. Together, these metrics (Vc, Qr, χ2free, and Rsas) provide analytical tools for unbiased and accurate macromolecular structural characterization in solution. PMID:23619693
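The volume-of-correlation can be sketched numerically from a scattering profile I(q) as the ratio of the zero-angle intensity to the integral of q·I(q); the helper names and the assumption that the profile extends to q = 0 are illustrative, and the empirical constants that convert Qr into mass are not reproduced here:

```python
import numpy as np

def _trapezoid(y, x):
    """Simple trapezoidal integration over an ordered grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def volume_of_correlation(q, I):
    """Vc = I(0) / integral of q*I(q) dq. Here I[0] stands in for I(0);
    in practice I(0) would come from a Guinier or P(r) analysis."""
    return I[0] / _trapezoid(q * I, q)

def qr_ratio(q, I, rg):
    """Qr = Vc**2 / Rg; molecular mass then follows an empirical
    power law of Qr whose constants are given in the paper."""
    return volume_of_correlation(q, I) ** 2 / rg
```

For a pure Guinier profile I(q) = exp(-q² Rg² / 3), the integral evaluates analytically and Vc = 2 Rg² / 3, which makes a convenient sanity check for the numerics.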
A radiation damage repair model for normal tissues
NASA Astrophysics Data System (ADS)
Partridge, Mike
2008-07-01
A cellular Monte Carlo model describing radiation damage and repair in normal epithelial tissues is presented. The deliberately simplified model includes cell cycling, cell motility and radiation damage response (cell cycle arrest and cell death) only. Results demonstrate that the model produces a stable equilibrium system for mean cell cycle times in the range 24-96 h. Simulated irradiation of these stable equilibrium systems produced a range of responses that are shown to be consistent with experimental and clinical observation, including (i) re-epithelialization of radiation-induced lesions by a mixture of cell migration into the wound and repopulation at the periphery; (ii) observed radiosensitivity that is quantitatively consistent with both the rate of induction of irreparable DNA lesions and, independently, with the observed acute oral and pharyngeal mucosal reactions to radiotherapy; (iii) an observed time between irradiation and maximum toxicity that is consistent with experimental data for skin; (iv) quantitatively accurate predictions of low-dose hyper-radiosensitivity; (v) Gompertzian repopulation for very small lesions (~2000 cells) and (vi) a linear rate of re-epithelialization of 5-10 µm h-1 for large lesions (>15 000 cells).
Accurate Universal Models for the Mass Accretion Histories and Concentrations of Dark Matter Halos
NASA Astrophysics Data System (ADS)
Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Börner, G.
2009-12-01
A large body of observations has constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when
Radiatively induced quark and lepton mass model
NASA Astrophysics Data System (ADS)
Nomura, Takaaki; Okada, Hiroshi
2016-10-01
We propose a radiatively induced quark and lepton mass model for the first and second generations with an extra U(1) gauge symmetry and vector-like fermions. We then analyze the allowed regions that simultaneously satisfy the constraints from FCNCs in the quark sector, LFVs including μ-e conversion, the quark masses and mixing, and the lepton masses and mixing. We also estimate the typical value of the muon (g - 2) in our model.
Development of an infrared radiative heating model
NASA Technical Reports Server (NTRS)
Bergstrom, R. W.; Helmle, L. C.
1979-01-01
Infrared radiative transfer solution algorithms used in global circulation models were assessed. Computation techniques applicable to the Ames circulation model are identified. Transmission properties of gaseous CO2, H2O, and O3 are gathered, and a computer program is developed, using the line parameter tape and Voight profile subroutine, which computes the transmission of CO2, H2O, and O3. A computer code designed to compute atmospheric cooling rates was developed.
The NSSDC trapped radiation model facility
NASA Technical Reports Server (NTRS)
Gaffey, John D., Jr.; Bilitza, D.
1990-01-01
The National Space Science Data Center (NSSDC) trapped radiation models calculate the integral and differential electron and proton flux for given values of the particle energy E, drift shell parameter L, and magnetic field strength B for either solar maximum or solar minimum. The most recent versions of the series of models, which have been developed and continuously improved over several decades by Dr. James Vette and coworkers at NSSDC, are AE-8 for electrons and AP-8 for protons. The present status of the NSSDC trapped particle models is discussed. The limits of validity of the models are described.
Ab Initio Modeling of Molecular Radiation
NASA Technical Reports Server (NTRS)
Jaffe, Richard; Schwenke, David
2014-01-01
Radiative emission from excited states of atoms and molecules can comprise a significant fraction of the total heat flux experienced by spacecraft during atmospheric entry at hypersonic speeds. For spacecraft with ablating heat shields, some of this radiative flux can be absorbed by molecular constituents in the boundary layer that are formed by the ablation process. Ab initio quantum mechanical calculations are carried out to predict the strengths of these emission and absorption processes. This talk will describe the methods used in these calculations using, as examples, the 4th positive emission bands of CO and the 1Σg+ - 1Σu+ absorption in C3. The results of these calculations are being used as input to NASA radiation modeling codes like NeqAir, HARA and HyperRad.
Radiative torques: analytical model and basic properties
NASA Astrophysics Data System (ADS)
Lazarian, A.; Hoang, Thiem
2007-07-01
We attempt to get physical insight into grain alignment processes by studying basic properties of radiative torques (RATs). For this purpose we consider a simple toy model of a helical grain that reproduces well the basic features of RATs. The model grain consists of a spheroidal body with a mirror attached at an angle to it. Being very simple, the model allows an analytical description of the RATs that act upon it. We show a good correspondence between the RATs obtained for this model and those of irregular grains calculated by DDSCAT. Our analysis of the role of different torque components in grain alignment reveals that one of the three RAT components does not affect the alignment, but induces only grain precession. The other two components provide a generic alignment with grain long axes perpendicular to the radiation direction, if the radiation dominates the grain precession, and perpendicular to the magnetic field otherwise. The latter coincides with the famous predictions of the Davis-Greenstein process, but our model does not invoke paramagnetic relaxation. In fact, we identify a narrow range of angles between the radiation beam and the magnetic field for which the alignment is opposite to the Davis-Greenstein predictions. This range is likely to vanish, however, in the presence of thermal wobbling of grains. In addition, we find that a substantial fraction of grains subjected to RATs becomes aligned with low angular momentum, which indicates that most grains in the diffuse interstellar medium do not rotate fast, that is, they rotate with thermal or even subthermal velocities. This tendency of RATs to decrease grain angular velocity as a result of the RAT alignment decreases the degree of polarization, by decreasing the degree of internal alignment, that is, the alignment of angular momentum with the grain axes. For radiation-dominated environments, we find that the alignment can take place on a time scale much shorter than the time of gaseous damping of grain rotation.
Seasonal radiative modeling of Titan's stratosphere
NASA Astrophysics Data System (ADS)
Bézard, Bruno; Vinatier, Sandrine; Achterberg, Richard
2016-10-01
We have developed a seasonal radiative model of Titan's stratosphere to investigate the time variation of stratospheric temperatures in the 10^-3 to 5 mbar range as observed by the Cassini/CIRS spectrometer. The model incorporates gas and aerosol vertical profiles derived from Cassini/CIRS spectra to calculate the heating and cooling rate profiles as a function of time and latitude. In the equatorial region, the radiative equilibrium profile is warmer than the observed one. Adding adiabatic cooling in the energy equation, with a vertical velocity profile decreasing with depth and having w ≈ 0.4 mm sec-1 at 1 mbar, allows us to reproduce the observed profile. The model predicts a 5 K decrease at 1 mbar between 2008 and 2016 as a result of orbit eccentricity, in relatively good agreement with the observations. At other latitudes, as expected, the radiative model predicts seasonal variations of temperature larger than observed, pointing to latitudinal redistribution of heat by dynamics. Vertical velocities seasonally varying between -0.4 and 1.2 mm sec-1 at 1 mbar provide adiabatic cooling and heating adequate to reproduce the time variation of 1-mbar temperatures from 2005 to 2016 at 30°N and S. The model is also used to investigate the role of the strong compositional changes observed at high southern latitudes after equinox in the concomitant rapid cooling of the stratosphere.
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focussing on the question of absorption of solar radiation by gases and aerosols.
Grant, K.E.; Taylor, K.E.; Ellis, J.S.; Wuebbles, D.J.
1987-07-01
The authors have implemented a series of state-of-the-art radiation transport submodels in previously developed one-dimensional and two-dimensional chemical transport models of the troposphere and stratosphere. These submodels provide the capability of calculating accurate solar and infrared heating rates. They are a firm basis for further radiation submodel development as well as for studying interactions between radiation and model dynamics under varying conditions of clear sky, clouds, and aerosols. 37 refs., 3 figs.
NASA Technical Reports Server (NTRS)
Carlson, Leland A.; Bobskill, Glenn J.; Greendyke, Robert B.
1988-01-01
A series of detailed studies comparing various vibration dissociation coupling models, reaction systems and rates, and radiative heating models has been conducted for the nonequilibrium stagnation region of an AFE/AOTV vehicle. Atomic and molecular nonequilibrium radiation correction factors have been developed and applied to various absorption coefficient step models, and a modified vibration dissociation coupling model has been shown to yield good vibration/electronic temperature and concentration profiles. While results indicate sensitivity to the choice of vibration dissociation coupling model and to the nitrogen electron impact ionization rate, by proper combinations accurate flowfield and radiative heating results can be obtained. These results indicate that nonequilibrium effects significantly affect the flowfield and the radiative heat transfer. However, additional work is needed in ionization chemistry and absorption coefficient modeling.
Dai, Daoxin; He, Sailing
2004-12-01
An accurate two-dimensional (2D) model is introduced for the simulation of an arrayed-waveguide grating (AWG) demultiplexer by integrating the field distribution along the vertical direction. The equivalent 2D model has almost the same accuracy as the original three-dimensional model and is more accurate for the AWG considered here than the conventional 2D model based on the effective-index method. To further improve the computational efficiency, the reciprocity theory is applied to the optimal design of a flat-top AWG demultiplexer with a special input structure.
Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method
NASA Technical Reports Server (NTRS)
Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.
2005-01-01
The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our ongoing research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
Principles of the radiative ablation modeling
NASA Astrophysics Data System (ADS)
Saillard, Yves; Arnault, Philippe; Silvert, Virginie
2010-12-01
Indirectly driven inertial confinement fusion (ICF) rests on the setting up of a radiation temperature within a laser cavity and on the optimization of the capsule implosion ablated by this radiation. In both circumstances, the ablation of an optically thick medium is at work. The nonlinear radiation conduction equations that describe this phenomenon admit different kinds of solutions called generically Marshak waves. In this paper, a completely analytic model is proposed to describe the ablation in the subsonic regime relevant to ICF experiments. This model approximates the flow by a deflagration-like structure where Hugoniot relations are used in the stationary part from the ablation front up to the isothermal sonic Chapman-Jouguet point, and where the nonstationary expansion from the sonic point up to the external boundary is assumed quasi-isothermal. It uses power-law matter properties. It can also accommodate arbitrary boundary conditions provided the ablation wave stays very subsonic and the surface temperature does not vary too quickly. These requirements are often met in realistic situations. Interestingly, the ablated mass rate, the ablation pressure, and the absorbed radiative energy depend on the time history of the surface temperature, not only on the instantaneous temperature values. The results compare very well with self-similar solutions and with numerical simulations obtained with a hydrodynamic code. This analytic model gives insight into the physical processes involved in the ablation and is helpful for optimization and sensitivity studies in many situations of interest: radiation temperature within a laser cavity, acceleration of a finite-size medium, and ICF capsule implosion, for instance.
Stable, accurate and efficient computation of normal modes for horizontal stratified models
NASA Astrophysics Data System (ADS)
Wu, Bo; Chen, Xiaofei
2016-06-01
We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of these modes, or inaccuracy in their calculation, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a "family of secular functions", which we herein call "adaptive mode observers", is naturally introduced to implement this strategy; the underlying idea, distinctly noted here for the first time, may be generalized to other applications such as free oscillations, or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method: mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, at the same time, no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, as required by the cases under study, and some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers, aided by the concept of the "turning point", our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.
NASA Astrophysics Data System (ADS)
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars to use, and which errors to assign to their diameters, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
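The resampling idea above can be illustrated with a minimal sketch, which is not the authors' pipeline: bootstrap a set of measured values and keep the statistic of each resample as a sampling of its PDF, with no Gaussian assumption. The reduction to a plain mean and all names here are illustrative.

```python
import numpy as np

def bootstrap_means(observables, n_boot=2000, seed=0):
    """Resample the observables with replacement n_boot times and
    return the mean of each resample, i.e. a sampling of the PDF of
    the mean observable (no Gaussian assumption required)."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(observables, dtype=float)
    # one row of resampled indices per bootstrap draw
    idx = rng.integers(0, obs.size, size=(n_boot, obs.size))
    return obs[idx].mean(axis=1)
```

A real interferometric application would resample interferograms, calibrators, and their diameter errors jointly, then rerun the full calibration on each resample; the plain mean here stands in for that processing.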
Biologically based multistage modeling of radiation effects
William Hazelton; Suresh Moolgavkar; E. Georg Luebeck
2005-08-30
This past year we have made substantial progress in modeling the contribution of homeostatic regulation to low-dose radiation effects and carcinogenesis. We have worked to refine and apply our multistage carcinogenesis models to explicitly incorporate cell cycle states, simple and complex damage, checkpoint delay, slow and fast repair, differentiation, and apoptosis, in order to study the effects of low-dose ionizing radiation in mouse intestinal crypts as well as in other tissues. We have one paper accepted for publication in "Advances in Space Research" and another manuscript in preparation describing this work. I also wrote a chapter describing our combined cell-cycle and multistage carcinogenesis model that will be published in a book on stochastic carcinogenesis models edited by Wei-Yuan Tan. In addition, we organized and held a workshop on "Biologically Based Modeling of Human Health Effects of Low-Dose Ionizing Radiation", July 28-29, 2005, at Fred Hutchinson Cancer Research Center in Seattle, Washington. We had over 20 participants, including Mary Helen Barcellos-Hoff as keynote speaker, talks by most of the low-dose modelers in the DOE low-dose program, experimentalists including Les Redpath (and Mary Helen), Noelle Metting from DOE, and Tony Brooks. It appears that homeostatic regulation may be central to understanding low-dose radiation phenomena. The primary effects of ionizing radiation (IR) are cell killing, delayed cell cycling, and induction of mutations. However, homeostatic regulation causes cells that are killed or damaged by IR to eventually be replaced. Cells with an initiating mutation may have a replacement advantage, leading to clonal expansion of these initiated cells. Thus we have focused particularly on modeling effects that disturb homeostatic regulation as early steps in the carcinogenic process. There are two primary considerations that support our focus on homeostatic regulation. First, a number of epidemiologic studies using multistage
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
nighttime to well-mixed conditions during the day presents a big challenge to NWP models. The fast decrease and subsequent increase in hub-height wind speed after sunrise, and the formation of nocturnal low-level jets, will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and on the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3D case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple and efficient approach is presented to conduct estimations of the cosmic ray induced failure rate for high-voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by reliability, resulting in a better and overall more efficient design methodology. Starting from the experimental and theoretical background brought forth a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to being combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
NASA Astrophysics Data System (ADS)
Reichert, Andreas; Rettinger, Markus; Sussmann, Ralf
2016-09-01
Quantitative knowledge of water vapor absorption is crucial for accurate climate simulations. An open science question in this context concerns the strength of the water vapor continuum in the near infrared (NIR) at atmospheric temperatures, which is still to be quantified by measurements. This issue can be addressed with radiative closure experiments using solar absorption spectra. However, the spectra used for water vapor continuum quantification have to be radiometrically calibrated. We present for the first time a method that yields sufficient calibration accuracy for NIR water vapor continuum quantification in an atmospheric closure experiment. Our method combines the Langley method with spectral radiance measurements of a high-temperature blackbody calibration source (< 2000 K). The calibration scheme is demonstrated in the spectral range 2500 to 7800 cm-1, but minor modifications to the method enable calibration also throughout the remainder of the NIR spectral range. The resulting uncertainty (2σ) excluding the contribution due to inaccuracies in the extra-atmospheric solar spectrum (ESS) is below 1 % in window regions and up to 1.7 % within absorption bands. The overall radiometric accuracy of the calibration depends on the ESS uncertainty, on which at present no firm consensus has been reached in the NIR. However, as is shown in the companion publication Reichert and Sussmann (2016), ESS uncertainty is only of minor importance for the specific aim of this study, i.e., the quantification of the water vapor continuum in a closure experiment. The calibration uncertainty estimate is substantiated by the investigation of calibration self-consistency, which yields compatible results within the estimated errors for 91.1 % of the 2500 to 7800 cm-1 range. Additionally, a comparison of a set of calibrated spectra to radiative transfer model calculations yields consistent results within the estimated errors for 97.7 % of the spectral range.
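The Langley component of the calibration can be sketched in its textbook form: over a stable half-day, the logarithm of the measured signal is linear in airmass m, ln V = ln V0 − τm, so extrapolating the fit to m = 0 yields the top-of-atmosphere signal V0. This is the generic Langley extrapolation under assumed stable aerosol conditions, not the authors' combined blackbody-plus-Langley scheme, and the names are illustrative.

```python
import numpy as np

def langley_fit(airmass, signal):
    """Fit ln(signal) = ln(V0) - tau * airmass by least squares and
    return (V0, tau): the extrapolated extraterrestrial signal and
    the total optical depth at this wavelength."""
    slope, intercept = np.polyfit(np.asarray(airmass), np.log(signal), 1)
    return np.exp(intercept), -slope
```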
Watson, Charles M; Francis, Gamal R
2015-07-01
Hollow copper models painted to match the reflectance of the animal subject are standard in thermal ecology research. While the copper electroplating process results in accurate models, it is relatively time consuming, uses caustic chemicals, and the models are often anatomically imprecise. Although the decreasing cost of 3D printing can potentially allow the reproduction of highly accurate models, the thermal performance of 3D printed models has not been evaluated. We compared the cost, accuracy, and performance of copper and 3D printed lizard models and found that the thermal performance of the two model types was statistically identical in both open and closed habitats. We also find that 3D printed models are more standardized, lighter, more durable, and less expensive than the copper electroformed models. PMID:25965016
A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China
Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin
2014-01-01
Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
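For context, the Hargreaves-Samani baseline that the new model modifies has the well-known form Rs = krs · sqrt(Tmax − Tmin) · Ra, where Ra is the extraterrestrial radiation and krs an empirical coefficient. The sketch below implements only that classic form with an illustrative coefficient; it is not the new model proposed in the paper.

```python
import math

def hargreaves_samani(t_max, t_min, ra, krs=0.17):
    """Classic HS estimate of global solar radiation, in the units of ra
    (extraterrestrial radiation). t_max/t_min are monthly mean daily
    max/min air temperatures (deg C); krs is empirical, typically
    ~0.16 inland to ~0.19 coastal (0.17 here is illustrative)."""
    return krs * math.sqrt(t_max - t_min) * ra
```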
Towards accurate kinetic modeling of prompt NO formation in hydrocarbon flames via the NCN pathway
Sutton, Jeffrey A.; Fleming, James W.
2008-08-15
A basic kinetic mechanism that can predict the appropriate prompt-NO precursor NCN, as shown by experiment, with relative accuracy while still producing postflame NO results that can be calculated as accurately as or more accurately than through the former HCN pathway is presented for the first time. The basic NCN submechanism should be a starting point for future NCN kinetic and prompt NO formation refinement.
Flow-radiation coupling for atmospheric entries using a Hybrid Statistical Narrow Band model
NASA Astrophysics Data System (ADS)
Soucasse, Laurent; Scoggins, James B.; Rivière, Philippe; Magin, Thierry E.; Soufiani, Anouar
2016-09-01
In this study, a Hybrid Statistical Narrow Band (HSNB) model is implemented to make fast and accurate predictions of radiative transfer effects on hypersonic entry flows. The HSNB model combines a Statistical Narrow Band (SNB) model for optically thick molecular systems, a box model for optically thin molecular systems and continua, and a Line-By-Line (LBL) description of atomic radiation. Radiative transfer calculations are coupled to a 1D stagnation-line flow model under thermal and chemical nonequilibrium. Earth entry conditions corresponding to the FIRE 2 experiment, as well as Titan entry conditions corresponding to the Huygens probe, are considered in this work. Thermal nonequilibrium is described by a two temperature model, although non-Boltzmann distributions of electronic levels provided by a Quasi-Steady State model are also considered for radiative transfer. For all the studied configurations, radiative transfer effects on the flow, the plasma chemistry and the total heat flux at the wall are analyzed in detail. The HSNB model is shown to reproduce LBL results with an accuracy better than 5% and a speed up of the computational time around two orders of magnitude. Concerning molecular radiation, the HSNB model provides a significant improvement in accuracy compared to the Smeared-Rotational-Band model, especially for Titan entries dominated by optically thick CN radiation.
Beinke, Christina; Braselmann, Herbert; Meineke, Viktor
2010-02-01
The dicentric assay was established to carry out cytogenetic biodosimetry after suspected radiation overexposure, including a comprehensive documentation system to record the processing of the specimen, all data, results, and stored information. As an essential prerequisite for retrospective radiation dose assessment, a dose-response curve for dicentric induction by in vitro X-ray irradiation of peripheral blood samples was produced. The accelerating potential was 240 kV (maximum photon energy: 240 keV). A total of 8,377 first-division metaphases of four healthy volunteers were analyzed after exposure to doses ranging from 0.2 to 4.0 Gy at a dose rate of 1.0 Gy min⁻¹. The background level of aberrations at zero dose was determined by the analysis of 14,522 first-division metaphases obtained from unirradiated blood samples of 10 healthy volunteers. The dose-response relationship follows a linear-quadratic equation, Y = c + αD + βD², with the coefficients c = 0.0005 ± 0.0002, α = 0.043 ± 0.006, and β = 0.063 ± 0.004. The technical competence and the quality of the calibration curve were assessed by determining the dose prediction accuracy in an in vitro experiment simulating whole-body exposures within a range of 0.2 to 2.0 Gy. Dose estimations were derived by scoring up to 500-1,000 metaphase spreads or more (full estimation mode) and by evaluating only 50 metaphase spreads (triage mode) per subject. The triage mode was applied by performing manifold evaluations of the full estimation data in order to test the robustness of the curve for triage purposes and to assess possible variations among the estimated doses referring to a single exposure and preparation.
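Given such a fitted linear-quadratic curve, dose estimation amounts to inverting Y = c + αD + βD² for the observed dicentric yield, taking the positive root of the quadratic. A minimal sketch using the central coefficient values quoted in the abstract, ignoring their uncertainties and the Poisson statistics of metaphase scoring:

```python
import math

# central coefficient values from the fitted curve (uncertainties ignored)
c, alpha, beta = 0.0005, 0.043, 0.063

def dicentric_yield(dose):
    """Expected dicentrics per cell at the given dose (Gy)."""
    return c + alpha * dose + beta * dose ** 2

def estimate_dose(yield_obs):
    """Invert the linear-quadratic curve: positive root of
    beta*D^2 + alpha*D + (c - yield_obs) = 0."""
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * (yield_obs - c))) / (2 * beta)
```

In practice a confidence interval on the estimated dose would be derived from both the scoring statistics and the coefficient uncertainties.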
Sakabe, K; Sasaki, K; Watanabe, N; Suzuki, M; Wang, Z G; Miyahara, J; Sakabe, N
1997-05-01
Off-line and on-line protein data-collection systems using an imaging plate as a detector are described and their components reported. The off-line scanner IPR4080 was developed for the large-format imaging plate 'BASIII' of dimensions 400 x 400 mm and 400 x 800 mm. The characteristics of this scanner are a dynamic range of 10⁵ photons pixel⁻¹, low background noise and high sensitivity. A means of reducing electronic noise and a method for finding the origin of the noise are discussed in detail. A dedicated screenless Weissenberg camera matching IPR4080 with synchrotron radiation was developed and installed on beamline BL6B at the Photon Factory. This camera can hold one or two sheets of 400 x 800 mm large-format imaging plate inside the film cassette by evacuation. The positional reproducibility of the imaging plate on the cassette is so good that the data can be processed by a batch job. Data of 93% completeness up to 1.6 Å resolution were collected in a single-axis rotation, and Rmerge was 4%, from a tetragonal lysozyme crystal using a set of two imaging-plate sheets. Comparing the two types of imaging plates, the signal-to-noise ratio of the ST-VIP-type imaging plate is 25% better than that of the BASIII-type imaging plate for protein data collection using 1.0 and 0.7 Å X-rays. A new on-line protein data-collection system with imaging plates is specially designed to use synchrotron radiation X-rays at maximum efficiency.
NASA Astrophysics Data System (ADS)
Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.
2015-07-01
Routine measurements of the beam irradiance at normal incidence (DNI) include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7%, and that from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and a collocated Sun and Aureole Measurement (SAM) instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 5%, a relative bias of +1% and a coefficient of determination greater than 0.97. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and bias of 22% and -19%, respectively, and a coefficient of determination of 0.89. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard DNI measurements.
Modeling Early Galaxies Using Radiation Hydrodynamics
2011-01-01
This simulation uses a flux-limited diffusion solver to explore the radiation hydrodynamics of early galaxies, in particular the ionizing radiation created by Population III stars. At the time of this rendering, the simulation had evolved to a redshift of 3.5. The simulation volume is 11.2 comoving megaparsecs and has a uniform grid of 1024³ cells, with over 1 billion dark matter and star particles. This animation shows a combined view of the baryon density, dark matter density, radiation energy, and emissivity from this simulation. The multi-variate rendering is particularly useful because it shows both the baryonic ("normal") matter and the dark matter, while the pressure and temperature variables are properties of only the baryonic matter. Visible in the gas density are "bubbles", or shells, created by the radiation feedback from young stars. Seeing the bubbles from feedback provides confirmation of the physics model implemented. Features such as these are difficult to identify algorithmically but easily found when viewing the visualization. The simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.
Introductory Tools for Radiative Transfer Models
NASA Astrophysics Data System (ADS)
Feldman, D.; Kuai, L.; Natraj, V.; Yung, Y.
2006-12-01
Satellite data are currently so voluminous that, despite their unprecedented quality and potential for scientific application, only a small fraction is analyzed, due to two factors: researchers' computational constraints and the relatively small number of researchers actively utilizing the data. Ultimately it is hoped that the terabytes of unanalyzed archived data can receive scientific scrutiny, but this will require a popularization of the associated analysis methods. Since a large portion of the complexity is associated with the proper implementation of the radiative transfer model, it is reasonable and appropriate to make the model as accessible as possible to general audiences. Unfortunately, the algorithmic and conceptual details that are necessary for state-of-the-art analysis also tend to frustrate accessibility for those new to remote sensing. Several efforts have been made to offer web-based radiative transfer calculations, and these are useful for limited calculations, but analysis of more than a few spectra requires home- or server-based computing resources. We present a system designed to allow easier access to radiative transfer models, with implementation on a home computing platform, in the hope that it can be used and expanded upon in advanced high school and introductory college settings. This learning-by-doing process is aided through the use of several powerful tools. The first is a Wikipedia-style introduction to the salient features of radiative transfer that references the seminal works in the field and refers to more complicated calculations and algorithms sparingly. The second is a technical forum, commonly referred to as a tiki-wiki, that addresses technical and conceptual questions through public postings, private messages, and a ranked searching routine. Together, these tools may be able to facilitate greater interest in the field of remote sensing.
NASA Astrophysics Data System (ADS)
Juste, B.; Miró, R.; Verdú, G.; Santos, A.
2014-06-01
This work presents a methodology to reconstruct the spectrum of a Linac high energy photon beam. The method is based on EPID scatter images generated when the incident photon beam impinges onto a plastic block. The distribution of scatter radiation produced by this scattering object, placed on the external EPID surface and centered in the beam field, was measured. The scatter distribution was also simulated for a series of monoenergetic photon beams with identical geometry. Monte Carlo simulations were used to predict the scattered photons for monoenergetic photon beams at 92 different locations, in 0.5 cm increments and at 8.5 cm from the center of the scattering material. Measurements were performed with the same geometry using a 6 MeV photon beam produced by the linear accelerator. A system of linear equations was generated to combine the polyenergetic EPID measurements with the monoenergetic simulation results. Regularization techniques were applied to solve the system for the incident photon spectrum. A linear matrix system, A×S=E, was developed to describe the scattering interactions and their relationship to the primary spectrum (S). A is the monoenergetic scatter matrix determined from the Monte Carlo simulations, S is the incident photon spectrum, and E represents the scatter distribution characterized by EPID measurement. Direct matrix inversion methods produce results that are not physically consistent due to errors inherent in the system; therefore, Tikhonov regularization methods were applied to address the effects of these errors and to solve the system for a consistent bremsstrahlung spectrum.
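The regularized inversion step described above can be sketched numerically. The sizes and values below are made-up stand-ins (the paper's A comes from 92-location Monte Carlo simulations and E from EPID measurements); the sketch only illustrates the standard Tikhonov form S = (AᵀA + λ²I)⁻¹ AᵀE:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_bins = 92, 12            # measurement locations x energy bins (sizes assumed)
A = rng.random((n_meas, n_bins))   # monoenergetic scatter matrix (placeholder values)
S_true = np.exp(-0.5 * np.arange(n_bins))            # assumed "true" spectrum shape
E = A @ S_true + 1e-3 * rng.standard_normal(n_meas)  # noisy synthetic "measurement"

lam = 1e-2  # regularization strength; in practice chosen by an L-curve or similar
# Regularized normal equations: S = (A^T A + lam^2 I)^(-1) A^T E
S_hat = np.linalg.solve(A.T @ A + lam**2 * np.eye(n_bins), A.T @ E)
```

Larger λ damps the noise amplification of direct inversion at the cost of some bias in the recovered spectrum.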
Lattice Boltzmann model for a steady radiative transfer equation.
Yi, Hong-Liang; Yao, Feng-Ju; Tan, He-Ping
2016-08-01
A complete lattice Boltzmann model (LBM) is proposed for the steady radiative transfer equation (RTE). The RTE can be regarded as a pure convection equation with a source term. To derive the expressions for the equilibrium distribution function and the relaxation time, an artificial isotropic diffusion term is introduced to form a convection-diffusion equation. When the dimensionless relaxation time has a value of 0.5, the lattice Boltzmann equation (LBE) is exactly applicable to the original steady RTE. We also perform a multiscale analysis based on the Chapman-Enskog expansion to recover the macroscopic RTE from the mesoscopic LBE. The D2Q9 model is used to solve the LBE, and the numerical results obtained by the LBM are comparable to the results obtained by other methods or analytical solutions, which demonstrates that the proposed model is highly accurate and stable in simulating multidimensional radiative transfer. In addition, we find that the convergence rate of the LBM depends on the transport properties of RTE: for diffusion-dominated RTE with a large optical thickness, the LBM shows a second-order convergence rate in space, while for convection-dominated RTE with a small optical thickness, a lower convergence rate is observed. PMID:27627417
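The collision-streaming structure of a D2Q9 lattice Boltzmann update can be sketched as below. This is only the generic BGK skeleton with the paper's special relaxation time τ = 0.5 (so that collision maps f to 2f_eq − f); the paper's actual equilibrium distribution for the RTE depends on its angular discretization and artificial diffusion term and is not reproduced here:

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights
c = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)]
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

nx = ny = 16
f = np.ones((9, nx, ny)) * w[:, None, None]  # distribution functions, uniform start
tau = 0.5                                    # the paper's special relaxation time

for _ in range(10):
    rho = f.sum(axis=0)            # macroscopic quantity (here standing in for intensity)
    feq = w[:, None, None] * rho   # simplest linear equilibrium (assumed, not the paper's)
    f += -(f - feq) / tau          # BGK collision; tau = 0.5 gives f -> 2*feq - f
    for i, (cx, cy) in enumerate(c):  # streaming step with periodic wrap-around
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
```

Collision and streaming together conserve the zeroth moment, which is why the macroscopic field can be recovered from the mesoscopic populations via a Chapman-Enskog expansion.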
Lattice Boltzmann model for a steady radiative transfer equation
NASA Astrophysics Data System (ADS)
Yi, Hong-Liang; Yao, Feng-Ju; Tan, He-Ping
2016-08-01
A complete lattice Boltzmann model (LBM) is proposed for the steady radiative transfer equation (RTE). The RTE can be regarded as a pure convection equation with a source term. To derive the expressions for the equilibrium distribution function and the relaxation time, an artificial isotropic diffusion term is introduced to form a convection-diffusion equation. When the dimensionless relaxation time has a value of 0.5, the lattice Boltzmann equation (LBE) is exactly applicable to the original steady RTE. We also perform a multiscale analysis based on the Chapman-Enskog expansion to recover the macroscopic RTE from the mesoscopic LBE. The D2Q9 model is used to solve the LBE, and the numerical results obtained by the LBM are comparable to the results obtained by other methods or analytical solutions, which demonstrates that the proposed model is highly accurate and stable in simulating multidimensional radiative transfer. In addition, we find that the convergence rate of the LBM depends on the transport properties of RTE: for diffusion-dominated RTE with a large optical thickness, the LBM shows a second-order convergence rate in space, while for convection-dominated RTE with a small optical thickness, a lower convergence rate is observed.
An Earth longwave radiation climate model
NASA Technical Reports Server (NTRS)
Yang, S. K.
1984-01-01
An Earth outgoing longwave radiation (OLWR) climate model was constructed for radiation budget studies. Required information is provided by an empirical 100 mb water vapor mixing ratio equation and a mixing ratio interpolation scheme. Cloud top temperature is adjusted so that the calculation agrees with NOAA scanning radiometer measurements. Both clear sky and cloudy sky cases are calculated and discussed for global average, zonal average, and worldwide distributed cases. The results agree well with the satellite observations. The clear sky case shows that the OLWR field is highly modulated by water vapor, especially in the tropics. The strongest longitudinal variation occurs in the tropics and can be mostly explained by the strong water vapor gradient. Although in the zonal average case the tropics have a minimum in OLWR, the minimum is essentially contributed by a few very low flux regions, such as the Amazon, Indonesia, and the Congo.
Radiative equilibrium model of Titan's atmosphere
NASA Technical Reports Server (NTRS)
Samuelson, R. E.
1983-01-01
The present global radiative equilibrium model for the Saturn satellite Titan is restricted to the two-stream approximation, is vertically homogeneous in its scattering properties, and is spectrally divided into one thermal and two solar channels. Between 13 and 33% of the total incident solar radiation is absorbed at the planetary surface, and the violet-to-thermal-IR absorption cross section ratio of 30-60 in the stratosphere leads to the large temperature inversion observed there. The spectrally integrated mass absorption coefficient at thermal wavelengths is approximately constant throughout the stratosphere and approximately linear with pressure in the troposphere, implying the presence of a uniformly mixed aerosol in the stratosphere. There also appear to be two regions of enhanced opacity, near 30 and 500 mbar.
Galactic cosmic radiation model and its applications
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; O'Neill, P. M.
1996-01-01
A model for the differential energy spectra of galactic cosmic radiation as a function of solar activity is described. It is based on the standard diffusion-convection theory of solar modulation. Estimates of the modulation potential, based on fitting this theory to observed spectral measurements from 1954 to 1989, are correlated to the Climax neutron counting rates and to the sunspot numbers at earlier times, taking into account the polarity of the interplanetary magnetic field at the time of observations. These regression lines then provide a method for predicting the modulation at later times. The results of this model are quantitatively compared to a similar Moscow State University (MSU) model. These model cosmic ray spectra are used to predict the linear energy transfer spectra, differential energy spectra of light (charge less than or equal to 2) ions, and single event upset rates in memory devices. These calculations are compared to observations made aboard the Space Shuttle.
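The regression-and-prediction step can be illustrated with a least-squares fit. All numbers below are hypothetical (the actual fits use 1954-1989 modulation estimates, Climax neutron rates, and sunspot numbers, separated by interplanetary magnetic field polarity):

```python
# Fit modulation potential phi against neutron monitor counting rate,
# then predict phi at a later time from an observed count rate.
counts = [4300.0, 4000.0, 3800.0, 3600.0, 3400.0]   # Climax counts (hypothetical)
phi    = [450.0, 600.0, 700.0, 800.0, 900.0]        # modulation potential, MV (hypothetical)

n = len(counts)
mx = sum(counts) / n
my = sum(phi) / n
# Ordinary least-squares slope and intercept of the regression line
b = sum((x - mx) * (y - my) for x, y in zip(counts, phi)) / \
    sum((x - mx) ** 2 for x in counts)
a = my - b * mx
phi_pred = a + b * 3500.0   # predicted modulation at a later observed count rate
```

The anticorrelation (higher neutron rates at lower modulation) is the physical content of the regression; the slope here comes out negative by construction of the toy data.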
Verification of snowpack radiation transfer models using actinometry
NASA Astrophysics Data System (ADS)
Phillips, Gavin J.; Simpson, William R.
2005-04-01
Actinometric measurements of photolysis rate coefficients within artificial snow have been used to test calculations of these coefficients by two radiative transfer models. The models used were based upon the delta-Eddington method or the discrete ordinate method, as implemented in the tropospheric ultraviolet and visible snow model, and were constrained by irradiance measurements and light attenuation profiles within the artificial snow. Actinometric measurements of the photolysis rate coefficient were made by observing the unimolecular conversion of 2-nitrobenzaldehyde (NBA) to its photoproduct under ultraviolet irradiation. A control experiment using liquid solutions of NBA determined that the quantum yield for conversion was ϕ = 0.41 ± 0.04 (±2σ). Measured photolysis rate coefficients in the artificial snow are enhanced in the near-surface layer, as predicted in the model calculations. The two models yielded essentially identical results for the depth-integrated photolysis rate coefficient of NBA, and their results quantitatively agreed with the actinometric measurements within the experimental precision of the measurement (±10%, ±2σ). The study shows that these models accurately determine snowpack actinic fluxes. To calculate in-snow photolysis rates for a molecule of interest, one must also have knowledge of the absorption spectrum and quantum yield for the specific photoprocess in addition to the actinic flux. Having demonstrated that the actinic flux is well determined by these models, we find that the major remaining uncertainty in prediction of snowpack photochemical rates is the measurement of these molecular photophysical properties.
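Extracting a photolysis rate coefficient j from actinometry of a unimolecular (first-order) loss, C(t) = C0·exp(−j t), amounts to fitting the slope of ln C versus exposure time. The times and concentrations below are hypothetical illustration values, not data from the study:

```python
import math
import statistics

t = [0.0, 60.0, 120.0, 180.0]    # exposure time, s (hypothetical)
C = [1.00, 0.74, 0.55, 0.41]     # remaining NBA fraction (hypothetical)

# Least-squares slope of ln(C) vs t gives -j
lnC = [math.log(c) for c in C]
mean_t = statistics.mean(t)
mean_y = statistics.mean(lnC)
slope = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, lnC)) / \
        sum((ti - mean_t) ** 2 for ti in t)
j = -slope                        # photolysis rate coefficient, s^-1
```

Dividing j by the measured quantum yield and convolving with the NBA absorption spectrum is what then connects such a measurement back to the actinic flux.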
A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.
2006-01-01
The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements aboard the ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both environmental model and nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with thermoluminescent detector (TLD) area monitors demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate 6 degree of freedom (DOF) description of ISS trajectory and orientation.
A cloud-radiation climate model
Yi, C.; Wu, R.
1996-12-31
Choosing the global average surface temperature T and cloudiness n as state variables, relations of the planetary albedo α and the atmospheric effective emissivity ε with the state variables are developed to study the important feedback processes in the climate system. They lead to a highly simplified nonlinear climate model which exhibits a self-organization mechanism for cloud-radiation interaction. Solar radiation directly affects the surface temperature, by which cloudiness changes are driven. The cloudiness changes in turn react on the surface temperature through the planetary albedo α and the atmospheric effective emissivity ε. In the processes of cooperation and competition between the two subsystems, the surface temperature dominates and cloudiness responds rapidly to it. Near the Hopf bifurcation, analytical solutions of the limit cycle are obtained which agree with numerical solutions. With these analytical solutions, the effects of solar radiation and carbon dioxide on the amplitude, period, and phase lag are examined. The authors find that, in addition to increasing temperature, an increase in the concentration of atmospheric carbon dioxide could sharply enhance the amplitudes of climate oscillation. This implies that increasing carbon dioxide could periodically bring about hazardous impacts.
Polar firn layering in radiative transfer models
NASA Astrophysics Data System (ADS)
Linow, Stefanie; Hoerhold, Maria
2016-04-01
For many applications in the geosciences, remote sensing is the only feasible method of obtaining data from large areas with limited accessibility. This is especially true for the cryosphere, where light conditions and cloud coverage additionally limit the use of optical sensors. Here, instruments operating at microwave frequencies become important, for instance in mapping polar snow parameters such as SWE (snow water equivalent). However, the interaction between snow and microwave radiation is a complex process and still not fully understood. RT (radiative transfer) models to simulate snow-microwave interaction are available, but they require a number of input parameters, such as microstructure and density, which are partly ill-constrained. The layering of snow and firn introduces an additional degree of complexity, as all snow parameters show a strong variability with depth. Many studies on RT modeling of polar firn deal with layer variability by using statistical properties derived from previous measurements, such as the standard deviations of density and microstructure, to configure model input. Here, the variabilities of microstructure parameters, such as density and particle size, are usually assumed to be independent of each other. However, for the firn pack of the polar ice sheets, we observe that microstructure evolution depends on environmental parameters, such as temperature and snow deposition. Accordingly, density and microstructure evolve together within the snow and firn. Based on CT (computer tomography) microstructure measurements of Antarctic firn, we can show that, first, the variabilities of density and effective grain size are linked and can thus be implemented in RT models as a coupled set of parameters and, second, the magnitude of layering is captured by the measured standard deviation. Based on high-resolution density measurements of an Antarctic firn core, we study the effect of firn layering at different microwave wavelengths. By means of
Evaluation of radiation partitioning models at Bushland, Texas
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop growth and soil-vegetation-atmosphere continuum energy transfer models often require estimates of net radiation components, such as photosynthetic, solar, and longwave radiation to both the canopy and soil. We evaluated the 1998 radiation partitioning model of Campbell and Norman, herein referr...
Radiative effects in the standard model extension
NASA Astrophysics Data System (ADS)
Zhukovsky, V. Ch.; Lobanov, A. E.; Murchikova, E. M.
2006-03-01
The possibility of radiative effects induced by the Lorentz and CPT noninvariant interaction term for fermions in the standard model extension is investigated. In particular, electron-positron photoproduction and photon emission by electrons and positrons are studied. The rates of these processes are calculated in the Furry picture. It is demonstrated that the rates obtained in the framework of the model adopted strongly depend on the polarization states of the particles involved. As a result, ultrarelativistic particles produced should occupy states with a preferred spin orientation, i.e., photons have the sign of polarization opposite to the sign of the effective potential, while charged particles are preferably in the state with the helicity coinciding with the sign of the effective potential. This leads to evident spatial asymmetries which may have certain consequences observable at high energy accelerators, and in astrophysical and cosmological studies.
A Model of Radiative and Conductive Energy Transfer in Planetary Regoliths
NASA Technical Reports Server (NTRS)
Hapke, Bruce
1996-01-01
The thermal regime in planetary regoliths involves three processes: propagation of visible radiation, propagation of thermal radiation, and thermal conduction. The equations of radiative transfer and heat conduction are formulated for particulate media composed of anisotropically scattering particles. Although the equations are time dependent, only steady state problems are considered in this paper. Using the two-stream approximation, solutions are obtained for two cases: a layer of powder heated from below and an infinitely thick regolith illuminated by visible radiation. Radiative conductivity, subsurface temperature gradients, and the solid state greenhouse effect all appear intrinsically in the solutions without ad hoc additions. Although the equations are nonlinear, approximate analytic solutions that are accurate to a few percent are obtained. Analytic expressions are given for the temperature distribution, the optical and thermal radiance distributions, the hemispherical albedo, the hemispherical emissivity, and the directional emissivity. Additional applications of the new model to three problems of interest in planetary regoliths are presented by Hapke.
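The abstract notes that radiative conductivity appears intrinsically in the solutions. Its basic behavior is conveyed by the classic Rosseland-type scaling, k_rad ≈ 16σT³/(3K), computed below with purely illustrative numbers; this is the textbook form, not necessarily the paper's exact expression:

```python
# Rosseland-type radiative conductivity for an optically thick particulate medium
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T = 250.0                # regolith temperature, K (illustrative)
K = 5000.0               # effective extinction coefficient, m^-1 (assumed)

k_rad = 16.0 * sigma * T**3 / (3.0 * K)   # radiative conductivity, W m^-1 K^-1
```

The strong T³ dependence is what couples the radiative term nonlinearly to the conductive term in regolith thermal models.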
Monitoring of radiation fields in a waste tank model: Virtual radiation dosimetry
Tulenko, J.S.
1995-12-31
The University of Florida (UF) has developed a coupled radiation computation and three-dimensional modeling simulation code package. This package combines Deneb Robotics' IGRIP three-dimensional solid modeling robotic simulation code with the UF-developed VRF (Virtual Radiation Field) Monte Carlo based radiation computation code. The code package allows simulated radiation dose monitors to be placed anywhere on simulated robotic equipment to record the radiation doses which would be sustained when carrying out tasks in radiation environments. Comparison with measured values in the Hanford Waste Tank C-106 shows excellent results. The code shows promise of serving as a major tool in the design and operation of robotic equipment in radiation environments to ensure freedom from radiation caused failure.
Dunn, Nicholas J. H.; Noid, W. G.
2015-12-28
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.
NASA Astrophysics Data System (ADS)
Dunn, Nicholas J. H.; Noid, W. G.
2015-12-01
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.
A new radiation model for Baltic Sea ecosystem modelling
NASA Astrophysics Data System (ADS)
Neumann, Thomas; Siegel, Herbert; Gerth, Monika
2015-12-01
Photosynthetically available radiation (PAR) is one of the key requirements for primary production in the ocean. The ambient PAR is determined by incoming solar radiation and by the optical properties of sea water and of the optically active water constituents along the radiation pathway. Especially in coastal waters, the optical properties are affected by terrigenous constituents like yellow substances as well as by high primary production. Numerical models for marine ecosystems account for the optical attenuation process in different ways and levels of detail. To consider coloured dissolved organic matter (CDOM) and shading effects of phytoplankton particles, we propose a dynamic parametrization for the Baltic Sea. Furthermore, products from biological turnover processes are implemented. Besides PAR and its attenuation coefficient, the model calculates the Secchi disk depth, a simple measurable parameter describing the transparency of the water column and a water quality parameter in the European Water Framework Directive. The components of the proposed optical model are partly implemented from other publications and partly derived from our own measurements for the area of investigation. The model allows a better representation of PAR, with more realistic spatial and temporal variability compared to former parametrizations. As a result, primary production changes regionally: the northern part of the Baltic Sea in particular shows reduced productivity due to higher CDOM concentrations. The model estimates for Secchi disk depth are now much more realistic. In the northern Baltic Sea, simulated oxygen concentrations in deep water have improved considerably.
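The attenuation and transparency quantities named above are related in a simple way: PAR decays roughly exponentially with depth (Beer-Lambert), and Secchi disk depth is commonly estimated from the attenuation coefficient via an empirical constant. The numbers below are illustrative placeholders, not the paper's parametrization (which makes Kd depend dynamically on CDOM and phytoplankton):

```python
import math

par_surface = 400.0   # PAR just below the surface, W m^-2 (illustrative)
kd = 0.35             # PAR attenuation coefficient, m^-1 (assumed coastal value)
z = 10.0              # depth, m

par_z = par_surface * math.exp(-kd * z)   # PAR remaining at depth z (Beer-Lambert)
z_secchi = 1.7 / kd                       # rough empirical Secchi depth estimate, m
```

The empirical constant linking Secchi depth and Kd (here 1.7) varies by water type, which is one reason dynamic optical parametrizations improve Secchi estimates over fixed ones.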
Radiative models for the evaluation of the UV radiation at the ground.
Koepke, P
2009-12-01
The variety of radiative models for solar UV radiation is discussed. For the evaluation of measured UV radiation at the ground, the basic problem is the availability of actual values of the atmospheric parameters that influence the UV radiation. The largest uncertainties are due to clouds and aerosol, which are highly variable. In the case of tilted receivers, like the human skin for most orientations, and for conditions like a street canyon or tree shadow, additional modelling beyond the classical radiative transfer in the atmosphere is necessary.
Angular radiation models for earth-atmosphere system. Volume 2: Longwave radiation
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Green, R. N.; Smith, G. L.; Wielicki, B. A.; Walker, I. J.; Taylor, V. R.; Stowe, L. L.
1989-01-01
The longwave angular radiation models that are required for analysis of satellite measurements of Earth radiation, such as those from the Earth Radiation Budget Experiment (ERBE), are presented. The models contain limb-darkening characteristics and mean fluxes. Limb-darkening characteristics are the longwave anisotropic factor and the standard deviation of the longwave radiance. Derivation of these models from the Nimbus 7 ERB (Earth Radiation Budget) data set is described. Tabulated values and computer-generated plots are included for the limb-darkening and mean-flux models.
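The role of the anisotropic factor in ERBE-type flux inversion can be shown in one line: an observed longwave radiance L at a given viewing geometry is converted to an outgoing flux via F = πL/R, where R is the anisotropic (limb-darkening) factor for that scene and angle. The values below are purely illustrative, not taken from the Nimbus 7 models:

```python
import math

L = 80.0    # observed longwave radiance, W m^-2 sr^-1 (illustrative)
R = 1.05    # longwave anisotropic factor for this scene/angle (assumed)
            # R = 1 would correspond to an isotropic (Lambertian) scene

F = math.pi * L / R   # inferred outgoing longwave flux, W m^-2
```

Errors in the tabulated R propagate directly into the inverted flux, which is why the models also report the standard deviation of the radiance.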
NASA Astrophysics Data System (ADS)
Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.
2015-12-01
Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and from the collocated Sun and Aureole Measurement instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The observed relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE of 27 %, a bias of -24 %, and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.
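The skill scores quoted above (relative RMSE and relative bias, each expressed as a percentage of the mean measurement) are computed as below; the modelled and measured values here are made-up stand-ins, not data from the study:

```python
import math

measured = [820.0, 840.0, 810.0, 860.0, 830.0]   # reference irradiances (hypothetical)
modelled = [830.0, 835.0, 825.0, 850.0, 845.0]   # model outputs (hypothetical)

n = len(measured)
mean_meas = sum(measured) / n
bias = sum(m - o for m, o in zip(modelled, measured)) / n
rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(modelled, measured)) / n)
rel_bias = 100.0 * bias / mean_meas   # relative bias, %
rel_rmse = 100.0 * rmse / mean_meas   # relative RMSE, %
```

Reporting both statistics separates systematic offset (bias) from scatter (RMSE), which is how the libRadtran and SMARTS configurations are compared in the study.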
Surface electron density models for accurate ab initio molecular dynamics with electronic friction
NASA Astrophysics Data System (ADS)
Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.
2016-06-01
Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology to study the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its application becomes complicated in situations of substantial surface atom displacements, because the LDFA requires knowledge of the bare surface electron density at each integration step. In this work, we propose three different methods of calculating on-the-fly the electron density of the distorted surface, and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface atom displacements.
Ustinov, E A
2014-10-01
Commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.
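The Gibbs-Duhem equation the technique builds on has, in its standard two-dimensional form for an adsorbed layer of N molecules on area A with spreading pressure π and entropy S (this is the textbook form, not a transcription of the paper's working equations):

```latex
S\,\mathrm{d}T - A\,\mathrm{d}\pi + N\,\mathrm{d}\mu = 0
\qquad\Longrightarrow\qquad
\left(\frac{\partial \pi}{\partial \mu}\right)_{T} = \frac{N}{A} = \rho_{2D}
```

At fixed temperature, the spreading pressure (and hence the equilibrium lattice constant of the layer) responds to the chemical potential through the 2D density, which is what allows the lattice constant to be tracked thermodynamically consistently as T and μ vary.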
Accurate cortical tissue classification on MRI by modeling cortical folding patterns.
Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea
2015-09-01
Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture, and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding are regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed more accurate white matter-gray matter (GM) interface classification by the proposed framework compared to the other algorithms, particularly in central and occipital cortices, which generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery.
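The clustering idea underlying the per-parcel classification can be sketched with a toy one-dimensional mean shift: each point is iteratively moved to the kernel-weighted mean of its neighborhood until it settles on a density mode. The data and bandwidth below are synthetic; the paper applies nonparametric mean shift to the intensity distribution of each cortical parcel:

```python
import math

data = [0.9, 1.0, 1.1, 2.9, 3.0, 3.1]   # two synthetic "intensity" clusters
h = 0.5                                  # Gaussian kernel bandwidth (assumed)

def mode_of(x, iters=50):
    """Shift x to the weighted mean of data under a Gaussian kernel until it converges."""
    for _ in range(iters):
        w = [math.exp(-((x - d) / h) ** 2) for d in data]
        x = sum(wi * di for wi, di in zip(w, data)) / sum(w)
    return x

# Each point converges to the mode of its own cluster
modes = sorted({round(mode_of(d), 2) for d in data})
```

Because the number of modes emerges from the data rather than being fixed in advance, mean shift adapts naturally to parcels whose tissue-intensity distributions differ regionally.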
Ustinov, E. A.
2014-10-07
Commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid systems.
Ustinov, E A
2014-10-01
Commensurate-incommensurate (C-IC) transition of krypton molecular layer on graphite received much attention in recent decades in theoretical and experimental researches. However, there still exists a possibility of generalization of the phenomenon from thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology on systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms is considered accounting for a compression of the graphite due to the krypton-carbon interaction. The temperature and heat of C-IC transition has been reliably determined for the gas-solid and solid-solid system. PMID:25296827
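The Gibbs–Duhem route named above can be stated compactly. The sketch below gives the textbook bulk relation and its two-dimensional analogue (spreading pressure π, film area A); these are standard forms, not the paper's exact working equations:

```latex
% Bulk Gibbs--Duhem relation (N molecules, entropy S, volume V):
N\,\mathrm{d}\mu = -S\,\mathrm{d}T + V\,\mathrm{d}P
% Two-dimensional analogue for an adsorbed film of area A
% held at spreading pressure \pi:
N\,\mathrm{d}\mu = -S\,\mathrm{d}T + A\,\mathrm{d}\pi
```

At fixed temperature the second relation ties the chemical potential of the 2D crystal to its area per molecule, and hence to the lattice constant, which is the link such techniques exploit.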
Accurate cortical tissue classification on MRI by modeling cortical folding patterns.
Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea
2015-09-01
Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding are regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed more accurate white matter–gray matter (GM) interface classification for the proposed framework than for the other algorithms, particularly in central and occipital cortices, which generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453
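The Dice index reported above is a simple overlap measure between two label masks. A minimal sketch (the function and array names are illustrative, not from the paper):

```python
import numpy as np

def dice_index(seg, ref):
    """Dice similarity coefficient between two binary label masks."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# One of the labeled voxels agrees between the two masks:
print(dice_index([1, 1, 0], [1, 0, 0]))  # 2*1/(2+1) ≈ 0.667
```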
Diffenderfer, Eric S; Dolney, Derek; Schaettler, Maximilian; Sanzari, Jenine K; McDonough, James; Cengel, Keith A
2014-03-01
The space radiation environment imposes increased dangers of exposure to ionizing radiation, particularly during a solar particle event (SPE). These events consist primarily of low energy protons that produce a highly inhomogeneous dose distribution. Due to this inherent dose heterogeneity, experiments designed to investigate the radiobiological effects of SPE radiation present difficulties in evaluating and interpreting dose to sensitive organs. To address this challenge, we used the Geant4 Monte Carlo simulation framework to develop dosimetry software that uses computed tomography (CT) images and provides radiation transport simulations incorporating all relevant physical interaction processes. We found that this simulation accurately predicts measured data in phantoms and can be applied to model dose in radiobiological experiments with animal models exposed to charged particle (electron and proton) beams. This study clearly demonstrates the value of Monte Carlo radiation transport methods for two critically interrelated uses: (i) determining the overall dose distribution and dose levels to specific organ systems for animal experiments with SPE-like radiation, and (ii) interpreting the effect of random and systematic variations in experimental variables (e.g. animal movement during long exposures) on the dose distributions and consequent biological effects from SPE-like radiation exposure. The software developed and validated in this study represents a critically important new tool that allows integration of computational and biological modeling for evaluating the biological outcomes of exposures to inhomogeneous SPE-like radiation dose distributions, and has potential applications for other environmental and therapeutic exposure simulations.
Future directions for LDEF ionizing radiation modeling and assessments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1993-01-01
A calculational program utilizing data from radiation dosimetry measurements aboard the Long Duration Exposure Facility (LDEF) satellite to reduce the uncertainties in current models defining the ionizing radiation environment is in progress. Most of the effort to date has been on using LDEF radiation dose measurements to evaluate models defining the geomagnetically trapped radiation, which has provided results applicable to radiation design assessments being performed for Space Station Freedom. Plans for future data comparisons, model evaluations, and assessments using additional LDEF data sets (LET spectra, induced radioactivity, and particle spectra) are discussed.
A Radiative Transport Model for Heating Paints using High Density Plasma Arc Lamps
Sabau, Adrian S; Duty, Chad E; Dinwiddie, Ralph Barton; Nichols, Mark; Blue, Craig A; Ott, Ronald D
2009-01-01
The energy distribution and ensuing temperature evolution within paint-like systems under the influence of infrared radiation were studied. Thermal radiation effects as well as those due to heat conduction were considered. A complete set of material properties was derived and discussed. Infrared measurements were conducted to obtain experimental data for the temperature in the paint film. The heat flux of the incident radiation from the plasma arc lamp was measured using a heat flux sensor with a very short response time. The comparison between the computed and experimental temperature results shows that models based on the spectral four-flux RTE and accurate optical properties yield accurate results for the black paint systems.
Incorporation of multiple cloud layers for ultraviolet radiation modeling studies
NASA Technical Reports Server (NTRS)
Charache, Darryl H.; Abreu, Vincent J.; Kuhn, William R.; Skinner, Wilbert R.
1994-01-01
Cloud data sets compiled from surface observations were used to develop an algorithm for incorporating multiple cloud layers into a multiple-scattering radiative transfer model. Aerosol extinction and ozone data sets were also incorporated to estimate the seasonally averaged ultraviolet (UV) flux reaching the surface of the Earth in the Detroit, Michigan, region for the years 1979-1991, corresponding to Total Ozone Mapping Spectrometer (TOMS) version 6 ozone observations. The calculated UV spectrum was convolved with an erythema action spectrum to estimate the effective biological exposure for erythema. Calculations show that decreasing the total column density of ozone by 1% leads to an increase in erythemal exposure of approximately 1.1-1.3%, in good agreement with previous studies. A comparison of the UV radiation budget at the surface between a single cloud layer method and the multiple cloud layer method presented here is discussed, along with limitations of each technique. With improved parameterization of cloud properties, and as knowledge of the biological effects of UV exposure increases, inclusion of multiple cloud layers may be important in accurately determining the biologically effective UV budget at the surface of the Earth.
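The erythemal exposure described above is the spectral UV irradiance integrated against an action spectrum. A toy sketch of that weighting (all numbers are illustrative, not the CIE erythema spectrum):

```python
import numpy as np

def weighted_exposure(wavelength_nm, irradiance, action):
    """Trapezoidal integral of spectral irradiance times an action spectrum."""
    y = irradiance * action
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(wavelength_nm)))

# Coarse toy spectrum: irradiance rises with wavelength while the
# biological weighting falls steeply, as for erythema.
wl = np.array([300.0, 310.0, 320.0, 330.0])   # nm
irr = np.array([0.1, 0.5, 1.0, 1.5])          # W m^-2 nm^-1
act = np.array([1.0, 0.1, 0.01, 0.001])       # relative weighting
print(weighted_exposure(wl, irr, act))        # biologically effective W m^-2
```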
NASA Astrophysics Data System (ADS)
Ni-Meister, W.; Kiang, N.; Yang, W.
2007-12-01
The transmission of light through plant canopies results in vertical profiles of light intensity that affect the photosynthetic activity and gas exchange of plants, their competition for light, and the canopy energy balance. The accurate representation of the canopy light profile is therefore important for predicting ecological dynamics. This study presents a simple canopy radiative transfer scheme to characterize the impact of horizontal and vertical heterogeneity in vegetation structure on light profiles. The actual vertical foliage profile and a clumping factor, which are functions of tree geometry, size, density, and foliage density, are used to characterize the vertical and horizontal heterogeneity of vegetation structure. The simple scheme is evaluated using ground and airborne lidar data collected in deciduous and coniferous forests and is also compared with the more complex Geometric Optical and Radiative Transfer (GORT) model and the two-stream scheme currently used to describe light interactions with the vegetation canopy in most GCMs. The modeled PAR profiles match well with the ground data, the lidar data, and the full GORT model prediction, and the scheme performs much better than the simple Beer's law used in the two-stream scheme. This scheme has the same computational cost as the current scheme used in GCMs but provides better estimates of photosynthesis, radiative fluxes, and surface albedo, and is thus suitable for a global vegetation dynamics model embedded in GCMs.
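The Beer's-law attenuation that the two-stream comparison refers to, with a clumping factor scaling the effective leaf area, can be sketched as follows (symbols and default values are assumptions, not the paper's parameterization):

```python
import math

def canopy_transmittance(lai, k=0.5, clumping=1.0):
    """Beer's-law transmittance exp(-k * Omega * LAI) through a canopy.

    lai      -- cumulative leaf area index from the canopy top
    k        -- extinction coefficient (depends on leaf and sun angles)
    clumping -- clumping factor Omega; values < 1 mean aggregated foliage,
                which transmits more light than a randomly placed canopy
    """
    return math.exp(-k * clumping * lai)

# A clumped canopy (Omega = 0.7) lets through more light than a random one:
print(canopy_transmittance(4.0, clumping=0.7) > canopy_transmittance(4.0))  # True
```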
NASA Technical Reports Server (NTRS)
Krizmanic, John F.
2013-01-01
We have been assessing the effects of background radiation in low-Earth orbit for the next generation of X-ray and cosmic-ray experiments, in particular for the International Space Station orbit. Outside the areas of high fluxes of trapped radiation, we have been using parameterizations developed by the Fermi team to quantify the high-energy induced background. For the low-energy background, we have been using the AE8 and AP8 SPENVIS models to determine the orbit fractions where the fluxes of trapped particles are too high to allow useful operation of the experiment. One area we are investigating is how well the SPENVIS flux predictions at higher energies match the fluxes at the low-energy end of our parameterizations. I will summarize our methodology for background determination from the various sources of cosmogenic and terrestrial radiation and how these compare to SPENVIS predictions in overlapping energy ranges.
Flavour dependent gauged radiative neutrino mass model
NASA Astrophysics Data System (ADS)
Baek, Seungwon; Okada, Hiroshi; Yagyu, Kei
2015-04-01
We propose a one-loop induced radiative neutrino mass model with an anomaly-free flavour-dependent gauge symmetry: muon number minus tau number, U(1)_{μ-τ}. A neutrino mass matrix satisfying current experimental data can be obtained by introducing a weak-isospin singlet scalar boson that breaks the U(1)_{μ-τ} symmetry, an inert doublet scalar field, and three right-handed neutrinos in addition to the fields in the standard model. We find that a characteristic structure appears in the neutrino mass matrix: a two-zero-texture form, which predicts three non-zero neutrino masses and three non-zero CP phases from five well-measured experimental inputs: two squared mass differences and three mixing angles. Furthermore, it is clarified that only the inverted mass hierarchy is allowed in our model. For a favored parameter set from the neutrino sector, the discrepancy in the muon anomalous magnetic moment between the experimental data and the standard model prediction can be explained by the additional neutral gauge boson loop contribution, with a mass of order 100 MeV and a new gauge coupling of order 10^-3.
Multi Sensor Data Integration for AN Accurate 3d Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, a 3D model automatically generated from aerial imagery generally suffers from a lack of accuracy in the 3D model of roads under bridges, details under tree canopies, isolated trees, etc. Moreover, it also suffers in many cases from undulating road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, and details under bridges. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensated for each source's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
ERIC Educational Resources Information Center
Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.
2011-01-01
Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, then, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across
Theoretical model of Saturn's kilometric radiation spectrum
NASA Astrophysics Data System (ADS)
Galopeau, P.; Zarka, P.; Le Queau, D.
1989-07-01
A model was developed, which allowed the theoretical derivation of an envelope for the average spectrum of the Saturnian kilometric radiation (SKR), assuming that the SKR is generated by the cyclotron maser instability. The theoretical SKR spectrum derived was found to exhibit the same spectral features as the observed mean spectra. Namely, the overall shape of both calculated and measured spectra are similar, with the fluxes peaking at frequencies of 100,000 Hz and decreasing abruptly at high frequencies, and more slowly at lower frequencies. The calculated spectral intensity levels exceed the most intense observed intensities by up to 1 order of magnitude, suggesting that the SKR emission is only marginally saturated by nonlinear processes.
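Cyclotron maser emission occurs near the local electron cyclotron frequency, which ties the observed SKR frequencies to the magnetic field strength at the source. A minimal sketch of that relation (the field value in the example is illustrative, not taken from the paper):

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def electron_cyclotron_frequency(b_tesla):
    """Electron cyclotron frequency f_ce = e * B / (2 * pi * m_e), in Hz."""
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_ELECTRON)

# A spectral peak near 100 kHz maps to a field of a few microtesla:
print(electron_cyclotron_frequency(3.6e-6))  # roughly 1e5 Hz
```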
Radiative Effects in the Standard Model Extension
NASA Astrophysics Data System (ADS)
Zhukovsky, V. Ch.; Lobanov, A. E.; Murchikova, E. M.
2006-10-01
The possibility of radiative effects that are due to interaction of fermions with the constant axial-vector background in the standard model extension is investigated. Electron-positron photo-production and photon emission by electrons and positrons were studied. The rates of these processes were calculated in the Furry picture. It was demonstrated that the rates obtained strongly depend on the polarization states of the particles involved. As a consequence, ultra-relativistic particles should occupy states with a preferred spin orientation, i.e., photons have the sign of polarization opposite to the sign of the effective potential, while charged particles are preferably in the state with the helicity coinciding with the sign of the effective potential. This leads to evident spatial asymmetries.
Johnson, Timothy C.; Wellman, Dawn M.
2015-06-26
Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.
NASA Astrophysics Data System (ADS)
Toyokuni, Genti; Takenaka, Hiroshi
2012-06-01
We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since Earth material behaves as both an elastic solid and a viscous fluid, we must solve the stress-strain relations of a viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which has made viscoelasticity difficult to treat in time-domain computations such as the FDM. However, the method of so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates, resolves this difficulty. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce the multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion near the Earth's center. In addition, we propose a technique to avoid the singularity of the wave equation in spherical coordinates at the Earth's center. We develop a scheme to calculate wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
Statistical Modeling for Radiation Hardness Assurance: Toward Bigger Data
NASA Technical Reports Server (NTRS)
Ladbury, R.; Campola, M. J.
2015-01-01
New approaches to statistical modeling in radiation hardness assurance are discussed. These approaches yield quantitative bounds on flight-part radiation performance even in the absence of conventional data sources. This allows the analyst to bound radiation risk at all stages and for all decisions in the RHA process. It also allows optimization of RHA procedures for the project's risk tolerance.
Angular radiation models for Earth-atmosphere system. Volume 1: Shortwave radiation
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Green, R. N.; Minnis, P.; Smith, G. L.; Staylor, W. F.; Wielicki, B. A.; Walker, I. J.; Young, D. F.; Taylor, V. R.; Stowe, L. L.
1988-01-01
Presented are shortwave angular radiation models which are required for the analysis of satellite measurements of Earth radiation, such as those from the Earth Radiation Budget Experiment (ERBE). The models consist of both bidirectional and directional parameters. The bidirectional parameters are the anisotropic function, the standard deviation of mean radiance, and the shortwave-longwave radiance correlation coefficient. The directional parameters are mean albedo as a function of Sun zenith angle and mean albedo normalized to overhead Sun. Derivation of these models from the Nimbus 7 ERB (Earth Radiation Budget) and Geostationary Operational Environmental Satellite (GOES) data sets is described. Tabulated values and computer-generated plots are included for the bidirectional and directional models.
Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3
NASA Astrophysics Data System (ADS)
Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.
2016-04-01
Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.
Ho, C K
2009-01-01
Simulations of UV disinfection systems require accurate models of UV radiation within the reactor. Processes such as reflection and refraction at surfaces within the reactor can impact the intensity of the simulated radiation field, which in turn impacts the simulated dose and performance of the UV reactor. This paper describes a detailed discrete ordinates radiation model and comparisons to a test that recorded the UV radiation distribution around a low pressure UV lamp in a water-filled chamber with a UV transmittance of 88%. The effects of reflection and refraction at the quartz sleeve were investigated, along with the impact of wall reflection from the interior surfaces of the chamber. Results showed that the inclusion of wall reflection improved matches between predicted and measured values of incident radiation throughout the chamber. The difference between simulations with and without reflection ranged from several percent near the lamp to nearly 40% further away from the lamp. Neglecting reflection and refraction at the quartz sleeve increased the simulated radiation near the lamp and reduced the simulated radiation further away from the lamp. However, the distribution and trends in the simulated radiation field both with and without the effects of reflection and refraction at the quartz sleeve were consistent with the measured data distributions. PMID:19542648
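The refraction at the quartz sleeve discussed above follows Snell's law. A minimal sketch (the refractive indices are typical textbook values, not the study's inputs):

```python
import math

def snell_refraction_angle(theta_i_deg, n1, n2):
    """Refraction angle (degrees) from Snell's law n1 sin(t1) = n2 sin(t2).

    Returns None when the ray is totally internally reflected.
    """
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Water (n ~ 1.33) into fused quartz (n ~ 1.46): the ray bends toward the normal
print(snell_refraction_angle(30.0, 1.33, 1.46))  # about 27.1 degrees
```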
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension making both forward modeling and inversion algorithm more efficient.
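Compressing channel radiances into principal-component scores can be sketched with a plain SVD (a generic illustration on synthetic data, not the authors' trained model):

```python
import numpy as np

def pc_compress(spectra, n_components):
    """Project mean-centered spectra onto their leading principal components.

    spectra: (n_samples, n_channels) array of radiances.
    Returns (scores, components, mean) with spectra ≈ scores @ components + mean.
    """
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]    # principal directions, one per row
    scores = centered @ components.T  # compressed representation
    return scores, components, mean

rng = np.random.default_rng(0)
# 100 synthetic "spectra" on 50 channels that are intrinsically 3-dimensional:
x = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 50))
scores, comps, mean = pc_compress(x, 3)
recon = scores @ comps + mean
print(scores.shape, np.allclose(recon, x))  # (100, 3) True
```

Three scores per sample reproduce all 50 channels here, which is the efficiency gain the abstract describes.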
The Chandra X-Ray Observatory Radiation Environment Model
NASA Technical Reports Server (NTRS)
Blackwell, W. C.; Minow, Joseph I.; Smith, Shawn; Swift, Wesley R.; ODell, Stephen L.; Cameron, Robert A.
2003-01-01
CRMFLX (Chandra Radiation Model of ion FluX) is an environmental risk mitigation tool for use as a decision aid in planning the operation times of Chandra's Advanced CCD Imaging Spectrometer (ACIS) detector. Accurate prediction of the proton flux environment at energies of 100-200 keV is needed in order to protect the ACIS detector against proton degradation. Unfortunately, protons of this energy are abundant in the region of space where Chandra must operate, and the on-board Electron, Proton, and Helium Instrument (EPHIN) does not measure proton flux levels in the required energy range. In addition to the concerns arising from the radiation belts, substorm injections of plasma from the magnetotail may increase the proton flux by orders of magnitude in this energy range. The Earth's magnetosphere is a dynamic entity, with the size and location of the magnetopause driven by the highly variable solar wind parameters (number density, velocity, and magnetic field components). Operational times for the telescope must be planned weeks in advance, decisions that are complicated by the variability of the environment. CRMFLX is an engineering model developed to address these problems; it provides proton flux and fluence statistics for the terrestrial outer magnetosphere, magnetosheath, and solar wind for use in scheduling ACIS operations. CRMFLX implements a number of standard models to predict the bow shock, magnetopause, and plasma sheet boundaries based on sampling of historical solar wind data sets. Measurements from the GEOTAIL and POLAR spacecraft are used to create the proton flux database. This paper describes the recently released CRMFLX v2 implementation, which includes an algorithm that propagates flux from an observation location to other regions of the magnetosphere based on convective E×B and ∇B-curvature particle drift motions in electric and magnetic fields. This technique has the advantage of more completely filling out the database and makes maximum
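One of the drift motions named above, the E-cross-B drift, has the closed form v = (E × B)/|B|². A minimal sketch (the field values in the example are illustrative):

```python
import numpy as np

def exb_drift(e_field, b_field):
    """E-cross-B drift velocity v = (E x B) / |B|^2, SI units (m/s)."""
    e = np.asarray(e_field, dtype=float)
    b = np.asarray(b_field, dtype=float)
    return np.cross(e, b) / np.dot(b, b)

# E along +x at 1 mV/m, B along +z at 100 nT: drift along -y at 10 km/s
v = exb_drift([1.0e-3, 0.0, 0.0], [0.0, 0.0, 1.0e-7])
print(v)
```

The drift is independent of particle charge and mass, which is why it moves whole plasma populations together.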
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient variability in prostate shape and texture, and the lack of a clear prostate boundary, particularly at the apex and base levels. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a deep learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and deep learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM and deep learning to achieve significant segmentation accuracy.
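The Dice Similarity Coefficient reported above is a standard overlap measure between a predicted and a reference segmentation mask. A minimal sketch for binary masks (an illustration of the metric, not the authors' pipeline):

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient: DSC = 2|A ∩ B| / (|A| + |B|)
    for two binary masks given as flat 0/1 sequences."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    return 2.0 * intersection / (sum(pred) + sum(truth))

# Perfect overlap gives 1.0; a mean DSC of 0.925 indicates near-complete overlap.
print(dice_coefficient([1, 1, 1, 0], [1, 1, 0, 0]))  # 0.8
```

In practice the masks are 3D voxel arrays, but the formula is unchanged after flattening.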
Accurate calculation of binding energies for molecular clusters - Assessment of different models
NASA Astrophysics Data System (ADS)
Friedrich, Joachim; Fiedler, Benjamin
2016-06-01
In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy for binding energies by statistical measures. The local errors of the incremental scheme are <1 kJ/mol. Since they are smaller than the basis set errors, we obtain higher total accuracy due to the applicability of larger basis sets. The final CCSD(T)/CBS benchmark values are ΔE = - 278.01 kJ/mol for (H2O)10, ΔE = - 221.64 kJ/mol for (HF)10, ΔE = - 45.63 kJ/mol for (CH4)10, ΔE = - 19.52 kJ/mol for (H2)20 and ΔE = - 7.38 kJ/mol for (H2)10 . Furthermore we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point-methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we get a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.
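The statistical measures quoted (mean error and standard deviation of a method against the CCSD(T)/CBS benchmarks) follow from the list of signed errors. A small sketch with illustrative numbers, not the paper's data:

```python
import math

def error_stats(approx, benchmark):
    """Mean signed error and sample standard deviation (e.g. in kJ/mol)
    of a method's binding energies against benchmark values."""
    errors = [a - b for a, b in zip(approx, benchmark)]
    mean = sum(errors) / len(errors)
    sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / (len(errors) - 1))
    return mean, sd

# Hypothetical method values vs. three of the benchmark energies above.
mean, sd = error_stats([-278.4, -221.2, -45.9], [-278.01, -221.64, -45.63])
```

A small mean error with a small standard deviation, as reported for foQ-i3CCSD(T)-MP2/TZ, indicates both low bias and low scatter.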
Accurate characterization and modeling of transmission lines for GaAs MMIC's
NASA Astrophysics Data System (ADS)
Finlay, Hugh J.; Jansen, Rolf H.; Jenkins, John A.; Eddison, Ian G.
1988-06-01
The authors discuss computer-aided design (CAD) tools together with high-accuracy microwave measurements to realize improved design data for GaAs monolithic microwave integrated circuits (MMICs). In particular, a combined theoretical and experimental approach to the generation of an accurate design database for transmission lines on GaAs MMICs is presented. The theoretical approach is based on an improved transmission-line theory which is part of the spectral-domain hybrid-mode computer program MCLINE. The benefit of this approach in the design of multidielectric-media transmission lines is described. The program was designed to include loss mechanisms in all dielectric layers and to include conductor and surface roughness loss contributions. As an example, using GaAs ring resonator techniques covering 2 to 24 GHz, accuracies in effective dielectric constant and loss of 1 percent and 15 percent respectively, are presented. By combining theoretical and experimental techniques, a generalized MMIC microstrip design database is outlined.
D’Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo
2014-12-28
A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero-density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
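The iterative Boltzmann inversion scheme mentioned above refines a tabulated pair potential until the model reproduces a target pair correlation function g(r). A schematic single update step in reduced units (assumed discretization and units; not the authors' code):

```python
import math

def ibi_update(U, g_model, g_target, kT=1.0):
    """One iterative Boltzmann inversion step on a tabulated potential:
    U_{i+1}(r) = U_i(r) + kT * ln(g_i(r) / g_target(r)).
    If the model g(r) overshoots the target, the potential becomes
    more repulsive there, and vice versa."""
    return [u + kT * math.log(gm / gt)
            for u, gm, gt in zip(U, g_model, g_target)]

# Fixed point: identical correlation functions leave the potential unchanged.
U = [0.5, 0.1, -0.05]
assert ibi_update(U, [1.2, 1.0, 0.9], [1.2, 1.0, 0.9]) == U
```

The update is repeated, with a fresh simulation of g(r) at each iteration, until the model and target correlation functions agree.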
Image-based modeling of radiation-induced foci
NASA Astrophysics Data System (ADS)
Costes, Sylvain; Cucinotta, Francis A.; Ponomarev, Artem; Barcellos-Hoff, Mary Helen; Chen, James; Chou, William; Gascard, Philippe
contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern was further characterized by "relative DNA image measurements". This novel imaging approach showed that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while γH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that RIF within a few minutes following exposure to radiation cluster into open regions of the nucleus (i.e. euchromatin). It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair. If so, this would imply that DSB are actively transported within the nucleus, a phenomenon that has not yet been considered in modeling DNA misrepair following exposure to radiation. These results are thus critical for more accurate risk models of radiation and we are actively working on characterizing further RIF movement in human nuclei using live cell imaging.
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9), and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing around a range of therapeutic footwear types is required.
A Simple, Accurate Model for Alkyl Adsorption on Late Transition Metals
Montemore, Matthew M.; Medlin, James W.
2013-01-18
A simple model that predicts the adsorption energy of an arbitrary alkyl in the high-symmetry sites of late transition metal fcc(111) and related surfaces is presented. The model makes predictions based on a few simple attributes of the adsorbate and surface, including the d-shell filling and the matrix coupling element, as well as the adsorption energy of methyl in the top sites. We use the model to screen surfaces for alkyl chain-growth properties and to explain trends in alkyl adsorption strength, site preference, and vibrational softening.
NASA Astrophysics Data System (ADS)
Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu
2011-05-01
Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer aided manufacturing technique. After the dewaxing and sintering heat treatment processes, ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.
Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of SW model, the concept of critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
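For reference, the classical Stoner-Wohlfarth macrospin model that this work generalizes gives a closed-form angular dependence of the switching field (the astroid). A sketch in units of the anisotropy field H_K (the textbook result, not the generalized model proposed here):

```python
import math

def sw_switching_field(psi):
    """Classical Stoner-Wohlfarth reduced switching field h_sw = H_sw / H_K
    for a field applied at angle psi (radians) to the easy axis:
    h_sw(psi) = (cos^(2/3) psi + sin^(2/3) psi)^(-3/2)."""
    c = abs(math.cos(psi)) ** (2.0 / 3.0)
    s = abs(math.sin(psi)) ** (2.0 / 3.0)
    return (c + s) ** -1.5

# h_sw equals 1 along the easy axis and reaches its minimum of 0.5 at 45 degrees.
```

Deviations of measured switching fields from this curve are precisely what motivates including nonuniformity effects in the anisotropy energy.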
Geometry and mass model of ionizing radiation experiments on the LDEF satellite
NASA Technical Reports Server (NTRS)
Colborn, B. L.; Armstrong, T. W.
1992-01-01
Extensive measurements related to ionizing radiation environments and effects were made on the LDEF satellite during its mission lifetime of almost 6 years. These data, together with the opportunity they provide for evaluating predictive models and analysis methods, should allow more accurate assessments of the space radiation environment and related effects for future missions in low Earth orbit. The LDEF radiation dosimetry data are influenced to varying degrees by material shielding effects due to the dosimeter itself, nearby components and experiments, and the spacecraft structure. A geometry and mass model of LDEF is generated, incorporating sufficient detail that it can be applied in determining the influence of material shielding on ionizing radiation measurements and predictions. This model can be used as an aid in data interpretation by unfolding shielding effects from the LDEF radiation dosimeter responses. Use of the LDEF geometry/mass model, in conjunction with predictions and comparisons with LDEF dosimetry data currently underway, will also allow more definitive evaluations of current radiation models for future mission applications.
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
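A common Gaussian shorthand for the electromagnetic core of the lateral spread, far cruder than the full Molière treatment used in this model, is the Highland/PDG parameterization of the RMS multiple-scattering angle. A sketch with assumed inputs, shown only to illustrate the scaling with absorber thickness:

```python
import math

def highland_theta0(p_MeV_c, beta, x_over_X0, z=1):
    """Highland/PDG RMS projected multiple-scattering angle (radians)
    for a particle of charge number z, momentum p (MeV/c), and speed
    beta traversing a thickness of x/X0 radiation lengths."""
    return (13.6 / (beta * p_MeV_c)) * z * math.sqrt(x_over_X0) * \
           (1.0 + 0.038 * math.log(x_over_X0))

# Illustrative values roughly appropriate for a therapeutic proton beam.
theta_thin = highland_theta0(200.0, 0.57, 0.05)
theta_thick = highland_theta0(200.0, 0.57, 0.20)
```

The paper's point is that this single-Gaussian picture misses the non-Gaussian Molière tails and the nuclear halo, which the proposed model captures.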
Palmer, David S; Sergiievskyi, Volodymyr P; Jensen, Frank; Fedorov, Maxim V
2010-07-28
We report on the results of testing the reference interaction site model (RISM) for the estimation of the hydration free energy of druglike molecules. The optimum model was selected after testing of different RISM free energy expressions combined with different quantum mechanics and empirical force-field methods of structure optimization and atomic partial charge calculation. The final model gave a systematic error with a standard deviation of 2.6 kcal/mol for a test set of 31 molecules selected from the SAMPL1 blind challenge set [J. P. Guthrie, J. Phys. Chem. B 113, 4501 (2009)]. After parametrization of this model to include terms for the excluded volume and the number of atoms of different types in the molecule, the root mean squared error for a test set of 19 molecules was less than 1.2 kcal/mol.
Berger, Perrine; Alouini, Mehdi; Bourderionnet, Jérôme; Bretenaker, Fabien; Dolfi, Daniel
2010-01-18
We developed an improved model to predict the RF behavior and the slow-light properties of a semiconductor optical amplifier (SOA), valid for any experimental conditions. It takes into account the dynamic saturation of the SOA, which can be fully characterized by a simple measurement, and relies only on material fitting parameters, independent of the optical intensity and the injected current. The model is validated by showing good agreement with experiments for small and large modulation indices.
NASA Astrophysics Data System (ADS)
Mrigakshi, A. I.; Matthiä, D.; Berger, T.; Reitz, G.; Wimmer-Schweingruber, R. F.
2012-12-01
Astronauts are subjected to elevated levels of high-energy ionizing radiation in space, which poses a substantial risk to their health. Therefore, the assessment of the radiation exposure for long duration manned spaceflight is essential. This is done by measuring dose using various detector techniques and by performing numerical simulations utilizing radiation transport codes, which allow prediction of radiation exposure for future missions and for conditions where measurements are not feasible or available. A necessary prerequisite for an accurate estimation of the exposure using the latter approach is a reliable description of the radiation spectra. Accordingly, in order to estimate the exposure from the Galactic Cosmic Rays (GCRs), which are one of the major sources of radiation exposure in space, GCR models are required. This work presents an evaluation of GCR models for dosimetry purposes and the effect of applying these models on the estimation of GCR exposure in space outside and inside the Earth's magnetosphere. To achieve this, widely used GCR models - Badhwar-O'Neill 2010, Burger-Usoskin, CREME2009 and CREME96 - were evaluated by comparing model spectra for light and heavy nuclei with measurements from various high-altitude balloon and space missions over several decades. Additionally, a new model based on the GCR-ISO model, developed at the German Aerospace Center (DLR), was also investigated. The differences arising in the radiation exposure by applying these models are quantified in terms of absorbed dose and dose equivalent rates that were estimated numerically using the GEANT4 Monte-Carlo framework. During certain epochs in the last decade, there were large discrepancies between the model and the measured spectra. All models exhibit weaknesses in describing the increased GCR flux that was observed in 2009-2010. The differences in the spectra described by the models result in considerable differences in the estimated dose quantities.
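Several of the GCR models compared here descend from the force-field approximation, in which solar modulation shifts an assumed local interstellar spectrum (LIS) by a potential φ. A schematic sketch (illustrative LIS and parameter values, not any of the evaluated models):

```python
def force_field_modulate(j_lis, E, phi, E0=938.0):
    """Gleeson-Axford force-field approximation: the modulated intensity at
    kinetic energy E (MeV/nucleon) follows from the LIS evaluated at E + phi,
    scaled by the ratio of the relativistic momentum factors.
    E0 is the nucleon rest energy in MeV; phi is the modulation potential."""
    Ei = E + phi
    return j_lis(Ei) * (E * (E + 2.0 * E0)) / (Ei * (Ei + 2.0 * E0))

# Illustrative power-law LIS: a larger phi (more solar activity) lowers the flux,
# consistent with the elevated GCR flux seen near the 2009-2010 solar minimum.
j_lis = lambda E: E ** -2.7
quiet = force_field_modulate(j_lis, 1000.0, 400.0)    # near solar minimum
active = force_field_modulate(j_lis, 1000.0, 1000.0)  # near solar maximum
```

The evaluated models differ mainly in their LIS parameterizations and in how the modulation level is tied to observed solar activity.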
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists, caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions depending on the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
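The model described is the two-parameter Weibull saturation curve. A sketch with hypothetical parameter values, illustrating why λ acts as a characteristic time: the yield reaches the fraction 1 − 1/e ≈ 63.2% of its maximum at t = λ, whatever the shape parameter n (variable names assumed):

```python
import math

def weibull_yield(t, y_max, lam, n):
    """Weibull-type saccharification curve:
    y(t) = y_max * (1 - exp(-(t / lam)**n)),
    with lam the characteristic time and n the shape parameter."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))

# At t = lam the conversion is 1 - 1/e of y_max for any shape n,
# so a smaller lam means a faster-performing saccharification system.
frac = weibull_yield(24.0, 100.0, 24.0, 1.7) / 100.0  # ≈ 0.632
```

This is why comparing λ values across substrates and enzyme loadings gives a single-number performance ranking.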
Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen
2016-01-01
Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. First, we estimate the angular rotations from the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection reduce to a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps, not only for simulated data but also for real data from Chang’E-1, compared to the existing space resection model. PMID:27077855
The Radiative Forcing Model Intercomparison Project (RFMIP): experimental protocol for CMIP6
NASA Astrophysics Data System (ADS)
Pincus, Robert; Forster, Piers M.; Stevens, Bjorn
2016-09-01
The phrasing of the first of three questions motivating CMIP6 - "How does the Earth system respond to forcing?" - suggests that forcing is always well-known, yet the radiative forcing to which this question refers has historically been uncertain in coordinated experiments even as understanding of how best to infer radiative forcing has evolved. The Radiative Forcing Model Intercomparison Project (RFMIP) endorsed by CMIP6 seeks to provide a foundation for answering the question through three related activities: (i) accurate characterization of the effective radiative forcing relative to a near-preindustrial baseline and careful diagnosis of the components of this forcing; (ii) assessment of the absolute accuracy of clear-sky radiative transfer parameterizations against reference models on the global scales relevant for climate modeling; and (iii) identification of robust model responses to tightly specified aerosol radiative forcing from 1850 to present. Complete characterization of effective radiative forcing can be accomplished with 180 years (Tier 1) of atmosphere-only simulation using a sea-surface temperature and sea ice concentration climatology derived from the host model's preindustrial control simulation. Assessment of parameterization error requires trivial amounts of computation but the development of small amounts of infrastructure: new, spectrally detailed diagnostic output requested as two snapshots at present-day and preindustrial conditions, and results from the model's radiation code applied to specified atmospheric conditions. The search for robust responses to aerosol changes relies on the CMIP6 specification of anthropogenic aerosol properties; models using this specification can contribute to RFMIP with no additional simulation, while those using a full aerosol model are requested to perform at least one and up to four 165-year coupled ocean-atmosphere simulations at Tier 1.
Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna
2015-01-01
Background: Computational models of Achilles tendons can help in understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of the Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods: We developed a new material model of the Achilles tendon which considers the tendon’s main constituents, namely water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computation of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment, and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions: All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and soft tendon behaviour at reduced strain-rates, compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour, where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon
Predictive model of radiative neutrino masses
NASA Astrophysics Data System (ADS)
Babu, K. S.; Julio, J.
2014-03-01
We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: the hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with δCP=π; and the effective mass in neutrinoless double beta decay lies in a narrow range, mββ=(17.6-18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tanβ, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The nonstandard neutral Higgs bosons, if they are moderately heavy, would decay dominantly into μ and τ with prescribed branching ratios. Observable rates for the decays μ →eγ and τ→3μ are predicted if these scalars have masses in the range of 150-500 GeV.
The bubble coalescence model of radiation blistering
NASA Astrophysics Data System (ADS)
Yadava, R. D. S.
1981-05-01
The existence of overpressurized gas bubbles, and a suitable mechanism for bubble growth during low temperature ion implantations, are the essential ingredients for the validity of a gas-driven blister formation mechanism. In this paper, taking into account the difference between the formation energy of helium interstitials and the free energy change of a bubble per helium atom added, we have theoretically shown that such bubbles indeed exist, and their growth is driven by their bias for vacancies and anti-bias for interstitials which arise because of the overpressure-induced compressive stress field around them. The relations for helium density in bubbles and the bubble overpressure are derived. The role of interbubble interaction and the effect of bubbles on the elastic properties of the material have been taken into account to determine the dose dependence of the integrated lateral stress and the critical conditions for interbubble coalescence/fracture. It is shown that the observed sublinearity and the relief of integrated lateral stress are a natural consequence of the attractive interbubble interaction and do not uniquely relate to the blister formation as considered in the stress model. The derived conditions for coalescence agree well with the available data. It is argued that the present treatment provides a sound theoretical basis for the gas pressure model of radiation blistering.
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and the selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at the nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and, where necessary, correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~1.0 nm. We show which elements are important to obtain a well-calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end-of-line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model in which EUV tool-specific signatures are taken into account.
Hewitt, Nicola J; Edwards, Robert J; Fritsche, Ellen; Goebel, Carsten; Aeby, Pierre; Scheel, Julia; Reisinger, Kerstin; Ouédraogo, Gladys; Duche, Daniel; Eilstein, Joan; Latil, Alain; Kenny, Julia; Moore, Claire; Kuehnl, Jochen; Barroso, Joao; Fautz, Rolf; Pfuhler, Stefan
2013-06-01
Several human skin models employing primary cells and immortalized cell lines used as monocultures or combined to produce reconstituted 3D skin constructs have been developed. Furthermore, these models have been included in European genotoxicity and sensitization/irritation assay validation projects. In order to help interpret data, Cosmetics Europe (formerly COLIPA) facilitated research projects that measured a variety of defined phase I and II enzyme activities and created a complete proteomic profile of xenobiotic metabolizing enzymes (XMEs) in native human skin and compared them with data obtained from a number of in vitro models of human skin. Here, we have summarized our findings on the current knowledge of the metabolic capacity of native human skin and in vitro models and made an overall assessment of the metabolic capacity from gene expression, proteomic expression, and substrate metabolism data. The known low expression and function of phase I enzymes in native whole skin were reflected in the in vitro models. Some XMEs in whole skin were not detected in in vitro models and vice versa, and some major hepatic XMEs such as cytochrome P450-monooxygenases were absent or measured only at very low levels in the skin. Conversely, despite varying mRNA and protein levels of phase II enzymes, functional activity of glutathione S-transferases, N-acetyltransferase 1, and UDP-glucuronosyltransferases were all readily measurable in whole skin and in vitro skin models at activity levels similar to those measured in the liver. These projects have enabled a better understanding of the contribution of XMEs to toxicity endpoints. PMID:23539547
Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.
Qu, Xiaohui; Persson, Kristin A
2016-09-13
A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of its solutes; including potentially both salts as well as redox active molecules. PMID:27500744
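The concentration effect invoked above can be illustrated with the Nernst relation: if ion pairing with the supporting salt ties up part of the free redox-active species, the half-cell potential shifts logarithmically with the remaining free fraction. This is a generic textbook sketch, not the paper's computational scheme; the numbers are invented.

```python
import math

R, F, T = 8.314, 96485.0, 298.15  # J/(mol*K), C/mol, K

def nernst_shift(free_fraction, n=1):
    """Shift (in volts) of an n-electron half-cell potential when ion
    pairing with the supporting salt leaves only `free_fraction` of the
    redox-active species free in solution. Generic illustration only."""
    return -(R * T) / (n * F) * math.log(free_fraction)

# If, say, half the species is tied up in ion pairs:
print(round(nernst_shift(0.5) * 1000.0, 1), "mV")  # 17.8 mV
```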
Geo-accurate model extraction from three-dimensional image-derived point clouds
NASA Astrophysics Data System (ADS)
Nilosek, David; Sun, Shaohui; Salvaggio, Carl
2012-06-01
A methodology is proposed for automatically extracting primitive models of buildings in a scene from a three-dimensional point cloud derived from multi-view depth extraction techniques. By exploring the information provided by the two-dimensional images and the three-dimensional point cloud and the relationship between the two, automated methods for extraction are presented. Using the inertial measurement unit (IMU) and global positioning system (GPS) data that accompanies the aerial imagery, the geometry is derived in a world-coordinate system so the model can be used with GIS software. This work uses imagery collected by the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory's WASP sensor platform. The data used was collected over downtown Rochester, New York. Multiple target buildings have their primitive three-dimensional model geometry extracted using modern point-cloud processing techniques.
Vavalle, Nicholas A; Moreno, Daniel P; Rhyne, Ashley C; Stitzel, Joel D; Gayzik, F Scott
2013-03-01
This study presents four validation cases of a mid-sized male (M50) full human body finite element model: two lateral sled tests at 6.7 m/s, one sled test at 8.9 m/s, and a lateral drop test. Model results were compared to transient force curves, peak force, chest compression, and the number of fractures from the studies. For one of the 6.7 m/s impacts (flat wall impact), the peak thoracic, abdominal and pelvic loads were 8.7, 3.1 and 14.9 kN for the model and 5.2 ± 1.1 kN, 3.1 ± 1.1 kN, and 6.3 ± 2.3 kN for the tests. For the same test setup in the 8.9 m/s case, they were 12.6, 6, and 21.9 kN for the model and 9.1 ± 1.5 kN, 4.9 ± 1.1 kN, and 17.4 ± 6.8 kN for the experiments. The combined torso load and the pelvis load simulated in a second rigid wall impact at 6.7 m/s were 11.4 and 15.6 kN, respectively, compared to 8.5 ± 0.2 kN and 8.3 ± 1.8 kN experimentally. The peak thorax load in the drop test was 6.7 kN for the model, within the range in the cadavers, 5.8-7.4 kN. When analyzing rib fractures, the model predicted Abbreviated Injury Scale scores within the reported range in three of four cases. Objective comparison methods were used to quantitatively compare the model results to the literature studies. The results show a good match in the thorax and abdomen regions, while the pelvis results overpredicted the reaction loads from the literature studies. These results are an important milestone in the development and validation of this globally developed average male FEA model in lateral impact.
Badhwar, G.D.; Konradi, A.; Cucinotta, F.A.; Braby, L.A.
1994-09-01
A new class of tissue-equivalent proportional counters has been flown on two space shuttle flights. These detectors and their associated electronics cover a lineal energy range from 0.4 to 1250 keV/{mu}m with a multichannel analyzer resolution of 0.1 keV/{mu}m from 0.4 to 20 keV/{mu}m and 5 keV/{mu}m from 20 to 1250 keV/{mu}m. These detectors provide the most complete dynamic range and highest resolution of any technique currently in use. On one mission, one detector was mounted in the Shuttle payload bay and another older model in the mid-deck, thus providing information on the depth dependence of the lineal energy spectrum. A detailed comparison of the observed lineal energy and calculated LET spectra for galactic cosmic radiation shows that, although the radiation transport models provide a rather accurate description of the dose ({+-}15%) and equivalent dose ({+-}15%), the calculations significantly underestimate the frequency of events below about 100 keV/{mu}m. This difference cannot be explained by the inclusion of the contribution of splash protons. The contribution of the secondary pions, kaons and electrons produced in the Shuttle shielding, if included in the radiation transport model, may explain these differences. There are also significant differences between the model predictions and observations above 140 keV/{mu}m, particularly for the 28.5{degrees} inclination orbit. 24 refs., 9 figs., 1 tab.
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Cucinotta, F. A.; Braby, L. A.; Konradi, A.; Wilson, J. W. (Principal Investigator)
1994-01-01
A new class of tissue-equivalent proportional counters has been flown on two space shuttle flights. These detectors and their associated electronics cover a lineal energy range from 0.4 to 1250 keV/microns with a multichannel analyzer resolution of 0.1 keV/microns from 0.4 to 20 keV/microns and 5 keV/microns from 20 to 1250 keV/microns. These detectors provide the most complete dynamic range and highest resolution of any technique currently in use. On one mission, one detector was mounted in the Shuttle payload bay and another older model in the mid-deck, thus providing information on the depth dependence of the lineal energy spectrum. A detailed comparison of the observed lineal energy and calculated LET spectra for galactic cosmic radiation shows that, although the radiation transport models provide a rather accurate description of the dose (+/- 15%) and equivalent dose (+/- 15%), the calculations significantly underestimate the frequency of events below about 100 keV/microns. This difference cannot be explained by the inclusion of the contribution of splash protons. The contribution of the secondary pions, kaons and electrons produced in the Shuttle shielding, if included in the radiation transport model, may explain these differences. There are also significant differences between the model predictions and observations above 140 keV/microns, particularly for the 28.5 degrees inclination orbit.
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
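The Lorentz-Lorenz relation at the heart of the model can be evaluated directly once the polarizability and number density are known. A minimal sketch with hypothetical repeat-unit values (in the paper, the polarizability comes from benchmarked electronic structure calculations and the number density from the machine-learned packing model):

```python
import math

def refractive_index(alpha, number_density):
    """Lorentz-Lorenz relation (n^2 - 1)/(n^2 + 2) = (4*pi/3)*N*alpha,
    solved for n. alpha: polarizability volume in cm^3 per repeat unit,
    number_density: repeat units per cm^3."""
    refraction = (4.0 * math.pi / 3.0) * number_density * alpha
    if not 0.0 <= refraction < 1.0:
        raise ValueError("unphysical input: refraction term outside [0, 1)")
    return math.sqrt((1.0 + 2.0 * refraction) / (1.0 - refraction))

# Hypothetical repeat-unit values chosen only for illustration:
print(round(refractive_index(1.5e-23, 4.7e21), 3))  # ~1.5
```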
Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2013-01-01
The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
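The reduced-order-model construction described above (snapshots of pressure coefficients, then a proper orthogonal decomposition obtained from a singular value decomposition) can be sketched on synthetic data. The snapshot matrix below is an invented stand-in for the filtered, downsampled CFD output, and the plain SVD stands in for the stochastic SVD algorithm named in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: n_points spatial points by n_snap time
# steps, built from three smooth modes plus noise (invented data).
n_points, n_snap = 400, 120
xs = np.linspace(0.0, 1.0, n_points)[:, None]
ts = np.linspace(0.0, 1.0, n_snap)[None, :]
snapshots = (np.sin(2 * np.pi * xs) * np.cos(4 * np.pi * ts)
             + 0.5 * np.sin(4 * np.pi * xs) * np.sin(2 * np.pi * ts)
             + 0.1 * np.cos(6 * np.pi * xs) * ts
             + 0.01 * rng.standard_normal((n_points, n_snap)))

# POD: the left singular vectors are the spatial modes; truncate where
# the singular values capture 99.9% of the snapshot energy.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
modes = U[:, :r]

# A reduced-order reconstruction is modes @ coefficients; the gust
# model's convolution step would act on the coefficients over time.
coeffs = modes.T @ snapshots
recon_err = (np.linalg.norm(snapshots - modes @ coeffs)
             / np.linalg.norm(snapshots))
print(r, round(recon_err, 4))  # a few modes, small relative error
```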
ERIC Educational Resources Information Center
Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.
2012-01-01
The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
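The discrete Laguerre basis underlying the LEK approach can be generated with a simple recursion; its orthonormality is what keeps the kernel expansion compact. A minimal sketch, not the authors' algorithm (the pole parameter a = 0.6 and the sizes are illustrative choices):

```python
import numpy as np

def laguerre_basis(n_terms, length, a=0.6):
    """Discrete-time Laguerre functions l_0 .. l_{n_terms-1} (rows),
    built from the standard recursion; the pole 0 < a < 1 sets how
    much past history the basis emphasises."""
    L = np.zeros((n_terms, length))
    L[0] = np.sqrt(1.0 - a**2) * a ** np.arange(length)
    for j in range(1, n_terms):
        for k in range(length):
            prev_same = L[j, k - 1] if k > 0 else 0.0
            prev_lower = L[j - 1, k - 1] if k > 0 else 0.0
            # l_j[k] = a*l_j[k-1] + l_{j-1}[k-1] - a*l_{j-1}[k]
            L[j, k] = a * prev_same + prev_lower - a * L[j - 1, k]
    return L

basis = laguerre_basis(5, 500)

# Orthonormality is what keeps the kernel expansion compact compared
# with a raw Volterra-Wiener moving-average fit.
gram = basis @ basis.T
print(np.round(gram, 3))  # approximately the 5x5 identity matrix
```

A first-order kernel estimate would then regress the measured output onto the input filtered through each basis function; the ARMA extension of the paper additionally includes past output values as regressors.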
Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
Effective UV radiation from model calculations and measurements
NASA Technical Reports Server (NTRS)
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
Sapsis, Themistoklis P.; Majda, Andrew J.
2013-01-01
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods
NASA Astrophysics Data System (ADS)
Sohn, Ilyoup
approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. 
The QSS model is presented to predict the electronic state populations of radiating gas species taking
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-01-01
Purpose: Significant dosimetric benefits have previously been demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances to the corresponding model values. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analyses to quantify the impact of the implementation of different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - on yields, (ii) for early warning systems and (iii) to assess future food security. Yet, the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is therefore a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but suffer from insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not direct measurements but rather estimations of precipitation. Used as input to crop models, their quality determines the performance of the simulated yields; hence, SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
Bowen, B.M.; Olsen, W.A.; Chen, Ili; Van Etten, D.M.
1987-11-01
An array of three portable, pressurized ionization chambers (PICs) continued to measure external radiation levels during 1985 caused by radionuclides emitted from the Los Alamos Meson Physics Facility (LAMPF). A Gaussian-type atmospheric dispersion model, using onsite meteorological and stack release data, was tested during this study. A more complex finite model, which takes into account the contribution of radiation at a receptor from different locations of the passing plume, was also tested. Monitoring results indicate that, as in 1984, a persistent wind up the Rio Grande Valley during the evening and early morning hours is largely responsible for causing the highest external radiation levels to occur to the northeast and north-northeast of LAMPF. However, because of increased turbulent mixing during the day, external radiation levels are generally much less during the day than at night. External radiation levels during 1985 show approximately a 75% reduction over 1984 levels. This resulted from a similar percentage reduction in LAMPF emissions caused by newly implemented emission controls. Comparison of predicted and measured daily external radiation levels indicates a high degree of correlation. The model also gives accurate estimates of measured concentrations over longer time periods. Comparison of predicted and measured hourly values indicates that the model generally tends to overpredict during the day and underpredict at night. 9 refs., 14 figs., 13 tabs.
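A Gaussian-type dispersion model of the kind tested here evaluates, in its simplest steady-state form, the classic plume formula with a ground-reflection term. A minimal sketch with invented stack and dispersion parameters (the study's model additionally drew on onsite meteorological and stack release data, and the finite-plume variant integrates contributions along the plume):

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) at crosswind
    offset y and height z, downwind of a stack of effective height h.
    q: emission rate (g/s); u: wind speed (m/s); sigma_y, sigma_z:
    dispersion parameters (m) evaluated at the downwind distance of
    interest. Includes the standard ground-reflection image term."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical numbers: 1 g/s release, 3 m/s wind, 40 m effective
# stack height, receptor at 1.5 m above ground on the plume axis.
c = plume_concentration(q=1.0, u=3.0, y=0.0, z=1.5,
                        h=40.0, sigma_y=80.0, sigma_z=40.0)
print(f"{c:.2e} g/m^3")
```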
Fast and accurate low-dimensional reduction of biophysically detailed neuron models.
Marasco, Addolorata; Limongiello, Alessandro; Migliore, Michele
2012-01-01
Realistic modeling of neurons is quite successful in complementing traditional experimental techniques. However, networks of such models require a computational power beyond the capabilities of current supercomputers, and the methods used so far to reduce their complexity take into account neither the key features of the cells nor critical physiological properties. Here we introduce a new, automatic and fast method to map realistic neurons into equivalent reduced models running up to > 40 times faster while maintaining a very high accuracy of the membrane potential dynamics during synaptic inputs, and a direct link with experimental observables. The mapping of arbitrary sets of synaptic inputs, without additional fine tuning, would also allow the convenient and efficient implementation of a new generation of large-scale simulations of brain regions reproducing the biological variability observed in real neurons, with unprecedented advances to understand higher brain functions. PMID:23226594
An accurate in vitro model of the E. coli envelope.
Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H
2015-10-01
Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir-Blodgett and Langmuir-Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
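The target-cell-limited model referred to above is a standard three-compartment ODE system in target cells T, infected cells I, and virus V. A minimal forward-Euler sketch (not the paper's two-phase approximation; parameter magnitudes are illustrative literature-style values, not the six patients' fits):

```python
import math

# Target-cell-limited model:
#   dT/dt = -beta*T*V,   dI/dt = beta*T*V - delta*I,   dV/dt = p*I - c*V
def simulate(beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0,
             T0=4e8, I0=0.0, V0=1.0, dt=1e-3, days=10):
    T, I, V = T0, I0, V0
    log_v = []                        # log10 viral titer, sampled daily
    steps_per_day = round(1 / dt)
    for step in range(int(days / dt)):
        if step % steps_per_day == 0:
            log_v.append(math.log10(max(V, 1e-12)))
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt; I += dI * dt; V += dV * dt
    return log_v

logv = simulate()                     # rises to a peak, then decays
peak_day = logv.index(max(logv))
```

Plotted on a log scale, the trajectory shows the near-linear growth and decay phases that motivate the paper's piecewise-exponential approximation.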
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data
Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and quite variable; hence, these symptoms are of little use for prediction and cannot be used to advance the intake of drugs so that it is effective and neutralizes the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
Horner, Marc; Muralikrishnan, R.
2010-01-01
Purpose: A computational fluid dynamics (CFD) study examined the impact of particle size on the dissolution rate and residence time of intravitreal suspension depots of Triamcinolone Acetonide (TAC). Methods: A model of the rabbit eye was constructed using insights from high-resolution NMR imaging studies (Sawada 2002). The current model was compared with other published simulations in its ability to predict clearance of various intravitreally injected materials. Suspension depots were constructed by explicitly rendering individual particles in various configurations: 4 or 16 mg of drug confined to a 100 μL spherical depot, or 4 mg exploded to fill the entire vitreous. Particle size was reduced systematically in each configuration. The convective diffusion/dissolution process was simulated using a multiphase model. Results: Release rate became independent of particle diameter below a certain value. The size-independent limits occurred for particle diameters ranging from 77 to 428 μm, depending upon the depot configuration. Residence time predicted for the spherical depots in the size-independent limit was comparable to that observed in vivo. Conclusions: Since the size-independent limit was several-fold greater than the particle size of commercially available pharmaceutical TAC suspensions, differences in particle size among such products are predicted to be immaterial to their duration or performance. PMID:20467888
Mathematical model accurately predicts protein release from an affinity-based delivery system.
Vulic, Katarina; Pakulska, Malgosia M; Sonthalia, Rohit; Ramachandran, Arun; Shoichet, Molly S
2015-01-10
Affinity-based controlled release modulates the delivery of protein or small molecule therapeutics through transient dissociation/association. To understand which parameters can be used to tune release, we used a mathematical model based on simple binding kinetics. A comprehensive asymptotic analysis revealed three characteristic regimes for therapeutic release from affinity-based systems. These regimes can be controlled by diffusion or unbinding kinetics, and can exhibit release over either a single stage or two stages. This analysis fundamentally changes the way we think of controlling release from affinity-based systems and thereby explains some of the discrepancies in the literature on which parameters influence affinity-based release. The rate of protein release from affinity-based systems is determined by the balance of diffusion of the therapeutic agent through the hydrogel and the dissociation kinetics of the affinity pair. Equations for tuning protein release rate by altering the strength (KD) of the affinity interaction, the concentration of binding ligand in the system, the rate of dissociation (koff) of the complex, and the hydrogel size and geometry, are provided. We validated our model by collapsing the model simulations and the experimental data from a recently described affinity release system, to a single master curve. Importantly, this mathematical analysis can be applied to any single species affinity-based system to determine the parameters required for a desired release profile. PMID:25449806
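The release mechanism described, transient association/dissociation of an affinity pair plus escape of the free species, can be sketched with simple mass-action kinetics. This is an illustrative toy (not the paper's asymptotic model; all rate constants are hypothetical), but it reproduces the qualitative dependence of release rate on binding strength:

```python
# Mass-action toy of affinity-based release: protein P binds ligand L
# (held constant, in excess) to form complex C; only free P escapes the gel.
#   dC/dt = kon*P*L - koff*C,   dP/dt = koff*C - kon*P*L - k_rel*P
def release_fraction(kon, koff, k_rel, L=1.0, hours=48.0, dt=1e-3):
    """Fraction of the initial protein (P + C = 1) released after `hours`."""
    P, C, released = 0.1, 0.9, 0.0    # start mostly bound
    for _ in range(int(hours / dt)):
        bind = kon * P * L - koff * C     # net binding flux
        released += k_rel * P * dt        # cumulative escaped fraction
        P += (-bind - k_rel * P) * dt
        C += bind * dt
    return released

# Weak affinity (large koff, large KD) releases faster than strong affinity
fast = release_fraction(kon=1.0, koff=1.0, k_rel=0.1)
slow = release_fraction(kon=1.0, koff=0.01, k_rel=0.1)
```

Tuning `koff` (and hence KD) or the ligand concentration `L` shifts the bound/free balance and thereby the release rate, mirroring the tuning parameters identified in the abstract.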
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana
2016-04-01
Lake morphometry refers to the physical factors (shape, size, structure, etc.) that determine the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata, and shoreline characteristics, are often critical to the investigation of biological, chemical, and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created with a 10 × 10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.
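Morphometric summaries of the kind described (depth and slope statistics on an equidistant grid) are straightforward to compute from a digital bathymetric model. A sketch on a synthetic 10 m grid (the bowl-shaped test bathymetry is invented for illustration):

```python
import numpy as np

# Synthetic 10 m x 10 m gridded bathymetry for a small bowl-shaped lake
n, cell = 100, 10.0                        # 1 km x 1 km domain, 10 m cells
y, x = np.mgrid[0:n, 0:n] * cell
r = np.hypot(x - 500.0, y - 500.0)
depth = np.clip(12.0 * (1 - (r / 450.0) ** 2), 0.0, None)   # max depth ~12 m

# Morphometric summaries from the digital model
mean_depth = depth[depth > 0].mean()       # mean depth over wet cells (m)
max_depth = depth.max()                    # maximum depth (m)
volume = depth.sum() * cell * cell         # lake volume (m^3), cell-wise sum
dzdy, dzdx = np.gradient(depth, cell)      # bottom slope components (m/m)
slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
```

The same per-cell arithmetic extends directly to hypsographic (level-surface-volume) curves: threshold `depth` at successive water levels and sum areas and volumes.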
Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics
NASA Astrophysics Data System (ADS)
Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.
2014-12-01
The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources. In this way, precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, they aim to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. The conducted work focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events like Sahara dust over Germany and the solar eclipse in 2015 are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.
Computer modelling of statistical properties of SASE FEL radiation
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1997-06-01
The paper describes an approach to computer modelling of statistical properties of the radiation from self amplified spontaneous emission free electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, distribution of the fluctuations of the instantaneous radiation power, distribution of the energy in the electron bunch, distribution of the radiation energy after monochromator installed at the FEL amplifier exit and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility being under construction at DESY.
Modelling of Radiation Heat Transfer in Reacting Hot Gas Flows
NASA Astrophysics Data System (ADS)
Thellmann, A.; Mundt, C.
2009-01-01
In this work the interaction between a turbulent flow including chemical reactions and radiation transport is investigated. As a first step, the state-of-the-art radiation models P1 (based on the moment method) and the Discrete Transfer Model (DTM, based on the discrete ordinate method) are used in conjunction with the CFD code ANSYS CFX. The absorbing and emitting medium (water vapor) is modeled by the Weighted Sum of Gray Gases. For the chemical reactions, the standard eddy dissipation model combined with the two-equation k-epsilon turbulence model is employed. A demonstration experiment is identified which delivers the temperature distribution, species concentration, and radiative intensity distribution in the investigated combustion enclosure. The simulation results are compared with the experiment and reveal that the P1 model predicts the location of the maximal radiation intensity unphysically. The DTM model performs better but overpredicts the maximum value of the radiation intensity. This radiation sensitivity study is a first step toward identifying a suitable radiation transport and spectral model for implementation in an existing 3D Navier-Stokes code. Including radiation heat transfer, we intend to investigate its influence on the overall energy balance in a hydrogen/oxygen rocket combustion chamber.
NASA Astrophysics Data System (ADS)
Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua
2015-05-01
Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. The computational fluid dynamics (CFD) method has been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for their hemodynamics under steady or pulsatile inlet conditions, respectively, employing CFD based on the finite volume method. The results showed that the blood model with non-Newtonian properties decreased the area of low wall shear stress (WSS) compared with the Newtonian blood model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and blood models are both important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.
Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum
NASA Astrophysics Data System (ADS)
Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.
2013-02-01
Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that end, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and some other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made by using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, the noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial software and in-house software developed to automate various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time consuming, while the use of various software packages requires the services of a specialist.
Shock Layer Radiation Modeling and Uncertainty for Mars Entry
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Brandis, Aaron M.; Sutton, Kenneth
2012-01-01
A model for simulating nonequilibrium radiation from Mars entry shock layers is presented. A new chemical kinetic rate model is developed that provides good agreement with recent EAST and X2 shock tube radiation measurements. This model includes a CO dissociation rate that is a factor of 13 larger than the rate used widely in previous models. Uncertainties in the proposed rates are assessed along with uncertainties in translational-vibrational relaxation modeling parameters. The stagnation point radiative flux uncertainty due to these flowfield modeling parameter uncertainties is computed to vary from 50 to 200% for a range of free-stream conditions, with densities ranging from 5e-5 to 5e-4 kg/m3 and velocities ranging from 6.3 to 7.7 km/s. These conditions cover the range of anticipated peak radiative heating conditions for proposed hypersonic inflatable aerodynamic decelerators (HIADs). Modeling parameters for the radiative spectrum are compiled along with a non-Boltzmann rate model for the dominant radiating molecules, CO, CN, and C2. A method for treating non-local absorption in the non-Boltzmann model is developed, which is shown to result in up to a 50% increase in the radiative flux through absorption by the CO 4th Positive band. The sensitivity of the radiative flux to the radiation modeling parameters is presented and the uncertainty for each parameter is assessed. The stagnation point radiative flux uncertainty due to these radiation modeling parameter uncertainties is computed to vary from 18 to 167% for the considered range of free-stream conditions. The total radiative flux uncertainty is computed as the root sum square of the flowfield and radiation parametric uncertainties, which results in total uncertainties ranging from 50 to 260%. The main contributors to these significant uncertainties are the CO dissociation rate and the CO heavy-particle excitation rates. Applying the baseline flowfield and radiation models developed in this work, the
Subgrid-scale model for radiative transfer in turbulent participating media
NASA Astrophysics Data System (ADS)
Soucasse, L.; Rivière, Ph.; Soufiani, A.
2014-01-01
The simulation of turbulent flows of radiating gases, taking into account all turbulence length scales with an accurate radiation transport solver, is computationally prohibitive for high Reynolds or Rayleigh numbers. This is particularly the case when the small structures are not optically thin. We develop in this paper a radiative transfer subgrid model suitable for the coupling with direct numerical simulations of turbulent radiating fluid flows. Owing to the linearity of the Radiative Transfer Equation (RTE), the emission source term is spatially filtered to define large-scale and subgrid-scale radiation intensities. The large-scale or filtered intensity is computed with a standard ray tracing method on a coarse grid, and the subgrid intensity is obtained analytically (in Fourier space) from the Fourier transform of the subgrid emission source term. A huge saving of computational time is obtained in comparison with direct ray tracing applied on the fine mesh. Model accuracy is checked for three 3D fluctuating temperature fields. The first field is stochastically generated and allows us to discuss the effects of the filtering level and of the optical thicknesses of the whole medium, of the integral length scale, and of the cutoff wavelength. The second and third cases correspond respectively to turbulent natural convection of humid air in a cubical box, and to the flow of hot combustion products inside a channel. In all cases, the achieved accuracy on radiative powers and wall fluxes is about a few percent.
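The core of the approach is a spectral split of the emission source into filtered (large-scale) and subgrid parts. A sketch of just that decomposition step with a sharp Fourier cutoff (the 3D field here is random noise standing in for a fluctuating temperature-driven emission field, and the cutoff wavenumber is arbitrary):

```python
import numpy as np

# Split a 3D emission-source field into large-scale and subgrid parts
# with a sharp spectral (low-pass) filter, LES-style.
rng = np.random.default_rng(0)
n, cutoff = 64, 8                       # grid points per side, cutoff mode
field = rng.standard_normal((n, n, n))

k = np.fft.fftfreq(n) * n               # integer wavenumbers per axis
kmag = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2)
fhat = np.fft.fftn(field)
large = np.fft.ifftn(np.where(kmag <= cutoff, fhat, 0)).real   # filtered part
subgrid = field - large                 # exact complement of the filter
```

In the paper's method the `large` part feeds the coarse-grid ray tracer while the `subgrid` part is handled analytically in Fourier space; the sketch only demonstrates that the split is exact and, for a sharp filter, energy-orthogonal.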
A 3-dimensional DTI MRI-based model of GBM growth and response to radiation therapy.
Hathout, Leith; Patel, Vishal; Wen, Patrick
2016-09-01
Glioblastoma (GBM) is both the most common and the most aggressive intra-axial brain tumor, with a notoriously poor prognosis. To improve this prognosis, it is necessary to understand the dynamics of GBM growth, response to treatment, and recurrence. This study presents a mathematical diffusion-proliferation model of GBM growth and response to radiation therapy based on diffusion tensor imaging (DTI) MRI. This represents an important advance because it allows 3-dimensional tumor modeling in the anatomical context of the brain. Specifically, tumor infiltration is guided by the direction of the white matter tracts along which glioma cells infiltrate. This provides the potential to model different tumor growth patterns based on location within the brain, and to simulate the tumor's response to different radiation therapy regimens. Tumor infiltration across the corpus callosum is simulated in biologically accurate time frames. The response to radiation therapy, including changes in cell density gradients and how these compare across different radiation fractionation protocols, can be rendered. Also, the model can estimate the amount of subthreshold tumor which has extended beyond the visible MR imaging margins. When combined with the ability to estimate the biological parameters of invasiveness and proliferation of a particular GBM from serial MRI scans, the model has the potential to simulate realistic tumor growth, response, and recurrence patterns in individual patients. To the best of our knowledge, this is the first presentation of a DTI-based GBM growth and radiation therapy treatment model. PMID:27572745
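Diffusion-proliferation models of this kind are typically built on the Fisher-KPP reaction-diffusion equation, with radiotherapy applied through linear-quadratic cell kill. A minimal 1D sketch (not the authors' DTI-guided 3D model; all parameter values are illustrative, not fitted to any patient):

```python
import numpy as np

# 1D Fisher-KPP proliferation-diffusion model with one radiotherapy
# fraction applied via linear-quadratic survival S = exp(-a*d - b*d^2).
D, rho = 0.05, 0.2             # diffusivity (mm^2/day), proliferation (1/day)
alpha, beta_lq = 0.3, 0.03     # LQ radiosensitivity (Gy^-1, Gy^-2)
dx, dt, nx = 0.5, 0.05, 200    # grid spacing (mm), time step (day), points
u = np.zeros(nx); u[nx // 2] = 0.8     # normalized cell density seed

def step(u):
    lap = (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2   # Laplacian
    return u + dt * (D * lap + rho * u * (1 - u))            # diffuse + grow

for _ in range(int(30 / dt)):          # 30 days of untreated growth
    u = step(u)
burden_before = u.sum()
dose = 2.0                             # a single 2 Gy fraction
u = u * np.exp(-alpha * dose - beta_lq * dose**2)
burden_after = u.sum()
```

In the DTI-based model the scalar `D` becomes a spatially varying tensor aligned with white matter tracts, which is what produces anisotropic, anatomy-dependent infiltration patterns.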
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
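The iterative improvement heuristic can be sketched as repeated random segment reversals that are kept only when they shorten the closed foraging circuit (a 2-opt-style move; the flower coordinates below are invented, and this is a simplified stand-in for the published model, which also includes a spatial search strategy):

```python
import math
import random

def route_length(route, sites):
    """Total length of a closed circuit given as a list of site indices."""
    return sum(math.dist(sites[route[i]], sites[route[i + 1]])
               for i in range(len(route) - 1))

def improve_trapline(sites, iterations=2000, seed=1):
    """Iterative improvement: propose a random segment reversal and keep
    it only if the circuit gets shorter."""
    rng = random.Random(seed)
    route = list(range(len(sites))) + [0]    # closed circuit from the nest (0)
    best = route_length(route, sites)
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(1, len(sites)), 2))
        cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
        cand_len = route_length(cand, sites)
        if cand_len < best:                  # greedy acceptance rule
            route, best = cand, cand_len
    return route, best

flowers = [(0, 0), (3, 8), (7, 1), (9, 6), (2, 5), (6, 9)]   # nest + 5 flowers
route, length = improve_trapline(flowers)
```

The greedy acceptance rule is what produces route stabilization over successive foraging bouts, and it also explains why stable optima are not reached for every spatial configuration: the heuristic can lock into local minima.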
Development of a Godunov-type model for the accurate simulation of dispersion dominated waves
NASA Astrophysics Data System (ADS)
Bradford, Scott F.
2016-10-01
A new numerical model based on the Navier-Stokes equations is presented for the simulation of dispersion dominated waves. The equations are solved by splitting the pressure into hydrostatic and non-hydrostatic components. The Godunov approach is utilized to solve the hydrostatic flow equations and the resulting velocity field is then corrected to be divergence free. Alternative techniques for the time integration of the non-hydrostatic pressure gradients are presented and investigated in order to improve the accuracy of dispersion dominated wave simulations. Numerical predictions are compared with analytical solutions and experimental data for test cases involving standing, shoaling, refracting, and breaking waves.
Considering mask pellicle effect for more accurate OPC model at 45nm technology node
NASA Astrophysics Data System (ADS)
Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo
2008-11-01
The 45 nm technology node marks the first generation of immersion microlithography. The new lithography tools cause many optical effects that could be ignored at the 90 nm and 65 nm nodes to have a significant impact on the pattern transfer process from design to silicon. Among all these effects, one that demands attention is the mask pellicle's impact on critical dimension variation. With the introduction of hyper-NA lithography tools, the assumption that light traverses the mask pellicle vertically is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model. We will show that, considering the extremely tight critical dimension control spec for the 45 nm node, incorporating the mask pellicle effect into the OPC model has become necessary.
Bardhan, Jaydeep P.; Jungwirth, Pavel; Makowski, Lee
2012-01-01
Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular “linear response” model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution). PMID:23020318
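The proposed combination of affine and piecewise-linear response can be written down directly. In this sketch the 0.456 V offset is the quoted ∼44 kJ/mol/e static potential converted to volts (44 / 96.485 kJ mol⁻¹ V⁻¹); the two slopes are hypothetical placeholders for the sign-dependent steric response:

```python
# "Affine + piecewise-linear" dielectric response: a constant offset from
# the static potential plus a slope that depends on the sign of the charge.
# phi_static = 0.456 V is 44 kJ/mol/e in volts; b_pos and b_neg are
# hypothetical illustrative slopes (V per unit charge).
def reaction_potential(q, phi_static=0.456, b_pos=-1.0, b_neg=-1.4):
    """Reaction potential (V) at a buried charge q (in units of e)."""
    b = b_pos if q >= 0 else b_neg      # steric asymmetry: sign-dependent slope
    return phi_static + b * q

# Asymmetry: charging to +1e and to -1e changes the potential by different
# magnitudes, unlike the symmetric linear-response picture.
dphi_pos = reaction_potential(1.0) - reaction_potential(0.0)
dphi_neg = reaction_potential(-1.0) - reaction_potential(0.0)
```

The nonzero intercept (affine term) matters even for deeply buried charges, while the slope difference dominates near the solute-solvent interface, matching the two regimes identified in the abstract.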
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results, suitable interactive graphics are also an essential tool.
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
NASA Astrophysics Data System (ADS)
Chien Chang, Jia-Ren; Tai, Cheng-Chi
2006-07-01
This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289 (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1 BPM (beats per minute), from 20 to 176 BPM, while the amplitude of the ECG signal can be set from 0.1 to 400 mV with a 0.1 mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
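The McSharry et al. dynamical model referenced above is compact enough to sketch: a point circulates the unit limit cycle in (x, y), and Gaussian bumps centered at the P, Q, R, S, T angles drive the z coordinate, which is the synthetic ECG. The feature parameters and the Euler integrator below are a minimal illustration, not the article's calibrated generator.

```python
import math

# Illustrative parameters for the five ECG features (P, Q, R, S, T):
# angular position, amplitude, width. These are assumptions in the spirit
# of McSharry et al. (2003), not the article's fitted values.
THETA = [-math.pi / 3, -math.pi / 12, 0.0, math.pi / 12, math.pi / 2]
A = [1.2, -5.0, 30.0, -7.5, 0.75]
B = [0.25, 0.1, 0.1, 0.1, 0.4]

def derivatives(x, y, z, omega, z0=0.0):
    """Right-hand side of the three coupled ODEs; z(t) is the ECG trace."""
    alpha = 1.0 - math.sqrt(x * x + y * y)   # pulls (x, y) onto the unit circle
    theta = math.atan2(y, x)
    dz = -(z - z0)
    for ai, bi, ti in zip(A, B, THETA):
        # wrapped angular distance to the feature center, in (-pi, pi]
        dth = (theta - ti + math.pi) % (2.0 * math.pi) - math.pi
        dz -= ai * dth * math.exp(-dth * dth / (2.0 * bi * bi))
    return alpha * x - omega * y, alpha * y + omega * x, dz

def synth_ecg(heart_rate_bpm=60.0, duration_s=1.0, dt=1e-3):
    """Integrate with a simple forward-Euler step; returns the z samples."""
    omega = 2.0 * math.pi * heart_rate_bpm / 60.0
    x, y, z = 1.0, 0.0, 0.0
    trace = []
    for _ in range(int(duration_s / dt)):
        dx, dy, dz = derivatives(x, y, z, omega)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace.append(z)
    return trace

trace = synth_ecg()
```

Adjusting the heart rate maps directly onto omega, and the per-feature angles and widths play the role of the onset/termination/duration controls described in the abstract.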
Howard Barker; Jason Cole
2012-05-17
Utilization of cloud-resolving models and multi-dimensional radiative transfer models to investigate the importance of 3D radiation effects on the numerical simulation of cloud fields and their properties.
Volterra network modeling of the nonlinear finite-impulse response of the radiation belt flux
Taylor, M.; Daglis, I. A.; Anastasiadis, A.; Vassiliadis, D.
2011-01-04
We show how a general class of spatio-temporal nonlinear impulse-response forecast networks (Volterra networks) can be constructed from a taxonomy of nonlinear autoregressive integrated moving average with exogenous inputs (NARMAX) input-output equations, and used to model the evolution of energetic particle fluxes in the Van Allen radiation belts. We present initial results for the nonlinear response of the radiation belts to conditions a month earlier. The essential features of spatio-temporal observations are recovered, with the model echoing the results of state space models and linear finite impulse-response models, whereby the strongest coupling peak occurs in the preceding 1-2 days. It appears that such networks hold promise for the development of accurate and fully data-driven space weather modelling, monitoring and forecast tools.
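The building block of such a network, a discrete Volterra expansion of the input history, can be sketched in a few lines. The memory length and kernel values below are illustrative placeholders, not coefficients fitted to radiation belt data.

```python
# Minimal second-order discrete Volterra-series predictor with memory M.
# Kernel values are illustrative, not fitted.
M = 3
h0 = 0.1
h1 = [0.5, 0.3, 0.1]            # first-order (linear FIR) kernel
h2 = [[0.05, 0.0, 0.0],         # second-order kernel (M x M)
      [0.0, 0.02, 0.0],
      [0.0, 0.0, 0.01]]

def volterra_predict(x):
    """Predict y[n] from the last M inputs via a 2nd-order Volterra expansion."""
    y = []
    for n in range(len(x)):
        past = [x[n - i] if n - i >= 0 else 0.0 for i in range(M)]
        out = h0
        out += sum(h1[i] * past[i] for i in range(M))
        out += sum(h2[i][j] * past[i] * past[j]
                   for i in range(M) for j in range(M))
        y.append(out)
    return y

# Finite impulse response of the model: feed a unit impulse.
y = volterra_predict([1.0, 0.0, 0.0, 0.0])
```

Dropping h2 recovers the linear finite impulse-response models the abstract compares against; the network form generalizes the kernels across spatial bins.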
Radiation exposure modeling and project schedule visualization
Jaquish, W.R.; Enderlin, V.R.
1995-10-01
This paper discusses two applications using IGRIP (Interactive Graphical Robot Instruction Program) to assist environmental remediation efforts at the Department of Energy (DOE) Hanford Site. In the first application, IGRIP is used to calculate the estimated radiation exposure to workers conducting tasks in radiation environments. In the second, IGRIP is used as a configuration management tool to detect interferences between equipment and personnel work areas for multiple projects occurring simultaneously in one area. Both of these applications have the capability to reduce environmental remediation costs by reducing personnel radiation exposure and by providing a method to effectively manage multiple projects in a single facility.
NASA Astrophysics Data System (ADS)
Tao, Jianmin; Rappe, Andrew M.
2016-01-01
Because it lacks the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of vdW corrections. However, a vdW correction based on the leading-order coefficient C6 alone can achieve only limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
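For intuition about how a single effective frequency yields dispersion coefficients, the classic London formula for the leading-order coefficient C6 can be sketched as below. This is a much simpler relative of the model-polarizability approach above (which targets the higher multipole coefficients C8 and C10); inputs are illustrative.

```python
def c6_london(alpha1, omega1, alpha2, omega2):
    """Leading-order vdW coefficient from the London (single effective
    frequency) formula, in atomic units. Inputs are static dipole
    polarizabilities and effective excitation frequencies."""
    return 1.5 * alpha1 * alpha2 * omega1 * omega2 / (omega1 + omega2)
```

For identical species this reduces to C6 = (3/4) * alpha^2 * omega, showing how a single characteristic frequency and a static polarizability already fix the leading dispersion term.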
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information (higher-order time correlations than MSMs capture) that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
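As a minimal illustration of the quantity being estimated, the MFPT of a discrete Markov model can be computed by fixed-point iteration of its defining relation. The transition matrix below is a toy, not derived from MD data, and this shows only the MSM side of the comparison, not the non-Markovian estimators.

```python
# Toy 3-state transition matrix (rows: current state, columns: next state,
# per lag time). Illustrative values, not estimated from trajectories.
P3 = [[0.90, 0.09, 0.01],
      [0.10, 0.80, 0.10],
      [0.02, 0.08, 0.90]]

def mfpt_to(P, target, max_iters=100000, tol=1e-12):
    """MFPT (in units of the lag time) from every state to `target`, by
    fixed-point iteration of m_i = 1 + sum_{j != target} P_ij * m_j."""
    n = len(P)
    m = [0.0] * n
    for _ in range(max_iters):
        new = [0.0 if i == target else
               1.0 + sum(P[i][j] * m[j] for j in range(n) if j != target)
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, m)) < tol:
            return new
        m = new
    return m

mfpts = mfpt_to(P3, target=2)
```

For a two-state chain with escape probability p per lag time, the MFPT is the geometric mean waiting time 1/p, which makes a convenient sanity check.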
Implementing Badhwar-O'Neill Galactic Cosmic Ray Model for the Analysis of Space Radiation Exposure
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; O'Neill, Patrick M.; Slaba, Tony C.
2014-01-01
For the analysis of radiation risks to astronauts and for planning exploratory space missions, an accurate energy spectrum of galactic cosmic radiation (GCR) is necessary. Characterization of the ionizing radiation environment is challenging because the interplanetary plasma and radiation fields are modulated by solar disturbances, and the radiation doses received by astronauts in interplanetary space are likewise influenced. A model of the Badhwar-O'Neill 2011 (BO11) GCR environment, which is represented by the GCR deceleration potential theta, has been derived by utilizing all of the GCR measurements from balloons, satellites, and the newer NASA Advanced Composition Explorer (ACE). In the BO11 model, the solar modulation level is derived from the mean international sunspot numbers with a time delay, calibrated against actual flight instrument measurements to produce a better fit to GCR flux data during solar minima. GCR fluxes provided by the BO11 model were compared with various spacecraft measurements at 1 AU, and further comparisons were made with tissue equivalent proportional counter measurements in low Earth orbits using the high-charge and energy transport (HZETRN) code and various GCR models. For comparison of the absorbed dose and dose equivalent calculations with measurements by the Radiation Assessment Detector (RAD) at Gale crater on Mars, the intensities and energies of GCR entering the heliosphere were calculated using the BO11 model, which accounts for time-dependent attenuation of the local interstellar spectrum of each element. The BO11 model, which emphasizes the last 24 solar minima, showed relatively good agreement with the RAD data for the first 200 sols, but performed less well near the solar maximum of solar cycle 24 due to subtleties in the changing heliospheric conditions. By performing an error analysis of the BO11 model and optimizing to reduce the overall uncertainty, the resultant BO13 model
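The BO models are considerably more sophisticated, but the underlying idea of solar modulation can be illustrated with the classic force-field approximation, in which the near-Earth spectrum is the local interstellar spectrum (LIS) shifted by a modulation potential. The sketch below assumes protons and a hypothetical power-law LIS; it is not the BO11/BO13 formulation.

```python
def modulated_flux(E, phi, lis, m0=938.272):
    """Force-field approximation for solar modulation (protons): the flux at
    kinetic energy E (MeV) near Earth equals the LIS evaluated at E + phi,
    rescaled by the ratio of squared relativistic momenta. `phi` is the
    modulation potential in MV (numerically MeV for protons)."""
    E_lis = E + phi
    scale = (E * (E + 2.0 * m0)) / (E_lis * (E_lis + 2.0 * m0))
    return lis(E_lis) * scale

# Hypothetical power-law LIS, for illustration only.
lis = lambda E: 1.0e4 * E ** -2.7
```

A larger phi (active Sun) suppresses the low-energy flux more strongly, which is the behavior the sunspot-number-driven modulation level in the abstract is tracking.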
NASA Astrophysics Data System (ADS)
Weber, Tobias K. D.; Riedel, Thomas
2015-04-01
Free water is a prerequisite to chemical reactions and biological activity in Earth's upper crust essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space of soils and sediments is in its liquid state. This is a result of the adhesive forces in small pores and at charged mineral surfaces, which reduce the water activity. This water has a lower tendency to react chemically in solution, as the additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from liquid water was found to increase systematically with increasing clay content. The significance of this is that grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods of determining water content, traditionally based on bulk density or gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay content. Our findings have consequences for biogeochemical processes in soils; e.g., nutrients may be contained in water that is not free, which could enhance their preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain-size dependency of organic matter preservation in sedimentary environments and call for a revised view of the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.
Canopy radiation transmission for an energy balance snowmelt model
NASA Astrophysics Data System (ADS)
Mahat, Vinod; Tarboton, David G.
2012-01-01
To better estimate the radiation energy within and beneath the forest canopy for energy balance snowmelt models, a two-stream radiation transfer model that explicitly accounts for canopy scattering, absorption and reflection was developed. Upward and downward radiation streams, represented by two differential equations using a single path assumption, were solved analytically to approximate the radiation transmitted through or reflected by the canopy with multiple scattering. This approximation results in an exponential decrease of radiation intensity with canopy depth, similar to Beer's law for a deep canopy. The solution for a finite canopy is obtained by applying recursive superposition of this two-stream single path deep canopy solution. This solution enhances the capability for modeling energy balance processes of the snowpack in forested environments, which is important when quantifying the sensitivity of hydrologic response to input changes using physically based modeling. The radiation model was included in a distributed energy balance snowmelt model and results were compared with observations made in three different vegetation classes (open, coniferous forest, deciduous forest) at a forest study area in the Rocky Mountains in Utah, USA. The model was able to capture the sensitivity of beneath-canopy net radiation and snowmelt to vegetation class consistent with observations, and achieved satisfactory predictions of snowmelt from forested areas from parsimonious, practically available information. The model is simple enough to be applied in a spatially distributed way, while still representing variability in canopy properties relatively rigorously and explicitly in the simulation of snowmelt over a watershed.
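The recursive-superposition idea can be illustrated with the generic "adding" relations for stacking two scattering layers. This is a standard construction for symmetric layers, not the authors' exact analytic solution, and the reflectance/transmittance values are illustrative.

```python
def add_layers(r1, t1, r2, t2):
    """Stack an upper layer (r1, t1) on a lower layer (r2, t2), summing the
    geometric series of radiation bouncing between them. Layers are assumed
    symmetric (same reflectance seen from above and below)."""
    denom = 1.0 - r1 * r2           # sum of the inter-layer bounce series
    R = r1 + t1 * t1 * r2 / denom   # reflectance of the combined slab
    T = t1 * t2 / denom             # transmittance of the combined slab
    return R, T

# Build a "deep canopy" by recursively stacking identical absorbing layers:
# transmittance decays roughly exponentially with depth, as in Beer's law.
R_stack, T_stack = 0.2, 0.5
for _ in range(4):
    R_stack, T_stack = add_layers(0.2, 0.5, R_stack, T_stack)
```

With non-absorbing layers (r + t = 1) the combined slab conserves energy exactly (R + T = 1), which is a convenient check on the multiple-scattering bookkeeping.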
A model of radiation-induced myelopoiesis in space.
Esposito, R D; Durante, M; Gialanella, G; Grossi, G; Pugliese, M; Scampoli, P; Jones, T D
2001-01-01
Astronauts' radiation exposure limits are based on experimental and epidemiological data obtained on Earth. It is assumed that radiation sensitivity remains the same in the extraterrestrial space. However, human radiosensitivity is dependent upon the response of the hematopoietic tissue to the radiation insult. It is well known that the immune system is affected by microgravity. We have developed a mathematical model of radiation-induced myelopoiesis which includes the effect of microgravity on bone marrow kinetics. It is assumed that cellular radiosensitivity is not modified by the space environment, but repopulation rates of stem and stromal cells are reduced as a function of time in weightlessness. A realistic model of the space radiation environment, including the HZE component, is used to simulate the radiation damage. A dedicated computer code was written and applied to solar particle events and to the mission to Mars. The results suggest that altered myelopoiesis and lymphopoiesis in microgravity might increase human radiosensitivity in space. PMID:11771552
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-12-01
Development of a physically accurate and computationally efficient photon migration model for turbid media is crucial for optical computed tomography such as diffuse optical tomography. To this end, this paper constructs a space-time coupling model of the radiative transport equation (RTE) with the photon diffusion equation. In the coupling model, the space-time regime of photon migration is divided into the ballistic and diffusive regimes, with interaction between the two regimes, to improve the accuracy of the results and the efficiency of computation. The coupling model provides an accurate description of photon migration in various turbid media over a wide range of optical properties, and reduces the computational load compared with a full calculation of the RTE.
Treatment of cloud radiative effects in general circulation models
Wang, W.C.; Dudek, M.P.; Liang, X.Z.; Ding, M.
1996-04-01
We participate in the Atmospheric Radiation Measurement (ARM) program with two objectives: (1) to improve the general circulation model (GCM) cloud/radiation treatment with a focus on cloud vertical overlap and layer cloud optical properties, and (2) to study the effects of cloud/radiation-climate interaction on GCM climate simulations. This report summarizes the project progress since the Fourth ARM Science Team meeting, February 28-March 4, 1994, in Charleston, South Carolina.
A comparison of two canopy radiative models in land surface processes
NASA Astrophysics Data System (ADS)
Dai, Qiudan; Sun, Shufen
2007-05-01
This paper compares the predictions by two radiative transfer models—the two-stream approximation model and the generalized layered model (developed by the authors) in land surface processes—for different canopies under direct or diffuse radiation conditions. The comparison indicates that there are significant differences between the two models, especially in the near infrared (NIR) band. Results of canopy reflectance from the two-stream model are larger than those from the generalized model. However, results of canopy absorptance from the two-stream model are larger in some cases and smaller in others than those from the generalized model. In the visible (VIS) band, canopy reflectance is smaller and canopy absorptance larger from the two-stream model compared to the generalized model when the Leaf Area Index (LAI) is low and soil reflectance is high. In cases of canopies with vertical leaf angles, the differences in reflectance and absorptance in the VIS and NIR bands between the two models are especially large. Two commonly occurring cases, with which the two-stream model cannot deal accurately, are also investigated. One is a canopy with different adaxial and abaxial leaf optical properties; the other is incident sky diffuse radiation with a non-uniform distribution. Comparison within the generalized model for the same canopy under both uniform and non-uniform incident diffuse radiation shows smaller differences in general. However, there is a measurable difference between these radiation inputs for a canopy with high leaf angle. This indicates that applying the two-stream model to a canopy with different adaxial and abaxial leaf optical properties will introduce non-negligible errors.
NASA Astrophysics Data System (ADS)
Malik, Arif Sultan
This work presents improved technology for attaining high-quality rolled metal strip. The new technology is based on an innovative method to model both the static and dynamic characteristics of rolling mill deflection, and it applies equally to cluster-type and non-cluster-type rolling mill configurations. By effectively combining numerical Finite Element Analysis (FEA) with analytical solid mechanics, the devised approach delivers a rapid, accurate, flexible, high-fidelity model useful for optimizing many important rolling parameters. The associated static deflection model enables computation of the thickness profile and corresponding flatness of the rolled strip. Accurate methods of predicting the strip thickness profile and strip flatness are important in rolling mill design, rolling schedule set-up, control of mill flatness actuators, and optimization of ground roll profiles. The corresponding dynamic deflection model enables solution of the standard eigenvalue problem to determine natural frequencies and modes of vibration. The presented method for solving the roll-stack deflection problem offers several important advantages over traditional methods. In particular, it includes continuity of elastic foundations, non-iterative solution when using pre-determined elastic foundation moduli, continuous third-order displacement fields, simple stress-field determination, the ability to calculate dynamic characteristics, and a comparatively faster solution time. Consistent with the most advanced existing methods, the presented method accommodates loading conditions that represent roll crowning, roll bending, roll shifting, and roll crossing mechanisms. Validation of the static model is provided by comparing results and solution time with large-scale, commercial finite element simulations. In addition to examples with the common 4-high vertical stand rolling mill, application of the presented method to the most complex of rolling mill configurations is demonstrated.
Guidelines for effective radiation transport for cable SGEMP modeling
Drumm, Clifton Russell; Fan, Wesley C.; Turner, C. David
2014-07-01
This report describes experiences gained in performing radiation transport computations with the SCEPTRE radiation transport code for System Generated ElectroMagnetic Pulse (SGEMP) applications. SCEPTRE is a complex code requiring a fairly sophisticated user to run the code effectively, so this report provides guidance for analysts interested in performing these types of calculations. One challenge in modeling coupled photon/electron transport for SGEMP is to provide a spatial mesh that is sufficiently resolved to accurately model surface charge emission and charge deposition near material interfaces. The method that has been most commonly used to date to compute cable SGEMP typically requires a sub-micron mesh size near material interfaces, which may be difficult for meshing software to provide for complex geometries. We present here an alternative method for computing cable SGEMP that appears to substantially relax this requirement. The report also investigates the effect of refining the energy mesh and increasing the order of the angular approximation to provide some guidance on determining reasonable parameters for the energy/angular approximation needed for x-ray environments. Conclusions for gamma-ray environments may be quite different and will be treated in a subsequent report. In the course of the energy-mesh refinement studies, a bug in the cross-section generation software was discovered that may cause underprediction of the result by as much as an order of magnitude for the test problem studied here, when the electron energy group widths are much smaller than those for the photons. Results will be presented and compared using cross sections generated before and after the fix. We also describe adjoint modeling, which provides sensitivity of the total charge drive to the source energy and angle of incidence, which is quite useful for comparing the effect of changing the source environment and for determining most stressing angle of incidence and
Trapped Radiation Model Uncertainties: Model-Data and Model-Model Comparisons
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
The standard AP8 and AE8 models for predicting trapped proton and electron environments have been compared with several sets of flight data to evaluate model uncertainties. Model comparisons are made with flux and dose measurements made on various U.S. low-Earth orbit satellites (APEX, CRRES, DMSP, LDEF, NOAA) and Space Shuttle flights, on Russian satellites (Photon-8, Cosmos-1887, Cosmos-2044), and on the Russian Mir Space Station. This report gives the details of the model-data comparisons; summary results, in terms of empirical model uncertainty factors that can be applied for spacecraft design applications, are given in a companion report. The results of model-model comparisons are also presented, from standard AP8 and AE8 model predictions compared with the European Space Agency versions of AP8 and AE8 and with Russian trapped radiation models.
Survey of current situation in radiation belt modeling
NASA Technical Reports Server (NTRS)
Fung, Shing F.
2004-01-01
The study of Earth's radiation belts is one of the oldest subjects in space physics. Despite the tremendous progress made in the last four decades, we still lack a complete understanding of the radiation belts in terms of their configurations, dynamics, and detailed physical accounts of their sources and sinks. The static nature of early empirical trapped radiation models, for example the NASA AP-8 and AE-8 models, renders those models inappropriate for predicting short-term radiation belt behaviors associated with geomagnetic storms and substorms. Due to incomplete data coverage, these models are also inaccurate at low altitudes (e.g., <1000 km) where many robotic and human space flights occur. The availability of radiation data from modern space missions and advancement in physical modeling and data management techniques have now allowed the development of new empirical and physical radiation belt models. In this paper, we will review the status of modern radiation belt modeling. Published by Elsevier Ltd on behalf of COSPAR.
NASA Astrophysics Data System (ADS)
Randrianalisoa, Jaona; Baillis, Dominique
2014-10-01
The current paper presents an overview of traditional and recent models for predicting the thermal properties of solid foams with open and closed cells. Their effective thermal conductivity has been determined analytically by empirical or thermal-resistance-network-based models. Radiative properties, crucial to obtain the radiative conductivity, have been determined analytically by models based on the independent scattering theory. Powerful models combine three-dimensional (3D) foam modelling (by X-ray tomography, Voronoi tessellation method, etc.) and numerical solution of transport equations. The finite-element method (FEM) has been used to compute the thermal conductivity due to the solid network, for which the computation cost remains reasonable. The effective conductivity can be determined from FEM results combined with the conductivity due to the fluid, which can be accurately evaluated by a simple formula for air or a weakly conducting gas. The finite volume method seems well suited for solving the thermal problem in both the solid and fluid phases. The ray-tracing Monte Carlo method constitutes a powerful model for radiative properties. Finally, 3D image analysis of foams is useful to determine the topological information needed to feed analytical thermal and radiative property models.
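As a minimal reference point for such models, the parallel and series (Wiener) bounds bracket the effective conductivity of any two-phase foam; the resistance-network and FEM models in the overview are needed precisely because real foams fall between these limits. Conductivity values below are illustrative.

```python
def k_parallel(phi_gas, k_gas, k_solid):
    """Upper (parallel/Wiener) bound on effective conductivity;
    phi_gas is the gas volume fraction (porosity)."""
    return phi_gas * k_gas + (1.0 - phi_gas) * k_solid

def k_series(phi_gas, k_gas, k_solid):
    """Lower (series/Wiener) bound: thermal resistances in series."""
    return 1.0 / (phi_gas / k_gas + (1.0 - phi_gas) / k_solid)
```

For a highly porous foam the two bounds are far apart, which is why the cell topology (strut network versus closed walls) matters so much to the measured value.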
NASA Space Radiation Program Integrative Risk Model Toolkit
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Hu, Shaowen; Plante, Ianik; Ponomarev, Artem L.; Sandridge, Chris
2015-01-01
NASA Space Radiation Program Element scientists have been actively involved in development of an integrative risk model toolkit that includes models for acute radiation risk and organ dose projection (ARRBOD), NASA space radiation cancer risk projection (NSCR), hemocyte dose estimation (HemoDose), the GCR event-based risk model code (GERMcode), relativistic ion tracks (RITRACKS), NASA radiation track image (NASARTI), and the On-Line Tool for the Assessment of Radiation in Space (OLTARIS). This session will introduce the components of the risk toolkit with opportunities for hands-on demonstrations. Brief descriptions of each tool are: ARRBOD for organ dose projection and acute radiation risk calculation from exposure to a solar particle event; NSCR for projection of cancer risk from exposure to space radiation; HemoDose for retrospective dose estimation using multi-type blood cell counts; GERMcode for basic physical and biophysical properties of an ion beam, and biophysical and radiobiological properties of beam transport to the target in the NASA Space Radiation Laboratory beam line; RITRACKS for simulation of heavy ion and delta-ray track structure, radiation chemistry, DNA structure, and DNA damage at the molecular scale; NASARTI for modeling the effects of space radiation on human cells and tissue by incorporating a physical model of tracks, cell nucleus, and DNA damage foci with image segmentation for automated counting; and OLTARIS, an integrated tool set utilizing HZETRN (High Charge and Energy Transport) intended to help scientists and engineers study the effects of space radiation on shielding materials, electronics, and biological systems.
NASA Astrophysics Data System (ADS)
Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.
2015-12-01
Few studies have concentrated on predicting the bead geometry for laser brazing with crimping butt joints. This paper addresses accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, a GRNN model was developed and trained to reduce the prediction error that may be influenced by the sample size. The prediction accuracy was then demonstrated by comparison with other studies and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were assessed in terms of average relative error (ARE), mean square error (MSE), and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than the 14.28% and 0.0832 obtained with BPNN. The prediction accuracy was thus improved by at least a factor of two, and the stability was also much greater.
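A GRNN is essentially kernel-weighted regression over stored training samples, which can be sketched in a few lines. The one-dimensional data and spread parameter below are illustrative, not the paper's weld-bead dataset (a real GRNN would use Euclidean distances over the process-parameter vectors).

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN prediction: Gaussian-kernel weighted average of stored targets.
    `sigma` is the spread (smoothing) parameter."""
    w = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

# Illustrative 1-D training data (samples of y = x^2).
train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [0.0, 1.0, 4.0, 9.0]
```

Because the only free parameter is sigma, training reduces to a one-dimensional search, which is why GRNNs are attractive for the small sample sizes the abstract mentions.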
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870
Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications
NASA Technical Reports Server (NTRS)
Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.
2016-01-01
Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which gives these simulations a substantial computational burden. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that move energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes. See attachment for continuation.
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and to enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
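The core POD mapping idea (build coarse and fine POD bases from paired training snapshots, then learn a map from coarse to fine coefficients) can be sketched on a synthetic 1-D field. This is a stand-in for the concept only and makes no claims about the actual PODMM/PAWS+CLM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired training snapshots of a synthetic 1-D field u(x, t) = sin(pi*x*t)
# sampled on a coarse (10-cell) and a fine (40-cell) grid.
t = rng.uniform(0.0, 1.0, 30)
xc = np.linspace(0.0, 1.0, 10)[:, None]
xf = np.linspace(0.0, 1.0, 40)[:, None]
coarse = np.sin(np.pi * xc * t)            # (10, 30) snapshot matrix
fine = np.sin(np.pi * xf * t)              # (40, 30) snapshot matrix

# Truncated POD bases from the training snapshots.
Uc = np.linalg.svd(coarse, full_matrices=False)[0][:, :6]
Uf = np.linalg.svd(fine, full_matrices=False)[0][:, :6]

# Least-squares linear map from coarse to fine POD coefficients.
Ac, Af = Uc.T @ coarse, Uf.T @ fine
M = Af @ np.linalg.pinv(Ac)

# Downscale a coarse solution the ROM has never seen.
t_new = 0.37
approx = Uf @ (M @ (Uc.T @ np.sin(np.pi * xc[:, 0] * t_new)))
exact = np.sin(np.pi * xf[:, 0] * t_new)
print(float(np.max(np.abs(approx - exact))))
```

Once trained, each downscaling costs only a few small matrix-vector products, which is where the O(1000) speedup over rerunning the fine-resolution model comes from.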
Radiative seesaw in left-right symmetric model
Gu Peihong; Sarkar, Utpal
2008-10-01
In conventional left-right symmetric models with the usual bidoublet and triplet Higgs scalars, the neutrino masses can have radiative origins. These radiative contributions could dominate over the tree-level seesaw and could explain the observed neutrino masses.
NASA Astrophysics Data System (ADS)
Stewart, Kristin J.
In this work we developed two new devices that aim to improve the accuracy of relative and reference dosimetry for radiation therapy: a guarded liquid ionization chamber (GLIC) and an electron sealed water (ESW) calorimeter. With the GLIC we aimed to develop a perturbation-free energy-independent detector with high spatial resolution for relative dosimetry. We achieved sufficient stability for short-term measurements using the GLIC-03, which has a sensitive volume of approximately 2 mm^3. We evaluated ion recombination in pulsed photon beams using a theoretical model and also determined a new empirical method to correct for relative differences in general recombination which could be used in cases where the theoretical model was not applicable. The energy dependence of the GLIC-03 was 1.1% between 6 and 18 MV photon beams. Measurements in the build-up region of an 18 MV beam indicated that this detector produces minimal perturbation to the radiation field and confirmed the validity of the empirical recombination correction. The ESW calorimeter was designed to directly measure absorbed dose to water in clinical electron beams. We obtained reproducible measurements for 6 to 20 MeV beams. We determined corrections for perturbations to the radiation field caused by the glass calorimeter vessel and for conductive heat transfer due to the dose gradient and non-water materials. The overall uncertainty on the ESW calorimeter dose was 0.5% for the 9 to 20 MeV beams and 1.0% for 6 MeV, showing for the first time that the development of a water-calorimeter-based standard for electron beams over a wide range of energies is feasible. Comparison between measurements with the ESW calorimeter and the NRC photon beam standard calorimeter in a 6 MeV beam revealed a discrepancy of 0.7 +/- 0.2%, which is still under investigation. Absorbed-dose beam quality conversion factors in electron beams were measured using the ESW calorimeter for the Exradin A12 and PTW Roos ionization chambers.
Freezable Radiator Model Correlation Improvements and Fluids Study
NASA Technical Reports Server (NTRS)
Lillibridge, Sean; Navarro, Moses
2011-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator effectively scales its total heat rejection as a function of the thermal environment and the flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements result from the spacecraft's surroundings and from different thermal rejection requirements during different mission phases. However, freezing and thawing (recovering) a radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. To attempt to improve this, tests were conducted in 2009 to determine whether the behavior of a simple stagnating radiator could be predicted or emulated in a Thermal Desktop(TM) numerical model. A 50-50 mixture of DowFrost HD and water was used as the working fluid. Efforts to scale this model to a full-scale design, as well as efforts to characterize various thermal control fluids at low temperatures, are also discussed. Previous testing and modeling efforts showed that freezable radiators could be operated as intended and be fairly, if not perfectly, predicted by numerical models. This paper documents the improvements made to the numerical model and the outcomes of fluid studies that were determined necessary before further radiator testing.
Wu, Wei; Liu, Yangang
2010-05-12
A new one-dimensional radiative equilibrium model is built to analytically evaluate the vertical profile of the Earth's atmospheric radiation entropy flux under the assumption that atmospheric longwave radiation emission behaves as a greybody and shortwave radiation as a diluted blackbody. Results show that both the atmospheric shortwave and net longwave radiation entropy fluxes increase with altitude, and the latter is about one order of magnitude greater than the former. The vertical profile of the atmospheric net radiation entropy flux follows approximately that of the atmospheric net longwave radiation entropy flux. A sensitivity study further reveals that a 'darker' atmosphere with a larger overall atmospheric longwave optical depth exhibits a smaller net radiation entropy flux at all altitudes, suggesting an intrinsic connection between the atmospheric net radiation entropy flux and the overall atmospheric longwave optical depth. These results indicate that the overall strength of the atmospheric irreversible processes at all altitudes, as determined by the corresponding atmospheric net entropy flux, is closely related to the amount of greenhouse gases in the atmosphere.
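The order-of-magnitude gap between shortwave and longwave entropy fluxes follows from the blackbody relation J = (4/3) F / T: the same energy flux carries far more entropy when emitted at terrestrial rather than solar temperature. A back-of-the-envelope check in the pure blackbody limit (ignoring the paper's greybody and dilution factors; the flux and temperatures are round illustrative values):

```python
def bb_entropy_flux(energy_flux, temperature):
    """Entropy flux carried by blackbody radiation of a given energy flux F
    emitted at temperature T: J = (4/3) * F / T  (W m^-2 K^-1)."""
    return 4.0 * energy_flux / (3.0 * temperature)

F = 240.0                            # W m^-2, round value for absorbed solar flux
J_sw = bb_entropy_flux(F, 5772.0)    # shortwave: emitted at the solar temperature
J_lw = bb_entropy_flux(F, 255.0)     # longwave: Earth's effective temperature
print(J_sw, J_lw, J_lw / J_sw)       # longwave flux is roughly 20x the shortwave
```

The ratio is simply T_sun / T_earth, about 23, consistent with the abstract's "one order of magnitude" statement.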
Kosakovsky Pond, Sergei L; Posada, David; Stawiski, Eric; Chappey, Colombe; Poon, Art F Y; Hughes, Gareth; Fearnhill, Esther; Gravenor, Mike B; Leigh Brown, Andrew J; Frost, Simon D W
2009-11-01
Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance of accurate
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be
Future directions for LDEF ionizing radiation modeling and assessments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
1992-01-01
Data from the ionizing radiation dosimetry aboard LDEF provide a unique opportunity for assessing the accuracy of current space radiation models and for identifying needed improvements for future mission applications. Details are given of the LDEF data available for radiation model evaluations. The status of model comparisons with LDEF data is given, along with future directions of planned modeling efforts and data comparison assessments. The modeling methodology being used to help ensure that the LDEF ionizing radiation results can address ionizing radiation issues for future missions is outlined. In general, the LDEF radiation modeling has emphasized quick-look predictions using simplified methods to make comparisons with measurements of absorbed dose and induced radioactivity. Modeling and LDEF data comparisons related to linear energy transfer (LET) spectra are important for several reasons, which are outlined. The planned modeling and LDEF data comparisons for LET spectra are discussed, including components of the LET spectra due to different environment sources, contributions from different production mechanisms, and spectra in plastic detectors versus silicon.
Solar radiation pressure model for the relay satellite of SELENE
NASA Astrophysics Data System (ADS)
Kubo-Oka, T.; Sengoku, A.
1999-09-01
A new radiation pressure model of the relay satellite of SELENE has been developed. The shape of the satellite was assumed to be a combination of a regular octagonal pillar and a column. Radiation forces acting on each part of the spacecraft were calculated independently and summed vectorially to obtain the mean acceleration of the satellite center of mass. We incorporated this new radiation pressure model into the orbit analysis software GEODYN-II and simulated the tracking data reduction process of the relay satellite. We compared two models: one is the new radiation pressure model developed in this work, and the other is a so-called "cannonball model" in which the shape of the satellite is assumed to be a sphere. By analyzing simulated two-way Doppler tracking data, we found that the new radiation pressure model reduces the observation residuals compared to the cannonball model. Moreover, errors in the estimated lunar gravity field coefficients can be decreased significantly by use of the new radiation pressure model.
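For scale, the baseline cannonball model reduces to a single acceleration magnitude a = Cr * (A/m) * Phi / c directed along the Sun-satellite line. A quick sketch; the area, mass, and Cr below are hypothetical placeholders, not SELENE's actual values:

```python
C = 299792458.0      # speed of light, m/s
FLUX = 1361.0        # solar constant at 1 AU, W/m^2

def cannonball_srp(area_m2, mass_kg, cr=1.3):
    """Cannonball solar radiation pressure: acceleration magnitude
    a = Cr * (A/m) * Phi / c, directed away from the Sun."""
    return cr * (area_m2 / mass_kg) * FLUX / C

# Hypothetical relay-class spacecraft: 2 m^2 cross-section, 45 kg.
print(cannonball_srp(2.0, 45.0))     # a few 1e-7 m/s^2
```

The shape-resolved model in the record replaces the single Cr*A term with a vector sum of per-surface forces, which is what changes the recovered gravity field coefficients.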
Principal component-based radiative transfer model for hyperspectral sensors: theoretical concept.
Liu, Xu; Smith, William L; Zhou, Daniel K; Larar, Allen
2006-01-01
Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), the Cross-Track Infrared Sounder (CrIS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and the Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, superfast radiative transfer models are needed. We present a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the principal component-based radiative transfer model (PCRTM) predicts the principal component (PC) scores of these quantities. This prediction ability leads to significant savings in computational time. The parameterization of the PCRTM model is derived from the properties of PC scores and instrument line-shape functions. The PCRTM is accurate and flexible. Because of its high speed and compressed spectral information format, it has great potential for superfast one-dimensional physical retrieval and for numerical weather prediction large volume radiance data assimilation applications. The model has been successfully developed for the NAST-I and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
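The PCRTM idea of predicting a few PC scores instead of hundreds of channel radiances can be illustrated with a toy forward model; the stand-in "line-by-line" function and its three state parameters below are invented for the sketch and do not represent the actual PCRTM parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "line-by-line" model: 3 state parameters -> a 500-channel spectrum.
nu = np.linspace(0.0, 1.0, 500)
def forward(s):
    return s[0] * np.sin(3 * nu) + s[1] * np.cos(7 * nu) + s[2] * nu**2

states = rng.uniform(0.5, 1.5, (200, 3))          # training atmospheric states
spectra = np.array([forward(s) for s in states])  # (200, 500) training spectra

# PCA of the training spectra: keep a handful of principal components.
mean = spectra.mean(axis=0)
pcs = np.linalg.svd(spectra - mean, full_matrices=False)[2][:4]  # (4, 500)

# Fast model: linear regression from state parameters to the 4 PC scores.
scores = (spectra - mean) @ pcs.T
X = np.hstack([states, np.ones((200, 1))])        # add intercept column
coef = np.linalg.lstsq(X, scores, rcond=None)[0]

# Predict a new spectrum via 4 scores instead of 500 channel radiances.
s_new = np.array([1.1, 0.8, 1.3])
pred = mean + (np.append(s_new, 1.0) @ coef) @ pcs
print(float(np.max(np.abs(pred - forward(s_new)))))
```

The savings come from the score prediction step: the expensive per-channel physics is folded into the fixed PC basis at training time.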
NASA Technical Reports Server (NTRS)
Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen
2005-01-01
Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared Sounder (CrIS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and the Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, superfast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the Principal Component (PC) scores of these quantities. This prediction ability leads to significant savings in computational time. The parameterization of the PCRTM model is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for superfast one-dimensional physical retrievals and for Numerical Weather Prediction (NWP) large-volume radiance data assimilation applications. The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
Myers, D. R.
2003-03-01
Measurement and modeling of broadband and spectral terrestrial solar radiation are important for the evaluation and deployment of solar renewable energy systems. We discuss recent developments in the calibration of broadband solar radiometric instrumentation and in improving broadband solar radiation measurement accuracy. An improved diffuse-sky reference and software for radiometer calibration and characterization and for outdoor pyranometer calibrations are outlined. Several broadband solar radiation model approaches, including some developed at the National Renewable Energy Laboratory, for estimating direct beam, total hemispherical, and diffuse sky radiation are briefly reviewed. The latter include the Bird clear-sky model for global, direct beam, and diffuse terrestrial solar radiation; the Direct Insolation Simulation Code (DISC) for estimating direct beam radiation from global measurements; and the METSTAT (Meteorological and Statistical) and Climatological Solar Radiation (CSR) models that estimate solar radiation from meteorological data. We conclude that currently the best model uncertainties are representative of the uncertainty in measured data.
Models of Jovian decametric radiation. [astronomical models of decametric waves
NASA Technical Reports Server (NTRS)
Smith, R. A.
1975-01-01
A critical review is presented of theoretical models of Jovian decametric radiation, with particular emphasis on the Io-modulated emission. The problem is divided into three broad aspects: (1) the mechanism coupling Io's orbital motion to the inner exosphere, (2) the consequent instability mechanism by which electromagnetic waves are amplified, and (3) the subsequent propagation of the waves in the source region and the Jovian plasmasphere. At present there exists no comprehensive theory that treats all of these aspects quantitatively within a single framework. Acceleration of particles by plasma sheaths near Io is proposed as an explanation for the coupling mechanism, while most of the properties of the emission may be explained in the context of cyclotron instability of a highly anisotropic distribution of streaming particles.
Radiation transport modeling using extended quadrature method of moments
Vikas, V.; Hauck, C.D.; Wang, Z.J.; Fox, R.O.
2013-08-01
The radiative transfer equation describes the propagation of radiation through a material medium. While it provides a highly accurate description of the radiation field, the large phase space on which the equation is defined makes it numerically challenging. As a consequence, significant effort has gone into the development of accurate approximation methods. Recently, an extended quadrature method of moments (EQMOM) has been developed to solve univariate population balance equations, which also have a large phase space and thus face similar computational challenges. The distinct advantage of the EQMOM approach over other moment methods is that it generates moment equations that are consistent with a positive phase space density and has a moment inversion algorithm that is fast and efficient. The goal of the current paper is to present the EQMOM method in the context of radiation transport, to discuss advantages and disadvantages, and to demonstrate its performance on a set of standard one-dimensional benchmark problems that encompass optically thin, thick, and transition regimes. Special attention is given in the implementation to the issue of realizability—that is, consistency with a positive phase space density. Numerical results in one dimension are promising and lay the foundation for extending the same framework to multiple dimensions.
Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W; Reeher, M; Galmarini, D
2014-06-01
Purpose: An accurate leaf fluence model can be used in applications such as patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences because of leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open, and triple-leaf-open patterns is chosen to represent linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed into) a linear combination of the LPB, either pulse by pulse or weighted by dwelling time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within +/- 10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
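The trick of turning pairwise nonlinearities into linear terms can be seen in a toy version: if the response adds a correction whenever two adjacent leaves are open together, the single- and double-leaf-open basis responses reproduce any pattern exactly by inclusion-exclusion. Everything below (leaf count, response coefficients) is invented for illustration, not the record's detector calibration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                  # number of leaves (toy value)
a = rng.uniform(0.8, 1.2, n)           # per-leaf open contribution
c = rng.uniform(-0.1, 0.0, n - 1)      # tongue-and-groove-like pair correction

def response(p):
    """Toy nonlinear response: leaf terms plus a correction whenever two
    adjacent leaves are open together."""
    pair = sum(c[i] for i in range(n - 1) if p[i] and p[i + 1])
    return float(a @ p + pair)

# "Measured" responses to the basis patterns.
single = [response(np.eye(n)[i]) for i in range(n)]
double = [response(np.eye(n)[i] + np.eye(n)[i + 1]) for i in range(n - 1)]

def predict(p):
    """Linear prediction from basis responses: singles plus
    inclusion-exclusion corrections for open adjacent pairs."""
    out = sum(single[i] for i in range(n) if p[i])
    for i in range(n - 1):
        if p[i] and p[i + 1]:
            out += double[i] - single[i] - single[i + 1]
    return out

p = np.array([0, 1, 1, 1, 0, 1, 0, 1], float)
print(predict(p), response(p))         # agree exactly for this pairwise model
```

Triple-leaf-open patterns extend the same construction to three-leaf effects, which is how the LPB captures "major nonlinear effects" while keeping the system linear in the basis responses.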
NASA Astrophysics Data System (ADS)
McCullagh, Nuala; Jeong, Donghui; Szalay, Alexander S.
2016-01-01
Accurate modelling of non-linearities in the galaxy bispectrum, the Fourier transform of the galaxy three-point correlation function, is essential to fully exploit it as a cosmological probe. In this paper, we present numerical and theoretical challenges in modelling the non-linear bispectrum. First, we test the robustness of the matter bispectrum measured from N-body simulations using different initial conditions generators. We run a suite of N-body simulations using the Zel'dovich approximation and second-order Lagrangian perturbation theory (2LPT) at different starting redshifts, and find that transients from initial decaying modes systematically reduce the non-linearities in the matter bispectrum. To achieve 1 per cent accuracy in the matter bispectrum at z ≤ 3 on scales k < 1 h Mpc-1, 2LPT initial conditions generator with initial redshift z ≳ 100 is required. We then compare various analytical formulas and empirical fitting functions for modelling the non-linear matter bispectrum, and discuss the regimes for which each is valid. We find that the next-to-leading order (one-loop) correction from standard perturbation theory matches with N-body results on quasi-linear scales for z ≥ 1. We find that the fitting formula in Gil-Marín et al. accurately predicts the matter bispectrum for z ≤ 1 on a wide range of scales, but at higher redshifts, the fitting formula given in Scoccimarro & Couchman gives the best agreement with measurements from N-body simulations.
Preliminary results of a three-dimensional radiative transfer model
O'Hirok, W.
1995-09-01
Clouds act as the primary modulator of the Earth's radiation at the top of the atmosphere, within the atmospheric column, and at the Earth's surface. They interact with both shortwave and longwave radiation, but it is primarily in the shortwave case where most of the uncertainty lies, because of the difficulties in treating scattered solar radiation. To understand cloud-radiative interactions, radiative transfer models portray clouds as plane-parallel homogeneous entities to ease the computational physics. Unfortunately, clouds are far from homogeneous, and large differences between measurement and theory point to a stronger need to understand and model cloud macrophysical properties. In an attempt to better comprehend the role of cloud morphology on the 3-dimensional radiation field, a Monte Carlo model has been developed. This model can simulate broadband shortwave radiation fluxes while incorporating all of the major atmospheric constituents. The model is used to investigate the cloud absorption anomaly, where cloud absorption measurements exceed theoretical estimates, and to examine the efficacy of ERBE measurements and cloud field experiments.
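The Monte Carlo approach itself is compact: track photons through random free paths, scatter or absorb at each interaction, and tally where they end up. A minimal plane-parallel sketch with isotropic scattering (a generic stand-in for, not a description of, the 3-D model in this record):

```python
import math
import random

def monte_carlo_layer(tau, omega, n_photons=100_000, seed=42):
    """Photon Monte Carlo through a plane-parallel layer of optical depth
    tau with single-scattering albedo omega and isotropic scattering.
    Returns (reflected, transmitted, absorbed) fractions."""
    rng = random.Random(seed)
    refl = trans = absd = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                        # optical-depth position, direction cosine
        while True:
            z += mu * -math.log(rng.random())   # exponential free path
            if z < 0.0:
                refl += 1
                break
            if z > tau:
                trans += 1
                break
            if rng.random() > omega:            # interaction: absorb...
                absd += 1
                break
            mu = 2.0 * rng.random() - 1.0       # ...or scatter isotropically
    return refl / n_photons, trans / n_photons, absd / n_photons

print(monte_carlo_layer(tau=1.0, omega=0.9))
```

With omega = 0 (pure absorption) the transmitted fraction converges to exp(-tau), a convenient sanity check; a 3-D cloud-field model replaces the 1-D slab geometry with voxelized optical properties but keeps the same photon-tracking loop.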
Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy
2014-07-01
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Function often involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions.
MODELING ACUTE EXPOSURE TO SOLAR RADIATION
One of the major technical challenges in calculating solar flux on the human form has been the complexity of the surface geometry (i.e., the surface normal vis-à-vis the incident radiation). The American Cancer Society reports that over 80% of skin cancers occur on the face, he...
Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu
2015-01-01
A shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations, and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly as the repository's capacity and vertex count grew large. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method took merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
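The underlying L1 problem (represent a new shape as a sparse combination of repository shapes) can be sketched with a generic proximal-gradient (ISTA) solver. The record's homotopy method is a faster solver for the same optimization; the dictionary and "shapes" below are random stand-ins, not liver meshes:

```python
import numpy as np

def sparse_code(D, y, lam=0.01, n_iter=500):
    """Solve min_x 0.5*||D@x - y||^2 + lam*||x||_1 by ISTA
    (proximal gradient descent with soft-thresholding)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink toward 0
    return x

rng = np.random.default_rng(3)
D = rng.normal(size=(60, 20))
D /= np.linalg.norm(D, axis=0)               # 20 unit-norm "repository shapes"
true = np.zeros(20)
true[3], true[11] = 1.0, -0.7                # sparse ground-truth combination
y = D @ true                                 # observed "shape" vector
x = sparse_code(D, y)
print(np.round(x, 3))
```

The sparsity of the recovered coefficients is what lets SSC exclude gross errors: only a few repository shapes participate in the composition.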
A Physical Model of Electron Radiation Belts of Saturn
NASA Astrophysics Data System (ADS)
Lorenzato, L.; Sicard-Piet, A.; Bourdarie, S.
2012-04-01
Radiation belts cause irreversible damage to the materials of on-board instruments. That is why, for two decades, ONERA has been conducting studies of the radiation belts of magnetized planets. First, in the 1990s, the development of a physical model named Salammbô produced a model of the radiation belts of the Earth. Then, over the following years, analysis of the magnetosphere of Jupiter and in-situ data (Pioneer, Voyager, Galileo) allowed a physical model of the radiation belts of Jupiter to be built. Building on the Cassini era and all the information collected, this study adapts the Salammbô jovian radiation belt model to the Saturn environment. Indeed, some physical processes present in the kronian magnetosphere are similar to those present in the magnetosphere of Jupiter (radial diffusion; interaction of energetic electrons with rings, moons, and the atmosphere; synchrotron emission). However, some physical processes have to be added to the kronian model (compared to the jovian model) because of the particularities of the magnetosphere of Saturn: interaction of energetic electrons with neutral particles from Enceladus, and wave-particle interaction. This last physical process has been studied in detail through analysis of CASSINI/RPWS (Radio and Plasma Waves Science) data. The major importance of wave-particle interaction is now well known in the case of the radiation belts of the Earth, but it is important to investigate its role in the case of Saturn. So, the importance of each physical process has been studied, and analysis of Cassini MIMI-LEMMS and CAPS data allowed a model boundary condition (at L = 6) to be built. Finally, the results of this study lead to a kronian electron radiation belt model including radial diffusion, interactions of energetic electrons with rings, moons, and neutral particles, and wave-particle interaction (interactions of electrons with atmosphere particles and synchrotron emission are too weak to be taken into account in this model). Then, to
1994-12-31
We are using a hierarchy of numerical models of cirrus and stratus clouds and radiative transfer to improve the reliability of general circulation models. Our detailed cloud microphysical model includes all of the physical processes believed to control the life cycle of liquid and ice clouds in the troposphere. In our one-dimensional cirrus studies, we find that the ice crystal number and size in cirrus clouds are not very sensitive to the number of condensation nuclei which are present. We have compared our three-dimensional mesoscale simulations of cirrus clouds with radar, lidar, satellite, and other observations of water vapor and cloud fields and find that the model accurately predicts the characteristics of a cirrus cloud system. The model results reproduce several features detected by remote sensing (lidar and radar) measurements, including the appearance of the high cirrus cloud at about 15 UTC and the thickening of the cloud at 20 UTC. We have developed a new parameterization for production of ice crystals based on the detailed one-dimensional cloud model, and are presently testing the parameterization in three-dimensional simulations of the FIRE-II November 26 case study. We have analyzed NWS radiosonde humidity data from FIRE and ARM and found errors, biases, and uncertainties in the conversion of the sensed resistance to humidity.
Radiative Transfer Model for Translucent Slab Ice on Mars
NASA Astrophysics Data System (ADS)
Andrieu, F.; Schmidt, F.; Douté, S.; Schmitt, B.; Brissaud, O.
2016-09-01
We developed a radiative transfer model that simulates, in the VIS/NIR range, the bidirectional reflectance of a contaminated slab layer of ice overlying a granular medium, under geometrical optics conditions, in order to study Martian ices.
Idealized radiation efficiency model for a porous radiant burner
Fu, X.; Viskanta, R.; Gore, J.P.
1999-07-01
A simple, highly idealized radiation efficiency model has been developed for a porous radiant burner with or without a screen to assess the thermal performance of an ideal porous burner that yields the highest radiation efficiency and against which test results and/or more realistic model predictions could be benchmarked. The model is based on thermodynamics principles (first law of thermodynamics) with idealizations made for some of the physical processes. Empirical information, where necessary, is then used to close the model equations. The maximum radiation efficiency at a given firing rate is predicted. The effects of input parameters such as the firing rate, the equivalence ratio, and the effective emittance of the burner on the radiation efficiency of the porous radiant burner are reported.
A space radiation shielding model of the Martian radiation environment experiment (MARIE).
Atwell, W; Saganti, P; Cucinotta, F A; Zeitlin, C J
2004-01-01
The 2001 Mars Odyssey spacecraft was launched towards Mars on April 7, 2001. Onboard the spacecraft is the Martian radiation environment experiment (MARIE), which is designed to measure the background radiation environment due to galactic cosmic rays (GCR) and solar protons in the 20-500 MeV/n energy range. We present an approach for developing a space radiation-shielding model of the spacecraft that includes the MARIE instrument in the current mapping phase orientation. A discussion is presented describing the development and methodology used to construct the shielding model. For a given GCR model environment, using the current MARIE shielding model and the high-energy particle transport codes, dose rate values are compared with MARIE measurements during the early mapping phase in Mars orbit. The results show good agreement between the model calculations and the MARIE measurements as presented for the March 2002 dataset.
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi
2016-10-01
As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ2. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
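The superposition described in this abstract, local Gaussians whose mean and dispersion are themselves drawn from a bivariate Gaussian, can be sketched numerically. The following Monte Carlo marginalization is purely illustrative: the hyperparameters (mean vector, covariance matrix) are invented for demonstration and are not fitted to any survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperparameters of the bivariate Gaussian over (mu, sigma);
# all numbers are illustrative, in arbitrary velocity units.
mean = np.array([-1.0, 3.0])            # [<mu>, <sigma>]
cov = np.array([[0.5, 0.2],
                [0.2, 0.3]])            # covariance of (mu, sigma)

def pairwise_velocity_pdf(v, n_draws=20_000):
    """Superpose local Gaussians N(v; mu, sigma^2) whose parameters are
    drawn from a bivariate Gaussian (Monte Carlo marginalization)."""
    mu, sigma = rng.multivariate_normal(mean, cov, size=n_draws).T
    sigma = np.abs(sigma)               # keep dispersions positive
    v = np.atleast_1d(v)[:, None]
    local = np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return local.mean(axis=1)

v_grid = np.linspace(-15.0, 15.0, 301)
dv = v_grid[1] - v_grid[0]
pdf = pairwise_velocity_pdf(v_grid)
# The superposition stays normalized, but (because mu and sigma are
# correlated) it is generically skewed, unlike any single component.
```

Collapsing `cov` toward zero recovers the single-Gaussian limiting case mentioned in the abstract.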
NASA Astrophysics Data System (ADS)
Movassaghi, Babak; Rasche, Volker; Viergever, Max A.; Niessen, Wiro J.
2004-05-01
For the diagnosis of ischemic heart disease, accurate quantitative analysis of the coronary arteries is important. In coronary angiography, a number of projections are acquired from which 3D models of the coronaries can be reconstructed. A significant limitation of the current 3D modeling procedures is the required user interaction for defining the centerlines of the vessel structures in the 2D projections. Currently, the 3D centerlines of the coronary tree structure are calculated based on the interactively determined centerlines in two projections. For every interactively selected centerline point in a first projection, the corresponding point in a second projection has to be determined interactively by the user. The correspondence is obtained based on the epipolar geometry. In this paper a method is proposed to retrieve all the information required for the modeling procedure from the interactive determination of the 2D centerline points in only one projection. For every determined 2D centerline point, the corresponding 3D centerline point is calculated by analysis of the 1D gray-value functions of the corresponding epipolar lines in space for all available 2D projections. This information is then used to build a 3D representation of the coronary arteries using coronary modeling techniques. The approach is illustrated on the analysis of calibrated phantom and calibrated coronary projection data.
Parameterization of clouds and radiation in climate models
Roeckner, E.
1995-09-01
Clouds are a very important, yet poorly modeled element in the climate system. There are many potential cloud feedbacks, including those related to cloud cover, height, water content, phase change, and droplet concentration and size distribution. As a prerequisite to studying the cloud feedback issue, this research reports on the simulation and validation of cloud radiative forcing under present climate conditions using the ECHAM general circulation model and ERBE top-of-atmosphere radiative fluxes.
Radiative transfer model validations during the First ISLSCP Field Experiment
NASA Technical Reports Server (NTRS)
Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine
1990-01-01
Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with results obtained during the First ISLSCP Field Experiment on concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.
Analogical optical modeling of the asymmetric lateral coherence of betatron radiation.
Paroli, B; Chiadroni, E; Ferrario, M; Potenza, M A C
2015-11-16
By exploiting analogical optical modeling of the radiation emitted by ultrarelativistic electrons undergoing betatron oscillations, we demonstrate peculiar properties of the spatial coherence through an interferometric method reminiscent of the classical Young's double slit experiment. The expected effects due to the curved trajectory and the broadband emission are accurately reproduced. We show that by properly scaling the fundamental parameters for the wavelength, analogical optical modeling of betatron emission can be realized in many cases of broad interest. Applications to study the feasibility of future experiments and to the characterization of beam diagnostics tools are described.
Improving the Salammbo code modelling and using it to better predict radiation belts dynamics
NASA Astrophysics Data System (ADS)
Maget, Vincent; Sicard-Piet, Angelica; Grimald, Sandrine Rochel; Boscher, Daniel
2016-07-01
In the framework of the FP7-SPACESTORM project, one objective is to improve the reliability of the model-based predictions of radiation belt dynamics first developed during the FP7-SPACECAST project. To this end we have analyzed and improved the way simulations using the ONERA Salammbô code are performed, in particular by: better controlling the driving parameters of the simulation; improving the initialization of the simulation in order to be more accurate at most energies for L values between 4 and 6; and improving the physics of the model. For the first point, a statistical analysis of the accuracy of the Kp index has been conducted. For the second, we based our method on a long-duration simulation in order to extract typical radiation belt states depending on solar wind stress and geomagnetic activity. For the last, we first improved the modelling of the different processes acting in the radiation belts separately, and then analyzed the global improvements obtained when simulating them together. We discuss all these points, as well as the balance that must be struck between modeled processes to globally improve radiation belt modelling.
Solar Radiation Estimated Through Mesoscale Atmospheric Modeling over Northeast Brazil
NASA Astrophysics Data System (ADS)
de Menezes Neto, Otacilio Leandro; Costa, Alexandre Araújo; Ramalho, Fernando Pinto; de Maria, Paulo Henrique Santiago
2009-03-01
The use of renewable energy sources, like solar, wind and biomass, has increased rapidly in recent years, with solar radiation a particularly abundant energy source over Northeast Brazil. A proper quantitative knowledge of the incoming solar radiation is of great importance for energy planning in Brazil, serving as a basis for developing future projects of photovoltaic power plants and solar energy exploitation. This work presents a methodology for mapping the incoming solar radiation at ground level for Northeast Brazil, using a mesoscale atmospheric model (Regional Atmospheric Modeling System—RAMS), calibrated and validated using data from the network of automatic surface stations of the State Foundation for Meteorology and Water Resources of Ceará (Fundação Cearense de Meteorologia e Recursos Hídricos, FUNCEME). The results showed that the model exhibits systematic errors, overestimating surface radiation, but that, after the proper statistical corrections, using a relationship between the model-predicted cloud fraction, the ground-level observed solar radiation and the incoming solar radiation estimated at the top of the atmosphere, a correlation of 0.92 with a confidence interval of 13.5 W/m2 is found for monthly data. Using this methodology, we found an estimate for annual average incoming solar radiation over Ceará of 215 W/m2 (maximum in October: 260 W/m2).
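The statistical correction described here (relating biased model output, predicted cloud fraction, and top-of-atmosphere flux to ground observations) can be illustrated with a toy least-squares fit. Everything below is synthetic: the data generator, coefficients, and noise levels are invented for demonstration and do not reproduce the RAMS/FUNCEME calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly data: a "model" that overestimates surface radiation
# with a cloud-fraction-dependent bias (all numbers illustrative).
n = 120
toa = rng.uniform(350.0, 450.0, n)        # W/m^2, top-of-atmosphere flux
cloud = rng.uniform(0.1, 0.9, n)          # model-predicted cloud fraction
obs = toa * (0.75 - 0.35 * cloud) + rng.normal(0.0, 5.0, n)   # "observed" flux
model = obs + 30.0 + 20.0 * cloud + rng.normal(0.0, 5.0, n)   # biased model output

# Statistical correction: ordinary least squares regressing the observations
# on model output, cloud fraction, and TOA flux (with an intercept).
X = np.column_stack([np.ones(n), model, cloud, toa])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
corrected = X @ coef

bias_before = np.mean(model - obs)        # large positive overestimate
bias_after = np.mean(corrected - obs)     # ~0 by construction of OLS
r = np.corrcoef(corrected, obs)[0, 1]     # correlation after correction
```

The point of the sketch is only the workflow: a systematic, regime-dependent bias can be removed by regressing against the predictors the abstract names.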
Chandra X-Ray Observatory's Radiation Environment and the AP-8/AE-8 Model
NASA Technical Reports Server (NTRS)
Virani, S. N.; Plucinsky, P. P.; Butt, Y. M.; Mueller-Mellin, R.
2000-01-01
The Chandra X-ray Observatory (CXO) was launched on July 23, 1999 and reached its final orbit on August 7, 1999. The CXO is in a highly elliptical orbit, approximately 140,000 km x 10,000 km, and has a period of roughly 63.5 hours (approx. 2.6 days). It transits the Earth's Van Allen belts once per orbit, during which no science observations can be performed due to the high radiation environment. The Chandra X-ray Observatory Center (CXC) currently uses the National Space Science Data Center's "near Earth" AP-8/AE-8 radiation belt model to predict the start and end times of passage through the radiation belts. However, our scheduling software only uses a simple dipole model of the Earth's magnetic field. The resulting B, L magnetic coordinates do not always give sufficiently accurate predictions of the start and end times of transit of the Van Allen belts. We show this by comparing to the data from Chandra's on-board radiation monitor, the EPHIN (Electron, Proton, Helium INstrument) particle detector. We present evidence that demonstrates this mis-timing of the radiation belt transits, as well as data that demonstrate the significant variability from one radiation belt transit to the next as experienced by the CXO. We present an explanation for why the dipole implementation of the AP-8/AE-8 model gives inaccurate results. We are also investigating use of the Magnetospheric Specification and Forecast Model (MSM), a model that also accounts for radiation belt variability and geometry.
Chandra X-ray Observatory's radiation environment and the AP-8/AE-8 model
NASA Astrophysics Data System (ADS)
Virani, Shanil N.; Mueller-Mellin, Reinhold; Plucinsky, Paul P.; Butt, Yousaf M.
2000-07-01
The Chandra X-ray Observatory (CXO) was launched on July 23, 1999 and reached its final orbit on August 7, 1999. The CXO is in a highly elliptical orbit, approximately 140,000 km X 10,000 km, and has a period of approximately 63.5 hours (approximately 2.65 days). It transits the Earth's Van Allen belts once per orbit, during which no science observations can be performed due to the high radiation environment. The Chandra X-ray Observatory Center currently uses the National Space Science Data Center's `near Earth' AP-8/AE-8 radiation belt model to predict the start and end times of passage through the radiation belts. However, our scheduling software uses only a simple dipole model of the Earth's magnetic field. The resulting B, L magnetic coordinates do not always give sufficiently accurate predictions of the start and end times of transit of the Van Allen belts. We show this by comparing to the data from Chandra's on-board radiation monitor, the EPHIN (Electron, Proton, Helium INstrument) particle detector. We present evidence that demonstrates this mis-timing of the outer electron radiation belt, as well as data that demonstrate the significant variability from one radiation belt transit to the next as experienced by the CXO. We also present an explanation for why the dipole implementation of the AP-8/AE-8 model is not ideally suited for the CXO. Lastly, we provide a brief discussion of our on-going efforts to identify a model that accounts for radiation belt variability and geometry, and that can be used for observation scheduling purposes.
Radiation exposure modeling for apartment living spaces with multiple radioactive sources.
Hwang, J S; Chan, C C; Wang, J D; Chang, W P
1998-03-01
Since late 1992, over 100 building complexes in Taiwan, including both public and private schools, and 1,000 apartments have been identified as emitting elevated levels of gamma-radiation. These high levels of gamma-radiation have been traced to construction steel contaminated with 60Co. Accurate reconstruction of the radiation exposure dosage among residents is complicated by the discovery of multiple radioactive sources within the living spaces and by the lack of comprehensive information about resident life-style and occupancy patterns within these contaminated spaces. The objective of this study was to evaluate the sensitivity of the current dose reconstruction approach employed in an epidemiological study of the health effects on these occupants. We apply a statistical method of local smoothing in dose rate estimation and examine factors that are closely associated with radiation exposure from multiple radioactive sources in the apartment. Two examples are used: a simulated measurement in a hypothetical room with three radioactive sources, and a real apartment in Ming-Shan Villa, one of the contaminated buildings. The simulated and estimated means are compared at 5-10 selected measurement points: by the local smoothing approach, with the furniture-adjusted space, and with the occupancy time-weighted mean. We found that the local smoothing approach came much closer to theoretical values. The local smoothing approach may serve as a refined method of radiation dose distribution modeling in exposure estimation. Before environmental exposure assessment, "highly occupied zones" (HOZs) in the contaminated spaces must be identified. Estimates of the time spent in these HOZs are essential to obtain accurate dosage values. These results will facilitate a more accurate dose reconstruction in the assessment of residential exposure in apartments with elevated levels of radioactivity.
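The abstract does not specify which local smoothing estimator was used; a common choice for this kind of spatial interpolation is a Gaussian-kernel (Nadaraya-Watson) smoother, sketched below on an invented room with three point sources. The source positions, strengths, bandwidth, and inverse-square dose model are all hypothetical.

```python
import numpy as np

# Hypothetical dose-rate field in a 5 m x 5 m room with three point sources
# (softened inverse-square falloff, arbitrary units); positions illustrative.
sources = np.array([[1.0, 1.0], [4.0, 2.5], [2.0, 4.0]])
strengths = np.array([50.0, 30.0, 80.0])

def true_dose(p):
    d2 = np.sum((sources - p) ** 2, axis=1) + 0.25   # soften the singularity
    return float(np.sum(strengths / d2))

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 5.0, size=(60, 2))            # survey measurement points
meas = np.array([true_dose(p) for p in pts]) * rng.normal(1.0, 0.05, 60)

def local_smooth(q, bandwidth=0.8):
    """Nadaraya-Watson (Gaussian-kernel) local smoothing of the measured
    dose rates, evaluated at query point q."""
    w = np.exp(-np.sum((pts - q) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    return float(np.sum(w * meas) / np.sum(w))

q = np.array([2.5, 2.5])
est = local_smooth(q)   # smoothed estimate at the room center
```

Smaller bandwidths track the local field near each source more closely at the cost of noisier estimates, which is the trade-off any such smoother faces.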
Diffusion models for Jupiter's radiation belt
NASA Technical Reports Server (NTRS)
Jacques, S. A.; Davis, L., Jr.
1972-01-01
Solutions are given for the diffusion of trapped particles in a planetary magnetic field in which the first and second adiabatic invariants are preserved but the third is not, using as boundary conditions a fixed density at the outer boundary (the magnetopause) and a zero density at an inner boundary (the planetary surface). Losses to an orbiting natural satellite are included and an approximate evaluation is made of the effects of the synchrotron radiation on the energy of relativistic electrons. Choosing parameters appropriate to Jupiter, the electrons required to produce the observed synchrotron radiation are explained. If a speculative mechanism in which the diffusion is driven by ionospheric wind is the true explanation of the electrons producing the synchrotron emission it can be concluded that Jupiter's inner magnetosphere is occupied by an energetic proton flux that would be a serious hazard to spacecraft.
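The boundary-value structure of such a diffusion model (fixed density at the magnetopause, zero density at the planetary surface, losses at a satellite's L-shell) can be sketched as a steady-state finite-difference problem. The diffusion coefficient, loss lifetime, and grid below are invented for illustration and are not the paper's Jovian parameters.

```python
import numpy as np

# Steady-state radial diffusion, d/dL [ D(L) df/dL ] = f/tau, with a fixed
# density at the outer boundary (magnetopause), zero density at the inner
# boundary (planetary surface), and an absorbing moon near L = 6.
n = 200
L = np.linspace(1.0, 10.0, n)
h = L[1] - L[0]
D = 1e-3 * L ** 3                        # assumed diffusion coefficient D(L)
tau = np.full(n, np.inf)                 # loss lifetime (no loss by default)
tau[np.abs(L - 6.0) < 0.2] = 0.05        # strong losses at the moon's L-shell

# Tridiagonal system for D f'' + D' f' - f/tau = 0 (central differences).
A = np.zeros((n, n))
b = np.zeros(n)
Dp = np.gradient(D, h)
for i in range(1, n - 1):
    A[i, i - 1] = D[i] / h**2 - Dp[i] / (2.0 * h)
    A[i, i]     = -2.0 * D[i] / h**2 - 1.0 / tau[i]
    A[i, i + 1] = D[i] / h**2 + Dp[i] / (2.0 * h)
A[0, 0] = 1.0;  b[0] = 0.0               # f = 0 at the planetary surface
A[-1, -1] = 1.0; b[-1] = 1.0             # fixed f at the magnetopause
f = np.linalg.solve(A, b)
# The density rises inward from the outer boundary and is depressed in the
# absorbing band around the moon's L-shell.
```

The same scaffolding extends to the time-dependent problem, with the loss term playing the role of the orbiting-satellite sink the abstract describes.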
Towards an accurate model of the redshift-space clustering of haloes in the quasi-linear regime
NASA Astrophysics Data System (ADS)
Reid, Beth A.; White, Martin
2011-11-01
Observations of redshift-space distortions in spectroscopic galaxy surveys offer an attractive method for measuring the build-up of cosmological structure, which depends both on the expansion rate of the Universe and on our theory of gravity. The statistical precision with which redshift-space distortions can now be measured demands better control of our theoretical systematic errors. While many recent studies focus on understanding dark matter clustering in redshift space, galaxies occupy special places in the universe: dark matter haloes. In our detailed study of halo clustering and velocity statistics in 67.5 h-3 Gpc3 of N-body simulations, we uncover a complex dependence of redshift-space clustering on halo bias. We identify two distinct corrections which affect the halo redshift-space correlation function on quasi-linear scales (˜30-80 h-1 Mpc): the non-linear mapping between real-space and redshift-space positions, and the non-linear suppression of power in the velocity divergence field. We model the first non-perturbatively using the scale-dependent Gaussian streaming model, which we show is accurate at the <0.5 (2) per cent level in transforming real-space clustering and velocity statistics into redshift space on scales s > 10 (s > 25) h-1 Mpc for the monopole (quadrupole) halo correlation functions. The dominant correction to the Kaiser limit in this model scales like b3. We use standard perturbation theory to predict the real-space pairwise halo velocity statistics. Our fully analytic model is accurate at the 2 per cent level only on scales s > 40 h-1 Mpc for the range of halo masses we studied (with b= 1.4-2.8). We find that recent models of halo redshift-space clustering that neglect the corrections from the bispectrum and higher order terms from the non-linear real-space to redshift-space mapping will not have the accuracy required for current and future observational analyses. Finally, we note that our simulation results confirm the essential but non
Radiation transport phenomena and modeling - part A: Codes
Lorence, L.J.
1997-06-01
The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions), and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.
Ionizing shocks in argon. Part I: Collisional-radiative model and steady-state structure
Kapper, M. G.; Cambier, J.-L.
2011-06-01
A detailed collisional-radiative model is developed and coupled with a single-fluid, two-temperature convection model for the transport of shock-heated argon. The model is used in a systematic approach to examine the effects of the collision cross sections on the shock structure, including the relaxation layer and subsequent radiative-cooling regime. We present a comparison with previous experimental results obtained at the University of Toronto's Institute of Aerospace Studies and the Australian National University, which serve as benchmarks to the model. It is shown here that ionization proceeds via the ladder-climbing mechanism, in which the upper levels play a dominant role as compared to the metastable states. Taking this into account, the present model is able to accurately reproduce the metastable populations in the relaxation zone measured in previous experiments, which is not possible with a two-step model. Our numerical results for the radiative-cooling region are in close agreement with experiments and have been obtained without having to consider radiative transport. In particular, it is found that spontaneous emission involving the upper levels together with Bremsstrahlung emission accounts for nearly all radiative losses; all other significant radiative processes, resulting in transitions into the ground state, are mostly self-absorbed and have a lesser impact. The effects of electron heat conduction are also considered and shown to have a large impact on the electron-priming region immediately behind the shock front; however, the overall effect on the induction length, i.e., the distance between the shock front and the electron avalanche, is small.
Improved Modeling of Open Waveguide Aperture Radiators for use in Conformal Antenna Arrays
NASA Astrophysics Data System (ADS)
Nelson, Gregory James
Open waveguide apertures have been used as radiating elements in conformal arrays. Individual radiating element model patterns are used in constructing overall array models. The existing models for these aperture radiating elements may not accurately predict the array pattern for TEM waves which are not on boresight for each radiating element. In particular, surrounding structures can affect the far field patterns of these apertures, which ultimately affects the overall array pattern. New models of open waveguide apertures are developed here with the goal of accounting for the surrounding structure effects on the aperture far field patterns such that the new models make accurate pattern predictions. These aperture patterns (both E plane and H plane) are measured in an anechoic chamber, and the manner in which they deviate from existing model patterns is studied. Using these measurements as a basis, existing models for both E and H planes are updated with new factors and terms which allow the prediction of far field open waveguide aperture patterns with improved accuracy. These new and improved individual radiator models are then used to predict overall conformal array patterns. Arrays of open waveguide apertures are constructed and measured in a similar fashion to the individual aperture measurements. These measured array patterns are compared with the newly modeled array patterns to verify the improved accuracy of the new models as compared with the performance of existing models in making array far field pattern predictions. The array pattern lobe characteristics are then studied for predicting fully circularly conformal arrays of varying radii. The lobe metrics that are tracked are angular location and magnitude as the radii of the conformal arrays are varied. A constructed, measured array that is close to conforming to a circular surface is compared with a fully circularly conformal modeled array pattern prediction, with the predicted lobe angular locations and
Parametric plate-bridge dynamic filter model of violin radiativity.
Bissinger, George
2012-07-01
A hybrid, deterministic-statistical, parametric "dynamic filter" model of the violin's radiativity profile [characterized by an averaged-over-sphere, mean-square radiativity ⟨R²(ω)⟩] is developed based on the premise that acoustic radiation depends on (1) how strongly it vibrates [characterized by the averaged-over-corpus, mean-square mobility ⟨Y²(ω)⟩] and (2) how effectively these vibrations are turned into sound, characterized by the radiation efficiency, which is proportional to ⟨R²(ω)⟩/⟨Y²(ω)⟩. Two plate mode frequencies were used to compute 1st corpus bending mode frequencies using empirical trend lines; these corpus bending modes in turn drive cavity volume flows to excite the two lowest cavity modes A0 and A1. All widely-separated, strongly-radiating corpus and cavity modes in the low frequency deterministic region are then parameterized in a dual-Helmholtz resonator model. Mid-high frequency statistical regions are parameterized with the aid of a distributed-excitation statistical mobility function (no bridge) to help extract bridge filter effects associated with (a) bridge rocking mode frequency changes and (b) bridge-corpus interactions from 14-violin-average, excited-via-bridge ⟨Y²(ω)⟩ and ⟨R²(ω)⟩. Deterministic-statistical regions are rejoined at ~630 Hz in a mobility-radiativity "trough" where all violin quality classes had a common radiativity. Simulations indicate that typical plate tuning has a significantly weaker effect on radiativity profile trends than bridge tuning.
Uses and Abuses of Models in Radiation Risk Management
Strom, Daniel J.
1998-12-10
This paper is a high-level overview of managing risks to workers, public, and the environment. It discusses the difference between a model and a hypothesis. The need for models in risk assessment is justified, and then it is shown that radiation risk models that are useable in risk management are highly simplistic. The weight of evidence is considered for and against the linear non-threshold (LNT) model for carcinogenesis and heritable ill-health that is currently the basis for radiation risk management. Finally, uses and misuses of this model are considered. It is concluded that the LNT model continues to be suitable for use as the basis for radiation protection.
Improved Solar-Radiation-Pressure Models for GPS Satellites
NASA Technical Reports Server (NTRS)
Bar-Sever, Yoaz; Kuang, Da
2006-01-01
A report describes a series of computational models conceived as an improvement over prior models for determining effects of solar-radiation pressure on orbits of Global Positioning System (GPS) satellites. These models are based on fitting coefficients of Fourier functions of Sun-spacecraft- Earth angles to observed spacecraft orbital motions.
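The core idea, fitting Fourier coefficients in the Sun-spacecraft-Earth angle to observed orbital behavior, can be illustrated with a toy least-squares fit. The angle sampling, the "true" acceleration law, the Fourier order, and the noise level below are all invented for demonstration; they are not the JPL model's actual coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative fit of low-order Fourier coefficients in the
# Sun-spacecraft-Earth angle eps to a synthetic observed acceleration.
eps = rng.uniform(0.0, np.pi, 500)
true_accel = 1e-7 * (1.0 + 0.3 * np.cos(eps) + 0.05 * np.cos(3.0 * eps))  # m/s^2
obs = true_accel + rng.normal(0.0, 2e-9, eps.size)                        # noisy "data"

# Design matrix of cosine terms up to order 4; the truncation order is an
# assumption of this sketch, not a statement about the GPS models.
X = np.column_stack([np.cos(k * eps) for k in range(5)])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
fit = X @ coef
rms = np.sqrt(np.mean((fit - obs) ** 2))   # post-fit residual, ~noise floor
```

The recovered `coef` entries estimate the underlying Fourier amplitudes, and the residual RMS dropping to the noise level is the usual check that the chosen truncation order suffices.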
NASA Astrophysics Data System (ADS)
Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart
2013-09-01
The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
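The grid-based shortest-path idea underlying such wavefront-expansion solvers (Dijkstra's algorithm on a graph whose edge weights are travel times) can be sketched on a small 2-D Cartesian grid. This is the generic first-arrival scheme only, not the paper's multistage spherical-coordinate extension; the grid, stencil, and slowness field are illustrative.

```python
import heapq
import numpy as np

# First-arrival traveltimes on a regular grid: nodes are grid points, and each
# edge costs Euclidean distance times the mean slowness of its two endpoints.
nx = ny = 21
h = 1.0
slowness = np.ones((ny, nx))            # homogeneous medium, s = 1 (illustrative)

def first_arrivals(src):
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    pq = [(0.0, src)]
    # 8-connected stencil; denser stencils reduce the angular discretization error.
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    while pq:                            # Dijkstra wavefront expansion
        tu, (i, j) = heapq.heappop(pq)
        if tu > t[i, j]:
            continue                     # stale heap entry
        for di, dj in nbrs:
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx:
                d = h * (di * di + dj * dj) ** 0.5
                tv = tu + d * 0.5 * (slowness[i, j] + slowness[a, b])
                if tv < t[a, b]:
                    t[a, b] = tv
                    heapq.heappush(pq, (tv, (a, b)))
    return t

t = first_arrivals((0, 0))
# Axis-aligned and diagonal paths are represented exactly by this stencil,
# so t[0, 20] = 20 and t[20, 20] = 20*sqrt(2) in the homogeneous case.
```

As the abstract notes, a plain first-arrival tracker like this cannot find later minimax-time reflections; that is exactly what the multistage stationary-path extension addresses.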
Cisonni, Julien; Lucey, Anthony D; King, Andrew J C; Islam, Syed Mohammed Shamsul; Lewis, Richard; Goonewardene, Mithran S
2015-11-01
Repetitive brief episodes of soft-tissue collapse within the upper airway during sleep characterize obstructive sleep apnea (OSA), an extremely common and disabling disorder. Failure to maintain the patency of the upper airway is caused by the combination of sleep-related loss of compensatory dilator muscle activity and aerodynamic forces promoting closure. The prediction of soft-tissue movement in patient-specific airway 3D mechanical models is emerging as a useful contribution to clinical understanding and decision making. Such modeling requires reliable estimations of the pharyngeal wall pressure forces. While nasal obstruction has been recognized as a risk factor for OSA, the need to include the nasal cavity in upper-airway models for OSA studies requires consideration, as it is most often omitted because of its complex shape. A quantitative analysis of the flow conditions generated by the nasal cavity and the sinuses during inspiration upstream of the pharynx is presented. Results show that adequate velocity boundary conditions and simple artificial extensions of the flow domain can reproduce the essential effects of the nasal cavity on the pharyngeal flow field. Therefore, the overall complexity and computational cost of accurate flow predictions can be reduced.
Frequency Integrated Radiation Models for Absorbing and Scattering Media
NASA Technical Reports Server (NTRS)
Ripoll, J. F.; Wray, A. A.
2004-01-01
The objective of this work is to contribute to the simplification of existing radiation models used in complex emitting, absorbing, scattering media. The application in view is the computation of flows occurring in such complex media, such as certain stellar interiors or combusting gases. In these problems, especially when scattering is present, the complexity of the radiative transfer leads to a high numerical cost, which is often avoided by simply neglecting it. The complexity lies partly in the strong dependence of the spectral coefficients on frequency. Models are then needed to capture the effects of the radiation when one cannot afford to solve for it directly. In this work, the frequency dependence will be modeled and integrated out in order to retain only the average effects. A frequency-integrated radiative transfer equation (RTE) will be derived.
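One standard way to integrate the frequency dependence out of the RTE is the Planck-mean absorption coefficient, kappa_P = ∫ kappa_nu B_nu dnu / ∫ B_nu dnu. The sketch below is a generic textbook illustration of that averaging, not the model derived in the paper; the integration limits and the example kappa_nu are assumptions.

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(nu, T):
    """Spectral Planck function B_nu(T)."""
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))

def planck_mean(kappa_of_nu, T, nu_lo=1e12, nu_hi=1e15, steps=20000):
    """Planck-mean absorption coefficient by trapezoidal quadrature:
    kappa_P = int kappa_nu B_nu dnu / int B_nu dnu."""
    dnu = (nu_hi - nu_lo) / steps
    num = den = 0.0
    for i in range(steps + 1):
        nu = nu_lo + i * dnu
        w = 0.5 if i in (0, steps) else 1.0
        b = planck(nu, T)
        num += w * kappa_of_nu(nu) * b
        den += w * b
    return num / den
```

A sanity check on the averaging: a frequency-independent coefficient must be returned unchanged.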
Recent Developments in the Radiation Belt Environment Model
NASA Technical Reports Server (NTRS)
Fok, M.-C.; Glocer, A.; Zheng, Q.; Horne, R. B.; Meredith, N. P.; Albert, J. M.; Nagai, T.
2010-01-01
The fluxes of energetic particles in the radiation belts are found to be strongly controlled by the solar wind conditions. In order to understand and predict the radiation particle intensities, we have developed a physics-based Radiation Belt Environment (RBE) model that considers the influences from the solar wind, ring current and plasmasphere. Recently, an improved calculation of wave-particle interactions has been incorporated. In particular, the model now includes cross diffusion in energy and pitch angle. We find that the exclusion of cross diffusion could cause significant overestimation of electron flux enhancement during storm recovery. The RBE model is also connected to MHD fields so that the response of the radiation belts to fast variations in the global magnetosphere can be studied. We are able to reproduce the rapid flux increase during a substorm dipolarization on 4 September 2008. The timing is much shorter than the time scale of wave-associated acceleration.
NASA Astrophysics Data System (ADS)
Rajagopalan, R. A.; Sharan, M.
2015-12-01
Atmospheric aerosol particles play a vital role in the Earth's radiative energy budget. They exert a net cooling influence on climate by directly reflecting solar radiation to space and by modifying the shortwave reflective properties of clouds. Radiation is the main source that regulates the surface energy budget. Surface temperature and planetary boundary layer (PBL) height depend on accurate calculation of both shortwave and longwave radiation. The weakening of the ambient winds is known to influence the structure of the PBL. This study examines the sensitivity of the performance of the Weather Research and Forecasting (WRF) ARW model to the use of different radiation schemes [for longwave radiation: Rapid Radiative Transfer Model (RRTM), Eta Geophysical Fluid Dynamics Laboratory (GFDL), Goddard, New Goddard, NCAR Community Atmosphere Model (CAM 3.0), and Fu-Liou-Gu schemes; for shortwave radiation: Dudhia, Eta Geophysical Fluid Dynamics Laboratory (GFDL), NCAR Community Atmosphere Model (CAM 3.0), and New Goddard schemes]. Two simulations are conducted, one for the summer (14-15 May 2009) and one for the winter (14-15 Dec 2008) season, characterized by strong and weak wind conditions over India, respectively. Comparison of the surface temperatures from the different schemes for different cities (New Delhi, Ahmedabad, Lucknow, Kanpur, Jaipur and Jodhpur) on 14-15 May 2009 and 14-15 Dec 2008 with those observed shows that the simulations with the RRTM, New Goddard, and Fu-Liou-Gu schemes are closer to the observations than the other schemes. The temperatures simulated with all the radiation schemes have correlation coefficients above 0.9, but the root mean square error is relatively smaller in the summer than in the winter season. It is surmised that the Fu-Liou-Gu scheme performs better in almost all the cases. A likely reason is the greater absorption of solar and IR radiative fluxes in the atmosphere and at the surface in the Fu-Liou-Gu radiation scheme than those computed in
Occultation Modeling for Radiation Obstruction Effects on Spacecraft Systems
NASA Technical Reports Server (NTRS)
de Carufel, Guy; Li, Zu Qun; Harvey, Jason; Crues, Edwin Z.; Bielski, Paul
2016-01-01
A geometric occultation model has been developed to determine line-of-sight obstruction of radiation sources expected for different NASA space exploration mission designs. Example applications include fidelity improvements for surface lighting conditions, radiation pressure, and thermal and power subsystem modeling. The model makes use of geometric two-dimensional shape primitives to most effectively model space vehicles. A set of these primitives is used to represent three-dimensional obstructing objects as a two-dimensional outline from the perspective of an observing point of interest. Radiation sources, such as the Sun or a moon's albedo, are represented as collections of points, each of which is assigned a flux value to represent a section of the radiation source. Planetary bodies, such as a Martian moon, are represented as collections of triangular facets distributed in spherical height fields for optimization. These design aspects and the overall model architecture will be presented. Specific uses to be presented include a study of the lighting conditions on Phobos for a possible future surface mission, and computing the incident flux on a spacecraft's solar panels and radiators from direct and reflected solar radiation subject to self-shadowing or shadowing by third bodies.
NASA Astrophysics Data System (ADS)
Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.
2012-03-01
Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
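The L2MN solution discussed above has a compact closed form: among all flux vectors satisfying the mass-balance constraints, it is the one of minimum Euclidean norm, obtainable from the Moore-Penrose pseudoinverse. A minimal sketch on a toy underdetermined system — the matrix and budgets here are hypothetical, not the CCE-LTER food web:

```python
import numpy as np

# Toy mass balance: 2 budget constraints, 4 unknown flows (hypothetical).
# Row 1: inflow - two outflows balance; row 2: a grazing flow feeds export.
A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
b = np.array([0.0, 0.5])

# L2 minimum-norm (L2MN) solution: the unique x with A @ x = b that
# minimizes ||x||_2, via the Moore-Penrose pseudoinverse.
x_l2mn = np.linalg.pinv(A) @ b
```

MCMC methods instead sample the whole feasible solution space, which is why the two can disagree on poorly constrained flows such as export.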
Modeling and parameterization of horizontally inhomogeneous cloud radiative properties
NASA Technical Reports Server (NTRS)
Welch, R. M.
1995-01-01
One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds, under special consideration for FIRE, exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model statistical ensembles of cloud elements with mean cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.
Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.
2008-10-20
One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
Development of Aerosol Models for Radiative Flux Calculations at ARM Sites
Ogren, John A.; Dutton, Ellsworth G.; McComiskey, Allison C.
2006-09-30
The direct radiative forcing (DRF) of aerosols, the change in net radiative flux due to aerosols in non-cloudy conditions, is an essential quantity for understanding the human impact on climate change. Our work has addressed several key issues that determine the accuracy, and identify the uncertainty, with which aerosol DRF can be modeled. These issues include the accuracy of several radiative transfer models when compared to measurements and to each other in a highly controlled closure study using data from the ARM 2003 Aerosol IOP. The primary focus of our work has been to determine an accurate approach to assigning aerosol properties appropriate for modeling over averaged periods of time and space that represent the observed regional variability of these properties. We have also undertaken a comprehensive analysis of the aerosol properties that contribute most to uncertainty in modeling aerosol DRF, and under what conditions they contribute the most uncertainty. Quantification of these issues enables the community to better state accuracies of radiative forcing calculations and to concentrate efforts in areas that will decrease uncertainties in these calculations in the future.
NASA Astrophysics Data System (ADS)
Yogurtcu, Osman N.; Johnson, Margaret E.
2015-08-01
The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate-behavior predicted from Smoluchowski theory. Using a recently developed single-particle reaction-diffusion algorithm that we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive concentration-dependent rate constant for these chemical kinetics simulations which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute
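The regime criterion above, and the logarithmically slow 2-D diffusion-limited rate that motivates it, can be sketched as follows. The rate expression is the standard long-time Smoluchowski form for 2-D, used here as a generic illustration rather than the authors' exact theory; sigma denotes an assumed encounter radius.

```python
import math

def single_rate_ok(ka, D, tol=0.05):
    """Regime check from the abstract: a single rate constant describes
    well-mixed 2-D association only when ka/D is below ~0.05."""
    return ka / D < tol

def smoluchowski_rate_2d(D, sigma, t):
    """Long-time 2-D diffusion-limited rate (textbook asymptotic form):
    k(t) ~ 4*pi*D / ln(4*D*t / sigma**2).

    Unlike in 3-D, this never settles to a constant -- the underlying
    reason a single rate constant can fail in 2-D.
    """
    return 4.0 * math.pi * D / math.log(4.0 * D * t / sigma**2)
```

The slow decay of k(t) with time is what forces an initial-condition-dependent fit once ka/D exceeds the quoted bound.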
AN ANALYTIC RADIATIVE-CONVECTIVE MODEL FOR PLANETARY ATMOSPHERES
Robinson, Tyler D.; Catling, David C.
2012-09-20
We present an analytic one-dimensional radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric-pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to more complex models.
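The gray two-stream radiative-equilibrium piece of such a model admits a closed form. A minimal sketch under the Eddington approximation, with the power-law pressure-optical-depth scaling described above; the functional form is the standard gray result, and the specific constants in the test are illustrative, not the paper's calibrated values.

```python
def gray_profile_T(p, p0, tau0, n, T_eff):
    """Gray radiative-equilibrium temperature at pressure p.

    Optical depth follows a power law in pressure, tau = tau0*(p/p0)**n,
    and the Eddington two-stream solution gives
        sigma*T**4 = (sigma*T_eff**4 / 2) * (1 + 3/2 * tau).
    Convective (adiabatic) portions of the real model are omitted here.
    """
    tau = tau0 * (p / p0) ** n
    return T_eff * (0.5 * (1.0 + 1.5 * tau)) ** 0.25
```

At tau = 2/3 the profile passes through T_eff, the usual gray "photosphere" condition, and temperature increases monotonically with depth.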
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof; Mroczka, Janusz; Wriedt, Thomas; Riefler, Norbert
2014-06-01
In many branches of science experiments are expensive, require specialist equipment or are very time consuming. Studying the light scattering phenomenon in fractal aggregates can serve as an example. Light scattering simulations can overcome these problems and provide theoretical, additional data to complete such studies. For this reason a fractal-like aggregate model as well as fast aggregation codes are needed. Until now, various computer models that try to mimic the physics behind this phenomenon have been developed. However, their implementations are mostly based on a trial-and-error procedure. Such an approach is very time consuming, and the morphological parameters of the resulting aggregates are not exact because the postconditions (e.g. the position error) cannot be very strict. In this paper we present a very fast and accurate implementation of a tunable aggregation algorithm based on the work of Filippov et al. (2000). Randomization is reduced to its necessary minimum (our technique can be more than 1000 times faster than standard algorithms) and the position of a new particle, or a cluster, is calculated with algebraic methods. Therefore, the postconditions can be extremely strict and the resulting errors negligible (e.g. the position error can be recognized as non-existent). In our paper two different methods, based on the particle-cluster (PC) and cluster-cluster (CC) aggregation processes, are presented.
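For contrast with the algebraic placement described above, a naive rejection-based particle-cluster (PC) aggregation step can be sketched as follows. This is the slow baseline such papers improve upon, not the Filippov-style tunable algorithm: each monomer is placed at exact contact distance from a randomly chosen particle and rejected on overlap. The monomer radius and tolerances are illustrative.

```python
import math
import random

def pc_aggregate(n, radius=1.0, rng=None):
    """Grow a cluster of n monomers by particle-cluster aggregation.

    A new monomer is attached at exact contact distance (2*radius) from a
    randomly chosen existing particle, in a random direction on the unit
    sphere, and rejected if it overlaps any other particle. No fractal
    dimension is tuned here -- a tunable scheme would constrain the
    attachment point further to hit a target Df and prefactor.
    """
    rng = rng or random.Random(0)
    cluster = [(0.0, 0.0, 0.0)]
    while len(cluster) < n:
        cx, cy, cz = rng.choice(cluster)
        # Uniform random direction on the unit sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        p = (cx + 2.0 * radius * r * math.cos(phi),
             cy + 2.0 * radius * r * math.sin(phi),
             cz + 2.0 * radius * z)
        # Strict postcondition: no overlap with the existing cluster.
        if all(math.dist(p, q) >= 2.0 * radius - 1e-9 for q in cluster):
            cluster.append(p)
    return cluster
```

The rejection loop is exactly the trial-and-error cost the algebraic method removes: overlap-free placement is computed directly instead of sampled until it succeeds.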
Virgilio, M.; Schroeder, T.; Yamamoto, Y.; Capellini, G.
2015-12-21
Tensile germanium microstrips are candidates as a gain material in Si-based light emitting devices due to the beneficial effect of the strain field on the radiative recombination rate. In this work, we thoroughly investigate their radiative recombination spectra by means of micro-photoluminescence experiments at different temperatures and excitation powers, carried out on samples featuring different tensile strain values. For the sake of comparison, bulk Ge(001) photoluminescence is also discussed. The experimental findings are interpreted in light of numerical modeling based on a multi-valley effective mass approach, taking into account the depth dependence of the photo-induced carrier density and the self-absorption effect. The theoretical modeling allowed us to quantitatively describe the observed increase of the photoluminescence intensity for increasing values of strain, excitation power, and temperature. The temperature dependence of the non-radiative recombination time in this material has been inferred thanks to the model calibration procedure.