Science.gov

Sample records for accurate material models

  1. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  2. Material Models for Accurate Simulation of Sheet Metal Forming and Springback

    NASA Astrophysics Data System (ADS)

    Yoshida, Fusahito

    2010-06-01

    For anisotropic sheet metals, modeling of anisotropy and the Bauschinger effect is discussed in the framework of the Yoshida-Uemori kinematic hardening model combined with anisotropic yield functions. The performance of the models in predicting yield loci and cyclic stress-strain responses for several types of steel and aluminum sheets is demonstrated by comparing the numerical simulation results with the corresponding experimental observations. From several examples of FE simulation of sheet metal forming and springback, it is concluded that modeling both the anisotropy and the Bauschinger effect is essential for accurate numerical simulation.

  3. Towards an accurate and computationally-efficient modelling of Fe(II)-based spin crossover materials.

    PubMed

    Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent

    2015-07-01

    The DFT + U methodology is regarded as one of the most-promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(II)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol⁻¹ in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol⁻¹, whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understanding the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals. PMID:26040609
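
    As an illustration of the parametrization workflow described above (not the authors' actual scripts), the following sketch assumes a hypothetical wrapper dft_u_enthalpy_gap(compound, U) that returns the DFT + U estimate of ΔHelec in kJ mol⁻¹ for a given U in eV; a per-compound U is found by bisection against the experimentally derived reference, and a single averaged U is then scored by its mean absolute error over the benchmark set.

      # Sketch of the U-parametrization workflow (hypothetical inputs).
      def calibrate_u(compound, reference_dH, dft_u_enthalpy_gap,
                      u_lo=1.0, u_hi=4.0, tol=1e-3):
          """Bisect on U until the computed HS-LS enthalpy gap matches the reference."""
          # Assumes the gap varies monotonically with U over [u_lo, u_hi].
          while u_hi - u_lo > tol:
              u_mid = 0.5 * (u_lo + u_hi)
              if dft_u_enthalpy_gap(compound, u_mid) > reference_dH:
                  u_hi = u_mid   # branch direction depends on the sign convention used
              else:
                  u_lo = u_mid
          return 0.5 * (u_lo + u_hi)

      def mean_absolute_error(compounds, references, u_avg, dft_u_enthalpy_gap):
          """MAE of the enthalpy gap over the benchmark set with a single averaged U."""
          errors = [abs(dft_u_enthalpy_gap(c, u_avg) - ref)
                    for c, ref in zip(compounds, references)]
          return sum(errors) / len(errors)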

  4. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  5. Anatomically accurate individual face modeling.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2003-01-01

    This paper presents a new 3D face model of a specific person constructed from the anatomical perspective. By exploiting the laser range data, a 3D facial mesh precisely representing the skin geometry is reconstructed. Based on the geometric facial mesh, we develop a deformable multi-layer skin model. It takes into account the nonlinear stress-strain relationship and dynamically simulates the non-homogenous behavior of the real skin. The face model also incorporates a set of anatomically-motivated facial muscle actuators and underlying skull structure. Lagrangian mechanics governs the facial motion dynamics, dictating the dynamic deformation of facial skin in response to the muscle contraction. PMID:15455936

  6. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  7. Pre-Modeling Ensures Accurate Solid Models

    ERIC Educational Resources Information Center

    Gow, George

    2010-01-01

    Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…

  8. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  9. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
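
    A minimal sketch of the kind of partition performance model described above, for a one-dimensional grid: the predicted time per step is the slowest processor's compute time plus a communication term proportional to the number of partition boundaries it owns. The cost coefficients and the workload array are illustrative assumptions, not values from the paper.

      # Illustrative partition performance model for a 1D grid (assumed coefficients).
      def predict_step_time(cell_work, cuts, t_cell=1.0e-6, t_comm=5.0e-5):
          """cell_work: per-cell workload; cuts: indices splitting the grid into blocks."""
          bounds = [0] + list(cuts) + [len(cell_work)]
          times = []
          for p in range(len(bounds) - 1):
              work = sum(cell_work[bounds[p]:bounds[p + 1]])
              n_neighbors = (p > 0) + (p < len(bounds) - 2)  # interior blocks exchange on 2 sides
              times.append(t_cell * work + t_comm * n_neighbors)
          return max(times)  # the step time is set by the slowest processor

      # Example: 1000 cells with heavier work in the middle, split among 4 processors.
      work = [1.0 + (0.5 if 300 < i < 700 else 0.0) for i in range(1000)]
      print(predict_step_time(work, cuts=[250, 500, 750]))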

  10. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper focuses on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s, and subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper demonstrates the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  11. Accurate determination of cobalt traces in several biological reference materials.

    PubMed

    Dybczyński, R; Danko, B

    1994-01-01

    A newly devised, very accurate ("definitive") method for the determination of trace amounts of cobalt in biological materials was validated by the analysis of several certified reference materials. The method is based on a combination of neutron activation and selective and quantitative postirradiation isolation of radiocobalt from practically all other radionuclides by ion-exchange and extraction chromatography followed by gamma-ray spectrometric measurement. The significance of criteria that should be fulfilled in order to accept a given result as obtained by the "definitive method" is emphasized. In view of the demonstrated very good accuracy of the method, it is suggested that our values for cobalt content in those reference materials in which it was originally not certified (SRM 1570 spinach, SRM 1571 orchard leaves, SRM 1577 bovine liver, and Czechoslovak bovine liver 12-02-01) might be used as provisional certified values. PMID:7710879

  12. Accurate astronomical atmospheric dispersion models in ZEMAX

    NASA Astrophysics Data System (ADS)

    Spanò, P.

    2014-07-01

    ZEMAX provides a standard built-in atmospheric model to simulate atmospheric refraction and dispersion. This model has been compared with others to assess its intrinsic accuracy, which is critical for very demanding applications such as ADCs for AO-assisted extremely large telescopes. A revised simple model, based on updated published air-refractivity data, is proposed using the "Gradient 5" surface of ZEMAX. At large zenith angles (65 deg), discrepancies of up to 100 mas in the differential refraction are expected near the UV atmospheric transmission cutoff. When high-accuracy modeling is required, the latter model should be preferred.
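
    To give a feel for the quantities involved (this is not the revised ZEMAX model itself), the sketch below evaluates the classical plane-parallel approximation R ≈ 206265 (n − 1) tan z for two wavelengths and takes the difference. The refractivity values are placeholders that would come from an up-to-date air-refractivity formula, and the 100 mas figure quoted above refers to differences between models, not to the dispersion itself.

      import math

      def refraction_arcsec(refractivity, zenith_angle_deg):
          """Plane-parallel approximation: R ~ 206265 * (n - 1) * tan(z), in arcsec."""
          return 206265.0 * refractivity * math.tan(math.radians(zenith_angle_deg))

      # Placeholder refractivities (n - 1) for a blue and a red wavelength at some
      # pressure and temperature; real values come from a published dispersion model.
      n_minus_1_blue = 2.80e-4
      n_minus_1_red = 2.77e-4

      z = 65.0  # zenith angle in degrees
      dR_mas = 1000.0 * (refraction_arcsec(n_minus_1_blue, z)
                         - refraction_arcsec(n_minus_1_red, z))
      print(f"differential refraction at z = {z} deg: {dR_mas:.0f} mas")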

  13. Advanced material testing in support of accurate sheet metal forming simulations

    NASA Astrophysics Data System (ADS)

    Kuwabara, Toshihiko

    2013-05-01

    This presentation is a review of experimental methods for accurately measuring and modeling the anisotropic plastic deformation behavior of metal sheets under a variety of loading paths: biaxial compression test, hydraulic bulge test, biaxial tension test using a cruciform specimen, multiaxial tube expansion test using a closed-loop electrohydraulic testing machine for the measurement of forming limit strains and stresses, combined tension-shear test, and in-plane stress reversal test. Observed material responses are compared with predictions using phenomenological plasticity models to highlight the importance of accurate material testing. Special attention is paid to the plastic deformation behavior of sheet metals commonly used in industry, and to verifying the validity of constitutive models based on anisotropic yield functions at a large plastic strain range. The effects of using appropriate material models on the improvement of predictive accuracy for forming defects, such as springback and fracture, are also presented.

  14. Accurate spectral modeling for infrared radiation

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Gupta, S. K.

    1977-01-01

    Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths. These are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The range of validity of other models and correlations are discussed.

  15. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrodinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found-even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  16. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles, which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips and eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones, were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data. PMID:26355331

  17. Reconstructing accurate ToF-SIMS depth profiles for organic materials with differential sputter rates.

    PubMed

    Taylor, Adam J; Graham, Daniel J; Castner, David G

    2015-09-01

    To properly process and reconstruct 3D ToF-SIMS data from systems such as multi-component polymers, drug delivery scaffolds, cells and tissues, it is important to understand the sputtering behavior of the sample. Modern cluster sources enable efficient and stable sputtering of many organic materials. However, not all materials sputter at the same rate, and few studies have explored how different sputter rates may distort reconstructed depth profiles of multicomponent materials. In this study, spun-cast bilayer polymer films of polystyrene and PMMA are used as model systems to optimize methods for the reconstruction of depth profiles in systems exhibiting different sputter rates between components. Transforming the bilayer depth profile from sputter time to depth using a single sputter rate fails to account for sputter rate variations during the profile. This leads to inaccurate apparent layer thicknesses and interfacial positions, as well as the appearance of continued sputtering into the substrate. Applying measured single-component sputter rates to the bilayer films with a step change in sputter rate at the interfaces yields more accurate film thicknesses and interface positions. The transformation can be further improved by applying a linear sputter rate transition across the interface, thus modeling the sputter rate changes seen in polymer blends. This more closely reflects the expected sputtering behavior. This study highlights the need for both accurate evaluation of component sputter rates and the careful conversion of sputter time to depth, if accurate 3D reconstructions of complex multi-component organic and biological samples are to be achieved. The effects of errors in sputter rate determination are also explored. PMID:26185799
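
    A minimal sketch of the time-to-depth conversion strategy described above for a two-layer film, assuming the single-component sputter rates, the interface crossing time and the width of the transition region are known; the linear ramp across the interface mimics the gradual sputter-rate change the authors apply.

      # Convert sputter time to depth for a bilayer with different sputter rates.
      # rate_top, rate_bottom: single-component sputter rates (nm/s, assumed known)
      # t_interface: sputter time at which the interface is reached
      # t_blend: width (in time) of the linear sputter-rate transition at the interface
      def depth_at(t, rate_top, rate_bottom, t_interface, t_blend):
          t1 = t_interface - 0.5 * t_blend
          t2 = t_interface + 0.5 * t_blend
          if t <= t1:                        # still in the pure top layer
              return rate_top * t
          d = rate_top * t1
          if t <= t2:                        # linear ramp between the two rates
              frac = (t - t1) / t_blend
              return d + rate_top * (t - t1) + 0.5 * (rate_bottom - rate_top) * (t - t1) * frac
          d += 0.5 * (rate_top + rate_bottom) * t_blend
          return d + rate_bottom * (t - t2)  # pure bottom layer

      depths = [depth_at(t, rate_top=0.8, rate_bottom=0.5, t_interface=100.0, t_blend=20.0)
                for t in range(0, 201, 10)]
      print(depths)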

  18. Water wave model with accurate dispersion and vertical vorticity

    NASA Astrophysics Data System (ADS)

    Bokhove, Onno

    2010-05-01

    Cotter and Bokhove (Journal of Engineering Mathematics 2010) derived a variational water wave model with accurate dispersion and vertical vorticity. In one limit, it leads to Luke's variational principle for potential flow water waves. In another limit, it leads to the depth-averaged shallow water equations including vertical vorticity. Presently, focus will be put on the Hamiltonian formulation of the variational model and its boundary conditions.

  19. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  20. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  1. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature. PMID:16662241

  2. More-Accurate Model of Flows in Rocket Injectors

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  3. On the importance of having accurate data for astrophysical modelling

    NASA Astrophysics Data System (ADS)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data, and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  4. Accurate method of modeling cluster scaling relations in modified gravity

    NASA Astrophysics Data System (ADS)

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  5. Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations

    SciTech Connect

    Baglietto, Emilio

    2006-07-01

    An improved anisotropic eddy viscosity model has been developed for accurate predictions of the thermal-hydraulic performance of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to reproduce the anisotropic phenomena, combined with an optimized low-Reynolds-number formulation based on Direct Numerical Simulation (DNS) data to produce correct damping of the turbulent viscosity in the near-wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence-driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very low scale secondary motion is responsible for the increased turbulence transport which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare bundle test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally, the applicability of the model for practical bundle calculations is evaluated through its application in the high-Reynolds form on coarse grids, with excellent results. (author)

  6. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    PubMed

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates. PMID:26357094
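
    The specific lattice model and its calibration are beyond the scope of this abstract, but the core idea of driving a time-domain jump process by inverse transform sampling can be illustrated with a generic sketch; the exponential waiting times and power-law jump sizes below are assumptions for illustration, not the paper's calibrated distributions.

      import math
      import random

      def sample_power_law(u, x_min=1.0, alpha=2.5):
          """Inverse CDF of p(x) ~ x**(-alpha) for x >= x_min (inverse transform method)."""
          return x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

      def stress_jump_process(duration_s, rate_hz=200.0, fs=44100):
          """Piecewise-constant stress signal sampled at rate fs (a pure jump process)."""
          n = int(duration_s * fs)
          signal = [0.0] * n
          t, stress, i = 0.0, 0.0, 0
          while True:
              t += -math.log(1.0 - random.random()) / rate_hz  # exponential waiting time
              j = min(n, int(t * fs))
              for k in range(i, j):          # hold the current level until the next event
                  signal[k] = stress
              if j >= n:
                  return signal
              stress -= sample_power_law(random.random())      # sudden stress drop
              i = j

      sig = stress_jump_process(0.5)         # half a second of audio-rate fluctuations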

  7. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  8. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  9. Mouse models of human AML accurately predict chemotherapy response.

    PubMed

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S; Zhao, Zhen; Rappaport, Amy R; Luo, Weijun; McCurrach, Mila E; Yang, Miao-Miao; Dolan, M Eileen; Kogan, Scott C; Downing, James R; Lowe, Scott W

    2009-04-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  10. An accurate model potential for alkali neon systems.

    PubMed

    Zanuttini, D; Jacquet, E; Giglio, E; Douady, J; Gervais, B

    2009-12-01

    We present a detailed investigation of the ground and lowest excited states of M-Ne dimers, for M=Li, Na, and K. We show that the potential energy curves of these Van der Waals dimers can be obtained accurately by considering the alkali neon systems as one-electron systems. Following previous authors, the model describes the evolution of the alkali valence electron in the combined potentials of the alkali and neon cores by means of core polarization pseudopotentials. The key parameter for an accurate model is the M(+)-Ne potential energy curve, which was obtained by means of ab initio CCSD(T) calculation using a large basis set. For each MNe dimer, a systematic comparison with ab initio computation of the potential energy curve for the X, A, and B states shows the remarkable accuracy of the model. The vibrational analysis and the comparison with existing experimental data strengthens this conclusion and allows for a precise assignment of the vibrational levels. PMID:19968334

  11. Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng

    Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω model and SST model, the compressibility correction, pressure dilatation and low-Reynolds-number correction were considered. The influence of these corrections on flow properties is discussed by comparing the results with those obtained without corrections. The emphasis of this paper is on the assessment and evaluation of the turbulence models in the prediction of heat transfer as applied to a range of hypersonic flows, with comparison to experimental data. This will enable establishing factors of safety for the design of thermal protection systems of hypersonic vehicles.

  12. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    NASA Astrophysics Data System (ADS)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
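
    A highly simplified time-sequence sketch of the kind of model described above: hourly DC power is scaled from direct normal irradiance with a few derate factors, passed through an assumed load-dependent inverter efficiency curve, clipped at the inverter rating, and summed into annual energy. All coefficients and the DNI profile are illustrative assumptions, not values from the paper.

      # Time-sequence energy estimate (all coefficients are illustrative assumptions).
      def inverter_efficiency(p_dc, p_rated):
          """Simple load-dependent efficiency curve; a real curve comes from the datasheet."""
          load = max(p_dc / p_rated, 1e-3)
          return max(0.0, 0.96 - 0.01 / load - 0.02 * load ** 2)

      def annual_energy_kwh(dni_series_w_m2, rated_dc_kw, rated_ac_kw,
                            soiling=0.97, wiring=0.985, dni_ref=900.0):
          energy = 0.0
          for dni in dni_series_w_m2:                       # one value per hour
              p_dc = min(rated_dc_kw * (dni / dni_ref) * soiling * wiring, rated_dc_kw)
              p_ac = min(p_dc * inverter_efficiency(p_dc, rated_ac_kw), rated_ac_kw)
              energy += max(p_ac, 0.0)                      # kWh for a 1 h step
          return energy

      # Example with a crude clear-day DNI profile repeated over a year.
      day = [0] * 6 + [300, 600, 800, 900, 950, 950, 900, 800, 600, 300] + [0] * 8
      print(annual_energy_kwh(day * 365, rated_dc_kw=1000.0, rated_ac_kw=900.0))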

  13. Accurate, low-cost 3D-models of gullies

    NASA Astrophysics Data System (ADS)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. The most severe form is gully erosion. Gullies often cut into agricultural farmland and can make a certain area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values. That is why we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper the image for similar scene content. MATLAB subdivides the video into image intervals, and from each interval the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single images. The program inspects the first 20 images, saves the sharpest and moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. Then, MeshLab was used to build a surface from the point cloud using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates, if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
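
    The frame-selection step lends itself to a short sketch. The version below is a NumPy analogue of the MATLAB script described above (sum of absolute image derivatives as a sharpness score, keeping the best frame of every interval); reading the frames from the video file is assumed to be handled elsewhere.

      import numpy as np

      def sharpness(gray_frame):
          """Sum of absolute spatial derivatives; higher means sharper for similar content."""
          gy, gx = np.gradient(gray_frame.astype(float))
          return np.abs(gx).sum() + np.abs(gy).sum()

      def select_sharpest(frames, interval=20):
          """Keep the sharpest frame out of every `interval` consecutive frames."""
          selected = []
          for start in range(0, len(frames), interval):
              block = frames[start:start + interval]
              scores = [sharpness(f) for f in block]
              selected.append(block[int(np.argmax(scores))])
          return selected

      # `frames` would be grayscale arrays extracted from the video; for a 20 min clip
      # at 25 fps (30,000 frames) and interval=20 this keeps 1,500 images.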

  14. Towards Accurate Molecular Modeling of Plastic Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Chantawansri, T. L.; Andzelm, J.; Taylor, D.; Byrd, E.; Rice, B.

    2010-03-01

    There is substantial interest in identifying the controlling factors that influence the susceptibility of polymer bonded explosives (PBXs) to accidental initiation. Numerous Molecular Dynamics (MD) simulations of PBXs using the COMPASS force field have been reported in recent years, where the validity of the force field in modeling the solid EM fill has been judged solely on its ability to reproduce lattice parameters, which is an insufficient metric. Performance of the COMPASS force field in modeling EMs and the polymeric binder has been assessed by calculating structural, thermal, and mechanical properties, where only fair agreement with experimental data is obtained. We performed MD simulations using the COMPASS force field for the polymer binder hydroxyl-terminated polybutadiene and five EMs: cyclotrimethylenetrinitramine, 1,3,5,7-tetranitro-1,3,5,7-tetra-azacyclo-octane, 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane, 2,4,6-trinitro-1,3,5-benzenetriamine, and pentaerythritol tetranitrate. Predicted EM crystallographic and molecular structural parameters, as well as calculated properties for the binder, will be compared with experimental results for different simulation conditions. We also present novel simulation protocols, which improve agreement between experimental and computational results, thus leading to the accurate modeling of PBXs.

  15. An accurate and simple quantum model for liquid water.

    PubMed

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics

  16. A calibration-independent method for accurate complex permittivity determination of liquid materials

    SciTech Connect

    Hasar, U. C.

    2008-08-15

    This note presents a calibration-independent method for accurate complex permittivity determination of liquid materials. There are two main advantages of the proposed method over those in the literature, which require measurements of two cells with different lengths loaded by the same liquid material. First, it eliminates any inhomogeneity or impurity present in the second sample and decreases the uncertainty in sample thickness. Second, it removes the undesired impacts of measurement plane deterioration on measurements of liquid materials. For validation of the proposed method, we measure the complex permittivity of distilled water and compare its extracted permittivity with the theoretical datum obtained from the Debye equation.

  17. Materials modelling in London

    NASA Astrophysics Data System (ADS)

    Ciudad, David

    2016-04-01

    Angelos Michaelides, Professor in Theoretical Chemistry at University College London (UCL) and co-director of the Thomas Young Centre (TYC), explains to Nature Materials the challenges in materials modelling and the objectives of the TYC.

  18. A Novel Method for the Accurate Evaluation of Poisson's Ratio of Soft Polymer Materials

    PubMed Central

    Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S.; Kang, Dong-Joong; Park, Sungchan

    2013-01-01

    A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has the advantage of accurately evaluating Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6–47.0% elongations, with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio with a simple measurement system can be employed in a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials. PMID:23737733
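
    A compact sketch of the strain evaluation described above, using a single bilinear 4-node quadrilateral and small-strain theory (a simplified stand-in for the finite element analysis in the paper): the deformation gradient is evaluated at the element centre from the four vertex positions, the principal strains are the eigenvalues of its symmetric part, and Poisson's ratio is taken as minus their ratio. The example numbers are illustrative only.

      import numpy as np

      def poisson_ratio_from_quad(X_ref, X_def):
          """X_ref, X_def: (4, 2) arrays of vertex coordinates before/after stretching,
          ordered counter-clockwise. Evaluates the strain at the centre of a bilinear
          4-node quadrilateral and returns -eps_min / eps_max."""
          xi = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float)
          dN_dxi = 0.25 * xi                     # shape-function derivatives at (0, 0)
          J = X_ref.T @ dN_dxi                   # Jacobian of the reference geometry
          dN_dx = dN_dxi @ np.linalg.inv(J)      # derivatives w.r.t. physical coordinates
          F = X_def.T @ dN_dx                    # deformation gradient at the centre
          eps = 0.5 * ((F - np.eye(2)) + (F - np.eye(2)).T)   # small-strain tensor
          e_min, e_max = np.sort(np.linalg.eigvalsh(eps))
          return -e_min / e_max

      # Example: a unit square stretched 10% along x and contracted 4.4% along y.
      ref = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
      deformed = ref * np.array([1.10, 0.956])
      print(poisson_ratio_from_quad(ref, deformed))   # ~0.44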

  19. Recommended volumetric capacity definitions and protocols for accurate, standardized and unambiguous metrics for hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Parilla, Philip A.; Gross, Karl; Hurst, Katherine; Gennett, Thomas

    2016-03-01

    The ultimate goal of the hydrogen economy is the development of hydrogen storage systems that meet or exceed the US DOE's goals for onboard storage in hydrogen-powered vehicles. In order to develop new materials to meet these goals, it is extremely critical to accurately, uniformly and precisely measure materials' properties relevant to the specific goals. Without this assurance, such measurements are not reliable and, therefore, do not provide a benefit toward the work at hand. In particular, capacity measurements for hydrogen storage materials must be based on valid and accurate results to ensure proper identification of promising materials for further development. Volumetric capacity determinations are becoming increasingly important for identifying promising materials, yet there exists controversy over how such determinations are made and whether they are valid, owing to differing methodologies for counting the hydrogen content. These issues are discussed herein, and we show mathematically that capacity determinations can be made rigorously and unambiguously if the constituent volumes are well defined and measurable in practice. It is widely accepted that this occurs for excess capacity determinations, and we show here that this can happen for the total capacity determination. Because the adsorption volume is undefined, the absolute capacity determination remains imprecise. Furthermore, we show that there is a direct relationship between determining the respective capacities and the calibration constants used for the manometric and gravimetric techniques. Several suggested volumetric capacity figures of merit are defined and discussed, and reporting requirements are recommended. Finally, an example is provided to illustrate these protocols and concepts.
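
    One commonly used set of definitions can be sketched as follows; these are stated here as assumptions for illustration and are not necessarily the exact figures of merit recommended in the paper: the excess uptake comes directly from the measurement, the total capacity additionally counts compressed gas residing in the pore volume, and both are normalized by the sample's envelope (skeletal plus pore) volume.

      # Illustrative capacity definitions (assumed conventions, made-up numbers).
      def capacities(n_excess_mol, rho_gas_mol_per_ml, v_pore_ml, v_skeleton_ml):
          """Return (excess, total) volumetric capacities in mol per mL of material."""
          v_material = v_pore_ml + v_skeleton_ml                    # envelope volume
          n_total = n_excess_mol + rho_gas_mol_per_ml * v_pore_ml   # add gas held in pores
          return n_excess_mol / v_material, n_total / v_material

      # Example: 10 mmol excess uptake, bulk H2 density of 4 mmol/mL at the measurement
      # condition, 0.5 mL pore volume and 0.5 mL skeletal volume.
      print(capacities(0.010, 0.004, 0.5, 0.5))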

  20. Methodology to set up accurate OPC model using optical CD metrology and atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Shim, Yeon-Ah; Kang, Jaehyun; Lee, Sang-Uk; Kim, Jeahee; Kim, Keeho

    2007-03-01

    For the 90 nm node and beyond, a smaller critical dimension (CD) control budget is required, and ways to achieve good CD uniformity are needed. Moreover, optical proximity correction (OPC) for the sub-90 nm node demands more accurate wafer CD data in order to improve the accuracy of the OPC model. Scanning electron microscopy (SEM) has been the typical method for measuring CD up to the ArF process generation. However, SEM can seriously damage the sample, for example by shrinking the photoresist (PR) through burning of the weak chemical structure of ArF PR with its high-energy electron beam. In fact, about 5 nm of CD narrowing occurs when CD is measured by CD-SEM in an ArF photo process. Optical CD metrology (OCD) and atomic force microscopy (AFM) have been considered as methods for measuring CD without damaging organic materials. The OCD and AFM measurement systems also offer speed, ease of use and accurate data. For model-based OPC, the model is generated using CD data of test patterns transferred onto the wafer. In this study we discuss how to generate an accurate OPC model using OCD and AFM measurement systems.

  1. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
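
    The damping substitution mentioned above can be illustrated on a single-degree-of-freedom resonator (a deliberately simplified stand-in for the 3D coupled-field FE model): hysteretic damping enters as a frequency-independent complex stiffness k(1 + iη), whereas the Rayleigh (viscous) term grows linearly with frequency, which is one reason the high-frequency response slope changes between the two models. The parameter values are assumptions for illustration.

      import numpy as np

      # Receptance of a 1-DOF resonator under the two damping models (illustrative values).
      m, k = 0.01, 1.0e6                   # mass (kg) and stiffness (N/m)
      eta = 0.02                           # hysteretic loss factor
      alpha, beta = 50.0, 2.0e-6           # assumed Rayleigh coefficients

      w = 2 * np.pi * np.linspace(1e3, 5e4, 1000)    # 1-50 kHz sweep (rad/s)

      h_hysteretic = 1.0 / (k * (1 + 1j * eta) - m * w ** 2)
      c_rayleigh = alpha * m + beta * k
      h_rayleigh = 1.0 / (k - m * w ** 2 + 1j * w * c_rayleigh)
      # The loss term of the hysteretic model is constant in frequency; the viscous
      # term (w * c_rayleigh) is not, which alters the slope of the response.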

  2. Accurate determination of quantity of material in thin films by Rutherford backscattering spectrometry.

    PubMed

    Jeynes, C; Barradas, N P; Szilágyi, E

    2012-07-17

    Ion beam analysis (IBA) is a cluster of techniques including Rutherford and non-Rutherford backscattering spectrometry and particle-induced X-ray emission (PIXE). Recently, the ability to treat multiple IBA techniques (including PIXE) self-consistently has been demonstrated. The utility of IBA for accurately depth profiling thin films is critically reviewed. As an important example of IBA, three laboratories have independently measured a silicon sample implanted with a fluence of nominally 5 × 10¹⁵ As/cm² at an unprecedented absolute accuracy. Using 1.5 MeV ⁴He⁺ Rutherford backscattering spectrometry (RBS), each lab has demonstrated a combined standard uncertainty around 1% (coverage factor k = 1) traceable to an Sb-implanted certified reference material through the silicon electronic stopping power. The uncertainty budget shows that this accuracy is dominated by the knowledge of the electronic stopping, but that special care must also be taken to accurately determine the electronic gain of the detection system and other parameters. This RBS method is quite general and can be used routinely to accurately validate ion implanter charge collection systems, to certify SIMS standards, and for other applications. The generality of application of such methods in IBA is emphasized: if RBS and PIXE data are analysed self-consistently then the resulting depth profile inherits the accuracy and depth resolution of RBS and the sensitivity and elemental discrimination of PIXE. PMID:22681761

  3. Development of a precise and accurate age-depth model based on 40Ar/39Ar dating of volcanic material in the ANDRILL (1B) drill core, Southern McMurdo Sound, Antarctica

    NASA Astrophysics Data System (ADS)

    Ross, J. I.; McIntosh, W. C.; Dunbar, N. W.

    2012-10-01

    High precision 40Ar/39Ar dates on a variety of volcanic materials from the AND-1B drill core provide pinning points for defining the chronostratigraphy of the core. The volcanic materials dated include 1) felsic and basaltic tephra, 2) the interior of a ~ 3 m thick intermediate submarine lava flow, and 3) felsic and basaltic volcanic clasts. In the upper 600 m of the core, two felsic tephra, two basaltic tephra and the intermediate lava flow yield precise depositional ages, with further maximum age constraints from volcanic clasts. Below 600 m in the core, tephric intervals are significantly altered and only maximum age constraints, from volcanic clasts, are available. The ages for eight stratigraphic intervals are 1) 17.17-17.18 mbsf, basaltic clast (maximum depositional age 0.310 ± 0.039 Ma, all errors quoted at 2σ), 2) 52.80-52.82 mbsf, three basaltic clasts (maximum depositional age 0.726 ± 0.052 Ma), 3) 85.27-85.87 mbsf felsic tephra (1.014 ± 0.008 Ma), 4) ~ 112-145 mbsf sequence of basaltic tephra (1.633 ± 0.057 to 1.683 ± 0.055 Ma), 5) 480.97-481.96 mbsf pumice-rich mudstone (4.800 ± 0.076 Ma), 6) 646.30-649.34 mbsf intermediate lava flow (6.48 ± 0.13 Ma), 7) 822.78 mbsf kaersutite phenocrysts from volcanic clasts (maximum depositional age 8.53 ± 0.53 Ma) and 8) ~ 1280 mbsf, three volcanic clasts (maximum depositional age 13.57 ± 0.13 Ma). Minimum average sediment accumulation rates of 102 and 87 m/Ma for the upper and lower 650 m of core, respectively, were calculated using the 40Ar/39Ar analyses. The volcanic material recovered from AND-1B also reveals a general northward progression of volcanism in Southern McMurdo Sound.
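
    The quoted accumulation rates follow from simple depth/age arithmetic (rate = Δdepth / Δage), as in the short check below using depth-age pairs read from the list above; the results (~100 and ~89 m/Ma) are close to, but not exactly, the published 102 and 87 m/Ma, presumably because the authors used slightly different interval bounds, and they remain minimum averages because the deeper ages are maximum depositional ages.

      # Minimum average sediment accumulation rate in m/Ma from a depth-age pair.
      def rate_m_per_ma(depth_top_m, depth_base_m, age_top_ma, age_base_ma):
          return (depth_base_m - depth_top_m) / (age_base_ma - age_top_ma)

      print(rate_m_per_ma(0.0, 649.0, 0.0, 6.48))        # upper part of the core, ~100 m/Ma
      print(rate_m_per_ma(649.0, 1280.0, 6.48, 13.57))   # lower part of the core, ~89 m/Ma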

  4. Use of Monocrystalline Silicon as Tool Material for Highly Accurate Blanking of Thin Metal Foils

    SciTech Connect

    Hildering, Sven; Engel, Ulf; Merklein, Marion

    2011-05-04

    The trend towards miniaturisation of metallic mass production components combined with increased component functionality is still unbroken. Manufacturing these components by forming and blanking offers economical and ecological advantages combined with the needed accuracy. The complexity of producing tools with geometries below 50 μm by conventional manufacturing methods becomes disproportionately higher. Expensive serial finishing operations are required to achieve an adequate surface roughness combined with accurate geometry details. A novel approach for producing such tools is the use of advanced etching technologies for monocrystalline silicon that are well-established in the microsystems technology. High-precision vertical geometries with a width down to 5 μm are possible. The present study shows a novel concept using this potential for the blanking of thin copper foils with monocrystalline silicon as a tool material. A self-contained machine-tool with compact outer dimensions was designed to avoid tensile stresses in the brittle silicon punch by an accurate, careful alignment of the punch, die and metal foil. A microscopic analysis of the monocrystalline silicon punch shows appropriate properties regarding flank angle, edge geometry and surface quality for the blanking process. Using a monocrystalline silicon punch with a width of 70 μm, blanking experiments on as-rolled copper foils with a thickness of 20 μm demonstrate the general applicability of this material for micro production processes.

  5. Sandia Material Model Driver

    2005-09-28

    The Sandia Material Model Driver (MMD) software package allows users to run material models from a variety of different Finite Element Model (FEM) codes in a standalone fashion, independent of the host codes. The MMD software is designed to be run on a variety of different operating system platforms as a console application. Initial development efforts have resulted in a package that has been shown to be fast, convenient, and easy to use, with substantial growth potential.

  6. New process model proves accurate in tests on catalytic reformer

    SciTech Connect

    Aguilar-Rodriguez, E.; Ancheyta-Juarez, J.

    1994-07-25

    A mathematical model has been devised to represent the process that takes place in a fixed-bed, tubular, adiabatic catalytic reforming reactor. Since its development, the model has been applied to the simulation of a commercial semiregenerative reformer. The development of mass and energy balances for this reformer led to a model that predicts both concentration and temperature profiles along the reactor. A comparison of the model's results with experimental data illustrates its accuracy at predicting product profiles. Simple steps show how the model can be applied to simulate any fixed-bed catalytic reformer.
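
    The model couples species and energy balances integrated along the bed. The toy sketch below illustrates that structure for an adiabatic plug-flow reactor with a single assumed first-order lumped reaction; the published model uses a detailed reforming kinetic network, and all parameter values here are placeholders.

    ```python
    # Toy sketch of coupled mass/energy balances for an adiabatic plug-flow bed,
    # in the spirit of the reformer model described above.  A single first-order
    # lumped reaction step is assumed; the real model uses a detailed reforming
    # kinetic network.  All values below are assumed placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    k0, Ea = 1.0e5, 9.0e4          # assumed pre-exponential (1/s) and activation energy (J/mol)
    dH = 7.0e4                     # assumed heat of reaction (J/mol); positive = endothermic, as in reforming
    rho_cp = 5.0e4                 # assumed volumetric heat capacity of the flowing gas (J/m^3/K)
    u = 1.0                        # superficial velocity (m/s)
    R = 8.314

    def balances(z, y):
        C, T = y                                   # concentration (mol/m^3), temperature (K)
        r = k0 * np.exp(-Ea / (R * T)) * C         # reaction rate (mol/m^3/s)
        dCdz = -r / u                              # mass balance along the bed
        dTdz = -dH * r / (rho_cp * u)              # adiabatic energy balance
        return [dCdz, dTdz]

    sol = solve_ivp(balances, (0.0, 5.0), [100.0, 750.0], dense_output=True)
    for zi, (Ci, Ti) in zip(np.linspace(0.0, 5.0, 6), sol.sol(np.linspace(0.0, 5.0, 6)).T):
        print(f"z = {zi:3.1f} m   C = {Ci:6.1f} mol/m^3   T = {Ti:6.1f} K")
    ```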

  7. Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.

    PubMed

    Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M

    2014-12-01

    Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed the Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely, the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as the comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with discrepancies lower than 4% for all the tested parameters. This shows that an accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first-instance determination of new calibration factors for non-standard radionuclides and custom containers when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regard to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the container configuration. PMID:25195174

  8. Coupling Efforts to the Accurate and Efficient Tsunami Modelling System

    NASA Astrophysics Data System (ADS)

    Son, S.

    2015-12-01

    In the present study, we couple two different types of tsunami models, i.e., a nondispersive shallow water model of characteristic form (MOST ver. 4) and a dispersive Boussinesq model of non-characteristic form (Son et al. (2011)), in an attempt to improve modelling accuracy and efficiency. Since each model deals with a different type of primary variable, additional care in matching the boundary conditions is required. Model coupling and integration is achieved using an absorbing-generating boundary condition developed by Van Dongeren and Svendsen (1997). Characteristic variables (i.e., Riemann invariants) in MOST are converted to non-characteristic variables for the Boussinesq solver without any loss of physical consistency. The established modelling system has been validated on cases ranging from standard test problems to realistic tsunami events, and the simulated results show good performance. Because the coupled system is flexible in how the two solvers are deployed, applying the Boussinesq model only in selected regions of the overall propagation domain is expected to yield gains in both efficiency and accuracy.
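
    The key step in the coupling is converting the characteristic variables advanced by MOST into the primitive variables required by the Boussinesq solver. A minimal 1-D sketch of that conversion, using the textbook shallow-water Riemann invariants rather than the production coupling code, is given below.

    ```python
    # Minimal 1-D sketch of the variable conversion needed at the coupling boundary:
    # MOST advances the shallow-water Riemann invariants w± = u ± 2*sqrt(g*h),
    # while a non-characteristic Boussinesq solver works with (h, u) directly.
    # This is the textbook shallow-water relation, not the production coupling code.
    import math

    G = 9.81  # gravitational acceleration (m/s^2)

    def primitives_to_invariants(h, u):
        c = math.sqrt(G * h)
        return u + 2.0 * c, u - 2.0 * c

    def invariants_to_primitives(w_plus, w_minus):
        u = 0.5 * (w_plus + w_minus)
        c = 0.25 * (w_plus - w_minus)        # sqrt(g*h)
        return c * c / G, u                  # (h, u)

    # Round trip: depth 50 m, velocity 2 m/s
    wp, wm = primitives_to_invariants(50.0, 2.0)
    h, u = invariants_to_primitives(wp, wm)
    print(wp, wm, h, u)   # 50.0 and 2.0 are recovered
    ```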

  9. Modeling Granular Materials

    NASA Astrophysics Data System (ADS)

    Brackbill, J. U.

    2000-11-01

    Granular materials are often cited as examples of systems with complex and unusual properties. Much of this complexity is captured by computational models in which the actual material properties of individual grains are idealized and simplified. Because material properties can be important under extreme conditions, we consider assemblies of grains with more realistic properties. Our model grains may deform, their resulting stresses are computed from elastic / plastic constitutive models, and their interactions with each other include Coulomb friction and bonding. Our model equations are solved using a particle-in-cell (PIC) method, which combines a Lagrangian representation of the materials with an adaptive grid [1]. Our contact model between grains is linear in the number of grains, and we model assemblies with statistically significant numbers of grains. With our model, we have studied the response of dense granular material to shear, with special attention to the probability density function governing the volume distribution of stress for mono- and poly-disperse samples, circular and polygonal grains, and various values of microscopic friction coefficients, yield stresses, and packing fractions [2]. Remarkably, PDFs are similar in form for all cases simulated, and similar to those observed in experiments with granular materials under both compression and shear. Namely, the simulations yield an exponential probability of large stresses above the mean, and there is a finite chance that a few grains in a large assembly are subjected to extreme stresses at any given time, even at low strain rates. For energetic materials, such as explosives, this is a significant finding. We have also studied the relationship between distributions of boundary tractions and volume distributions of stress. The ratio of normal and tangential components of traction on the boundary defines a bulk frictional response, which we find increases with the inter-granular friction coefficient.

  10. Constitutive modeling for isotropic materials

    NASA Technical Reports Server (NTRS)

    Lindholm, Ulric S.

    1985-01-01

    The objective is to develop a unified constitutive model for finite element structural analysis of turbine engine hot-section components. This effort constitutes a different approach for non-linear finite-element computer codes which have heretofore been based on classical inelastic methods. The unified constitutive theory to be developed will avoid the simplifying assumptions of classical theory and should more accurately represent the behavior of superalloy materials under cyclic loading conditions and high temperature environments. During the first two years of the program, extensive experimental correlations were made with two representative unified models. The experiments were both uniaxial and biaxial at temperatures up to 1093 C (2000 F). In addition, the unified models were adapted to the MARC finite element code and used for stress analysis of notched bar and turbine blade geometries.

  11. Highly accurate isotope measurements of surface material on planetary objects in situ

    NASA Astrophysics Data System (ADS)

    Riedo, Andreas; Neuland, Maike; Meyer, Stefan; Tulej, Marek; Wurz, Peter

    2013-04-01

    Studies of isotope variations in solar system objects are of particular interest and importance. Highly accurate isotope measurements provide insight into geochemical processes, constrain the time of formation of planetary material (crystallization ages) and can be robust tracers of pre-solar events and processes. A detailed understanding of the chronology of the early solar system and dating of planetary materials require precise and accurate measurements of isotope ratios, e.g. lead, and abundances of trace elements. However, such measurements are extremely challenging and until now, they have never been attempted in space research. Our group designed a highly miniaturized and self-optimizing laser ablation time-of-flight mass spectrometer for space flight for sensitive and accurate measurements of the elemental and isotopic composition of extraterrestrial materials in situ. Current studies were performed by using UV radiation for ablation and ionization of sample material. High spatial resolution is achieved by focusing the laser beam to about Ø 20 μm onto the sample surface. The instrument supports a dynamic range of at least 8 orders of magnitude and a mass resolution m/Δm of up to 800-900, measured at the iron peak. We developed a measurement procedure, which will be discussed in detail, that allows for the first time to measure with the instrument the isotope distribution of elements, e.g. Ti, Pb, etc., with a measurement accuracy and precision at the per mill and sub-per-mill level, which is comparable to well-known and accepted measurement techniques, such as TIMS, SIMS and LA-ICP-MS. Together with this measurement procedure, the present instrument performance allows in situ measurements of 207Pb/206Pb ages with an age accuracy in the range of tens of millions of years. Furthermore, and in contrast to other space instrumentation, our instrument can measure all elements present in the sample above 10 ppb concentration, which offers versatile applications

  12. Charged Point Defects in the Flatland: Accurate Formation Energy Calculations in Two-Dimensional Materials

    NASA Astrophysics Data System (ADS)

    Komsa, Hannu-Pekka; Berseneva, Natalia; Krasheninnikov, Arkady V.; Nieminen, Risto M.

    2014-07-01

    Impurities and defects frequently govern materials properties, with the most prominent example being the doping of bulk semiconductors where a minute amount of foreign atoms can be responsible for the operation of the electronic devices. Several computational schemes based on a supercell approach have been developed to get insights into types and equilibrium concentrations of point defects, which successfully work in bulk materials. Here, we show that many of these schemes cannot directly be applied to two-dimensional (2D) systems, as formation energies of charged point defects are dominated by large spurious electrostatic interactions between defects in inhomogeneous environments. We suggest two approaches that solve this problem and give accurate formation energies of charged defects in 2D systems in the dilute limit. Our methods, which are applicable to all kinds of charged defects in any 2D system, are benchmarked for impurities in technologically important h-BN and MoS2 2D materials, and they are found to perform equally well for substitutional and adatom impurities.
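
    For context, the sketch below spells out the standard supercell bookkeeping for charged-defect formation energies that the abstract refers to; the paper's actual contribution is the evaluation of the electrostatic correction term for 2D geometries, which appears here only as a user-supplied placeholder. All numbers are illustrative, not taken from the paper.

    ```python
    # Standard supercell bookkeeping for the formation energy of a charged defect,
    #   E_f(q) = E_def(q) - E_host - sum_i n_i*mu_i + q*(E_VBM + E_F) + E_corr,
    # shown only to fix the notation used in the abstract.  The paper's actual
    # contribution is how E_corr is evaluated for a 2D/layered geometry; here it
    # is just a user-supplied placeholder number.

    def formation_energy(e_defect, e_host, added_atoms, chem_potentials,
                         charge, e_vbm, e_fermi, e_corr=0.0):
        """All energies in eV; added_atoms maps species -> number added (negative if removed)."""
        exchange = sum(n * chem_potentials[sp] for sp, n in added_atoms.items())
        return e_defect - e_host - exchange + charge * (e_vbm + e_fermi) + e_corr

    # Illustrative numbers only (not from the paper): a -1 charged substitutional
    # impurity X replacing a B atom in h-BN.
    ef = formation_energy(
        e_defect=-512.34, e_host=-520.10,
        added_atoms={"X": +1, "B": -1},
        chem_potentials={"X": -3.20, "B": -6.50},
        charge=-1, e_vbm=-5.80, e_fermi=1.20, e_corr=0.35)
    print(f"E_f = {ef:.2f} eV")
    ```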

  13. Procedure for accurate fabrication of tissue compensators with high-density material

    NASA Astrophysics Data System (ADS)

    Mejaddem, Younes; Lax, Ingmar; Adakkai K, Shamsuddin

    1997-02-01

    An accurate method for producing compensating filters using high-density material (Cerrobend) is described. The procedure consists of two cutting steps in a Styrofoam block: (i) levelling a surface of the block to a reference level; (ii) depth-modulated milling of the levelled block in accordance with pre-calculated thickness profiles of the compensator. The calculated thickness (generated by a dose planning system) can be reproduced within acceptable accuracy. The desired compensator thickness manufactured according to this procedure is reproduced to within 0.1 mm, corresponding to a 0.5% change in dose at a beam quality of 6 MV. The results of our quality control checks performed with the technique of stylus profiling measurements show an accuracy of 0.04 mm in the milling process over an arbitrary profile along the milled-out Styrofoam block.

  14. Accurate oscillator strengths for ultraviolet lines of Ar I - Implications for interstellar material

    NASA Technical Reports Server (NTRS)

    Federman, S. R.; Beideck, D. J.; Schectman, R. M.; York, D. G.

    1992-01-01

    Analysis of absorption from interstellar Ar I in lightly reddened lines of sight provides information on the warm and hot components of the interstellar medium near the sun. The details of the analysis are limited by the quality of the atomic data. Accurate oscillator strengths for the Ar I lines at 1048 and 1067 A and the astrophysical implications are presented. From lifetimes measured with beam-foil spectroscopy, an f-value for 1048 A of 0.257 +/- 0.013 is obtained. Through the use of a semiempirical formalism for treating singlet-triplet mixing, an oscillator strength of 0.064 +/- 0.003 is derived for 1067 A. Because of the accuracy of the results, the conclusions of York and colleagues from spectra taken with the Copernicus satellite are strengthened. In particular, for interstellar gas in the solar neighborhood, argon has a solar abundance, and the warm, neutral material is not pervasive.
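
    The link between a beam-foil lifetime and an absorption f-value is the standard emission-absorption relation sketched below. The lifetime, branching fraction and statistical weights used here are assumptions chosen only so that the result lands near the quoted f(1048 Å) = 0.257; they are not the measured values reported in the paper.

    ```python
    # Generic sketch of how an upper-level lifetime translates into an absorption
    # oscillator strength:  A_ul = branching_fraction / tau,  and
    #   f_lu = 1.4992e-16 * lambda_A**2 * (g_u / g_l) * A_ul
    # with the wavelength in angstroms and A_ul in s^-1.  The 1.92 ns lifetime and
    # unit branching fraction below are back-calculated placeholders chosen so the
    # result lands near the quoted f(1048 A) = 0.257.

    def f_value(lifetime_s, branching_fraction, wavelength_angstrom, g_upper, g_lower):
        a_ul = branching_fraction / lifetime_s           # Einstein A coefficient (s^-1)
        return 1.4992e-16 * wavelength_angstrom**2 * (g_upper / g_lower) * a_ul

    # Resonance line of Ar I at 1048 A: J=1 upper level (g=3), J=0 ground state (g=1)
    print(f"f(1048) ~ {f_value(1.92e-9, 1.0, 1048.0, 3, 1):.3f}")   # ~0.257
    ```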

  15. Accurate modelling of flow induced stresses in rigid colloidal aggregates

    NASA Astrophysics Data System (ADS)

    Vanni, Marco

    2015-07-01

    A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to accurately take into account the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation on the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. A very different behaviour has been observed between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with a highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence originates the birth of fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however

  16. Magnetic field models of nine CP stars from "accurate" measurements

    NASA Astrophysics Data System (ADS)

    Glagolevskij, Yu. V.

    2013-01-01

    The dipole models of magnetic fields in nine CP stars are constructed based on the measurements of metal lines taken from the literature, and performed by the LSD method with an accuracy of 10-80 G. The model parameters are compared with the parameters obtained for the same stars from the hydrogen line measurements. For six out of nine stars the same type of structure was obtained. Some parameters, such as the field strength at the poles Bp and the average surface magnetic field Bs, differ considerably in some stars due to differences in the amplitudes of the phase dependences Be(Φ) and Bs(Φ) obtained by different authors. It is noted that a significant increase in the measurement accuracy has little effect on the modelling of the large-scale structures of the field. By contrast, it is more important to construct the shape of the phase dependence based on a fairly large number of field measurements, evenly distributed by the rotation period phases. It is concluded that the Zeeman component measurement methods have a strong effect on the shape of the phase dependence, and that the measurements of the magnetic field based on the lines of hydrogen are more preferable for modelling the large-scale structures of the field.

  17. An Accurate In Vitro Model of the E. coli Envelope

    PubMed Central

    Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H

    2015-01-01

    Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir–Blodgett and Langmuir–Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292

  18. Leidenfrost effect: accurate drop shape modeling and new scaling laws

    NASA Astrophysics Data System (ADS)

    Sobac, Benjamin; Rednikov, Alexey; Dorbolo, Stéphane; Colinet, Pierre

    2014-11-01

    In this study, we theoretically investigate the shape of a drop in a Leidenfrost state, focusing on the geometry of the vapor layer. The drop geometry is modeled by numerically matching the solution of the hydrostatic shape of a superhydrophobic drop (for the upper part) with the solution of the lubrication equation of the vapor flow underlying the drop (for the bottom part). The results highlight that the vapor layer, fed by evaporation, forms a concave depression in the drop interface that becomes increasingly marked with the drop size. The vapor layer then consists of a gas pocket in the center and a thin annular neck surrounding it. The film thickness increases with the size of the drop, and the thickness at the neck appears to be of the order of 10-100 μm in the case of water. The model is compared to recent experimental results [Burton et al., Phys. Rev. Lett., 074301 (2012)] and shows an excellent agreement, without any fitting parameter. New scaling laws also emerge from this model. The geometry of the vapor pocket is only weakly dependent on the superheat (and thus on the evaporation rate), this weak dependence being more pronounced in the neck region. In turn, the vapor layer characteristics strongly depend on the drop size.

  19. A particle-tracking approach for accurate material derivative measurements with tomographic PIV

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Scarano, Fulvio

    2013-08-01

    The evaluation of the instantaneous 3D pressure field from tomographic PIV data relies on the accurate estimate of the fluid velocity material derivative, i.e., the velocity time rate of change following a given fluid element. To date, techniques that reconstruct the fluid parcel trajectory from a time sequence of 3D velocity fields obtained with Tomo-PIV have already been introduced. However, an accurate evaluation of the fluid element acceleration requires trajectory reconstruction over a relatively long observation time, which reduces random errors. On the other hand, simple integration and finite difference techniques suffer from increasing truncation errors when complex trajectories need to be reconstructed over a long time interval. In principle, particle-tracking velocimetry techniques (3D-PTV) enable the accurate reconstruction of single particle trajectories over a long observation time. Nevertheless, PTV can be reliably performed only at limited particle image number density due to errors caused by overlapping particles. The particle image density can be substantially increased by use of tomographic PIV. In the present study, a technique to combine the higher information density of tomographic PIV and the accurate trajectory reconstruction of PTV is proposed (Tomo-3D-PTV). The particle-tracking algorithm is applied to the tracers detected in the 3D domain obtained by tomographic reconstruction. The 3D particle information is highly sparse and intersection of trajectories is virtually impossible. As a result, ambiguities in the particle path identification over subsequent recordings are easily avoided. Polynomial fitting functions are introduced that describe the particle position in time with sequences based on several recordings, leading to the reduction in truncation errors for complex trajectories. Moreover, the polynomial regression approach provides a reduction in the random errors due to the particle position measurement. Finally, the acceleration
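
    The central idea, fitting a low-order polynomial to a particle track over several recordings and differentiating it analytically, can be sketched in one dimension as below. The time step, noise level and trajectory are synthetic placeholders; the actual method operates on 3-D tracks from tomographic reconstructions.

    ```python
    # Minimal sketch of the trajectory-fit idea: fit a short polynomial to a tracked
    # particle's position over several recordings, then differentiate it analytically
    # to get velocity and the material (Lagrangian) acceleration at the central time.
    # Second-order fit in one coordinate only; the real method works on 3-D tracks.
    import numpy as np

    dt = 1.0e-3                                  # time separation between recordings (s)
    t = np.arange(7) * dt                        # 7 consecutive recordings

    # Synthetic noisy track: x(t) = x0 + u*t + 0.5*a*t^2 with a = 50 m/s^2
    rng = np.random.default_rng(0)
    x = 0.01 + 2.0 * t + 0.5 * 50.0 * t**2 + rng.normal(0.0, 2.0e-6, t.size)

    coeffs = np.polyfit(t, x, deg=2)             # least-squares polynomial fit
    vel = np.polyder(coeffs, 1)                  # velocity polynomial
    acc = np.polyder(coeffs, 2)                  # acceleration polynomial (constant here)

    tc = t[t.size // 2]
    print(f"u(tc) = {np.polyval(vel, tc):.3f} m/s,  Du/Dt = {np.polyval(acc, tc):.1f} m/s^2")
    ```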

  20. Can scintillation detectors with low spectral resolution accurately determine radionuclides content of building materials?

    PubMed

    Kovler, K; Prilutskiy, Z; Antropov, S; Antropova, N; Bozhko, V; Alfassi, Z B; Lavi, N

    2013-07-01

    This paper examines whether scintillation NaI(Tl) detectors, in spite of their poor energy resolution, can accurately determine the content of NORM in building materials. The activity concentrations of natural radionuclides were measured using two types of detectors: (a) a NaI(Tl) spectrometer equipped with special software based on the matrix method of least squares, and (b) a high-purity germanium spectrometer. Synthetic compositions with activity concentrations varying in a wide range (from 1/5 to 5 times the median activity concentrations of the natural radionuclides in the earth's crust) and samples of popular building materials, such as concrete, pumice and gypsum, were tested, while the density of the tested samples varied over a wide range (from 860 up to 2,410 kg/m(3)). The results obtained with the NaI(Tl) system were similar to those obtained with the HPGe spectrometer, mostly within the uncertainty range. This comparison shows that scintillation spectrometers equipped with special software that compensates for the lower spectral resolution of NaI(Tl) detectors can be successfully used for the radiation control of mass construction products. PMID:23542118

  1. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  2. Achieving accurate neutron-multiplicity analysis of metals and oxides with weighted point model equations.

    SciTech Connect

    Burward-Hoy, J. M.; Geist, W. H.; Krick, M. S.; Mayo, D. R.

    2004-01-01

    Neutron multiplicity counting is a technique for the rapid, nondestructive measurement of plutonium mass in pure and impure materials. This technique is very powerful because it uses the measured coincidence count rates to determine the sample mass without requiring a set of representative standards for calibration. Interpreting measured singles, doubles, and triples count rates using the three-parameter standard point model accurately determines plutonium mass, neutron multiplication, and the ratio of (α,n) to spontaneous-fission neutrons (alpha) for oxides of moderate mass. However, underlying standard point model assumptions - including constant neutron energy and constant multiplication throughout the sample - cause significant biases for the mass, multiplication, and alpha in measurements of metal and large, dense oxides.

  3. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, J. A., Jr.

    1998-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  4. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1998-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAxwell's equations by the Finite Integration Algorithm (MAFIA). Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  5. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1997-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  6. Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1997-01-01

    The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various

  7. Cryogenic Model Materials

    NASA Technical Reports Server (NTRS)

    Kimmel, W. M.; Kuhn, N. S.; Berry, R. F.; Newman, J. A.

    2001-01-01

    An overview and status of current activities seeking alternatives to 200 grade 18Ni Steel CVM alloy for cryogenic wind tunnel models is presented. Specific improvements in material selection have been researched including availability, strength, fracture toughness and potential for use in transonic wind tunnel testing. Potential benefits from utilizing damage tolerant life-prediction methods, recently developed fatigue crack growth codes and upgraded NDE methods are also investigated. Two candidate alloys are identified and accepted for cryogenic/transonic wind tunnel models and hardware.

  8. Modeling of Laser Material Interactions

    NASA Astrophysics Data System (ADS)

    Garrison, Barbara

    2009-03-01

    Irradiation of a substrate by laser light initiates the complex chemical and physical process of ablation where large amounts of material are removed. Ablation has been successfully used in techniques such as nanolithography and LASIK surgery, however a fundamental understanding of the process is necessary in order to further optimize and develop applications. To accurately describe the ablation phenomenon, a model must take into account the multitude of events which occur when a laser irradiates a target including electronic excitation, bond cleavage, desorption of small molecules, ongoing chemical reactions, propagation of stress waves, and bulk ejection of material. A coarse grained molecular dynamics (MD) protocol with an embedded Monte Carlo (MC) scheme has been developed which effectively addresses each of these events during the simulation. Using the simulation technique, thermal and chemical excitation channels are separately studied with a model polymethyl methacrylate system. The effects of the irradiation parameters and reaction pathways on the process dynamics are investigated. The mechanism of ablation for thermal processes is governed by a critical number of bond breaks following the deposition of energy. For the case where an absorbed photon directly causes a bond scission, ablation occurs following the rapid chemical decomposition of material. The study provides insight into the influence of thermal and chemical processes in polymethyl methacrylate and facilitates greater understanding of the complex nature of polymer ablation.

  9. Material response mechanisms are needed to obtain highly accurate experimental shock wave data

    NASA Astrophysics Data System (ADS)

    Forbes, Jerry

    2015-06-01

    The field of shock wave compression of matter has provided a simple set of equations relating thermodynamic and kinematic parameters that describe the conservation of mass, momentum and energy across a steady shock wave with one-dimensional flow. Well-known condensed matter shock wave experimental results will be reviewed to see whether the assumptions required for deriving these simple R-H equations are met. Note that the material compression model is not required for deriving the 1-D conservation flow equations across a steady shock front. However, this statement is misleading from a practical experimental viewpoint since obtaining small systematic errors in shock wave measured parameters requires the material compression and release mechanisms to be known. A brief review will be presented on systematic errors in shock wave data from common experimental techniques for fluids, elastic-plastic solids, materials with negative volume phase transitions, glass and ceramic materials, and high explosives. Issues related to time scales of experiments and quasi-steady flow will also be presented.

  10. Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation

    NASA Astrophysics Data System (ADS)

    Poddar, Banibrata; Giurgiutiu, Victor

    2016-04-01

    Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" where we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved with the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse problem solver.

  11. Accurate tight-binding Hamiltonians for two-dimensional and layered materials

    NASA Astrophysics Data System (ADS)

    Agapito, Luis A.; Fornari, Marco; Ceresoli, Davide; Ferretti, Andrea; Curtarolo, Stefano; Nardelli, Marco Buongiorno

    2016-03-01

    We present a scheme to controllably improve the accuracy of tight-binding Hamiltonian matrices derived by projecting the solutions of plane-wave ab initio calculations on atomic-orbital basis sets. By systematically increasing the completeness of the basis set of atomic orbitals, we are able to optimize the quality of the band-structure interpolation over wide energy ranges including unoccupied states. This methodology is applied to the case of interlayer and image states, which appear several eV above the Fermi level in materials with large interstitial regions or surfaces such as graphite and graphene. Due to their spatial localization in the empty regions inside or outside of the system, these states have been inaccessible to traditional tight-binding models and even to ab initio calculations with atom-centered basis functions.

  12. Materials Analysis and Modeling of Underfill Materials.

    SciTech Connect

    Wyatt, Nicholas B; Chambers, Robert S.

    2015-08-01

    The thermal-mechanical properties of three potential underfill candidate materials for PBGA applications are characterized and reported. Two of the materials are formulations developed at Sandia for underfill applications, while the third is a commercial product that utilizes a snap-cure chemistry to drastically reduce cure time. Viscoelastic models were calibrated and fit using the property data collected for one of the Sandia formulated materials. Along with the thermal-mechanical analyses performed, a series of simple bi-material strip tests were conducted to comparatively analyze the relative effects of cure and thermal shrinkage amongst the materials under consideration. Finally, current knowledge gaps as well as questions arising from the present study are identified and a path forward presented.

  13. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    NASA Astrophysics Data System (ADS)

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  14. MONA: An accurate two-phase well flow model based on phase slippage

    SciTech Connect

    Asheim, H.

    1984-10-01

    In two-phase flow, holdup and pressure loss are related to interfacial slippage. A model based on the slippage concept has been developed and tested using production well data from Forties and the Ekofisk area, and flowline data from Prudhoe Bay. The model developed turned out to be considerably more accurate than the standard models used for comparison.
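
    As a hedged illustration of the slippage concept (not MONA's actual correlation), the sketch below solves for the gas void fraction, and hence the liquid holdup, given superficial velocities and an assumed slip velocity.

    ```python
    # Generic illustration of the slippage concept (not MONA's actual correlation):
    # the in-situ phase velocities are the superficial velocities divided by the
    # area fractions, and holdup follows from requiring a given slip velocity,
    #   v_sg/alpha - v_sl/(1 - alpha) = v_slip.
    from scipy.optimize import brentq

    def gas_void_fraction(v_sg, v_sl, v_slip):
        """Solve for the gas void fraction alpha given superficial velocities (m/s)."""
        f = lambda a: v_sg / a - v_sl / (1.0 - a) - v_slip
        return brentq(f, 1.0e-6, 1.0 - 1.0e-6)

    # Placeholder flow conditions, not field data
    alpha = gas_void_fraction(v_sg=1.5, v_sl=0.5, v_slip=0.4)
    print(f"gas void fraction = {alpha:.3f}, liquid holdup = {1.0 - alpha:.3f}")
    ```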

  15. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  16. An accurate modeling, simulation, and analysis tool for predicting and estimating Raman LIDAR system performance

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.

    2007-09-01

    BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of our calculated cross section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.
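
    The kind of performance estimate described above ultimately rests on a single-scatter Raman lidar equation. The sketch below evaluates that equation with placeholder inputs; it is not the BAE Systems model, which couples the cross-section calculation to LADAR geometry and USAF atmospheric propagation codes.

    ```python
    # Hedged sketch of the single-scatter Raman lidar equation often used in this
    # kind of performance model:
    #   N(R) = N0 * eta * O(R) * (A / R^2) * n * (dsigma/dOmega) * dR * T_laser(R) * T_raman(R)
    # Every number below is an assumed placeholder, not a value from the BAE Systems model.
    import math

    def raman_return_photons(n0_photons, range_m, gate_m, number_density_m3,
                             dsigma_domega_m2sr, area_m2, efficiency,
                             alpha_laser_per_m, alpha_raman_per_m, overlap=1.0):
        t_out = math.exp(-alpha_laser_per_m * range_m)    # one-way transmission at the laser wavelength
        t_back = math.exp(-alpha_raman_per_m * range_m)   # return transmission at the Raman-shifted wavelength
        solid_angle = area_m2 / range_m ** 2              # receiver solid angle seen from the scattering volume
        return (n0_photons * efficiency * overlap * solid_angle * number_density_m3 *
                dsigma_domega_m2sr * gate_m * t_out * t_back)

    n_pe = raman_return_photons(
        n0_photons=1.0e17,            # transmitted photons per pulse (placeholder)
        range_m=500.0, gate_m=15.0,   # range to the bin and range-gate length (m)
        number_density_m3=2.0e25,     # assumed N2 number density in the bin (m^-3)
        dsigma_domega_m2sr=3.0e-34,   # assumed Raman backscatter cross-section (m^2/sr)
        area_m2=0.03, efficiency=0.05,
        alpha_laser_per_m=1.0e-4, alpha_raman_per_m=8.0e-5)
    print(f"~{n_pe:.1e} detected Raman photons per pulse from the gated bin")
    ```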

  17. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  18. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.

  19. Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young

    2015-07-01

    This article presents an accurate finite-difference time-domain (FDTD) dispersive modelling approach suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersive relations. To obtain accurate QCRF coefficients, we use an analytical approach and particle swarm optimisation (PSO) simultaneously. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples are used to illustrate the validity of the proposed FDTD dispersion model.
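
    The QCRF expresses the relative permittivity as a ratio of quadratics in jω. The sketch below fits its five coefficients to sampled dispersion data by ordinary complex least squares on an assumed two-pole Debye medium; the paper's own procedure instead solves an analytical matrix equation and tunes its weighting function with PSO.

    ```python
    # Hedged sketch: the quadratic complex rational function (QCRF) represents the
    # relative permittivity as
    #   eps(s) = (a0 + a1*s + a2*s^2) / (1 + b1*s + b2*s^2),  s = j*omega/omega_ref,
    # with omega_ref introduced here only to keep the fit well conditioned.  The
    # coefficients are obtained by ordinary complex least squares on sampled data;
    # the paper instead solves an analytical matrix equation and tunes its
    # weighting function with particle swarm optimisation.
    import numpy as np

    f = np.logspace(7, 10, 400)                   # 10 MHz .. 10 GHz
    w = 2.0 * np.pi * f
    s = 1j * w / (2.0 * np.pi * 1.0e9)            # normalised j*omega

    # Assumed two-pole Debye medium standing in for measured dispersion data
    eps_ref = 3.0 + 20.0 / (1.0 + 1j * w * 8.0e-12) + 15.0 / (1.0 + 1j * w * 1.0e-10)

    # Linearise eps*(1 + b1*s + b2*s^2) = a0 + a1*s + a2*s^2 and solve for the
    # five unknown coefficients in the complex least-squares sense.
    A = np.column_stack([np.ones_like(s), s, s**2, -eps_ref * s, -eps_ref * s**2])
    (a0, a1, a2, b1, b2), *_ = np.linalg.lstsq(A, eps_ref, rcond=None)

    eps_fit = (a0 + a1 * s + a2 * s**2) / (1.0 + b1 * s + b2 * s**2)
    print("max |eps_fit - eps_ref| =", np.abs(eps_fit - eps_ref).max())
    ```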

  20. An X-band waveguide measurement technique for the accurate characterization of materials with low dielectric loss permittivity.

    PubMed

    Allen, Kenneth W; Scott, Mark M; Reid, David R; Bean, Jeffrey A; Ellis, Jeremy D; Morris, Andrew P; Marsh, Jeramy M

    2016-05-01

    In this work, we present a new X-band waveguide (WR90) measurement method that permits the broadband characterization of the complex permittivity for low dielectric loss tangent material specimens with improved accuracy. An electrically long polypropylene specimen that partially fills the cross-section is inserted into the waveguide and the transmitted scattering parameter (S21) is measured. The extraction method relies on computational electromagnetic simulations, coupled with a genetic algorithm, to match the experimental S21 measurement. The sensitivity of the technique to sample length was explored by simulating specimen lengths from 2.54 to 15.24 cm, in 2.54 cm increments. Analysis of our simulated data predicts the technique will have the sensitivity to measure loss tangent values on the order of 10(-3) for materials such as polymers with relatively low real permittivity values. The ability to accurately characterize low-loss dielectric material specimens of polypropylene is demonstrated experimentally. The method was validated by excellent agreement with a free-space focused-beam system measurement of a polypropylene sheet. This technique provides the material measurement community with the ability to accurately extract material properties of low-loss material specimens over the entire X-band range. This technique could easily be extended to other frequency bands. PMID:27250447

  1. An X-band waveguide measurement technique for the accurate characterization of materials with low dielectric loss permittivity

    NASA Astrophysics Data System (ADS)

    Allen, Kenneth W.; Scott, Mark M.; Reid, David R.; Bean, Jeffrey A.; Ellis, Jeremy D.; Morris, Andrew P.; Marsh, Jeramy M.

    2016-05-01

    In this work, we present a new X-band waveguide (WR90) measurement method that permits the broadband characterization of the complex permittivity for low dielectric loss tangent material specimens with improved accuracy. An electrically long polypropylene specimen that partially fills the cross-section is inserted into the waveguide and the transmitted scattering parameter (S21) is measured. The extraction method relies on computational electromagnetic simulations, coupled with a genetic algorithm, to match the experimental S21 measurement. The sensitivity of the technique to sample length was explored by simulating specimen lengths from 2.54 to 15.24 cm, in 2.54 cm increments. Analysis of our simulated data predicts the technique will have the sensitivity to measure loss tangent values on the order of 10^-3 for materials such as polymers with relatively low real permittivity values. The ability to accurately characterize low-loss dielectric material specimens of polypropylene is demonstrated experimentally. The method was validated by excellent agreement with a free-space focused-beam system measurement of a polypropylene sheet. This technique provides the material measurement community with the ability to accurately extract material properties of low-loss material specimens over the entire X-band range. This technique could easily be extended to other frequency bands.
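
    The extraction idea, choosing the permittivity whose simulated S21 best matches the measurement, can be illustrated with a much simpler forward model than the paper's. The sketch below uses the closed-form TE10 transmission through a fully filled WR90 section (the real specimen only partially fills the cross-section and requires a full-wave solver) and scipy's differential evolution in place of the genetic algorithm; the specimen length and permittivity are assumed values.

    ```python
    # Hedged illustration of the extraction idea: choose the complex permittivity
    # whose modelled S21 best matches the measured S21.  For a self-contained
    # sketch, the forward model is the closed-form S21 of a WR90 section fully
    # filled with dielectric, and scipy's differential evolution stands in for
    # the genetic algorithm used in the paper.  All numbers are assumed.
    import numpy as np
    from scipy.optimize import differential_evolution

    C0 = 2.99792458e8
    A_WR90 = 22.86e-3                 # broad-wall dimension of WR90 (m)
    LENGTH = 0.1016                   # assumed specimen length (m)

    def s21_filled_guide(freq_hz, eps_r):
        """Closed-form TE10 transmission through a fully filled waveguide section."""
        w = 2.0 * np.pi * freq_hz
        kc = np.pi / A_WR90
        g0 = 1j * np.sqrt((w / C0) ** 2 - kc ** 2 + 0j)          # empty-guide propagation constant
        g1 = 1j * np.sqrt((w / C0) ** 2 * eps_r - kc ** 2 + 0j)  # filled-section propagation constant
        gamma = (g0 - g1) / (g0 + g1)                            # interface reflection coefficient
        t = np.exp(-g1 * LENGTH)
        return (1.0 - gamma ** 2) * t / (1.0 - gamma ** 2 * t ** 2)

    freqs = np.linspace(8.2e9, 12.4e9, 101)                      # X band

    # Synthetic "measurement": assumed polypropylene-like permittivity
    eps_true = 2.25 * (1.0 - 1j * 3.0e-4)
    s21_meas = s21_filled_guide(freqs, eps_true)

    def misfit(x):
        eps_trial = x[0] * (1.0 - 1j * x[1])                     # x = (eps_r', tan_delta)
        return np.sum(np.abs(s21_filled_guide(freqs, eps_trial) - s21_meas) ** 2)

    result = differential_evolution(misfit, bounds=[(1.5, 4.0), (1.0e-5, 1.0e-2)],
                                    seed=1, tol=1e-12)
    print("recovered eps_r' = %.4f, tan_delta = %.2e" % (result.x[0], result.x[1]))
    ```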

  2. Accurate Monitoring Leads to Effective Control and Greater Learning of Patient Education Materials

    ERIC Educational Resources Information Center

    Rawson, Katherine A.; O'Neil, Rochelle; Dunlosky, John

    2011-01-01

    Effective management of chronic diseases (e.g., diabetes) can depend on the extent to which patients can learn and remember disease-relevant information. In two experiments, we explored a technique motivated by theories of self-regulated learning for improving people's learning of information relevant to managing a chronic disease. Materials were…

  3. An accurate elasto-plastic frictional tangential force displacement model for granular-flow simulations: Displacement-driven formulation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Vu-Quoc, Loc

    2007-07-01

    We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang. An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang. An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1999) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed TFD model is accurate, and is validated against nonlinear finite element analyses involving plastic flows in both the loading and unloading conditions. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations. The model is shown to be accurate and is validated against nonlinear elasto-plastic finite-element analysis.

  4. Identification of accurate nonlinear rainfall-runoff models with unique parameters

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N.

    2009-04-01

    We propose a strategy to identify models with unique parameters that yield accurate streamflow predictions, given a time-series of rainfall inputs. The procedure consists of five general steps. First, an a priori range of model structures is specified based on prior general and site-specific hydrologic knowledge. To this end, we rely on a flexible model code that allows a specification of a wide range of model structures, from simple to complex. Second, using global optimization each model structure is calibrated to a record of rainfall-runoff data, yielding optimal parameter values for each model structure. Third, accuracy of each model structure is determined by estimating model prediction errors using independent validation and statistical theory. Fourth, parameter identifiability of each calibrated model structure is estimated by means of Monte Carlo Markov Chain simulation. Finally, an assessment is made about each model structure in terms of its accuracy of mimicking rainfall-runoff processes (step 3), and the uniqueness of its parameters (step 4). The procedure results in the identification of the most complex and accurate model supported by the data, without causing parameter equifinality. As such, it provides insight into the information content of the data for identifying nonlinear rainfall-runoff models. We illustrate the method using rainfall-runoff data records from several MOPEX basins in the US.

  5. Modeling shocks in periodic lattice materials

    NASA Astrophysics Data System (ADS)

    Messner, Mark; Barham, Matthew; Barton, Nathan

    2015-06-01

    Periodic lattice materials have an excellent density-to-stiffness ratio, with the elastic stiffness of stretch dominated lattices scaling linearly with relative density. Recent developments in additive manufacturing techniques enable the use of lattice materials in situations where the response of the material to shock loading may become significant. Current continuum models do not describe the response of such lattice materials subject to shocks. This presentation details the development of continuum models suitable for representing shock propagation in periodic lattice materials, particularly focusing on the transition between elastic and plastic response. In the elastic regime, the material retains its periodic structure and equivalent continuum models of infinite, periodic truss structures accurately reproduce characteristics of stretch-dominated lattices. At higher velocities, the material tends to lose its initial lattice structure and begins to resemble a foam or a solid with dispersed voids. Capturing the transition between these regimes can be computationally challenging. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. Molecular models and simulations of layered materials.

    SciTech Connect

    Kalinichev, Andrey G.; Cygan, Randall Timothy; Heinz, Hendrik; Greathouse, Jeffery A.

    2008-11-01

    The micro- to nano-sized nature of layered materials, particularly characteristic of naturally occurring clay minerals, limits our ability to fully interrogate their atomic dispositions and crystal structures. The low symmetry, multicomponent compositions, defects, and disorder phenomena of clays and related phases necessitate the use of molecular models and modern simulation methods. Computational chemistry tools based on classical force fields and quantum-chemical methods of electronic structure calculations provide a practical approach to evaluate structure and dynamics of the materials on an atomic scale. Combined with classical energy minimization, molecular dynamics, and Monte Carlo techniques, quantum methods provide accurate models of layered materials such as clay minerals, layered double hydroxides, and clay-polymer nanocomposites.

  7. Viscoelastic models for polymeric composite materials

    NASA Astrophysics Data System (ADS)

    Bardenhagen, S. G.; Harstad, E. N.; Foster, J. C.; Maudlin, P. J.

    1996-05-01

    An improved model of the mechanical properties of the explosive contained in conventional munitions is needed to accurately simulate performance and accident scenarios in weapons storage facilities. A specific class of explosives can be idealized as a mixture of two components: energetic crystals randomly suspended in a polymeric matrix (binder). Strength characteristics of each component material are important in the macroscopic behavior of the composite (explosive). Of interest here is the determination of an appropriate constitutive law for a polyurethane binder material. A Taylor Cylinder impact test, and uniaxial stress tension and compression tests at various strain rates, have been performed on the polyurethane. As is evident from time-resolved Taylor Cylinder profiles, the material undergoes very large strains (>100%) and yet recovers its initial configuration. A viscoelastic constitutive law is proposed for the polyurethane and implemented in the finite element, explicit, continuum mechanics code EPIC. The Taylor Cylinder impact experiment was simulated and the results compared with experiment. Modeling improvements are discussed.
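
    The sketch below shows the generic building block behind such rate-dependent binder models: a one-dimensional standard-linear-solid (Zener) stress update, integrated recursively so that the material is stiff at high strain rates yet relaxes and fully recovers, as the Taylor Cylinder profiles suggest. The moduli, relaxation time and function names are illustrative assumptions; this is not the constitutive law implemented in EPIC.

```python
import numpy as np

def zener_stress(strain, dt, E_inf=5.0e6, E_1=20.0e6, tau=1.0e-3):
    """1-D standard-linear-solid stress update for a prescribed strain history."""
    stress = np.zeros_like(strain)
    h = 0.0                                      # internal stress of the Maxwell branch
    for n in range(1, len(strain)):
        d_eps = strain[n] - strain[n - 1]
        # recursive exponential integrator for the Maxwell branch
        h = np.exp(-dt / tau) * h + E_1 * np.exp(-0.5 * dt / tau) * d_eps
        stress[n] = E_inf * strain[n] + h        # equilibrium + viscous contribution
    return stress

# Usage: ramp to 100% strain in 5 ms, then hold -> stress relaxes toward E_inf * 1.0
dt = 1.0e-5
t = np.arange(0.0, 0.02, dt)
eps = np.clip(t / 0.005, 0.0, 1.0)
sigma = zener_stress(eps, dt)
print(sigma[int(0.005 / dt)], sigma[-1])         # peak vs relaxed stress
```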

  8. Material modeling and structural analysis with the microplane constitutive model

    NASA Astrophysics Data System (ADS)

    Brocca, Michele

    The microplane model for shape memory alloys is shown to accurately reproduce the behavior observed experimentally in uniaxial and triaxial tests. Finally, the microplane model for cellular materials is successfully used to perform finite element analysis of failure of sandwich beams by core indentation.

  9. A Material Model for FE-Simulation of UD Composites

    NASA Astrophysics Data System (ADS)

    Fischer, Sebastian

    2016-04-01

    Composite materials are being increasingly used for industrial applications. CFRP is particularly suitable for lightweight construction due to its high specific stiffness and strength properties. Simulation methods are needed during the development process in order to reduce the effort for prototypes and testing. This is particularly important for CFRP, as the material is costly. For accurate simulations, a realistic material model is needed. In this paper, a material model for the simulation of UD-composites including non-linear material behaviour and damage is developed and implemented in Abaqus. The material model is validated by comparison with test results on a range of test specimens.

  10. Global nuclear material control model

    SciTech Connect

    Dreicer, J.S.; Rutherford, D.A.

    1996-05-01

    The nuclear danger can be reduced by a system for global management, protection, control, and accounting as part of a disposition program for special nuclear materials. The development of an international fissile material management and control regime requires conceptual research supported by an analytical and modeling tool that treats the nuclear fuel cycle as a complete system. Such a tool must represent the fundamental data, information, and capabilities of the fuel cycle including an assessment of the global distribution of military and civilian fissile material inventories, a representation of the proliferation-pertinent physical processes, and a framework supportive of a national or international perspective. The authors have developed a prototype global nuclear material management and control systems analysis capability, the Global Nuclear Material Control (GNMC) model. The GNMC model establishes the framework for evaluating the global production, disposition, and safeguards and security requirements for fissile nuclear material.

  11. Development of modified cable models to simulate accurate neuronal active behaviors

    PubMed Central

    2014-01-01

    In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted. PMID:25277743

  12. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur.

    PubMed

    Panagiotopoulou, O; Wilshin, S D; Rayfield, E J; Shefelbine, S J; Hutchinson, J R

    2012-02-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form-function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810

  13. Particle Image Velocimetry Measurements in an Anatomically-Accurate Scaled Model of the Mammalian Nasal Cavity

    NASA Astrophysics Data System (ADS)

    Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent

    2013-11-01

    The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.

  14. UTILIZING A CHIRP SONAR TO ACCURATELY CHARACTERIZE NEWLY DEPOSITED MATERIAL AT THE CALCASIEU OCEAN DREDGED MATERIAL DISPOSAL SITE, LOUISIANA

    EPA Science Inventory

    The distribution of dredged sediments is measured at the Calcasieu Ocean Dredged Material Disposal Site (ODMDS) using a chirp sonar immediately after disposal and two months later. Subbottom reflection data, generated by a chirp sonar transmitting a 4 to 20 kHz FM sweep, is proces...

  15. Viscoelastic models for explosive binder materials

    SciTech Connect

    Bardenhagen, S.G.; Harstad, E.N.; Maudlin, P.J.; Gray, G.T.; Foster, J.C. Jr.

    1997-07-01

    An improved model of the mechanical properties of the explosive contained in conventional munitions is needed to accurately simulate performance and accident scenarios in weapons storage facilities. A specific class of explosives can be idealized as a mixture of two components: energetic crystals randomly suspended in a polymeric matrix (binder). Strength characteristics of each component material are important in the macroscopic behavior of the composite (explosive). Of interest here is the determination of an appropriate constitutive law for a polyurethane binder material. This paper is a continuation of previous work in modeling polyurethane at moderately high strain rates and for large deformations. Simulation of a large deformation (strains in excess of 100%) Taylor Anvil experiment revealed numerical difficulties which have been addressed. Additional experimental data have been obtained including improved resolution Taylor Anvil data, and stress relaxation data at various strain rates. A thorough evaluation of the candidate viscoelastic constitutive model is made and possible improvements discussed.

  16. Viscoelastic Models for Explosive Binder Materials

    NASA Astrophysics Data System (ADS)

    Bardenhagen, S. G.; Harstad, E. N.; Maudlin, P. J.; Gray, G. T.; Foster, J. C., Jr.

    1997-07-01

    An improved model of the mechanical properties of the explosive contained in conventional munitions is needed to accurately simulate performance and accident scenarios in weapons storage facilities. A specific class of explosives can be idealized as a mixture of two components: energetic crystals randomly suspended in a polymeric matrix (binder). Strength characteristics of each component material are important in the macroscopic behavior of the composite (explosive). Of interest here is the determination of an appropriate constitutive law for a polyurethane binder material. This paper is a continuation of previous work in modeling polyurethane at moderately high strain rates and for large deformations. Simulation of a large deformation (strains in excess of 100%) Taylor Anvil experiment revealed numerical difficulties which have been addressed. Additional experimental data have been obtained including improved resolution Taylor Anvil data, and stress relaxation data at various strain rates. A thorough evaluation of the candidate viscoelastic constitutive model is made and possible improvements discussed.

  17. Viscoelastic models for explosive binder materials

    NASA Astrophysics Data System (ADS)

    Bardenhagen, S. G.; Harstad, E. N.; Maudlin, P. J.; Gray, G. T.; Foster, J. C.

    1998-07-01

    An improved model of the mechanical properties of the explosive contained in conventional munitions is needed to accurately simulate performance and accident scenarios in weapons storage facilities. A specific class of explosives can be idealized as a mixture of two components: energetic crystals randomly suspended in a polymeric matrix (binder). Strength characteristics of each component material are important in the macroscopic behavior of the composite (explosive). Of interest here is the determination of an appropriate constitutive law for a polyurethane binder material. This paper is a continuation of previous work in modeling polyurethane at moderately high strain rates and for large deformations. Simulation of a large deformation (strains in excess of 100%) Taylor Anvil experiment revealed numerical difficulties which have been addressed. Additional experimental data have been obtained including improved resolution Taylor Anvil data, and stress relaxation data at various strain rates. A thorough evaluation of the candidate viscoelastic constitutive model is made and possible improvements discussed.

  18. Parameter Estimation for Viscoplastic Material Modeling

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.

    1997-01-01

    A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. In addition to the ever-increasing complexity of recent viscoplastic models of this type, they often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.

  19. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has been globally earlier by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to predict tree budbreak and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
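
    For concreteness, the sketch below implements the simplest member of the one-phase family, a thermal-time (growing degree-day) model: forcing units accumulate on days warmer than a base temperature and budburst is predicted once a critical sum is reached. The start date, base temperature and critical forcing value are illustrative assumptions, not fitted parameters from the study.

```python
import numpy as np

def predict_budburst(daily_temp, t_start=1, t_base=5.0, f_crit=150.0):
    """daily_temp: mean temperature per day of year; returns predicted budburst day."""
    forcing = 0.0
    for day in range(t_start, len(daily_temp)):
        forcing += max(daily_temp[day] - t_base, 0.0)   # growing degree-days
        if forcing >= f_crit:
            return day
    return None                                         # no budburst predicted

# Usage: synthetic sinusoidal annual temperature cycle
days = np.arange(365)
temps = 10.0 + 12.0 * np.sin(2 * np.pi * (days - 105) / 365)
print(predict_budburst(temps))
```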

  20. Building an accurate 3D model of a circular feature for robot vision

    NASA Astrophysics Data System (ADS)

    Li, L.

    2012-06-01

    In this paper, an accurate 3D model of a circular feature is built, with error compensation, for robot vision. We propose an efficient method of fitting ellipses to data points by minimizing the algebraic distance subject to the constraint that the conic is an ellipse, and solving for the ellipse parameters with a direct ellipse-fitting method. By analysing the 3D geometrical representation in a perspective projection scheme, the 3D position of a circular feature with known radius can be obtained. A set of identical circles, machined on a calibration board whose centres were known, was captured with a camera and analysed using the model developed with our method. Experimental results show that our method is more accurate than other methods.
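
    A simplified version of the algebraic-distance idea is sketched below: the general conic a x^2 + b xy + c y^2 + d x + e y + f = 0 is fitted by minimizing the algebraic distance under a unit-norm constraint via an SVD, and the ellipse condition b^2 - 4ac < 0 is checked afterwards. The paper's direct ellipse-fitting method builds the ellipse constraint into the solution itself and then recovers the 3D circle pose, so this sketch is only an illustrative approximation of the first step.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic fit by algebraic distance with ||[a..f]|| = 1."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)              # minimizer = right singular vector of
    params = vt[-1]                          # the smallest singular value
    a, b, c = params[:3]
    return params, (b**2 - 4 * a * c < 0)    # parameters, is-it-an-ellipse flag

# Usage: noisy samples of an ellipse centred at (2, 1)
t = np.linspace(0, 2 * np.pi, 200)
x = 2 + 3.0 * np.cos(t) + 0.01 * np.random.randn(t.size)
y = 1 + 1.5 * np.sin(t) + 0.01 * np.random.randn(t.size)
params, is_ellipse = fit_conic(x, y)
print(is_ellipse, np.round(params, 3))
```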

  1. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  2. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    PubMed

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  3. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers

    PubMed Central

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-01-01

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson’s ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers. PMID:26510769

  4. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-01-01

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers. PMID:26510769

  5. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers

    NASA Astrophysics Data System (ADS)

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-01

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson’s ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.

  6. Accurate and efficient halo-based galaxy clustering modelling with simulations

    NASA Astrophysics Data System (ADS)

    Zheng, Zheng; Guo, Hong

    2016-06-01

    Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.

  7. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  8. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has been earlier by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay or even an inability to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of that date, with, however, a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results call for the urgent need of massive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707

  9. Accurate protein structure modeling using sparse NMR data and homologous structure information

    PubMed Central

    Thompson, James M.; Sgourakis, Nikolaos G.; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L.; Szyperski, Thomas; Montelione, Gaetano T.; Baker, David

    2012-01-01

    While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining 1H, 13C, and 15N backbone and 13Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2–1.9 Å relative to the conventionally determined NMR ensembles and of 0.9–1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments. PMID:22665781

  10. Micromechanical modeling of advanced materials

    SciTech Connect

    Silling, S.A.; Taylor, P.A.; Wise, J.L.; Furnish, M.D.

    1994-04-01

    Funded as a laboratory-directed research and development (LDRD) project, the work reported here focuses on the development of a computational methodology to determine the dynamic response of heterogeneous solids on the basis of their composition and microstructural morphology. Using the solid dynamics wavecode CTH, material response is simulated on a scale sufficiently fine to explicitly represent the material's microstructure. Conducting "numerical experiments" on this scale, the authors explore the influence that the microstructure exerts on the material's overall response. These results are used in the development of constitutive models that take into account the effects of microstructure without explicit representation of its features. Applying this methodology to a glass-reinforced plastic (GRP) composite, the authors examined the influence of various aspects of the composite's microstructure on its response in a loading regime typical of impact and penetration. As a prerequisite to the microscale modeling effort, they conducted extensive materials testing on the constituents, S-2 glass and epoxy resin (UF-3283), obtaining the first Hugoniot and spall data for these materials. The results of this work are used in the development of constitutive models for GRP materials in transient-dynamics computer wavecodes.

  11. Coarse-grained red blood cell model with accurate mechanical properties, rheology and dynamics.

    PubMed

    Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George E

    2009-01-01

    We present a coarse-grained red blood cell (RBC) model with accurate and realistic mechanical properties, rheology and dynamics. The modeled membrane is represented by a triangular mesh which incorporates in-plane shear energy, bending energy, and area and volume conservation constraints. The macroscopic membrane elastic properties are imposed through semi-analytic theory, and are matched with those obtained in optical tweezers stretching experiments. Rheological measurements characterized by time-dependent complex modulus are extracted from the membrane thermal fluctuations, and compared with those obtained from the optical magnetic twisting cytometry results. The results allow us to define a meaningful characteristic time of the membrane. The dynamics of RBCs observed in shear flow suggests that a purely elastic model for the RBC membrane is not appropriate, and therefore a viscoelastic model is required. The set of proposed analyses and numerical tests can be used as a complete model testbed in order to calibrate the modeled viscoelastic membranes to accurately represent RBCs in health and disease. PMID:19965026

  12. Stochastic multiscale modeling of polycrystalline materials

    NASA Astrophysics Data System (ADS)

    Wen, Bin

    Mechanical properties of engineering materials are sensitive to the underlying random microstructure. Quantification of mechanical property variability induced by microstructure variation is essential for the prediction of extreme properties and microstructure-sensitive design of materials. Recent advances in high throughput characterization of polycrystalline microstructures have resulted in huge data sets of microstructural descriptors and image snapshots. To utilize these large scale experimental data for computing the resulting variability of macroscopic properties, appropriate mathematical representation of microstructures is needed. By exploring the space containing all admissible microstructures that are statistically similar to the available data, one can estimate the distribution/envelope of possible properties by employing efficient stochastic simulation methodologies along with robust physics-based deterministic simulators. The focus of this thesis is on the construction of low-dimensional representations of random microstructures and the development of efficient physics-based simulators for polycrystalline materials. By adopting appropriate stochastic methods, such as Monte Carlo and Adaptive Sparse Grid Collocation methods, the variability of microstructure-sensitive properties of polycrystalline materials is investigated. The primary outcomes of this thesis include: (1) Development of data-driven reduced-order representations of microstructure variations to construct the admissible space of random polycrystalline microstructures. (2) Development of accurate and efficient physics-based simulators for the estimation of material properties based on mesoscale microstructures. (3) Investigating property variability of polycrystalline materials using efficient stochastic simulation methods in combination with the above two developments. The uncertainty quantification framework developed in this work integrates information science and materials science, and

  13. A Viscoelastic Constitutive Model Can Accurately Represent Entire Creep Indentation Tests of Human Patella Cartilage

    PubMed Central

    Pal, Saikat; Lindsey, Derek P.; Besier, Thor F.; Beaupre, Gary S.

    2013-01-01

    Cartilage material properties provide important insights into joint health, and cartilage material models are used in whole-joint finite element models. Although the biphasic model representing experimental creep indentation tests is commonly used to characterize cartilage, cartilage short-term response to loading is generally not characterized using the biphasic model. The purpose of this study was to determine the short-term and equilibrium material properties of human patella cartilage using a viscoelastic model representation of creep indentation tests. We performed 24 experimental creep indentation tests from 14 human patellar specimens ranging in age from 20 to 90 years (median age 61 years). We used a finite element model to reproduce the experimental tests and determined cartilage material properties from viscoelastic and biphasic representations of cartilage. The viscoelastic model consistently provided excellent representation of the short-term and equilibrium creep displacements. We determined initial elastic modulus, equilibrium elastic modulus, and equilibrium Poisson’s ratio using the viscoelastic model. The viscoelastic model can represent the short-term and equilibrium response of cartilage and may easily be implemented in whole-joint finite element models. PMID:23027200

  14. Accurate Analytic Results for the Steady State Distribution of the Eigen Model

    NASA Astrophysics Data System (ADS)

    Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun

    2016-04-01

    The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied for the case of small genome length N, as well as for cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.

  15. DREDGED MATERIAL DISPOSAL MANAGEMENT MODELS

    EPA Science Inventory

    US Army Corps of Engineers public web site with computer models, available for download, used in evaluating various aspects of dredging and dredged material disposal. (landfill and water Quality models are also available at this site.) The site includes the following dredged mate...

  16. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.

    2016-06-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 10 h Mpc-1, and we present theoretically motivated extensions to cover non-minimally coupled scalar fields, massive neutrinos and Vainshtein screened modified gravity models that result in few per cent accurate power spectra for k < 10 h Mpc-1. For chameleon screened models, we achieve only 10 per cent accuracy for the same range of scales. Finally, we use our halo model to investigate degeneracies between different extensions to the standard cosmological model, finding that the impact of baryonic feedback on the non-linear matter power spectrum can be considered independently of modified gravity or massive neutrino extensions. In contrast, considering the impact of modified gravity and massive neutrinos independently results in biased estimates of power at the level of 5 per cent at scales k > 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.

  17. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    SciTech Connect

    Song, Shoujun Ge, Lefei; Ma, Shaojie; Zhang, Man

    2014-04-15

    Given the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained via the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.
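
    The hybrid training idea, a stochastic global search that supplies initial weights followed by deterministic gradient refinement, is sketched below on a toy regression problem. A one-hidden-layer tanh network stands in for the wavelet network, the genetic stage is a bare-bones selection/crossover/mutation loop, and the gradient stage uses a numerical gradient for brevity; all of these are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])      # surrogate nonlinear surface

H = 12                                             # hidden units
n_w = 2 * H + H + H + 1                            # W1, b1, w2, b2

def predict(w, X):
    W1 = w[:2 * H].reshape(2, H)
    b1, w2, b2 = w[2 * H:3 * H], w[3 * H:4 * H], w[-1]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Stage 1: crude genetic search (truncation selection, uniform crossover, mutation)
pop = rng.normal(0, 0.5, (40, n_w))
for _ in range(60):
    fit = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:20]]
    pa = parents[rng.integers(0, 20, 40)]
    pb = parents[rng.integers(0, 20, 40)]
    mask = rng.random((40, n_w)) < 0.5
    pop = np.where(mask, pa, pb) + rng.normal(0, 0.05, (40, n_w))
w = pop[np.argmin([mse(ind) for ind in pop])].copy()

# Stage 2: gradient-descent refinement from the GA initial weights
lr, eps = 0.02, 1e-5
for _ in range(300):
    grad = np.array([(mse(w + eps * e) - mse(w - eps * e)) / (2 * eps)
                     for e in np.eye(n_w)])
    w -= lr * grad
print("training MSE after GA + GD:", mse(w))
```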

  18. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  19. Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL

    NASA Astrophysics Data System (ADS)

    Ciambur, B. C.

    2015-09-01

    This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit,” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.

  20. An accurate and computationally efficient model for membrane-type circular-symmetric micro-hotplates.

    PubMed

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
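
    The core analytic ingredient mentioned above can be illustrated with the steady radial heat balance of an annular membrane region losing heat to the ambient, k t (T'' + T'/r) = h (T - T_amb), whose general solution is a combination of modified Bessel functions, T(r) = T_amb + A I0(mr) + B K0(mr). The sketch below solves this single-region case with assumed geometry, property values and fixed-temperature boundary conditions; the model in the paper additionally handles Joule heating, radiation losses, spreads and the segmentation of the outer region.

```python
import numpy as np
from scipy.special import iv, kv            # modified Bessel functions I_n, K_n

# Illustrative parameter values (assumptions): conductivity, membrane thickness,
# surface loss coefficient, ambient temperature, inner/outer radii, edge temperatures.
k_mem, thick, h_loss, T_amb = 30.0, 1.0e-6, 125.0, 25.0
r_in, r_out, T_heater, T_rim = 100e-6, 500e-6, 400.0, 25.0

m = np.sqrt(2.0 * h_loss / (k_mem * thick))  # losses from both membrane faces

# Impose T(r_in) = T_heater and T(r_out) = T_rim to find A and B
M = np.array([[iv(0, m * r_in),  kv(0, m * r_in)],
              [iv(0, m * r_out), kv(0, m * r_out)]])
rhs = np.array([T_heater - T_amb, T_rim - T_amb])
A, B = np.linalg.solve(M, rhs)

r = np.linspace(r_in, r_out, 9)
T = T_amb + A * iv(0, m * r) + B * kv(0, m * r)
print(np.round(T, 1))                        # radial temperature profile, degC
```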

  1. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214

  2. Accurate and efficient modeling of global seismic wave propagation for an attenuative Earth model including the center

    NASA Astrophysics Data System (ADS)

    Toyokuni, Genti; Takenaka, Hiroshi

    2012-06-01

    We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for the elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since the nature of Earth material is both elastic solid and viscous fluid, we should solve stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which makes viscoelasticity difficult to treat in time-domain computations such as the FDM. However, we now have a method using so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce the multi-domain, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to perturb the FD stability criterion around the Earth center. In addition, we propose a technique to avoid the singularity problem of the wave equation in spherical coordinates at the Earth center. We develop a scheme to calculate wavefield variables on this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
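
    The memory-variable idea can be illustrated in one dimension with a single Zener (standard-linear-solid) mechanism: the convolution in the stress-strain relation is replaced by one auxiliary variable obeying an ordinary differential equation that is integrated alongside the wavefield. The moduli, relaxation times and time step below are illustrative assumptions; the scheme in the paper uses arrays of Zener bodies in spherical coordinates to realize arbitrary Q.

```python
import numpy as np

def stress_with_memory(strain, dt, M_r=1.0e10, tau_eps=1.1e-2, tau_sig=1.0e-2):
    """M_r: relaxed modulus; tau_eps, tau_sig: strain and stress relaxation times."""
    M_u = M_r * tau_eps / tau_sig                 # unrelaxed modulus
    phi = tau_eps / tau_sig - 1.0                 # relaxation strength
    sigma = np.zeros_like(strain)
    r = 0.0                                       # memory variable
    for n in range(1, len(strain)):
        # semi-implicit step of  dr/dt = -(r + M_r * phi * strain) / tau_sig
        r = (r - (dt / tau_sig) * M_r * phi * strain[n]) / (1.0 + dt / tau_sig)
        sigma[n] = M_u * strain[n] + r            # no convolution integral needed
    return sigma

dt = 1.0e-4
t = np.arange(0.0, 0.5, dt)
eps = np.where(t > 0.01, 1.0e-6, 0.0)             # small step in strain
sig = stress_with_memory(eps, dt)
print(sig[int(0.011 / dt)], sig[-1])              # unrelaxed vs relaxed stress
```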

  3. A Method for Accurate in silico modeling of Ultrasound Transducer Arrays

    PubMed Central

    Guenther, Drake A.; Walker, William F.

    2009-01-01

    This paper presents a new approach to improve the in silico modeling of ultrasound transducer arrays. While current simulation tools accurately predict the theoretical element spatio-temporal pressure response, transducers do not always behave as theorized. In practice, using the probe's physical dimensions and published specifications in silico, often results in unsatisfactory agreement between simulation and experiment. We describe a general optimization procedure used to maximize the correlation between the observed and simulated spatio-temporal response of a pulsed single element in a commercial ultrasound probe. A linear systems approach is employed to model element angular sensitivity, lens effects, and diffraction phenomena. A numerical deconvolution method is described to characterize the intrinsic electro-mechanical impulse response of the element. Once the response of the element and optimal element characteristics are known, prediction of the pressure response for arbitrary apertures and excitation signals is performed through direct convolution using available tools. We achieve a correlation of 0.846 between the experimental emitted waveform and simulated waveform when using the probe's physical specifications in silico. A far superior correlation of 0.988 is achieved when using the optimized in silico model. Electronic noise appears to be the main effect preventing the realization of higher correlation coefficients. More accurate in silico modeling will improve the evaluation and design of ultrasound transducers as well as aid in the development of sophisticated beamforming strategies. PMID:19041997

  4. Constitutive modeling for isotropic materials

    NASA Technical Reports Server (NTRS)

    Chan, K. S.; Lindholm, U. S.; Bodner, S. R.

    1988-01-01

    The third and fourth years of a 4-year research program, part of the NASA HOST Program, are described. The program goals were: (1) to develop and validate unified constitutive models for isotropic materials, and (2) to demonstrate their usefulness for structural analysis of hot section components of gas turbine engines. The unified models selected for development and evaluation were those of Bodner-Partom and of Walker. The unified approach for elastic-viscoplastic constitutive equations is a viable method for representing and predicting material response characteristics in the range where strain rate and temperature dependent inelastic deformations are experienced. This conclusion is reached by extensive comparison of model calculations against the experimental results of a test program of two high temperature Ni-base alloys, B1900+Hf and Mar-M247, over a wide temperature range for a variety of deformation and thermal histories including uniaxial, multiaxial, and thermomechanical loading paths. The applicability of the Bodner-Partom and the Walker models for structural applications has been demonstrated by implementing these models into the MARC finite element code and by performing a number of analyses including thermomechanical histories on components of hot sections of gas turbine engines and benchmark notch tensile specimens. The results of the 4-year program have been published in four annual reports. The results of the base program are summarized in this report. The tasks covered include: (1) development of material test procedures, (2) thermal history effects, and (3) verification of the constitutive model for an alternative material.

  5. Catastrophic models of materials destruction

    NASA Astrophysics Data System (ADS)

    Kupchishin, A. I.; Taipova, B. G.; Kupchishin, A. A.; Voronova, N. A.; Kirdyashkin, V. I.; Fursa, T. V.

    2016-02-01

    The effect of the concentration and type of fillers on the mechanical properties of a polyimide-based composite material was studied. Polyethylene terephthalate (PET, polyester), polycarbonate (PCAR) and montmorillonite (MM) were used as the fillers. The samples were prepared by mechanically blending the polyimide-based lacquer solutions with different concentrations of the second component. The concentration of the filler and its class, especially its internal structure and synthesis technology, determine the features of the physical and mechanical properties of the obtained materials. Models of catastrophic failure of the material satisfactorily describe the main features of the dependence of the tension σ on the deformation ε.

  6. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756

  7. Accurate verification of the conserved-vector-current and standard-model predictions

    SciTech Connect

    Sirlin, A.; Zucchini, R.

    1986-10-20

    An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.

  8. Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks

    PubMed Central

    Fu, Jun-Song; Liu, Yun

    2015-01-01

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system, and they perform data fusion independently of each other. Then, the results are sent to the base station, where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads are added to a blacklist and must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and delete compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
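
    The base-station check at the heart of DCHM — compare the two cluster heads' fusion results, accept them if they agree, otherwise blacklist the pair, trigger re-election and feed the outcome back to the reputation system — can be sketched as follows. The dissimilarity measure (a normalized absolute difference), the threshold and the reputation penalty are illustrative assumptions, not the definitions used in the paper.

```python
def dissimilarity(fusion_a, fusion_b):
    """Illustrative dissimilarity coefficient: normalized absolute difference."""
    return abs(fusion_a - fusion_b) / (abs(fusion_a) + abs(fusion_b) + 1e-12)

def base_station_check(cluster_id, fusion_a, fusion_b, threshold, blacklist, reputation):
    """Accept the fused value if the two cluster heads agree; otherwise blacklist
    them, penalize their reputation and request re-election in that cluster."""
    if dissimilarity(fusion_a, fusion_b) > threshold:
        blacklist.add(cluster_id)
        reputation[cluster_id] = reputation.get(cluster_id, 1.0) * 0.5   # feedback penalty
        return None, True                      # no trusted value; re-election required
    return 0.5 * (fusion_a + fusion_b), False  # trusted fused value; keep cluster heads

blacklist, reputation = set(), {}
value, reelect = base_station_check("cluster-7", 21.4, 21.6, threshold=0.05,
                                    blacklist=blacklist, reputation=reputation)
print(value, reelect, blacklist)
```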

  9. Particle Image Velocimetry Measurements in Anatomically-Accurate Models of the Mammalian Nasal Cavity

    NASA Astrophysics Data System (ADS)

    Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.

    2012-11-01

    A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.

  10. Applying an accurate spherical model to gamma-ray burst afterglow observations

    NASA Astrophysics Data System (ADS)

    Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.

    2013-05-01

    We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r⁻². We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.

  11. Fully Automated Generation of Accurate Digital Surface Models with Sub-Meter Resolution from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.

    2012-07-01

    Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows all these steps to be performed fully automatically. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.

  12. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R⁻⁵ term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
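
    To make the idea concrete, the sketch below compares the electrostatic potential computed from atomic point charges alone with the potential including atom-centred dipoles, the first correction supplied by a multipole expansion such as CAMM. The geometry, charges and dipole values are arbitrary placeholders in atomic units; the CAMM recursion that generates the higher moments from a wavefunction is not reproduced here.

```python
import numpy as np

def potential_charges(r, sites, charges):
    """Electrostatic potential at r from point charges only (atomic units)."""
    return sum(q / np.linalg.norm(r - s) for s, q in zip(sites, charges))

def potential_charges_dipoles(r, sites, charges, dipoles):
    """Potential including atom-centred dipoles: q/|d| + mu.d/|d|^3 per site."""
    v = 0.0
    for s, q, mu in zip(sites, charges, dipoles):
        d = r - s
        dist = np.linalg.norm(d)
        v += q / dist + np.dot(mu, d) / dist ** 3
    return v

# toy diatomic along z (placeholder charges and dipoles, atomic units)
sites   = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 2.13])]
charges = [0.2, -0.2]
dipoles = [np.array([0.0, 0.0, -0.10]), np.array([0.0, 0.0, -0.05])]
r = np.array([0.0, 0.0, 6.0])

print("charges only:     ", potential_charges(r, sites, charges))
print("charges + dipoles:", potential_charges_dipoles(r, sites, charges, dipoles))
```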

  13. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    PubMed Central

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  14. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull.

    PubMed

    Gröning, Flora; Jones, Marc E H; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E; Fagan, Michael J

    2013-07-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  15. An accurate and comprehensive model of thin fluid flows with inertia on curved substrates

    NASA Astrophysics Data System (ADS)

    Roberts, A. J.; Li, Zhenquan

    2006-04-01

    Consider the three-dimensional flow of a viscous Newtonian fluid upon a curved two-dimensional substrate when the fluid film is thin, as occurs in many draining, coating and biological flows. We derive a comprehensive model of the dynamics of the film, the model being expressed in terms of the film thickness η and the average lateral velocity ū. Centre manifold theory assures us that the model accurately and systematically includes the effects of the curvature of the substrate, gravitational body force, fluid inertia and dissipation. The model resolves wavelike phenomena in the dynamics of viscous fluid flows over arbitrarily curved substrates such as cylinders, tubes and spheres. We briefly illustrate its use in simulating drop formation on cylindrical fibres, wave transitions, three-dimensional instabilities, Faraday waves, viscous hydraulic jumps, flow vortices in a compound channel and flow down and up a step. These models are the most complete models for thin-film flow of a Newtonian fluid; many other thin-film models can be obtained by different restrictions and truncations of the model derived here.

  16. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    Existing methods for modeling multi-spiral surface geometry include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. These methods have shortcomings, such as a large amount of calculation, complex procedures, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined using the extraction principle for the datum point cluster, an algorithm for the coupling point cluster that removes singular points, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, and the surfaces are coupled and coalesced through their multiple coupling point clusters in the Pro/E environment. Digitally accurate modeling of the spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter using the spatially parallel coupling principle, and the resulting solid model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multiple spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving the problems of considerable modeling errors in computer graphics and

  17. Molecules-in-Molecules: An Extrapolated Fragment-Based Approach for Accurate Calculations on Large Molecules and Materials.

    PubMed

    Mayhall, Nicholas J; Raghavachari, Krishnan

    2011-05-10

    We present a new extrapolated fragment-based approach, termed molecules-in-molecules (MIM), for accurate energy calculations on large molecules. In this method, we use a multilevel partitioning approach coupled with electronic structure studies at multiple levels of theory to provide a hierarchical strategy for systematically improving the computed results. In particular, we use a generalized hybrid energy expression, similar in spirit to that in the popular ONIOM methodology, that can be combined easily with any fragmentation procedure. In the current work, we explore a MIM scheme which first partitions a molecule into nonoverlapping fragments and then recombines the interacting fragments to form overlapping subsystems. By including all interactions with a cheaper level of theory, the MIM approach is shown to significantly reduce the errors arising from a single level fragmentation procedure. We report the implementation of energies and gradients and the initial assessment of the MIM method using both biological and materials systems as test cases. PMID:26610128

  18. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information.

    PubMed

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W; Jin, Ke; Du, Yingge; Neeway, James J; Ryan, Joseph V; Hu, Dehong; Zhang, Kelvin H L; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest. PMID:25953490

  19. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information

    NASA Astrophysics Data System (ADS)

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W.; Jin, Ke; Du, Yingge; Neeway, James J.; Ryan, Joseph V.; Hu, Dehong; Zhang, Kelvin H. L.; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest.

  20. Simple and accurate modelling of the gravitational potential produced by thick and thin exponential discs

    NASA Astrophysics Data System (ADS)

    Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.

    2015-04-01

    We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples; the Milky Way thin and thick disc, and a discy dwarf galaxy.
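
    Each component of such a model is a standard Miyamoto–Nagai disc with potential Φ(R, z) = −GM / √(R² + (a + √(z² + b²))²), and the combined potential is simply the sum of three of them, as sketched below. The masses and scale parameters in the example are placeholders; the actual fitting parameters for a chosen disc mass, scalelength and thickness come from the tables (or web interface) provided with the paper.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def miyamoto_nagai_potential(R, z, M, a, b):
    """Miyamoto-Nagai potential Phi(R, z) in (km/s)^2; M in Msun, lengths in kpc."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def three_component_disc(R, z, components):
    """Sum of three Miyamoto-Nagai discs approximating an exponential disc."""
    return sum(miyamoto_nagai_potential(R, z, M, a, b) for (M, a, b) in components)

# placeholder components (M [Msun], a [kpc], b [kpc]); real values should be taken
# from the paper's fitting tables for the desired scalelength and thickness
components = [(3.0e10, 4.5, 0.3), (2.0e10, 9.0, 0.3), (1.0e10, 1.5, 0.3)]

R, z = 8.0, 0.1  # evaluate near the mid-plane at a solar-like radius
print(f"Phi(R={R} kpc, z={z} kpc) = {three_component_disc(R, z, components):.1f} (km/s)^2")
```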

  1. Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.

    PubMed

    Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit

    2015-05-01

    A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m²) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m²) and 4 temperatures (10-30 °C), yielding an accuracy of ± 15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full-scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model herein validated also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of only requiring one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step for refining feasibility assessments of algae biotechnologies. PMID:25502920

  2. Turtle utricle dynamic behavior using a combined anatomically accurate model and experimentally measured hair bundle stiffness

    PubMed Central

    Davis, J.L.; Grant, J.W.

    2014-01-01

    Anatomically correct turtle utricle geometry was incorporated into two finite element models. The geometrically accurate model included an appropriately shaped macular surface and otoconial layer, compact gel and column filament (or shear) layer thicknesses and thickness distributions. The first model included a shear layer in which the effect of hair bundle stiffness was included as part of the shear layer modulus. This solid model's undamped natural frequency was matched to an experimentally measured value. This frequency match established a realistic value of the effective shear layer Young's modulus of 16 Pascals. We feel this is the most accurate prediction of this shear layer modulus, and it fits with other estimates (Kondrachuk, 2001b). The second model incorporated only beam elements in the shear layer to represent hair cell bundle stiffness. The beam element stiffnesses were further distributed to represent their location on the neuroepithelial surface. Experimentally measured mean stiffness values of striolar hair cell bundles were used in the striolar region, and mean extrastriolar hair cell bundle stiffness values were used in the extrastriolar region. The results from this second model indicated that hair cell bundle stiffness contributes approximately 40% of the overall stiffness of the shear layer-hair cell bundle complex. This analysis shows that high-mass saccules, in general, achieve high gain at the sacrifice of frequency bandwidth. We propose that the mechanism by which this is achieved is an increase in otoconial layer mass. The theoretical difference in gain (deflection per acceleration) is shown for saccules with large otoconial layer mass relative to saccules and utricles with small otoconial layer mass. Also discussed is the necessity for these high-mass saccules to increase their overall system shear layer stiffness. Undamped natural frequencies and mode shapes for these sensors are shown. PMID:25445820

  3. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    PubMed Central

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553

  4. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser tissue interactions in medical applications and light propagation through turbid media.

  5. An accurate parameterization of the infrared radiative properties of cirrus clouds for climate models

    SciTech Connect

    Fu, Q.; Sun, W.B.; Yang, P.

    1998-09-01

    An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (Dge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is approximately 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.

  6. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements

    PubMed Central

    Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761

  7. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    PubMed

    Seth, Ajay; Matias, Ricardo; Veloso, António P; Delp, Scott L

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761

  8. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2014-01-01

    The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development like flutter and inlet shock position. The approach models atmospheric turbulence in its natural fractional-order form, which provides greater accuracy than traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the atmospheric turbulence fractional order modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.

  9. Materials Database Development for Ballistic Impact Modeling

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael

    2007-01-01

    A set of experimental data is being generated under the Fundamental Aeronautics Program Supersonics project to help create and validate accurate computational impact models of jet engine impact events. The data generated will include material property data generated at a range of different strain rates, from 1×10⁻⁴/sec to 5×10⁴/sec, over a range of temperatures. In addition, carefully instrumented ballistic impact tests will be conducted on flat plates and curved structures to provide material and structural response information to help validate the computational models. The material property data and the ballistic impact data will be generated using materials from the same lot, as far as possible. It was found in preliminary testing that the surface finish of test specimens has an effect on measured high strain rate tension response of AL2024. Both the maximum stress and maximum elongation are greater on specimens with a smoother finish. This report gives an overview of the testing that is being conducted and presents results of preliminary testing of the surface finish study.

  10. Problems in obtaining precise and accurate Sr isotope analysis from geological materials using laser ablation MC-ICPMS

    PubMed Central

    van der Wagt, B.; Koornneef, J. M.; Davies, G. R.

    2007-01-01

    This paper reviews the problems encountered in eleven studies of Sr isotope analysis using laser ablation multicollector inductively coupled plasma mass spectrometry (LA-MC-ICPMS) in the period 1995–2006. This technique has been shown to have great potential, but the accuracy and precision are limited by: (1) large instrumental mass discrimination, (2) laser-induced isotopic and elemental fractionations and (3) molecular interferences. The most important isobaric interferences are Kr and Rb, whereas Ca dimer/argides and doubly charged rare earth elements (REE) are limited to sample materials which contain substantial amounts of these elements. With modern laser (193 nm) and MC-ICPMS equipment, minerals with >500 ppm Sr content can be analysed with a precision of better than 100 ppm and a spatial resolution (spot size) of approximately 100 μm. The LA MC-ICPMS analysis of 87Sr/86Sr of both carbonate material and plagioclase is successful in all reported studies, although the higher 84Sr/86Sr ratios do suggest in some cases an influence of Ca dimer and/or argides. High Rb/Sr (>0.01) materials have been successfully analysed by carefully measuring the 85Rb/87Rb in standard material and by applying the standard-sample bracketing method for accurate Rb corrections. However, published LA-MC-ICPMS data on clinopyroxene, apatite and sphene records differences when compared with 87Sr/86Sr measured by thermal ionisation mass spectrometry (TIMS) and solution MC-ICPMS. This suggests that further studies are required to ensure that the most optimal correction methods are applied for all isobaric interferences. PMID:18080118
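
    To make the Rb-interference and instrumental mass-bias corrections concrete, the sketch below applies the standard exponential-law normalization to ⁸⁶Sr/⁸⁸Sr = 0.1194 and strips ⁸⁷Rb from the mass-87 beam using the measured ⁸⁵Rb signal and the natural ⁸⁷Rb/⁸⁵Rb ratio (≈0.3857). The beam intensities are invented, the mass bias on the Rb ratio and the Kr corrections discussed in the review are deliberately omitted, and in practice the Rb ratio would be calibrated by standard-sample bracketing as described above.

```python
import numpy as np

# isotope masses (u) and reference ratios
M85RB, M86SR, M87, M88SR = 84.9118, 85.9093, 86.9089, 87.9056
SR86_88_TRUE = 0.1194     # canonical normalization ratio for mass-bias correction
RB87_85 = 0.3857          # natural 87Rb/85Rb (bracketing calibration omitted here)

def corrected_87sr_86sr(i85, i86, i87, i88):
    """Interference- and mass-bias-corrected 87Sr/86Sr from raw ion beams (V)."""
    # exponential-law mass-bias exponent derived from the measured 86Sr/88Sr
    beta = np.log(SR86_88_TRUE / (i86 / i88)) / np.log(M86SR / M88SR)
    # strip the isobaric 87Rb contribution from the mass-87 beam
    i87_sr = i87 - i85 * RB87_85          # Rb mass bias neglected in this sketch
    # mass-bias correct the resulting 87Sr/86Sr
    return (i87_sr / i86) * (M87 / M86SR) ** beta

# invented beam intensities (V) for a low-Rb, plagioclase-like analysis
print(f"87Sr/86Sr = {corrected_87sr_86sr(i85=0.002, i86=0.10, i87=0.072, i88=0.84):.5f}")
```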

  11. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic ₋₂Yₗₘ waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979
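
    The reduced-order-modeling idea — represent expensive waveforms in a low-dimensional linear basis and interpolate the basis coefficients over parameter space — can be illustrated with the toy sketch below. The "waveforms" here are cheap analytic signals standing in for NR simulations, and the SVD basis plus cubic interpolation is a generic stand-in for the paper's reduced-basis and empirical-interpolation machinery.

```python
import numpy as np
from scipy.interpolate import interp1d

t = np.linspace(0.0, 1.0, 2000)

def toy_waveform(q):
    """Stand-in for an expensive NR waveform parameterized by mass ratio q."""
    envelope = (1.0 + 0.2 * q) * np.exp(-((t - 0.7) ** 2) / 0.05)
    return envelope * np.sin(2 * np.pi * 20 * t + 0.8 * q)

# "training" set: expensive waveforms at a few parameter values
q_train = np.linspace(1.0, 10.0, 20)
training = np.array([toy_waveform(q) for q in q_train])

# reduced basis from an SVD of the training set; keep the dominant modes
U, S, Vt = np.linalg.svd(training, full_matrices=False)
rank = 8
basis = Vt[:rank]                              # (rank, n_samples)
coeffs = training @ basis.T                    # project training data onto the basis

# interpolate each basis coefficient across parameter space -> surrogate model
coeff_interp = [interp1d(q_train, coeffs[:, k], kind="cubic") for k in range(rank)]

def surrogate(q):
    c = np.array([f(q) for f in coeff_interp])
    return c @ basis

q_test = 3.7
err = (np.linalg.norm(surrogate(q_test) - toy_waveform(q_test))
       / np.linalg.norm(toy_waveform(q_test)))
print(f"relative surrogate error at q={q_test}: {err:.2e}")
```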

  12. Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
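
    The core Approximate Bayesian Computation loop that the framework builds on — draw candidate parameters from the prior, simulate the model, and retain only draws whose simulated output falls within a tolerance of the data — can be sketched with a simple rejection-ABC example. The toy "kinetic model" below is a one-parameter first-order decay, not the methionine-cycle model, and the thermodynamic feasibility checks, sequential sampling and model selection used in the actual framework are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
t_obs = np.linspace(0, 5, 20)

def simulate(k):
    """Toy 'kinetic model': first-order decay x(t) = exp(-k t)."""
    return np.exp(-k * t_obs)

# synthetic noisy data generated with a 'true' rate constant of 0.8
data = simulate(0.8) + rng.normal(0, 0.02, t_obs.size)

def distance(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# rejection ABC: draw k from the prior, keep it if the simulation is close to the data
posterior, eps, n_draws = [], 0.03, 50_000
for _ in range(n_draws):
    k = rng.uniform(0.0, 3.0)          # prior on the rate constant
    if distance(simulate(k), data) < eps:
        posterior.append(k)

posterior = np.array(posterior)
print(f"accepted {posterior.size} samples; "
      f"posterior mean k = {posterior.mean():.3f} +/- {posterior.std():.3f}")
```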

  13. An Accurate Model for Biomolecular Helices and Its Application to Helix Visualization

    PubMed Central

    Wang, Lincong; Qiao, Hui; Cao, Chen; Xu, Shutan; Zou, Shuxue

    2015-01-01

    Helices are the most abundant secondary structural elements in proteins and the structural forms assumed by double stranded DNAs (dsDNA). Though the mathematical expression for a helical curve is simple, none of the previous models for the biomolecular helices in either proteins or DNAs use a genuine helical curve, likely because of the complexity of fitting backbone atoms to helical curves. In this paper we model a helix as a series of different but all bona fide helical curves; each one best fits the coordinates of four consecutive backbone Cα atoms for a protein or P atoms for a DNA molecule. An implementation of the model demonstrates that it is more accurate than the previous ones for the description of the deviation of a helix from a standard helical curve. Furthermore, the accuracy of the model makes it possible to correlate deviations with structural and functional significance. When applied to helix visualization, the ribbon diagrams generated by the model are less choppy or have smaller side chain detachment than those by the previous visualization programs that typically model a helix as a series of low-degree splines. PMID:26126117

  14. Dynamic saturation in Semiconductor Optical Amplifiers: accurate model, role of carrier density, and slow light.

    PubMed

    Berger, Perrine; Alouini, Mehdi; Bourderionnet, Jérôme; Bretenaker, Fabien; Dolfi, Daniel

    2010-01-18

    We developed an improved model to predict the RF behavior and the slow light properties of the SOA, valid for any experimental conditions. It takes into account the dynamic saturation of the SOA, which can be fully characterized by a simple measurement, and relies only on material fitting parameters, independent of the optical intensity and the injected current. The present model is validated by showing a good agreement with experiments for small and large modulation indices. PMID:20173888

  15. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for Climate Modeling

    NASA Astrophysics Data System (ADS)

    Kopparla, P.; Natraj, V.; Spurr, R. J. D.; Shia, R. L.; Yung, Y. L.

    2014-12-01

    Radiative transfer (RT) computations are an essential component of energy budget calculations in climate models. However, full treatment of RT processes is computationally expensive, prompting usage of 2-stream approximations in operational climate models. This simplification introduces errors of the order of 10% in the top of the atmosphere (TOA) fluxes [Randles et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT simulations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those (few) optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Here, we extend the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Comparisons between the new model, called Universal Principal Component Analysis model for Radiative Transfer (UPCART), 2-stream models (such as those used in climate applications) and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the TOA for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and solar and viewing geometries. We demonstrate that very accurate radiative forcing estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases as compared to an exact line-by-line RT model. The model is comparable in speed to 2-stream models, potentially rendering UPCART useful for operational General Circulation Models (GCMs). The operational speed and accuracy of UPCART can be further
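
    A minimal sketch of the PCA acceleration idea follows: expensive "exact" radiative transfer calls are made only at the mean optical state of a spectral bin and at perturbations along its leading principal components, and the resulting correction factors are applied to fast approximate calculations at every spectral point. Both the "exact" and the "2-stream-like" models below are toy analytic stand-ins, and the first-order correction scheme is a simplification of the published method.

```python
import numpy as np

rng = np.random.default_rng(1)

def exact_model(tau, omega):
    """Stand-in for an expensive multiple-scattering RT calculation."""
    return np.exp(-tau) + 0.5 * omega * (1 - np.exp(-2 * tau))

def fast_model(tau, omega):
    """Stand-in for a cheap 2-stream-like approximation (slightly biased)."""
    return np.exp(-tau) + 0.45 * omega * (1 - np.exp(-1.8 * tau))

# spectral bin of correlated optical properties (optical depth, single-scatter albedo)
n_spec = 2000
tau = np.abs(rng.normal(1.0, 0.3, n_spec))
omega = np.clip(rng.normal(0.9, 0.02, n_spec), 0, 1)
X = np.column_stack([np.log(tau), omega])      # work with log optical depth

# PCA of the optical-property matrix for this bin
mean = X.mean(axis=0)
Xc = X - mean
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc.T))
order = np.argsort(eigvals)[::-1]
pcs = eigvecs[:, order]                        # principal directions
scores = Xc @ pcs                              # PC scores per spectral point

def run(model, x):
    return model(np.exp(x[0]), x[1])

# expensive calls only at the mean state and +/- perturbations along each PC
delta = scores.std(axis=0)
corr0 = np.log(run(exact_model, mean) / run(fast_model, mean))
grads = []
for k in range(pcs.shape[1]):
    xp, xm = mean + delta[k] * pcs[:, k], mean - delta[k] * pcs[:, k]
    gp = np.log(run(exact_model, xp) / run(fast_model, xp))
    gm = np.log(run(exact_model, xm) / run(fast_model, xm))
    grads.append((gp - gm) / (2 * delta[k]))   # central difference in PC space

# cheap model everywhere, corrected by the first-order PCA expansion
fast_all = fast_model(tau, omega)
correction = corr0 + scores @ np.array(grads)
approx = fast_all * np.exp(correction)
exact_all = exact_model(tau, omega)
print(f"max relative error, fast model:    {np.max(np.abs(fast_all / exact_all - 1)):.3%}")
print(f"max relative error, PCA-corrected: {np.max(np.abs(approx / exact_all - 1)):.3%}")
```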

  16. Constitutive modeling for isotropic materials (HOST)

    NASA Technical Reports Server (NTRS)

    Chan, Kwai S.; Lindholm, Ulric S.; Bodner, S. R.; Hill, Jeff T.; Weber, R. M.; Meyer, T. G.

    1986-01-01

    The results of the third year of work on a program which is part of the NASA Hot Section Technology program (HOST) are presented. The goals of this program are: (1) the development of unified constitutive models for rate dependent isotropic materials; and (2) the demonstration of the use of unified models in structural analyses of hot section components of gas turbine engines. The unified models selected for development and evaluation are those of Bodner-Partom and of Walker. A test procedure was developed for assisting the generation of a data base for the Bodner-Partom model using a relatively small number of specimens. This test procedure involved performing a tensile test at a temperature of interest that involves a succession of strain-rate changes. The results for B1900+Hf indicate that material constants related to hardening and thermal recovery can be obtained on the basis of such a procedure. Strain aging, thermal recovery, and unexpected material variations, however, preluded an accurate determination of the strain-rate sensitivity parameter is this exercise. The effects of casting grain size on the constitutive behavior of B1900+Hf were studied and no particular grain size effect was observed. A systematic procedure was also developed for determining the material constants in the Bodner-Partom model. Both the new test procedure and the method for determining material constants were applied to the alternate material, Mar-M247 . Test data including tensile, creep, cyclic and nonproportional biaxial (tension/torsion) loading were collected. Good correlations were obtained between the Bodner-Partom model and experiments. A literature survey was conducted to assess the effects of thermal history on the constitutive behavior of metals. Thermal history effects are expected to be present at temperature regimes where strain aging and change of microstructure are important. Possible modifications to the Bodner-Partom model to account for these effects are outlined

  17. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-'one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674
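
    The dead-end elimination (DEE) theory underlying the algorithm can be illustrated with the classic Goldstein criterion: rotamer r at position i may be pruned if, for some competing rotamer t at the same position, E(i_r) − E(i_t) + Σ_j min_s [E(i_r, j_s) − E(i_t, j_s)] > 0, i.e. r can never be part of the lowest-energy assignment. The sketch below applies this criterion to arbitrary self/pair energy tables; Fitmunk's dense conformer libraries and electron-density-derived hybrid energy function are not modelled here.

```python
import itertools
import numpy as np

def goldstein_dee(self_e, pair_e):
    """Prune rotamers with the Goldstein dead-end elimination criterion.

    self_e[i]      : 1-D array of self energies for rotamers at position i
    pair_e[(i, j)] : 2-D array of pairwise energies, shape (n_rot_i, n_rot_j), i < j
    Returns a boolean mask of surviving rotamers per position."""
    n_pos = len(self_e)
    alive = [np.ones(len(e), dtype=bool) for e in self_e]

    def pair(i, j):
        return pair_e[(i, j)] if (i, j) in pair_e else pair_e[(j, i)].T

    changed = True
    while changed:
        changed = False
        for i in range(n_pos):
            for r, t in itertools.permutations(np.flatnonzero(alive[i]), 2):
                total = self_e[i][r] - self_e[i][t]
                for j in range(n_pos):
                    if j == i:
                        continue
                    diff = pair(i, j)[r] - pair(i, j)[t]
                    total += diff[alive[j]].min()
                if total > 0:          # r can never beat t -> eliminate r
                    alive[i][r] = False
                    changed = True
                    break
    return alive

# tiny example: 2 positions with 3 rotamers each (arbitrary energies)
self_e = [np.array([0.0, 1.5, 3.0]), np.array([0.5, 0.0, 2.5])]
pair_e = {(0, 1): np.array([[0.0, 1.0, 0.2],
                            [0.3, 0.1, 0.4],
                            [2.0, 2.5, 1.8]])}
print([mask.tolist() for mask in goldstein_dee(self_e, pair_e)])
```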

  18. Are Quasi-Steady-State Approximated Models Suitable for Quantifying Intrinsic Noise Accurately?

    PubMed Central

    Sengupta, Dola; Kar, Sandip

    2015-01-01

    Large gene regulatory networks (GRN) are often modeled with quasi-steady-state approximation (QSSA) to reduce the huge computational time required for intrinsic noise quantification using the Gillespie stochastic simulation algorithm (SSA). However, the question remains whether the stochastic QSSA model measures the intrinsic noise as accurately as the SSA performed for a detailed mechanistic model. To address this issue, we have constructed mechanistic and QSSA models for a few frequently observed GRNs exhibiting switching behavior and performed stochastic simulations with them. Our results strongly suggest that the performance of a stochastic QSSA model in comparison to SSA performed for a mechanistic model critically relies on the absolute values of the mRNA and protein half-lives involved in the corresponding GRN. The extent of accuracy level achieved by the stochastic QSSA model calculations will depend on the level of bursting frequency generated due to the absolute value of the half-life of either mRNA or protein or for both the species. For the GRNs considered, the stochastic QSSA quantifies the intrinsic noise at the protein level with greater accuracy and for larger combinations of half-life values of mRNA and protein, whereas in the case of mRNA the satisfactory accuracy level can only be reached for limited combinations of absolute values of half-lives. Further, we have clearly demonstrated that the abundance levels of mRNA and protein hardly matter for such comparison between QSSA and mechanistic models. Based on our findings, we conclude that the QSSA model can be a good choice for evaluating intrinsic noise for other GRNs as well, provided we make a rational choice based on experimental half-life values available in the literature. PMID:26327626
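
    For reference, the SSA mentioned above is Gillespie's direct method; the sketch below runs it on a minimal birth-death gene-expression model (constitutive mRNA synthesis and first-order degradation) and quantifies the intrinsic noise via the Fano factor. This toy network and its rate constants are for illustration only and are unrelated to the specific GRNs analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def gillespie_birth_death(k_tx=10.0, k_deg=1.0, t_end=200.0):
    """Gillespie direct method for mRNA: 0 -> mRNA (k_tx), mRNA -> 0 (k_deg * m)."""
    t, m = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a1, a2 = k_tx, k_deg * m           # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)     # waiting time to the next reaction
        if rng.random() < a1 / a0:
            m += 1                         # transcription event
        else:
            m -= 1                         # degradation event
        times.append(t)
        counts.append(m)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
steady = counts[times > 50.0]              # discard the initial transient
# intrinsic-noise measure; ~1 for this Poisson process (event-weighted estimate,
# a time-weighted average would be more rigorous)
fano = steady.var() / steady.mean()
print(f"mean mRNA = {steady.mean():.1f}, Fano factor = {fano:.2f}")
```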

  19. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations

    PubMed Central

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-01-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-‘one-click’ experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674

  20. Continuum descriptions of membranes and their interaction with proteins: Towards chemically accurate models.

    PubMed

    Argudo, David; Bethel, Neville P; Marcoline, Frank V; Grabe, Michael

    2016-07-01

    Biological membranes deform in response to resident proteins leading to a coupling between membrane shape and protein localization. Additionally, the membrane influences the function of membrane proteins. Here we review contributions to this field from continuum elastic membrane models focusing on the class of models that couple the protein to the membrane. While it has been argued that continuum models cannot reproduce the distortions observed in fully-atomistic molecular dynamics simulations, we suggest that this failure can be overcome by using chemically accurate representations of the protein. We outline our recent advances along these lines with our hybrid continuum-atomistic model, and we show the model is in excellent agreement with fully-atomistic simulations of the nhTMEM16 lipid scramblase. We believe that the speed and accuracy of continuum-atomistic methodologies will make it possible to simulate large scale, slow biological processes, such as membrane morphological changes, that are currently beyond the scope of other computational approaches. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. PMID:26853937

  1. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    PubMed Central

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
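
    The linear-nonlinear pipeline described above — project the multi-electrode stimulus onto a low-dimensional subspace identified from the responses, then map the projection to a spiking probability through a nonlinearity — can be sketched on synthetic data as follows. The simulated "cell", its electrical receptive field and the logistic ground-truth nonlinearity are illustrative assumptions; the subspace here is one-dimensional and is estimated by an SVD of the spike-triggered stimuli rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n_electrodes, n_trials = 20, 5000

# synthetic ground-truth cell: sensitive mainly to a few nearby electrodes
true_erf = np.zeros(n_electrodes)
true_erf[[4, 5, 6]] = [0.6, 1.0, 0.5]

def spike_prob(stim):
    """Ground-truth response: logistic function of the ERF-weighted stimulus."""
    return 1.0 / (1.0 + np.exp(-(stim @ true_erf - 2.0)))

# random multi-electrode stimulation amplitudes and observed (binary) responses
stims = rng.normal(0.0, 1.5, (n_trials, n_electrodes))
spikes = rng.random(n_trials) < spike_prob(stims)

# 1) linear stage: SVD of spike-triggered stimuli -> estimated ERF (1-D subspace)
sta_ensemble = stims[spikes] - stims.mean(axis=0)
_, _, Vt = np.linalg.svd(sta_ensemble, full_matrices=False)
erf_est = Vt[0] * np.sign(Vt[0] @ true_erf)     # fix the arbitrary SVD sign (demo only)

# 2) nonlinear stage: histogram the projection -> empirical P(spike | projection)
proj = stims @ erf_est
bins = np.quantile(proj, np.linspace(0, 1, 11))
which = np.clip(np.digitize(proj, bins[1:-1]), 0, 9)
p_spike = np.array([spikes[which == b].mean() for b in range(10)])

print("electrodes with largest estimated ERF weights:", np.argsort(np.abs(erf_est))[-3:])
print("P(spike) vs projection decile:", np.round(p_spike, 2))
```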

  2. Hybridization modeling of oligonucleotide SNP arrays for accurate DNA copy number estimation

    PubMed Central

    Wan, Lin; Sun, Kelian; Ding, Qi; Cui, Yuehua; Li, Ming; Wen, Yalu; Elston, Robert C.; Qian, Minping; Fu, Wenjiang J

    2009-01-01

    Affymetrix SNP arrays have been widely used for single-nucleotide polymorphism (SNP) genotype calling and DNA copy number variation inference. Although numerous methods have achieved high accuracy in these fields, most studies have paid little attention to the modeling of hybridization of probes to off-target allele sequences, which can affect the accuracy greatly. In this study, we address this issue and demonstrate that hybridization with mismatch nucleotides (HWMMN) occurs in all SNP probe-sets and has a critical effect on the estimation of allelic concentrations (ACs). We study sequence binding through binding free energy and then binding affinity, and develop a probe intensity composite representation (PICR) model. The PICR model allows the estimation of ACs at a given SNP through statistical regression. Furthermore, we demonstrate with cell-line data of known true copy numbers that the PICR model can achieve reasonable accuracy in copy number estimation at a single SNP locus, by using the ratio of the estimated AC of each sample to that of the reference sample, and can reveal subtle genotype structure of SNPs at abnormal loci. We also demonstrate with HapMap data that the PICR model yields accurate SNP genotype calls consistently across samples, laboratories and even across array platforms. PMID:19586935

  4. A mathematical recursive model for accurate description of the phase behavior in the near-critical region by Generalized van der Waals Equation

    NASA Astrophysics Data System (ADS)

    Kim, Jibeom; Jeon, Joonhyeon

    2015-01-01

    Recent studies on equations of state (EOS) have reported that the generalized van der Waals (GvdW) equation represents the near-critical region poorly for non-polar and non-spherical molecules. A problem therefore remains in choosing GvdW parameters that minimize the loss in describing saturated vapor densities and vice versa. This paper describes a recursive GvdW model (rGvdW) for an accurate representation of pure fluids in the near-critical region. To evaluate the performance of rGvdW in the near-critical region, other EOS models are also applied to two groups of pure molecules: alkanes and amines. The comparison results show that rGvdW provides much more accurate and reliable predictions of pressure than the other models. This approach to formulating the EOS also gives additional insight into the physical significance of accurate pressure prediction in the near-critical region.

  5. Development and application of accurate analytical models for single active electron potentials

    NASA Astrophysics Data System (ADS)

    Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas

    2015-05-01

    The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring a further approximation for the exchange-correlation functional. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).

  6. Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?

    PubMed

    Searcy, Christopher A; Shaffer, H Bradley

    2016-04-01

    Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071

  7. Accurate and Fast Simulation of Channel Noise in Conductance-Based Model Neurons by Diffusion Approximation

    PubMed Central

    Linaro, Daniele; Storace, Marco; Giugliano, Michele

    2011-01-01

    Stochastic channel gating is the major source of intrinsic neuronal noise, whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next-generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion conductances into an effective stochastic version whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation accurately reproduces the statistical properties of the exact microscopic simulations under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us, for the first time, to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here. PMID:21423712
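
    The flavour of such a Langevin-type diffusion approximation can be conveyed with a minimal sketch for a single two-state (closed/open) channel population, where the open fraction follows a stochastic differential equation whose noise variance scales with the one-way fluxes divided by the channel count. The rate constants, channel number and simple Euler-Maruyama update below are illustrative assumptions, not the specific improved scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state channel population: closed <-> open with fixed rates (numbers invented).
alpha, beta = 0.5, 0.2      # opening / closing rates (1/ms)
N = 1000                    # number of channels
dt, T = 0.01, 50.0          # time step and total time (ms)

steps = int(T / dt)
x = np.empty(steps)         # open fraction
x[0] = alpha / (alpha + beta)

for k in range(steps - 1):
    drift = alpha * (1.0 - x[k]) - beta * x[k]
    # Diffusion term of the chemical Langevin approximation: the variance of the
    # open fraction scales as (sum of one-way fluxes) / N.
    diffusion = np.sqrt(max(alpha * (1.0 - x[k]) + beta * x[k], 0.0) / N)
    x[k + 1] = x[k] + drift * dt + diffusion * np.sqrt(dt) * rng.normal()
    x[k + 1] = min(max(x[k + 1], 0.0), 1.0)   # keep the fraction physical

print("mean open fraction:", x.mean(), " theoretical:", alpha / (alpha + beta))
```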

  8. Accurate integral equation theory for the central force model of liquid water and ionic solutions

    NASA Astrophysics Data System (ADS)

    Ichiye, Toshiko; Haymet, A. D. J.

    1988-10-01

    The atom-atom pair correlation functions and thermodynamics of the central force model of water, introduced by Lemberg, Stillinger, and Rahman, have been calculated accurately by an integral equation method which incorporates two new developments. First, a rapid new scheme has been used to solve the Ornstein-Zernike equation. This scheme combines the renormalization methods of Allnatt, and Rossky and Friedman with an extension of the trigonometric basis-set solution of Labik and co-workers. Second, by adding approximate "bridge" functions to the hypernetted-chain (HNC) integral equation, we have obtained predictions for liquid water in which the hydrogen bond length and number are in good agreement with "exact" computer simulations of the same model force laws. In addition, for dilute ionic solutions, the ion-oxygen and ion-hydrogen coordination numbers display both the physically correct stoichiometry and good agreement with earlier simulations. These results represent a measurable improvement over both a previous HNC solution of the central force model and the ex-RISM integral equation solutions for the TIPS and other rigid molecule models of water.

  9. Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.

    1994-01-01

    Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the second-order Runge-Kutta method, the lower-order Runge-Kutta method of orders one and two, and the exponential integration method. The algorithms are applied to viscoplastic models put forth by Freed and Verrilli, and by Bodner and Partom, for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). However, in general, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy worked more efficiently and accurately than (or comparably to) the other integration algorithms. Using this strategy for integrating viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
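
    A minimal sketch of an explicit second-order Runge-Kutta (Heun) step with a simple self-adaptive time-step control is given below; the error estimate, tolerances and the toy relaxation-type rate equation are illustrative assumptions, not the specific algorithms or viscoplastic models of the report.

```python
import numpy as np

def rk2_adaptive(f, y0, t0, t_end, dt0=1e-3, tol=1e-6):
    """Explicit Heun (RK2) integration with step-size control based on the
    difference between the Euler predictor and the Heun corrector."""
    t, y, dt = t0, np.asarray(y0, dtype=float), dt0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_euler = y + dt * k1                 # first-order predictor
        y_heun = y + 0.5 * dt * (k1 + k2)     # second-order corrector
        err = np.max(np.abs(y_heun - y_euler))
        if err <= tol:                        # accept the step
            t, y = t + dt, y_heun
            ts.append(t)
            ys.append(y.copy())
        # Self-adaptive step size: grow when the error is small, shrink otherwise.
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))
    return np.array(ts), np.array(ys)

# Toy stress-relaxation-like rate equation (illustrative only): d(sigma)/dt = -(sigma/tau)^n
tau, n = 10.0, 3.0
ts, ys = rk2_adaptive(lambda t, s: -np.sign(s) * np.abs(s / tau) ** n, [2.0], 0.0, 100.0)
print(f"{len(ts)} accepted steps, final value {ys[-1, 0]:.4f}")
```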

  10. An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion

    NASA Astrophysics Data System (ADS)

    Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.

    2014-11-01

    Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is connected to the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that are essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.

  11. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases makes it possible to take decisions before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter that must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the prediction horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  12. A General Pairwise Interaction Model Provides an Accurate Description of In Vivo Transcription Factor Binding Sites

    PubMed Central

    Santolini, Marc; Mora, Thierry; Hakim, Vincent

    2014-01-01

    The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIPseq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
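
    The additive form of the binding energy in a pairwise interaction model can be written down compactly: a site is scored as the sum of single-nucleotide field terms plus pairwise coupling terms, with the PWM recovered when all couplings vanish. The parameter values and scoring convention in the sketch below are invented for illustration and are not fitted maximum-entropy parameters.

```python
import numpy as np

BASES = "ACGT"
L = 4                                   # length of the toy binding site
rng = np.random.default_rng(2)

# Illustrative parameters (a real PIM would be fitted by maximum entropy):
# h[i, a]        -> single-position contribution of base a at position i (PWM-like part)
# J[i, j, a, b]  -> pairwise coupling between base a at i and base b at j (i < j)
h = rng.normal(scale=0.5, size=(L, 4))
J = rng.normal(scale=0.1, size=(L, L, 4, 4))

def pim_energy(site):
    """Binding energy additive in single nucleotides and in nucleotide pairs."""
    idx = [BASES.index(c) for c in site]
    energy = sum(h[i, a] for i, a in enumerate(idx))
    energy += sum(J[i, j, idx[i], idx[j]] for i in range(L) for j in range(i + 1, L))
    return energy

# A PWM is recovered as the special case J = 0.
print("PIM energy of ACGT:", pim_energy("ACGT"))
print("PWM-only energy of ACGT:", sum(h[i, BASES.index(c)] for i, c in enumerate("ACGT")))
```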

  13. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    NASA Astrophysics Data System (ADS)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

    GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolution - 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude compared to the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
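
    As a rough illustration of a thin plate spline fit with a smoothing term, the sketch below interpolates scattered vertical TEC samples onto a latitude/longitude grid using SciPy's thin-plate-spline radial basis functions. The synthetic observations, grid extent and smoothing value are assumptions for illustration; this is not the UWM-rt1 processing chain.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Synthetic pierce-point observations: latitude, longitude (deg) and vertical TEC (TECU).
pts = np.column_stack([rng.uniform(45, 60, 300),      # latitude
                       rng.uniform(10, 30, 300)])     # longitude
tec = 10 + 0.3 * (pts[:, 0] - 45) + np.sin(np.radians(pts[:, 1])) + rng.normal(0, 0.3, 300)

# Thin plate spline with a smoothing term: the solution trades off fidelity to the
# data against the bending energy (integral of squared second derivatives).
tps = RBFInterpolator(pts, tec, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate the TEC map on a 0.2 x 0.2 degree grid.
lat, lon = np.meshgrid(np.arange(45, 60, 0.2), np.arange(10, 30, 0.2), indexing="ij")
grid = np.column_stack([lat.ravel(), lon.ravel()])
tec_map = tps(grid).reshape(lat.shape)
print("TEC map shape:", tec_map.shape, " mean TEC:", tec_map.mean().round(2))
```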

  14. Development of Accurate Chemical Equilibrium Models for the Hanford Waste Tanks: New Thermodynamic Measurements and Model Applications

    SciTech Connect

    Felmy, Andrew R.; Mason, Marvin; Qafoku, Odeta; Xia, Yuanxian; Wang, Zheming; MacLean, Graham

    2003-03-27

    Developing accurate thermodynamic models for predicting the chemistry of the high-level waste tanks at Hanford is an extremely daunting challenge in electrolyte and radionuclide chemistry. These challenges stem from the extremely high ionic strength of the tank waste supernatants, presence of chelating agents in selected tanks, wide temperature range in processing conditions and the presence of important actinide species in multiple oxidation states. This presentation summarizes progress made to date in developing accurate models for these tank waste solutions, how these data are being used at Hanford and the important challenges that remain. New thermodynamic measurements on Sr and actinide complexation with specific chelating agents (EDTA, HEDTA and gluconate) will also be presented.

  15. Accurate Characterization of Ion Transport Properties in Binary Symmetric Electrolytes Using In Situ NMR Imaging and Inverse Modeling.

    PubMed

    Sethurajan, Athinthra Krishnaswamy; Krachkovskiy, Sergey A; Halalay, Ion C; Goward, Gillian R; Protas, Bartosz

    2015-09-17

    We used NMR imaging (MRI) combined with data analysis based on inverse modeling of the mass transport problem to determine ionic diffusion coefficients and transference numbers in electrolyte solutions of interest for Li-ion batteries. Sensitivity analyses have shown that accurate estimates of these parameters (as a function of concentration) are critical to the reliability of the predictions provided by models of porous electrodes. The inverse modeling (IM) solution was generated with an extension of the Planck-Nernst model for the transport of ionic species in electrolyte solutions. Concentration-dependent diffusion coefficients and transference numbers were derived using concentration profiles obtained from in situ (19)F MRI measurements. Material properties were reconstructed under minimal assumptions using methods of variational optimization to minimize the least-squares deviation between experimental and simulated concentration values with uncertainty of the reconstructions quantified using a Monte Carlo analysis. The diffusion coefficients obtained by pulsed field gradient NMR (PFG-NMR) fall within the 95% confidence bounds for the diffusion coefficient values obtained by the MRI+IM method. The MRI+IM method also yields the concentration dependence of the Li(+) transference number in agreement with trends obtained by electrochemical methods for similar systems and with predictions of theoretical models for concentrated electrolyte solutions, in marked contrast to the salt concentration dependence of transport numbers determined from PFG-NMR data. PMID:26247105

  16. SMARTIES: Spheroids Modelled Accurately with a Robust T-matrix Implementation for Electromagnetic Scattering

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2016-03-01

    SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.

  17. Accurate calculation of conductive conductances in complex geometries for spacecrafts thermal models

    NASA Astrophysics Data System (ADS)

    Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel

    2016-02-01

    The thermal subsystem of spacecraft and payloads is always designed with the help of Thermal Mathematical Models. In the case of the Thermal Lumped Parameter (TLP) method, the non-linear system of equations that is created is solved to calculate the temperature distribution and the heat power exchanged between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods for the determination of conductive conductances exist, but they present some limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of these two new methods.

  18. A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region

    NASA Astrophysics Data System (ADS)

    Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.

    2016-04-01

    Accurate radiative transfer (RT) calculations are necessary for many earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order-of-magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we have extended the PCA method for RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that RT runtimes are shorter by factors between 10 and 100, while root-mean-square errors are of order 0.01%.

  19. Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.

    PubMed

    Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M

    2016-06-21

    We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy. PMID:27230942
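
    The lattice vibrational free energy in the (quasi-)harmonic approximation has a standard closed form per phonon mode, F_vib = sum_i [ hbar*w_i/2 + kB*T*ln(1 - exp(-hbar*w_i/(kB*T))) ], which the sketch below sums over a set of illustrative frequencies. The mode wavenumbers are invented; a real quasi-harmonic calculation would repeat this at several cell volumes and include phonon dispersion over the Brillouin zone.

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
KB = 1.380649e-23           # J / K
C = 2.99792458e10           # speed of light in cm/s (to convert wavenumbers)

def harmonic_free_energy(wavenumbers_cm, T):
    """Vibrational Helmholtz free energy (J per cell) of a set of harmonic modes."""
    omega = 2.0 * np.pi * C * np.asarray(wavenumbers_cm, dtype=float)   # rad/s
    zpe = 0.5 * HBAR * omega                                            # zero-point energy
    thermal = KB * T * np.log1p(-np.exp(-HBAR * omega / (KB * T)))      # thermal occupation
    return float(np.sum(zpe + thermal))

# Illustrative lattice-mode wavenumbers (cm^-1) of a small molecular-crystal cell.
modes = [35.0, 60.0, 95.0, 120.0, 180.0, 240.0]
for T in (100.0, 200.0, 300.0):
    print(f"T = {T:5.1f} K   F_vib/kB = {harmonic_free_energy(modes, T) / KB:8.1f} K")
```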

  20. Oxygen-enhanced MRI accurately identifies, quantifies, and maps tumor hypoxia in preclinical cancer models

    PubMed Central

    O’Connor, James PB; Boult, Jessica KR; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff JM; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P

    2015-01-01

    There is a clinical need for non-invasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning and therapy monitoring. Oxygen enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed “Oxy-R fraction”) would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here we demonstrate that OE-MRI signals are accurate, precise and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia non-invasively and is immediately translatable to the clinic. PMID:26659574

  1. Parallel kinetic Monte Carlo simulation framework incorporating accurate models of adsorbate lateral interactions

    SciTech Connect

    Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail

    2013-12-14

    Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
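
    A stripped-down illustration of a rejection-free lattice KMC step with adsorbate lateral interactions is sketched below. The 1D lattice, rate expressions and interaction energy are invented, and only pairwise first-nearest-neighbour terms are used, which is precisely the level of description the paper argues should be generalized with cluster-expansion Hamiltonians; it is not the Zacros implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1D ring lattice of adsorption sites (0 = empty, 1 = occupied). All energetics invented.
N = 50
occ = np.zeros(N, dtype=int)
k_ads = 1.0                               # adsorption rate per empty site
nu, Ed, eps, kT = 1e3, 0.35, 0.05, 0.05   # prefactor, barrier, NN repulsion, temperature (eV)

t = 0.0
for _ in range(20000):
    # Event list: adsorption on empty sites, desorption from occupied sites.
    # The desorption barrier is lowered by repulsive first-nearest-neighbour interactions.
    nn = occ[np.arange(N) - 1] + occ[(np.arange(N) + 1) % N]
    rates = np.where(occ == 0, k_ads, nu * np.exp(-(Ed - eps * nn) / kT))
    total = rates.sum()

    # Rejection-free (Gillespie/BKL) selection: pick a site with probability rate/total,
    # then advance the clock by an exponentially distributed waiting time.
    site = rng.choice(N, p=rates / total)
    occ[site] ^= 1
    t += rng.exponential(1.0 / total)

print(f"coverage after t = {t:.3g}: {occ.mean():.2f}")
```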

  2. The S-model: A highly accurate MOST model for CAD

    NASA Astrophysics Data System (ADS)

    Satter, J. H.

    1986-09-01

    A new MOST model which combines simplicity and a logical structure with a high accuracy of only 0.5-4.5% is presented. The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation as well as the influence of the intrinsic source and drain series resistance. The decrease of the drain current due to substrate bias is incorporated too. The model is primarily intended for digital applications. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described, and a new cluster parameter is introduced which is responsible for the high accuracy of the model. The total number of parameters is 7. A still simpler β expression is derived that is suitable for only one value of the substrate bias and contains only three parameters, while maintaining the accuracy. The way in which the parameters are determined is readily suited for automatic measurement: a simple linear regression procedure, programmed in the computer that controls the measurements, produces the parameter values.

  3. Random generalized linear model: a highly accurate and interpretable ensemble predictor

    PubMed Central

    2013-01-01

    Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have found little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state of the art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
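
    A minimal sketch of the bagging-plus-random-subspace idea behind an RGLM-style predictor is shown below, using scikit-learn's logistic regression as the GLM. The data are synthetic, and details such as forward variable selection, optional interaction terms and variable importance measures, which the actual randomGLM package provides, are omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=600, n_features=40, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_bags, n_feat = 100, 10
models = []
for _ in range(n_bags):
    rows = rng.integers(0, len(X_tr), len(X_tr))              # bootstrap sample of observations
    cols = rng.choice(X_tr.shape[1], n_feat, replace=False)   # random feature subspace
    glm = LogisticRegression(max_iter=1000).fit(X_tr[rows][:, cols], y_tr[rows])
    models.append((cols, glm))

# Ensemble prediction: average the per-GLM class probabilities.
proba = np.mean([glm.predict_proba(X_te[:, cols])[:, 1] for cols, glm in models], axis=0)
accuracy = np.mean((proba > 0.5) == y_te)
print(f"bagged-GLM test accuracy: {accuracy:.3f}")
```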

  4. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes important contribution to the stability and folding of RNA molecule, but it is challenging to either measure or compute conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone based on known RNA structures for computing entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpins, bulges, internal loops, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on estimated entropy, we have developed empirical formulae for accurate calculation of entropy of long loops in different secondary structures. Our study on the effect of asymmetric size of loops suggests that loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982

  5. A stochastic model of kinetochore–microtubule attachment accurately describes fission yeast chromosome segregation

    PubMed Central

    Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline

    2012-01-01

    In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B–like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B–like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy. PMID:22412019

  6. Computationally efficient and accurate enantioselectivity modeling by clusters of molecular dynamics simulations.

    PubMed

    Wijma, Hein J; Marrink, Siewert J; Janssen, Dick B

    2014-07-28

    Computational approaches could decrease the need for the laborious high-throughput experimental screening that is often required to improve enzymes by mutagenesis. Here, we report that using multiple short molecular dynamics (MD) simulations makes it possible to accurately model enantioselectivity for large numbers of enzyme-substrate combinations at low computational costs. We chose four different haloalkane dehalogenases as model systems because of the availability of a large set of experimental data on the enantioselective conversion of 45 different substrates. To model the enantioselectivity, we quantified the frequency of occurrence of catalytically productive conformations (near attack conformations) for pairs of enantiomers during MD simulations. We found that the angle of nucleophilic attack that leads to carbon-halogen bond cleavage was a critical variable that limited the occurrence of productive conformations; enantiomers for which this angle reached values close to 180° were preferentially converted. A cluster of 20-40 very short (10 ps) MD simulations allowed adequate conformational sampling and resulted in much better agreement to experimental enantioselectivities than single long MD simulations (22 ns), while the computational costs were 50-100 fold lower. With single long MD simulations, the dynamics of enzyme-substrate complexes remained confined to a conformational subspace that rarely changed significantly, whereas with multiple short MD simulations a larger diversity of conformations of enzyme-substrate complexes was observed. PMID:24916632
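
    A sketch of how catalytically productive (near attack) conformations might be counted over many short trajectories is given below. The geometric criterion (attack angle close to 180 degrees combined with a short attack distance), the thresholds and the random angle/distance series are assumptions for illustration only, not the exact criteria or trajectories used in the study.

```python
import numpy as np

rng = np.random.default_rng(6)

def nac_fraction(angles_deg, distances_A, angle_min=160.0, dist_max=3.5):
    """Fraction of MD frames in a near-attack conformation (NAC): the nucleophilic
    attack angle is close to 180 degrees and the attack distance is short."""
    angles = np.asarray(angles_deg)
    dists = np.asarray(distances_A)
    return float(np.mean((angles >= angle_min) & (dists <= dist_max)))

# Pretend geometry series for the two enantiomers, pooled over many short MD runs.
frames = 20 * 1000                      # e.g. 20 runs x 1000 frames each
ang_R = rng.normal(165, 10, frames)     # R enantiomer samples the productive geometry more often
ang_S = rng.normal(150, 10, frames)
dist_R = rng.normal(3.3, 0.3, frames)
dist_S = rng.normal(3.5, 0.3, frames)

f_R, f_S = nac_fraction(ang_R, dist_R), nac_fraction(ang_S, dist_S)
print(f"NAC fraction R: {f_R:.3f}  S: {f_S:.3f}  ratio (selectivity proxy): {f_R / f_S:.1f}")
```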

  7. Accurate models for P-gp drug recognition induced from a cancer cell line cytotoxicity screen.

    PubMed

    Levatić, Jurica; Ćurak, Jasna; Kralj, Marijeta; Šmuc, Tomislav; Osmak, Maja; Supek, Fran

    2013-07-25

    P-glycoprotein (P-gp, MDR1) is a promiscuous drug efflux pump of substantial pharmacological importance. Taking advantage of large-scale cytotoxicity screening data involving 60 cancer cell lines, we correlated the differential biological activities of ∼13,000 compounds against cellular P-gp levels. We created a large set of 934 high-confidence P-gp substrates or nonsubstrates by enforcing agreement with an orthogonal criterion involving P-gp overexpressing ADR-RES cells. A support vector machine (SVM) was 86.7% accurate in discriminating P-gp substrates on independent test data, exceeding previous models. Two molecular features had an overarching influence: nearly all P-gp substrates were large (>35 atoms including H) and dense (specific volume of <7.3 Å(3)/atom) molecules. Seven other descriptors and 24 molecular fragments ("effluxophores") were found enriched in the (non)substrates and incorporated into interpretable rule-based models. Biological experiments on an independent P-gp overexpressing cell line, the vincristine-resistant VK2, allowed us to reclassify six compounds previously annotated as substrates, validating our method's predictive ability. Models are freely available at http://pgp.biozyne.com . PMID:23772653
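
    To illustrate how simple molecular descriptors can drive a substrate/non-substrate classifier, the sketch below trains a support vector machine on the two descriptors highlighted in the abstract (total atom count and specific volume). The synthetic compounds, the thresholds used to generate their labels and the RBF kernel are assumptions; this is not the published model, its descriptor set or its 24 effluxophores.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Synthetic compounds described by two descriptors: total atom count (including H)
# and specific volume (A^3 per atom). Labels loosely mimic the reported trend that
# substrates are large (>35 atoms) and dense (<7.3 A^3/atom).
n = 1000
atoms = rng.integers(10, 120, n).astype(float)
spec_vol = rng.uniform(5.0, 10.0, n)
label = ((atoms > 35) & (spec_vol < 7.3)).astype(int)
label ^= rng.random(n) < 0.1            # add 10% label noise

X = np.column_stack([atoms, spec_vol])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"test accuracy on synthetic descriptors: {clf.score(X_te, y_te):.2f}")
```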

  8. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  9. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    PubMed Central

    2011-01-01

    Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
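
    The core of an ensemble-based state update can be shown compactly. The sketch below implements a basic stochastic ensemble Kalman filter analysis step for a toy one-dimensional "cell density" state; the paper uses the more sophisticated Local Ensemble Transform Kalman Filter, so this is a simplified stand-in, and the dimensions, observation operator and error levels are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

def enkf_update(ensemble, H, y_obs, obs_std):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble : (n_members, n_state) prior forecast ensemble
    H        : (n_obs, n_state) linear observation operator
    y_obs    : (n_obs,) observed values
    obs_std  : observation error standard deviation
    """
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)                  # state anomalies
    Y = X @ H.T                                           # observation-space anomalies
    P_yy = Y.T @ Y / (n_members - 1) + obs_std**2 * np.eye(len(y_obs))
    P_xy = X.T @ Y / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
    # Perturb the observations so the analysis ensemble keeps the right spread.
    y_pert = y_obs + obs_std * rng.normal(size=(n_members, len(y_obs)))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

# Toy problem: 100-member ensemble of a 50-voxel density field, observed at 10 voxels.
truth = np.exp(-0.5 * ((np.arange(50) - 25) / 6.0) ** 2)
prior = truth + 0.3 + 0.2 * rng.normal(size=(100, 50))    # biased, noisy forecast ensemble
H = np.eye(50)[::5]                                       # observe every 5th voxel
obs = H @ truth + 0.05 * rng.normal(size=10)

posterior = enkf_update(prior, H, obs, obs_std=0.05)
print("prior RMSE:", np.sqrt(((prior.mean(0) - truth) ** 2).mean()).round(3),
      " posterior RMSE:", np.sqrt(((posterior.mean(0) - truth) ** 2).mean()).round(3))
```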

  10. Constitutive modeling for isotropic materials

    NASA Technical Reports Server (NTRS)

    Lindholm, Ulric S.; Chan, Kwai S.

    1986-01-01

    The objective of the program is to evaluate and develop existing constitutive models for use in finite-element structural analysis of turbine engine hot section components. The class of constitutive equations studied is considered unified in that all inelastic deformation, including plasticity, creep, and stress relaxation, is treated in a single term rather than through a classical separation of plasticity (time-independent) and creep (time-dependent) behavior. The unified theories employed also do not utilize the classical yield surface or plastic potential concept. The models are constructed from an appropriate flow law, a scalar kinetic relation between strain rate, temperature, and stress, and evolutionary equations for internal variables describing strain or work hardening, both isotropic and directional (kinematic). This and other studies have shown that the unified approach is particularly suited for determining the cyclic behavior of superalloy-type blade and vane materials and is entirely compatible with three-dimensional inelastic finite-element formulations. The behavior of a second nickel-base alloy, MAR-M247, was examined and compared with the Bodner-Partom model; procedures for determining the material-specific constants in the models were examined further; and the MARC code was exercised for a turbine blade under simulated flight spectrum loading. Results are summarized.

  11. A Fibre-Reinforced Poroviscoelastic Model Accurately Describes the Biomechanical Behaviour of the Rat Achilles Tendon

    PubMed Central

    Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna

    2015-01-01

    Background Computational models of Achilles tendons can help understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon’s main constituents namely: water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and soft tendon behaviour at reduced strain-rates compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon

  12. Accurate mathematical models to describe the lactation curve of Lacaune dairy sheep under intensive management.

    PubMed

    Elvira, L; Hernandez, F; Cuesta, P; Cano, S; Gonzalez-Martin, J-V; Astiz, S

    2013-06-01

    Although the intensive production system of Lacaune dairy sheep is the only profitable method for producers outside of the French Roquefort area, little is known about this type of system. This study evaluated yield records of 3677 Lacaune sheep under intensive management between 2005 and 2010 in order to describe the lactation curve of this breed and to investigate the suitability of different mathematical functions for modeling this curve. A total of 7873 complete lactations over a 40-week lactation period, corresponding to 201,281 weekly yield records, were used. First, five mathematical functions were evaluated on the basis of the residual mean square, determination coefficient, Durbin-Watson and runs test values. The two best models were found to be the Pollott Additive and fractional polynomial (FP) functions. In the second part of the study, the milk yield, peak milk yield, day of peak and persistency of the lactations were calculated with the Pollott Additive and FP models and compared with the real data. The results indicate that both models gave an extremely accurate fit to Lacaune lactation curves for predicting milk yields (P = 0.871), with the FP model being the best choice for fitting an extensive amount of real data, as it is applicable on farm without specific statistical software. On the other hand, the interpretation of the parameters of the Pollott Additive function helps to understand the biology of the udder of the Lacaune sheep. The characteristics of the Lacaune lactation curve and milk yield are affected by lactation number and length. The lactation curves obtained in the present study allow the early identification of ewes with low milk yield potential, which will help to optimize farm profitability. PMID:23257242

  13. Fast and Accurate Radiative Transfer Calculations Using Principal Component Analysis for (Exo-)Planetary Retrieval Models

    NASA Astrophysics Data System (ADS)

    Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.

    2015-12-01

    Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed, for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work

  14. Constitutive modeling of inelastic anisotropic material response

    NASA Technical Reports Server (NTRS)

    Stouffer, D. C.

    1984-01-01

    A constitutive equation was developed to predict the inelastic thermomechanical response of single crystal turbine blades. These equations are essential for developing accurate finite element models of hot section components and contribute significantly to the understanding and prediction of crack initiation and propagation. The method used was limited to unified state variable constitutive equations. Two approaches to developing an anisotropic constitutive equation were reviewed. One approach was to apply the Stouffer-Bodner representation for deformation induced anisotropy to materials with an initial anisotropy such as single crystals. The second approach was to determine the global inelastic strain rate from the contribution of the slip in each of the possible crystallographic slip systems. A three dimensional finite element is being developed with a variable constitutive equation link that can be used for constitutive equation development and to predict the response of an experiment using the actual specimen geometry and loading conditions.
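
    The second approach mentioned above, building the global inelastic strain rate from the slip in each crystallographic slip system, has a standard kinematic form: the strain rate is the sum over slip systems of the slip rate times the symmetric part of the Schmid dyad. The sketch below evaluates it for a single FCC-type slip system with an invented power-law slip rate; it is a generic crystal-plasticity illustration, not the constitutive equation developed in the report.

```python
import numpy as np

def inelastic_strain_rate(stress, slip_systems, gamma0=1e-3, n=10.0, tau0=50.0):
    """Global inelastic strain rate from crystallographic slip:
    D = sum_a  gamma_dot_a * sym(s_a (x) m_a),  with a power-law slip rate
    gamma_dot_a = gamma0 * sign(tau_a) * |tau_a / tau0|**n  (illustrative only)."""
    D = np.zeros((3, 3))
    for s, m in slip_systems:                    # slip direction s, plane normal m (unit vectors)
        schmid = np.outer(s, m)
        tau = np.tensordot(stress, schmid)       # resolved shear stress on this system
        gamma_dot = gamma0 * np.sign(tau) * abs(tau / tau0) ** n
        D += gamma_dot * 0.5 * (schmid + schmid.T)
    return D

# One FCC-type slip system: direction [1 -1 0]/sqrt(2) on plane (1 1 1)/sqrt(3).
s = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
m = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
stress = np.diag([100.0, 0.0, 0.0])              # uniaxial stress (MPa), illustrative
print(inelastic_strain_rate(stress, [(s, m)]))
```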

  15. Quantum Mechanics Based Multiscale Modeling of Materials

    NASA Astrophysics Data System (ADS)

    Lu, Gang

    2013-03-01

    We present two quantum mechanics based multiscale approaches that can simulate extended defects in metals accurately and efficiently. The first approach (QCDFT) can treat multimillion atoms effectively via density functional theory (DFT). The method is an extension of the original quasicontinuum approach with DFT as its sole energetic formulation. The second method (QM/MM) concerns quantum mechanics/molecular mechanics coupling based on constrained density functional theory, which provides an exact framework for a self-consistent quantum mechanical embedding. Several important materials problems will be addressed using the multiscale modeling approaches, including hydrogen-assisted cracking in Al, magnetism-controlled dislocation properties in Fe, and Si pipe diffusion along the Al dislocation core. We acknowledge the support from the Office of Naval Research and the Army Research Office.

  16. Global climate modeling of Saturn's atmosphere: fast and accurate radiative transfer and exploration of seasonal variability

    NASA Astrophysics Data System (ADS)

    Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.

    2013-10-01

    Recent observations of Saturn's stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; and a warm “beacon” associated with the powerful 2010 storm. Those signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, particular emphasis was put on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and a spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of the temperature fields obtained with this new radiative equilibrium model to those inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (20 mbar), the modeled temperature is 5-10 K too low compared to measurements. This suggests that processes other than radiative heating/cooling by trace
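
    The correlated-k idea underlying such a radiative model can be illustrated with the standard band-transmission quadrature: within a spectral band, the transmission along an absorber path u is approximated by a weighted sum over a few representative absorption coefficients, T(u) = sum_i w_i * exp(-k_i * u). The absorption-coefficient distribution and the simple equal-probability binning below are invented and are not Saturn's actual opacities or the model's spectral discretization.

```python
import numpy as np

rng = np.random.default_rng(9)

# Pretend high-resolution absorption coefficients within one spectral band
# (log-normally distributed, purely illustrative).
k_highres = rng.lognormal(mean=-2.0, sigma=2.0, size=200_000)

# Build the k-distribution: sort the coefficients and pick a few representative
# g-points (here simple equal-probability bins with equal weights).
n_g = 8
edges = np.linspace(0.0, 1.0, n_g + 1)
k_sorted = np.sort(k_highres)
g = (np.arange(k_highres.size) + 0.5) / k_highres.size
k_rep = np.array([k_sorted[(g >= lo) & (g < hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])])
w = np.diff(edges)

def band_transmission(u):
    """Correlated-k band transmission  T(u) = sum_i w_i exp(-k_i u)."""
    return float(np.sum(w * np.exp(-k_rep * u)))

for u in (0.01, 0.1, 1.0, 10.0):
    exact = np.mean(np.exp(-k_highres * u))      # line-by-line reference for this band
    print(f"u = {u:5.2f}   correlated-k T = {band_transmission(u):.4f}   exact T = {exact:.4f}")
```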

  17. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today, a common example being the actuator disk concept, are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
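
    As background to the actuator disk concept mentioned in this record, the minimal sketch below evaluates classical one-dimensional momentum theory for an ideal actuator disk; the formulas are textbook results, and the induction-factor sweep is purely illustrative rather than part of the cited study.

```python
import numpy as np

def actuator_disk(a):
    """Classical 1D momentum theory for an ideal actuator disk.

    a : axial induction factor (theory valid for 0 <= a < 0.5)
    Returns the thrust coefficient C_T = 4a(1-a) and the power
    coefficient C_P = 4a(1-a)^2.
    """
    a = np.asarray(a, dtype=float)
    c_t = 4.0 * a * (1.0 - a)
    c_p = 4.0 * a * (1.0 - a) ** 2
    return c_t, c_p

a = np.linspace(0.0, 0.45, 91)
c_t, c_p = actuator_disk(a)
i_best = np.argmax(c_p)
# Betz limit: C_P,max = 16/27 ~= 0.593 at a = 1/3
print(f"max C_P = {c_p[i_best]:.3f} at a = {a[i_best]:.3f}")
```

    Because this description is axisymmetric and steady, it carries no information about the azimuthally varying blade loading of a cross-flow rotor, which is precisely the wake asymmetry the study above sets out to capture.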

  18. Accurate modeling of cache replacement policies in a Data-Grid.

    SciTech Connect

    Otoo, Ekow J.; Shoshani, Arie

    2003-01-23

    Caching techniques have been used to improve the performance gap of storage hierarchies in computing systems. In data intensive applications that access large data files over a wide-area network environment, such as a data grid, caching mechanisms can significantly improve the data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain or cache the data files being used by local applications. Under a workload of shared access and high locality of reference, the performance of the caching techniques depends heavily on the replacement policies being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the file objects being cached have varying sizes and transfer and processing costs that vary with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as "Least Cost Beneficial based on K backward references" (LCB-K). Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), GreedyDual-Size (GDS), etc., using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost effective cache replacement policies for storage resource management in data grids.
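
    For orientation, the sketch below implements the core of a GreedyDual-Size style eviction loop (priority H = L + cost/size, with the inflation value L raised on each eviction); it is an independent, simplified rendering of the published GDS idea, not code from the study, and the object names, sizes and costs are invented.

```python
import heapq

class GreedyDualSizeCache:
    """Minimal GreedyDual-Size (GDS) cache: priority H = L + cost/size.

    On eviction the global inflation value L is raised to the priority of the
    evicted object, so objects that were recently worth keeping age gracefully.
    Stale heap entries are skipped lazily.
    """

    def __init__(self, capacity):
        self.capacity = capacity      # total space available
        self.used = 0
        self.L = 0.0                  # inflation value
        self.entries = {}             # name -> (priority, size, cost)
        self.heap = []                # (priority, name), lazily invalidated

    def access(self, name, size, cost):
        if name in self.entries:
            _, size, cost = self.entries[name]     # cache hit: reuse stored metadata
        else:
            while self.used + size > self.capacity and self.entries:
                self._evict()
            self.used += size
        priority = self.L + cost / size
        self.entries[name] = (priority, size, cost)
        heapq.heappush(self.heap, (priority, name))

    def _evict(self):
        while self.heap:
            priority, name = heapq.heappop(self.heap)
            entry = self.entries.get(name)
            if entry is not None and entry[0] == priority:    # still current
                self.L = priority                             # inflate the clock
                self.used -= entry[1]
                del self.entries[name]
                return

cache = GreedyDualSizeCache(capacity=100)
for name, size, cost in [("a", 40, 10.0), ("b", 50, 5.0), ("a", 40, 10.0), ("c", 30, 8.0)]:
    cache.access(name, size, cost)
print(sorted(cache.entries))     # objects still resident: ['a', 'c']
```

    Policies such as LCB-K additionally weight the eviction decision by a history of K backward references; that bookkeeping is omitted here.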

  19. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    PubMed

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-01

    The active site of mammalian purple acid phosphatases (PAPs) contains a dinuclear iron center with two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the catalytically active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, with hydrogen bond donors to enable the fixation of the substrate and release of the product, are believed to be required for catalytically competent model systems. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular, also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate concentration dependent studies, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255

  20. New possibilities of accurate particle characterisation by applying direct boundary models to analytical centrifugation

    NASA Astrophysics Data System (ADS)

    Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang

    2015-04-01

    Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.

  1. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    SciTech Connect

    Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Boerner, G.

    2009-12-10

    A large body of observations has constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the age of the universe when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass

  2. System level permeability modeling of porous hydrogen storage materials.

    SciTech Connect

    Kanouff, Michael P.; Dedrick, Daniel E.; Voskuilen, Tyler

    2010-01-01

    A permeability model for hydrogen transport in a porous material is successfully applied to both laboratory-scale and vehicle-scale sodium alanate hydrogen storage systems. The use of a Knudsen number dependent relationship for permeability of the material in conjunction with a constant area fraction channeling model is shown to accurately predict hydrogen flow through the reactors. Generally applicable model parameters were obtained by numerically fitting experimental measurements from reactors of different sizes and aspect ratios. The degree of channeling was experimentally determined from the measurements and found to be 2.08% of total cross-sectional area. Use of this constant area channeling model and the Knudsen dependent Young & Todd permeability model allows for accurate prediction of the hydrogen uptake performance of full-scale sodium alanate and similar metal hydride systems.

  3. An accurate parameterization of the radiative properties of water clouds suitable for use in climate models

    SciTech Connect

    Hu, Y.X.; Stamnes, K. )

    1993-04-01

    A new parameterization of the radiative properties of water clouds is presented. Cloud optical properties for both solar and terrestrial spectra and for cloud equivalent radii in the range 2.5-60 μm are calculated from Mie theory. It is found that cloud optical properties depend mainly on equivalent radius throughout the solar and terrestrial spectrum and are insensitive to the details of the droplet size distribution, such as shape, skewness, width, and modality (single or bimodal). This suggests that in cloud models, aimed at predicting the evolution of cloud microphysics with climate change, it is sufficient to determine the third and the second moments of the size distribution (the ratio of which determines the equivalent radius). It also implies that measurements of the cloud liquid water content and the extinction coefficient are sufficient to determine cloud optical properties experimentally (i.e., measuring the complete droplet size distribution is not required). Based on the detailed calculations, the optical properties are parameterized as a function of cloud liquid water path and equivalent cloud droplet radius by using nonlinear least-squares fitting. The parameterization is performed separately for the radius ranges 2.5-12 μm, 12-30 μm, and 30-60 μm. Cloud heating and cooling rates are computed from this parameterization by using a comprehensive radiation model. Comparison with similar results obtained from exact Mie scattering calculations shows that this parameterization yields very accurate results and that it is several thousand times faster. This parameterization separates the dependence of cloud optical properties on droplet size and liquid water content, and is suitable for inclusion into climate models. 22 refs., 7 figs., 6 tabs.
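
    The final step described here is, in essence, a nonlinear least-squares regression of optical properties against liquid water path and equivalent radius, done separately per radius band. The sketch below mimics that step with an invented functional form and synthetic data; it is not the Hu and Stamnes parameterization itself.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical functional form: optical thickness tau ~ LWP * (a * r_e**b + c),
# fitted per equivalent-radius band as in the paper's strategy (placeholder only).
def tau_model(X, a, b, c):
    lwp, r_e = X
    return lwp * (a * r_e ** b + c)

rng = np.random.default_rng(0)
lwp = rng.uniform(10.0, 200.0, 300)     # liquid water path [g m^-2]
r_e = rng.uniform(2.5, 12.0, 300)       # equivalent radius [micron], first band
tau_true = lwp * (1.5 * r_e ** -1.0 + 0.01)
tau_obs = tau_true * (1.0 + 0.02 * rng.standard_normal(300))  # stand-in for Mie results

popt, pcov = curve_fit(tau_model, (lwp, r_e), tau_obs, p0=(1.0, -1.0, 0.0))
print("fitted (a, b, c):", np.round(popt, 3))
```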

  4. Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael

    2014-05-01

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The precise reconstruction of the satellite's trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics, which reflect deficits in the employed force models. Following a proper analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit with an altitude of approximately 510 km above ground. Due to this orbit geometry, the Sun almost constantly illuminates the satellite, which causes strong across-track accelerations in the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight, which is reflected by the illuminated Earth surface (visible spectrum) and the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of

  5. HYPERELASTIC MODELS FOR GRANULAR MATERIALS

    SciTech Connect

    Humrickhouse, Paul W; Corradini, Michael L

    2009-01-29

    A continuum framework for modeling of dust mobilization and transport, and the behavior of granular systems in general, has been reviewed, developed and evaluated for reactor design applications. The large quantities of micron-sized particles expected in the international fusion reactor design, ITER, will accumulate into piles and layers on surfaces, which are large relative to the individual particle size; thus, particle-particle, rather than particle-surface, interactions will determine the behavior of the material in bulk, and a continuum approach is necessary and justified in treating the phenomena of interest; e.g., particle resuspension and transport. The various constitutive relations that characterize these solid particle interactions in dense granular flows have been discussed previously, but prior to mobilization the material does not behave as a fluid at all. Even in the absence of adhesive forces between particles, dust or sand piles can exist in static equilibrium under gravity and other forces, e.g., fluid shear. Their behavior is understood to be elastic, though not linear. The recent “granular elasticity” theory proposes a non-linear elastic model based on “Hertz contacts” between particles; the theory identifies the Coulomb yield condition as a requirement for thermodynamic stability, and has successfully reproduced experimental results for stress distributions in sand piles. The granular elasticity theory is developed and implemented in a stand-alone model and then implemented as part of a finite element model, ABAQUS, to determine the stress distributions in dust piles subjected to shear by a fluid flow. We identify yield with the onset of mobilization, and establish, for a given dust pile and flow geometry, the threshold pressure (force) conditions on the surface due to flow required to initiate it. While the granular elasticity theory applies strictly to cohesionless granular materials, attractive forces are clearly important in the interaction of

  6. New possibilities of accurate particle characterisation by applying direct boundary models to analytical centrifugation.

    PubMed

    Walter, Johannes; Thajudeen, Thaseem; Süss, Sebastian; Segets, Doris; Peukert, Wolfgang

    2015-04-21

    Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles. PMID:25789666

  7. Precise and accurate assessment of uncertainties in model parameters from stellar interferometry. Application to stellar diameters

    NASA Astrophysics Data System (ADS)

    Lachaume, Regis; Rabus, Markus; Jordan, Andres

    2015-08-01

    In stellar interferometry, the assumption that the observables can be seen as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offer means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or making some Gaussian assumptions. By repeatedly picking at random the interferograms, the calibrator stars, and the errors on their diameters, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
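
    A stripped-down version of the bootstrap idea described here might look as follows: resample the calibrated observables with replacement, refit a uniform-disk model each time, and use the resulting sample of diameters as the parameter PDF. The uniform-disk visibility formula is standard; the spatial frequencies, noise level and diameter are invented placeholders, and the calibration step itself is omitted.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

MAS = np.pi / 180.0 / 3600.0 / 1000.0   # one milliarcsecond in radians

def uniform_disk_v2(spatial_freq, theta_mas):
    """Squared visibility of a uniform disk of angular diameter theta [mas]."""
    x = np.pi * theta_mas * MAS * spatial_freq
    x = np.where(x == 0, 1e-12, x)
    return (2.0 * j1(x) / x) ** 2

# Invented "calibrated" measurements: spatial frequencies B/lambda and V^2
rng = np.random.default_rng(1)
sf = rng.uniform(2e7, 8e7, 40)
v2 = uniform_disk_v2(sf, 1.2) * (1 + 0.05 * rng.standard_normal(40))

# Bootstrap: resample the (sf, v2) pairs, refit the diameter each time, and use
# the sample of fitted diameters as p(theta) -- no Gaussian assumption needed.
thetas = []
for _ in range(2000):
    idx = rng.integers(0, len(sf), len(sf))
    popt, _ = curve_fit(uniform_disk_v2, sf[idx], v2[idx], p0=[1.0])
    thetas.append(popt[0])
thetas = np.array(thetas)
print(f"theta = {np.median(thetas):.3f} mas, "
      f"68% interval {np.percentile(thetas, [16, 84])}")
```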

  8. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee no loss and high precision at the same time of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.
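
    The root-searching ingredient of such normal-mode calculations can be illustrated generically: scan a secular (dispersion) function over phase velocity, bracket its sign changes, and polish each bracket with a bisection-type solver. The toy secular function below is an invented stand-in for the generalized reflection/transmission formulation used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def secular(c, freq=1.0):
    """Toy stand-in for a surface-wave secular function F(phase velocity).

    Roots of F correspond to normal modes; a real implementation would build F
    from the generalized reflection/transmission coefficients of the layer stack.
    """
    return np.cos(2.0 * np.pi * freq * 3.0 / c) - 0.3 * c / 4.0

def find_modes(c_min, c_max, n_scan=2000, freq=1.0):
    """Bracket sign changes on a fine grid, then refine each root."""
    c = np.linspace(c_min, c_max, n_scan)
    f = secular(c, freq)
    roots = []
    for i in range(n_scan - 1):
        if f[i] == 0.0:
            roots.append(c[i])
        elif f[i] * f[i + 1] < 0.0:
            roots.append(brentq(secular, c[i], c[i + 1], args=(freq,)))
    return roots

print(np.round(find_modes(1.0, 5.0), 4))
```

    The adaptive mode observers described above address the harder part of the problem, namely choosing secular functions on which weakly expressed (trapped or Stoneley) modes still produce detectable sign changes.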

  9. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chen, Xiaofei

    2016-06-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of "family of secular functions" that we herein call "adaptive mode observers" is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee no loss and high precision at the same time of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of "turning point", our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.

  10. Dynamic Characterization and Modeling of Potting Materials for Electronics Assemblies

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant; Lee, Gilbert; Santiago, Jaime

    2015-06-01

    Prediction of survivability of encapsulated electronic components subject to impact relies on accurate modeling. Both static and dynamic characterization of the encapsulation material is needed to generate a robust material model. The current focus is on potting materials to mitigate high-rate loading on impact. In this effort, the encapsulation scheme consists of layers of the polymeric materials Sylgard 184 and Triggerbond Epoxy-20-3001. Experiments conducted for characterization of the materials include conventional tension and compression tests, Hopkinson bar tests, dynamic mechanical analyzer (DMA) tests, and non-conventional accelerometer-based resonance tests for obtaining high-frequency data. For an ideal material, the data can be fitted to the Williams-Landel-Ferry (WLF) model. A new temperature-time shift (TTS) macro was written to compare the idealized temperature shift factor (WLF model) with experimental incremental shift factors. Deviations can be observed by comparison of experimental data with the model fit to determine the actual material behavior. Similarly, another macro written for obtaining Ogden model parameters from Hopkinson bar tests indicates deviations from experimental high strain rate data. In this paper, experimental results for different materials used for mitigating impact, and ways to combine data from resonance, DMA and Hopkinson bar tests, together with modeling refinements, will be presented.
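
    As a reference point for the TTS comparison described here, the sketch below evaluates the Williams-Landel-Ferry shift factor and compares it with hypothetical experimental incremental shift factors; the "universal" constants and the data values are placeholders, not numbers from the paper.

```python
import numpy as np

def wlf_shift(T, T_ref, C1=17.44, C2=51.6):
    """WLF shift factor: log10(a_T) = -C1*(T - T_ref) / (C2 + T - T_ref).

    The default C1, C2 are the classic "universal" constants; for a real
    material they would be fitted during master-curve construction.
    """
    T = np.asarray(T, dtype=float)
    return -C1 * (T - T_ref) / (C2 + T - T_ref)

T_REF = 20.0                                    # reference temperature [C]
T_exp = np.array([0.0, 10.0, 30.0, 40.0, 60.0])

# Hypothetical "experimental" incremental shift factors: WLF plus the kind of
# small systematic deviation the TTS macro is meant to flag.
rng = np.random.default_rng(3)
log_aT_exp = wlf_shift(T_exp, T_REF) + rng.normal(0.0, 0.3, T_exp.size)

log_aT_wlf = wlf_shift(T_exp, T_REF)
for T, e, m in zip(T_exp, log_aT_exp, log_aT_wlf):
    print(f"T = {T:5.1f} C   exp = {e:7.2f}   WLF = {m:7.2f}   deviation = {e - m:+6.2f}")
```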

  11. Accurate calculation and modeling of the adiabatic connection in density functional theory

    NASA Astrophysics Data System (ADS)

    Teale, A. M.; Coriani, S.; Helgaker, T.

    2010-04-01

    AC. When parametrized in terms of the same input data, the AC-CI model offers improved performance over the corresponding AC-D model, which is shown to be the lowest-order contribution to the AC-CI model. The utility of the accurately calculated AC curves for the analysis of standard density functionals is demonstrated for the BLYP exchange-correlation functional and the interaction-strength-interpolation (ISI) model AC integrand. From the results of this analysis, we investigate the performance of our proposed two-parameter AC-D and AC-CI models when a simple density functional for the AC at infinite interaction strength is employed in place of information at the fully interacting point. The resulting two-parameter correlation functionals offer a qualitatively correct behavior of the AC integrand with much improved accuracy over previous attempts. The AC integrands in the present work are recommended as a basis for further work, generating functionals that avoid spurious error cancellations between exchange and correlation energies and give good accuracy for the range of densities and types of correlation contained in the systems studied here.

  12. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    nighttime to well mixed conditions during the day presents a big challenge to NWP models. Fast decrease and successive increase in hub-height wind speed after sunrise, and the formation of nocturnal low level jets will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3d case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV-developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de

  13. Accurate prediction of the refractive index of polymers using first principles and data modeling

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
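
    For reference, the Lorentz-Lorenz relation mentioned here can be inverted for the refractive index in a few lines; the polarizability and number density below are invented placeholders for the DFT-computed and machine-learned quantities used in the actual screening.

```python
import numpy as np

def refractive_index(alpha_cm3, number_density_cm3):
    """Refractive index from the Lorentz-Lorenz equation.

    (n^2 - 1) / (n^2 + 2) = (4*pi/3) * N * alpha
    alpha_cm3          : polarizability volume per repeat unit [cm^3]
    number_density_cm3 : repeat units per cm^3 (set by the packing fraction)
    """
    A = 4.0 * np.pi * number_density_cm3 * alpha_cm3 / 3.0
    if A >= 1.0:
        raise ValueError("Lorentz-Lorenz factor >= 1: unphysical input")
    return np.sqrt((1.0 + 2.0 * A) / (1.0 - A))

# Placeholder values for a generic repeat unit (not from the cited study)
alpha = 1.6e-23          # cm^3 per repeat unit
n_density = 4.75e21      # repeat units per cm^3
print(f"predicted n = {refractive_index(alpha, n_density):.3f}")
```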

  14. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    SciTech Connect

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  15. An accurate locally active memristor model for S-type negative differential resistance in NbOx

    NASA Astrophysics Data System (ADS)

    Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Vandenberghe, Ken; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R.

    2016-01-01

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or "S-type," negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a "selector," is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.
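
    As a rough illustration of how thermal feedback alone can produce current-controlled (S-type) NDR, the sketch below couples a generic thermally activated resistance to Newtonian cooling and sweeps the current; the Arrhenius conduction law, activation energy, thermal resistance and current range are all invented and are not the compact model of the paper.

```python
import numpy as np
from scipy.optimize import brentq

KB = 8.617e-5          # Boltzmann constant [eV/K]
T_AMB = 300.0          # ambient temperature [K]
R_TH = 1.0e5           # thermal resistance to ambient [K/W]  (invented)
R0 = 1.0e-1            # resistance prefactor [ohm]           (invented)
EA = 0.25              # activation energy [eV]               (invented)

def resistance(T):
    """Generic thermally activated (Arrhenius) device resistance."""
    return R0 * np.exp(EA / (KB * T))

def steady_temperature(current):
    """Solve the power balance I^2 R(T) = (T - T_amb) / R_th for T."""
    def balance(T):
        return current ** 2 * resistance(T) - (T - T_AMB) / R_TH
    return brentq(balance, T_AMB + 1e-6, 2000.0)

# Current-controlled sweep: the voltage first rises, then falls -> S-type NDR
for i_ua in (10, 100, 300, 800, 1500, 3000):
    i = i_ua * 1e-6
    T = steady_temperature(i)
    print(f"I = {i_ua:4d} uA   T = {T:6.1f} K   V = {i * resistance(T):5.3f} V")
```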

  16. Towards accurate kinetic modeling of prompt NO formation in hydrocarbon flames via the NCN pathway

    SciTech Connect

    Sutton, Jeffrey A.; Fleming, James W.

    2008-08-15

    A basic kinetic mechanism that can predict the appropriate prompt-NO precursor NCN, as shown by experiment, with relative accuracy while still producing postflame NO results that can be calculated as accurately as or more accurately than through the former HCN pathway is presented for the first time. The basic NCN submechanism should be a starting point for future NCN kinetic and prompt NO formation refinement.

  17. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    NASA Astrophysics Data System (ADS)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-11-01

    Biomedical cyclotrons for production of Positron Emission Tomography (PET) radionuclides and radiotherapy with hadrons or ions are widespread and well established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry setups. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclides production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended

  18. Modeling of materials supply, demand and prices

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The societal, economic, and policy tradeoffs associated with materials processing and utilization are discussed. The materials system provides the materials engineer with the system analysis required to formulate sound materials processing, utilization, and resource development policies and strategies. The materials system simulation and modeling research program, including assessments of materials substitution dynamics, public policy implications, and materials process economics, was expanded. This effort includes several collaborative programs with materials engineers, economists, and policy analysts. The technical and socioeconomic issues of materials recycling, input-output analysis, and technological change and productivity are examined. The major thrust areas in materials systems research are outlined.

  19. Material point method modeling in oil and gas reservoirs

    DOEpatents

    Vanderheyden, William Brian; Zhang, Duan

    2016-06-28

    A computer system and method of simulating the behavior of an oil and gas reservoir including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point model technique is used for numerically solving the system of discretized equations, to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.

  20. Is scintillometer measurement accurate enough for evaluating remote sensing based energy balance ET models?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The three evapotranspiration (ET) measurement/retrieval techniques used in this study, lysimeter, scintillometer and remote sensing vary in their level of complexity, accuracy, resolution and applicability. The lysimeter with its point measurement is the most accurate and direct method to measure ET...

  1. Making the most of your prognostic factors: presenting a more accurate survival model for breast cancer patients.

    PubMed

    Knorr, K L; Hilsenbeck, S G; Wenger, C R; Pounds, G; Oldaker, T; Vendely, P; Pandian, M R; Harrington, D; Clark, G M

    1992-01-01

    Determining an appropriate level of adjuvant therapy is one of the most difficult facets of treating breast cancer patients. Although the myriad of prognostic factors aids in this decision, these factors often give conflicting reports of a patient's prognosis. What we need is a survival model which can properly utilize the information contained in these factors and give an accurate, reliable account of the patient's probability of recurrence. We also need a method of evaluating these models' predictive ability instead of simply measuring goodness-of-fit, as is currently done. Often, prognostic factors are broken into two categories such as positive or negative. But this dichotomization may hide valuable prognostic information. We investigated whether continuous representations of factors, including standard transformations (logarithmic, square root, categorical, and smoothers), might more accurately estimate the underlying relationship between each factor and survival. We chose the logistic regression model, a special case of the commonly used Cox model, to test our hypothesis. The model containing continuous transformed factors fit the data more closely than the model containing the traditional dichotomized factors. In order to appropriately evaluate these models, we introduce three predictive validity statistics, the Calibration score, the Overall Calibration score, and the Brier score, designed to assess the model's accuracy and reliability. These standardized scores showed the transformed factors predicted three year survival accurately and reliably. The scores can also be used to assess models or compare across studies. PMID:1391991
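
    For reference, the Brier score named here is simply the mean squared difference between predicted event probabilities and 0/1 outcomes; the sketch below computes it, together with a generic calibration-style check, on invented data (the authors' Calibration and Overall Calibration scores are defined in the paper and are not reproduced here).

```python
import numpy as np

def brier_score(pred_prob, outcome):
    """Mean squared difference between predicted probability and 0/1 outcome."""
    pred_prob = np.asarray(pred_prob, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    return np.mean((pred_prob - outcome) ** 2)

# Hypothetical 3-year recurrence probabilities and observed outcomes
p = np.array([0.10, 0.25, 0.40, 0.60, 0.85, 0.90])
y = np.array([0,    0,    1,    1,    1,    1   ])
print(f"Brier score = {brier_score(p, y):.3f}")

# A simple calibration check: mean predicted vs. observed rate per probability bin
bins = np.digitize(p, [0.33, 0.66])
for b in range(3):
    m = bins == b
    if m.any():
        print(f"bin {b}: mean predicted {p[m].mean():.2f}, observed {y[m].mean():.2f}")
```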

  2. EPR-based material modelling of soils

    NASA Astrophysics Data System (ADS)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer aided pattern recognition approaches have been introduced to modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models and it is shown that EPR-based models can provide a better prediction for the behaviour of soils. The main benefits of using EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within a unified environment of an EPR model) and do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified. As the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of the EPR-based constitutive model is that, as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, so the EPR model becomes more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
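
    EPR couples a genetic search over candidate polynomial structures with least-squares estimation of their coefficients. The sketch below keeps only the least-squares half, fitting a hand-picked set of polynomial-style terms to synthetic triaxial-type stress-strain data, so it is a simplified stand-in for EPR rather than an implementation of it.

```python
import numpy as np

# Synthetic "triaxial" data: deviatoric stress q as a function of axial strain
# eps and confining pressure p (hyperbolic shape, invented for illustration).
rng = np.random.default_rng(7)
eps = rng.uniform(0.0, 0.08, 200)          # axial strain [-]
p   = rng.uniform(50.0, 300.0, 200)        # confining pressure [kPa]
q   = (1.2 * p) * eps / (0.01 + eps) * (1 + 0.03 * rng.standard_normal(200))

# Candidate polynomial-style terms; in EPR these structures would be evolved
# by a genetic algorithm, here they are fixed by hand.
terms = np.column_stack([
    np.ones_like(eps),       # bias
    eps,                     # eps
    eps * p,                 # eps * p
    np.sqrt(eps) * p,        # sqrt(eps) * p
    eps ** 2 * p,            # eps^2 * p
])
coeffs, *_ = np.linalg.lstsq(terms, q, rcond=None)

q_hat = terms @ coeffs
rmse = np.sqrt(np.mean((q - q_hat) ** 2))
print("coefficients:", np.round(coeffs, 3))
print(f"RMSE = {rmse:.1f} kPa")
```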

  3. A generalized methodology to characterize composite materials for pyrolysis models

    NASA Astrophysics Data System (ADS)

    McKinnon, Mark B.

    The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to
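
    One building block of such a characterization, the decomposition kinetics extracted from thermogravimetric data, is often cast as an nth-order Arrhenius reaction under a constant heating rate. The sketch below integrates that model forward in time; the kinetic triplet (A, E, n) and the heating rate are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

R_GAS = 8.314            # gas constant [J/(mol K)]
A = 1.0e12               # pre-exponential factor [1/s]   (invented)
E = 1.6e5                # activation energy [J/mol]      (invented)
N = 1.0                  # reaction order                 (invented)
BETA = 10.0 / 60.0       # heating rate: 10 K/min in K/s
T0 = 300.0               # starting temperature [K]

def conversion_rate(t, alpha):
    """d(alpha)/dt for an nth-order Arrhenius reaction under a linear ramp."""
    T = T0 + BETA * t
    return A * np.exp(-E / (R_GAS * T)) * np.maximum(1.0 - alpha, 0.0) ** N

t_end = (900.0 - T0) / BETA                       # ramp up to 900 K
sol = solve_ivp(conversion_rate, (0.0, t_end), [0.0], max_step=5.0)

alpha = sol.y[0]
T = T0 + BETA * sol.t
mass = 1.0 - alpha                                # normalized residual mass
T_peak = T[np.argmax(np.gradient(alpha, sol.t))]  # DTG peak temperature
print(f"peak decomposition near {T_peak:.0f} K, residual mass {mass[-1]:.3f}")
```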

  4. Chemical vapor deposition modeling for high temperature materials

    NASA Technical Reports Server (NTRS)

    Gokoglu, Suleyman A.

    1992-01-01

    The formalism for the accurate modeling of chemical vapor deposition (CVD) processes has matured based on the well established principles of transport phenomena and chemical kinetics in the gas phase and on surfaces. The utility and limitations of such models are discussed in practical applications for high temperature structural materials. Attention is drawn to the complexities and uncertainties in chemical kinetics. Traditional approaches based on only equilibrium thermochemistry and/or transport phenomena are defended as useful tools, within their validity, for engineering purposes. The role of modeling is discussed within the context of establishing the link between CVD process parameters and material microstructures/properties. It is argued that CVD modeling is an essential part of designing CVD equipment and controlling/optimizing CVD processes for the production and/or coating of high performance structural materials.

  5. Artificial neural network model for material characterization by indentation

    NASA Astrophysics Data System (ADS)

    Tho, K. K.; Swaddiwudhipong, S.; Liu, Z. S.; Hua, J.

    2004-09-01

    Analytical methods to interpret the indentation load-displacement curves are difficult to formulate and solve due to material and geometric nonlinearities as well as complex contact interactions. In this study, large strain-large deformation finite element analyses were carried out to simulate indentation experiments. An artificial neural network model was constructed for the interpretation of indentation load-displacement curves. The data from finite element analyses were used to train and validate the artificial neural network model. The artificial neural network model was able to accurately determine the material properties when presented with the load-displacement curves that were not used in the training process. The proposed artificial neural network model is robust and directly relates the characteristics of the indentation load-displacement curve to the elasto-plastic material properties.
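
    A toy version of the inverse mapping described here: generate synthetic indentation-curve features from an assumed forward relation, then train a small neural network to recover the material parameters. The forward relation (a Kick-law loading curvature and an unloading-stiffness proxy) and all numbers are invented placeholders for the finite element database used in the study, and scikit-learn's MLPRegressor stands in for the authors' network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
n = 2000
E = rng.uniform(50.0, 250.0, n)      # Young's modulus [GPa]
sy = rng.uniform(0.2, 2.0, n)        # yield stress [GPa]

# Invented forward relation standing in for the finite element database:
# loading curvature C (Kick's law P = C h^2) and an unloading-stiffness proxy S.
C = 8.0 * sy * (1.0 + np.log(E / sy)) * (1.0 + 0.02 * rng.standard_normal(n))
S = 2.2 * E * (1.0 + 0.02 * rng.standard_normal(n))

X = np.column_stack([C, S])          # curve characteristics -> network input
y = np.column_stack([E, sy])         # material properties   -> network output

scaler = StandardScaler().fit(X[:1500])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(scaler.transform(X[:1500]), y[:1500])          # outputs left unscaled for brevity

pred = net.predict(scaler.transform(X[1500:]))
rel_err = np.mean(np.abs(pred - y[1500:]) / y[1500:], axis=0)
print(f"mean relative error: E {rel_err[0]:.1%}, yield stress {rel_err[1]:.1%}")
```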

  6. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    SciTech Connect

    Du, Qiang

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next

  7. Hysteresis Modeling in Magnetostrictive Materials Via Preisach Operators

    NASA Technical Reports Server (NTRS)

    Smith, R. C.

    1997-01-01

    A phenomenological characterization of hysteresis in magnetostrictive materials is presented. Such hysteresis is due to both the driving magnetic fields and stress relations within the material and is significant throughout most of the drive range of magnetostrictive transducers. An accurate characterization of the hysteresis and material nonlinearities is necessary to fully utilize the actuator/sensor capabilities of the magnetostrictive materials. Such a characterization is made here in the context of generalized Preisach operators. This yields a framework amenable to proving the well-posedness of structural models that incorporate the magnetostrictive transducers. It also provides a natural setting in which to develop practical approximation techniques. An example illustrating this framework in the context of a Timoshenko beam model is presented.
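
    A discrete Preisach operator is straightforward to sketch: a weighted collection of relay hysterons with switching thresholds beta <= alpha, each remembering its own state. The uniform triangular grid and equal weights below are placeholders; in practice the weight distribution would be identified from measured hysteresis loops, and the generalized operators of the paper add further structure.

```python
import numpy as np

class DiscretePreisach:
    """Scalar Preisach hysteresis operator on a uniform triangular grid."""

    def __init__(self, h_sat=1.0, n=40):
        a = np.linspace(-h_sat, h_sat, n)
        A, B = np.meshgrid(a, a, indexing="ij")        # A = alpha, B = beta
        mask = B <= A                                  # Preisach half-plane
        self.alpha, self.beta = A[mask], B[mask]
        self.weight = np.full(self.alpha.size, 1.0 / self.alpha.size)
        self.state = -np.ones(self.alpha.size)         # start at negative saturation

    def apply(self, h):
        """Update all relay hysterons for input h and return the output."""
        self.state[h >= self.alpha] = +1.0             # switch up at the upper threshold
        self.state[h <= self.beta] = -1.0              # switch down at the lower threshold
        return float(np.dot(self.weight, self.state))

model = DiscretePreisach()
# Drive with a decaying cyclic field: the output traces nested hysteresis loops.
h_in = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, -1, 100),
                       np.linspace(-1, 0.5, 75), np.linspace(0.5, -0.5, 50)])
m_out = [model.apply(h) for h in h_in]
print(f"output range: {min(m_out):.2f} to {max(m_out):.2f}")
```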

  8. Modeling of laser interactions with composite materials

    DOE PAGESBeta

    Rubenchik, Alexander M.; Boley, Charles D.

    2013-05-07

    In this study, we develop models of laser interactions with composite materials consisting of fibers embedded within a matrix. A ray-trace model is shown to determine the absorptivity, absorption depth, and optical power enhancement within the material, as well as the angular distribution of the reflected light. We also develop a macroscopic model, which provides physical insight and overall results. We show that the parameters in this model can be determined from the ray trace model.

  9. Material model library for explicit numerical codes

    SciTech Connect

    Hofmann, R.; Dial, B.W.

    1982-08-01

    A material model logic structure has been developed which is useful for most explicit finite-difference and explicit finite-element Lagrange computer codes. This structure has been implemented and tested in the STEALTH codes to provide an example for researchers who wish to implement it in generically similar codes. In parallel with these models, material parameter libraries have been created for the implemented models for materials which are often needed in DoD applications.

  10. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    SciTech Connect

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.

  11. Modelling Shock Waves in Composite Materials

    NASA Astrophysics Data System (ADS)

    Vignjevic, Rade; Campbell, J. C.; Bourne, N.; Matic, Ognjen; Djordjevic, Nenad

    2007-12-01

    Composite materials have been of significant interest due to widespread application of anisotropic materials in aerospace and civil engineering problems. For example, composite materials are one of the important types of materials in the construction of modern aircraft due to their mechanical properties. The strain rate dependent mechanical behaviour of composite materials is important for applications involving impact and dynamic loading. Therefore, we are interested in understanding the composite material mechanical properties and behaviour for loading rates between quasistatic and 1×10⁸ s⁻¹. This paper investigates modelling of shock wave propagation in orthotropic materials in general and a specific type of CFC composite material. The determination of the equation of state and its coupling with the rest of the constitutive model for these materials is presented and discussed along with validation from three dimensional impact tests.

  12. Accurate cortical tissue classification on MRI by modeling cortical folding patterns.

    PubMed

    Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea

    2015-09-01

    Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding were regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed more accurate white matter-gray matter (GM) interface classification for the proposed framework compared to the other algorithms, particularly in central and occipital cortices, which generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453
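
    The per-parcel step relies on nonparametric mean shift clustering of voxel intensities; a one-dimensional Gaussian-kernel version is sketched below on synthetic intensities standing in for a single parcel with three tissue classes. The bandwidth and intensity values are invented.

```python
import numpy as np

def mean_shift_1d(samples, bandwidth=6.0, n_iter=50, tol=1e-3):
    """Shift every sample toward its local intensity mode (Gaussian kernel)."""
    samples = samples.astype(float)
    modes = samples.copy()
    for _ in range(n_iter):
        # kernel weights between current mode estimates and all data points
        w = np.exp(-0.5 * ((modes[:, None] - samples[None, :]) / bandwidth) ** 2)
        new_modes = (w @ samples) / w.sum(axis=1)
        shift = np.max(np.abs(new_modes - modes))
        modes = new_modes
        if shift < tol:
            break
    return modes

# Synthetic parcel intensities: CSF, gray matter, white matter (arbitrary units)
rng = np.random.default_rng(5)
intensity = np.concatenate([rng.normal(40.0, 5.0, 200),    # CSF
                            rng.normal(80.0, 6.0, 400),    # gray matter
                            rng.normal(120.0, 5.0, 400)])  # white matter

modes = np.sort(mean_shift_1d(intensity))
# Group converged positions that ended up close together into tissue classes
centers = [modes[0]]
for m in modes[1:]:
    if m - centers[-1] > 5.0:
        centers.append(m)
print("tissue intensity modes:", np.round(centers, 1))
```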

  13. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    PubMed

    Ustinov, E A

    2014-10-01

    Commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems. PMID:25296827

  14. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    SciTech Connect

    Ustinov, E. A.

    2014-10-07

    Commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid systems.

  15. Surface electron density models for accurate ab initio molecular dynamics with electronic friction

    NASA Astrophysics Data System (ADS)

    Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.

    2016-06-01

    Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology to study the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its applicability becomes a complicated task in situations involving substantial surface atom displacements, because the LDFA requires knowledge at each integration step of the bare surface electron density. In this work, we propose three different methods of calculating on-the-fly the electron density of the distorted surface and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface atom displacements.

  16. A new, fast and accurate spectrophotometric method for the determination of the optical constants of arbitrary absorptance thin films from a single transmittance curve: application to dielectric materials

    NASA Astrophysics Data System (ADS)

    Desforges, Jean; Deschamps, Clément; Gauvin, Serge

    2015-08-01

    The determination of the complex refractive index of thin films usually requires the highest accuracy. In this paper, we report on a new and accurate method based on a spectral rectifying process applied to a single transmittance curve. Agreement with simulated and real experimental data demonstrates the usefulness of the method. The case of materials having arbitrary absorption bands in the middle of the spectral range, such as pigments in guest-host polymers, is also encompassed by this method.

  17. Multi-Material ALE with AMR for Modeling Hot Plasmas and Cold Fragmenting Materials

    NASA Astrophysics Data System (ADS)

    Koniges, Alice; Masters, Nathan; Fisher, Aaron; Eder, David; Liu, Wangyi; Anderson, Robert; Benson, David; Bertozzi, Andrea

    2015-02-01

    We have developed a new 3D multi-physics multi-material code, ALE-AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR) to connect the continuum to the microstructural regimes. The code is unique in its ability to model hot radiating plasmas and cold fragmenting solids. New numerical techniques were developed for many of the physics packages to work efficiently on a dynamically moving and adapting mesh. We use interface reconstruction based on volume fractions of the material components within mixed zones and reconstruct interfaces as needed. This interface reconstruction model is also used for void coalescence and fragmentation. A flexible strength/failure framework allows for pluggable material models, which may require material history arrays to determine the level of accumulated damage or the evolving yield stress in J2 plasticity models. For some applications, laser rays are propagated through a virtual composite mesh consisting of the finest-resolution representation of the modeled space. A new second-order accurate diffusion solver has been implemented for the thermal conduction and radiation transport packages. One application area is the modeling of laser/target effects including debris/shrapnel generation. Other application areas include warm dense matter, EUV lithography, and material wall interactions for fusion devices.

  18. Multi Sensor Data Integration for an Accurate 3D Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. a laser scanned RGB point cloud and a 3D model derived from oblique imageries, to create a 3D model with more details and better accuracy. In general, aerial imageries are used to create a 3D city model. Aerial imageries produce overall decent 3D city models and are generally suited to generating 3D models of building roofs and non-complex terrain. However, the 3D model automatically generated from aerial imageries generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imageries also suffers in many cases from undulated road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi sensor data compensated for each data set's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imageries, could also be integrated into the final model automatically. During the process, noise in the laser scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set or the final 3D model was generally noise free and without unnecessary details.

  19. How to Construct More Accurate Student Models: Comparing and Optimizing Knowledge Tracing and Performance Factor Analysis

    ERIC Educational Resources Information Center

    Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.

    2011-01-01

    Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
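
    For readers unfamiliar with the first of the two approaches compared above, a minimal sketch of the standard Bayesian Knowledge Tracing update is given below; the guess/slip/learn parameter values are conventional placeholders, not values from the paper.

```python
# Sketch of the standard Bayesian Knowledge Tracing (KT) update: after each
# observed response, update the probability that the student knows the skill.
# Parameter values are illustrative, not taken from the study above.
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    if correct:
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn   # chance of learning on this step

# Example: trace the mastery estimate over a sequence of responses.
p = 0.3                                            # prior probability of mastery
for outcome in [1, 0, 1, 1, 1]:
    p = bkt_update(p, outcome)
```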

  20. Models in biology: ‘accurate descriptions of our pathetic thinking’

    PubMed Central

    2014-01-01

    In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484

  1. More accurate predictions with transonic Navier-Stokes methods through improved turbulence modeling

    NASA Technical Reports Server (NTRS)

    Johnson, Dennis A.

    1989-01-01

    Significant improvements in predictive accuracy for off-design conditions are achievable through better turbulence modeling, without necessarily adding any significant complication to the numerics. One well-established fact about turbulence is that it is slow to respond to changes in the mean strain field. The 'equilibrium' algebraic turbulence models make no attempt to model this characteristic, and consequently these models exaggerate the turbulent boundary layer's ability to produce turbulent Reynolds shear stresses in regions of adverse pressure gradient. As a consequence, too little momentum loss within the boundary layer is predicted in the region of the shock wave and along the aft part of the airfoil where the surface pressure undergoes further increases. Recently, a 'nonequilibrium' algebraic turbulence model was formulated which attempts to capture this important characteristic of turbulence. This 'nonequilibrium' algebraic model employs an ordinary differential equation to model the slow response of the turbulence to changes in local flow conditions. In its original form, there was some question as to whether this 'nonequilibrium' model performed as well as the 'equilibrium' models for weak interaction cases. However, this turbulence model has since been further improved, and it now appears that it performs at least as well as the 'equilibrium' models for weak interaction cases and represents a very significant improvement for strong interaction cases. The performance of this turbulence model relative to popular 'equilibrium' models is illustrated for three airfoil test cases of the 1987 AIAA Viscous Transonic Airfoil Workshop, Reno, Nevada. A form of this 'nonequilibrium' turbulence model is currently being applied to wing flows, for which similar improvements in predictive accuracy are being realized.

  2. Accurate modeling and inversion of electrical resistivity data in the presence of metallic infrastructure with known location and dimension

    SciTech Connect

    Johnson, Timothy C.; Wellman, Dawn M.

    2015-06-26

    Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.

  3. Towards more accurate isoscapes: encouraging results from wine, water and marijuana data/model and model/model comparisons.

    NASA Astrophysics Data System (ADS)

    West, J. B.; Ehleringer, J. R.; Cerling, T.

    2006-12-01

    Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, then, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across

  4. Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Hinkley, Jeffrey A.

    2003-01-01

    The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.

  5. Accurate determination of the superfluid-insulator transition in the one-dimensional Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Zakrzewski, Jakub; Delande, Dominique

    2008-11-01

    The quantum phase transition point between the insulator and the superfluid phase at unit filling factor of the infinite one-dimensional Bose-Hubbard model is numerically computed with a high accuracy. The method uses the infinite-system version of the time evolving block decimation algorithm, here tested in a challenging case. We also provide an accurate estimate of the phase transition point at double occupancy.

  6. Accurate kinematic measurement at interfaces between dissimilar materials using conforming finite-element-based digital image correlation

    NASA Astrophysics Data System (ADS)

    Tao, Ran; Moussawi, Ali; Lubineau, Gilles; Pan, Bing

    2016-06-01

    Digital image correlation (DIC) is now an extensively applied full-field measurement technique with subpixel accuracy. A systematic drawback of this technique, however, is the smoothing of the kinematic fields (e.g., displacements and strains) across interfaces between dissimilar materials, where the deformation gradient is known to be large. This can become an issue when a high level of accuracy is needed, for example, in the interfacial region of composites or joints. In this work, we describe the application of a global conforming finite-element-based DIC technique to obtain precise kinematic fields at interfaces between dissimilar materials. Speckle images from both numerical and actual experiments processed by the described global DIC technique better captured the sharp strain gradient at the interface than local subset-based DIC.

  7. Accurate analytical method for the extraction of solar cell model parameters

    NASA Astrophysics Data System (ADS)

    Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.

    1984-05-01

    Single-diode solar cell model parameters are rapidly extracted from experimental data by means of the presently derived analytical expressions. The parameter values obtained have less than 5 percent error for most solar cells, as demonstrated by extracting the model parameters for two cells of differing quality and comparing them with parameters extracted by means of the iterative method.
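
    For context, a sketch of the single-diode model that such parameter-extraction methods target; the analytical expressions of the paper itself are not reproduced here, and the parameter values below are illustrative only.

```python
# Sketch: the implicit single-diode I-V relation underlying the extraction
# method above, solved numerically for the current at a given voltage.
# The five parameters (I_ph, I_0, n, R_s, R_sh) are illustrative placeholders.
import numpy as np
from scipy.optimize import brentq

def cell_current(V, I_ph, I_0, n, R_s, R_sh, T=300.0):
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = n * k * T / q                         # ideality factor times thermal voltage
    def residual(I):
        return (I_ph - I_0 * (np.exp((V + I * R_s) / Vt) - 1.0)
                - (V + I * R_s) / R_sh - I)
    return brentq(residual, -2.0 * I_ph, 2.0 * I_ph)

# Example: approximate I-V sweep for a hypothetical cell.
voltages = np.linspace(0.0, 0.6, 7)
currents = [cell_current(v, I_ph=3.0, I_0=1e-9, n=1.3, R_s=0.02, R_sh=50.0)
            for v in voltages]
```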

  8. Active appearance model and deep learning for more accurate prostate segmentation on MRI

    NASA Astrophysics Data System (ADS)

    Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.

    2016-03-01

    Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.

  9. Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models

    PubMed Central

    Drugowitsch, Jan

    2016-01-01

    We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead exploits known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation. PMID:26864391
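
    For contrast with the fast exact approach described above, a minimal sketch of the conventional discrete-time simulation it replaces; the time-step bias mentioned in the abstract comes from the finite step size dt.

```python
# Sketch of the standard (slow, biased) way of drawing first-passage samples
# from a two-boundary Wiener diffusion model: Euler-Maruyama stepping of the
# process in small discrete time steps until a boundary is crossed.
import numpy as np

def naive_first_passage(drift, bound, dt=1e-4, x0=0.0, rng=None):
    """Return (decision_time, choice) for a Wiener process with symmetric
    boundaries at +bound and -bound; choice is +1 (upper) or -1 (lower)."""
    rng = rng or np.random.default_rng()
    x, t = x0, 0.0
    sqrt_dt = np.sqrt(dt)
    while abs(x) < bound:
        x += drift * dt + sqrt_dt * rng.standard_normal()
        t += dt
    return t, (1 if x >= bound else -1)

# Example: 1000 samples from a model with drift 1.0 and boundary 1.5.
samples = [naive_first_passage(1.0, 1.5) for _ in range(1000)]
```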

  10. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    SciTech Connect

    D’Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo

    2014-12-28

    A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero-density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
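
    A minimal sketch of the iterative Boltzmann inversion step named above, in which a tabulated pair potential is refined until the CG radial distribution function matches the full-monomer target; the simulation callback is hypothetical.

```python
# Sketch of an iterative Boltzmann inversion (IBI) refinement loop for a
# tabulated pair potential, using the standard update
#     U_{k+1}(r) = U_k(r) + kB*T * ln( g_k(r) / g_target(r) ).
# `run_cg_and_measure_rdf` is a hypothetical callback returning the CG g(r)
# on the same r-grid for the current potential table.
import numpy as np

def iterative_boltzmann_inversion(r, g_target, run_cg_and_measure_rdf,
                                  kBT=1.0, n_iter=20, damping=0.5):
    # Start from the potential of mean force of the target distribution.
    U = -kBT * np.log(np.clip(g_target, 1e-8, None))
    for _ in range(n_iter):
        g_cg = run_cg_and_measure_rdf(r, U)
        correction = kBT * np.log(np.clip(g_cg, 1e-8, None) /
                                  np.clip(g_target, 1e-8, None))
        U += damping * correction          # damped update for numerical stability
    return U
```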

  11. Accurate calculation of binding energies for molecular clusters - Assessment of different models

    NASA Astrophysics Data System (ADS)

    Friedrich, Joachim; Fiedler, Benjamin

    2016-06-01

    In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy for binding energies by statistical measures. The local errors of the incremental scheme are <1 kJ/mol. Since they are smaller than the basis set errors, we obtain higher total accuracy due to the applicability of larger basis sets. The final CCSD(T)/CBS benchmark values are ΔE = -278.01 kJ/mol for (H2O)10, ΔE = -221.64 kJ/mol for (HF)10, ΔE = -45.63 kJ/mol for (CH4)10, ΔE = -19.52 kJ/mol for (H2)20 and ΔE = -7.38 kJ/mol for (H2)10. Furthermore we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we get a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.

  12. conSSert: Consensus SVM Model for Accurate Prediction of Ordered Secondary Structure.

    PubMed

    Kieslich, Chris A; Smadbeck, James; Khoury, George A; Floudas, Christodoulos A

    2016-03-28

    Accurate prediction of protein secondary structure remains a crucial step in most approaches to the protein-folding problem, yet the prediction of ordered secondary structure, specifically beta-strands, remains a challenge. We developed a consensus secondary structure prediction method, conSSert, which is based on support vector machines (SVM) and provides exceptional accuracy for the prediction of beta-strands with QE accuracy of over 0.82 and a Q2-EH of 0.86. conSSert uses as input probabilities for the three types of secondary structure (helix, strand, and coil) that are predicted by four top performing methods: PSSpred, PSIPRED, SPINE-X, and RAPTOR. conSSert was trained/tested using 4261 protein chains from PDBSelect25, and 8632 chains from PISCES. Further validation was performed using targets from CASP9, CASP10, and CASP11. Our data suggest that poor performance in strand prediction is likely a result of training bias and not solely due to the nonlocal nature of beta-sheet contacts. conSSert is freely available for noncommercial use as a webservice: http://ares.tamu.edu/conSSert/ . PMID:26928531

  13. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions, respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. PMID:26708965

  14. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    PubMed

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
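
    A sketch of how segment parameters can be computed from a stack of elliptical slices in the spirit of the model described above; the photo-derived axis lengths and the sex-specific density profile are assumed inputs, and the inertia is taken about the anteroposterior axis through the segment centre of mass (an assumption for illustration).

```python
# Sketch: segment mass, centre of mass and frontal-plane moment of inertia
# from a stack of elliptical slices. Axis lengths (from the two photographs)
# and the slice densities are inputs supplied by the user.
import numpy as np

def segment_inertia(z, a, b, density):
    """z: slice positions along the segment axis (m);
    a: mediolateral (frontal) semi-axes of the elliptical slices (m);
    b: anteroposterior (sagittal) semi-axes (m);
    density: slice densities (kg/m^3). All arrays share the same length."""
    dz = np.gradient(z)                       # slice thicknesses
    area = np.pi * a * b                      # ellipse cross-sectional area
    dm = density * area * dz                  # mass of each slice
    mass = dm.sum()
    z_com = (dm * z).sum() / mass             # centre of mass along the axis
    # Slice inertia about the AP axis (m*a^2/4) plus the parallel-axis term.
    inertia = (dm * (a**2 / 4.0 + (z - z_com) ** 2)).sum()
    return mass, z_com, inertia
```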

  15. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    SciTech Connect

    Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

    We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
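
    A sketch of the classic single-macrospin Stoner-Wohlfarth picture on which the generalized model above builds (not the paper's modified anisotropy energy): the reduced energy e(θ) = 0.5 sin²(θ − ψ) − h cos(θ) is minimized, and the switching field is the reduced field at which the followed local minimum disappears.

```python
# Sketch: classic Stoner-Wohlfarth switching field at field angle psi,
# found by raising the reduced field h until the occupied local minimum of
# e(theta) = 0.5*sin^2(theta - psi) - h*cos(theta) disappears.
import numpy as np

def switching_field(psi, h_step=1e-3):
    theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
    idx = np.argmin(np.abs(theta - np.pi))     # start with M opposing the field
    h = 0.0
    while h < 2.0:
        e = 0.5 * np.sin(theta - psi) ** 2 - h * np.cos(theta)
        # Walk downhill on the discretized landscape from the current state.
        while True:
            left, right = (idx - 1) % theta.size, (idx + 1) % theta.size
            if e[left] < e[idx]:
                idx = left
            elif e[right] < e[idx]:
                idx = right
            else:
                break
        if np.cos(theta[idx]) > 0:             # magnetization has switched
            return h
        h += h_step
    return None

# Example: at psi = 45 degrees the reduced switching field is ~0.5 (astroid minimum).
h_sw = switching_field(np.pi / 4)
```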

  16. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    NASA Astrophysics Data System (ADS)

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu

    2011-05-01

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer aided design. The scaffold models, composed of acryl resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography in a computer aided manufacturing process. After dewaxing and sintering heat treatment processes, the ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.

  17. A model for the accurate computation of the lateral scattering of protons in water.

    PubMed

    Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-02-21

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380

  18. A model for the accurate computation of the lateral scattering of protons in water

    NASA Astrophysics Data System (ADS)

    Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.

    2016-02-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.

  19. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    PubMed Central

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang’E-1, compared to the existing space resection model. PMID:27077855

  20. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. PMID:26121186
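
    A sketch of fitting saccharification time-course data to a Weibull-type curve of the kind described above; the exact parameterization used in the paper may differ, and the data points below are illustrative only.

```python
# Sketch: fit saccharification time-course data to a Weibull-type model
#     y(t) = y_max * (1 - exp(-(t / lam)**n)),
# where lam plays the role of the characteristic time discussed above.
import numpy as np
from scipy.optimize import curve_fit

def weibull_model(t, y_max, lam, n):
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)      # hours (illustrative)
y = np.array([8, 15, 27, 35, 52, 68, 74], dtype=float)    # % glucose yield (illustrative)
popt, pcov = curve_fit(weibull_model, t, y, p0=[80.0, 24.0, 1.0])
y_max_fit, lam_fit, n_fit = popt    # lam_fit summarizes overall saccharification performance
```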

  1. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    PubMed

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang'E-1, compared to the existing space resection model. PMID:27077855

  2. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain specific language embedded in Python. The second major barrier is sharing any scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions depending on the environment of the target machine. To solve this problem we have developed hashdist, a stateless package management tool and a resulting portable, open source scientific software distribution.

  3. On the accuracy and fitting of transversely isotropic material models.

    PubMed

    Feng, Yuan; Okamoto, Ruth J; Genin, Guy M; Bayly, Philip V

    2016-08-01

    Fiber reinforced structures are central to the form and function of biological tissues. Hyperelastic, transversely isotropic material models are used widely in the modeling and simulation of such tissues. Many of the most widely used models involve strain energy functions that include one or both pseudo-invariants (I4 or I5) to incorporate energy stored in the fibers. In a previous study we showed that both of these invariants must be included in the strain energy function if the material model is to reduce correctly to the well-known framework of transversely isotropic linear elasticity in the limit of small deformations. Even with such a model, fitting of parameters is a challenge. Here, by evaluating the relative roles of I4 and I5 in the responses to simple loadings, we identify loading scenarios in which previous models accounting for only one of these invariants can be expected to provide accurate estimation of material response, and identify mechanical tests that have special utility for fitting of transversely isotropic constitutive models. Results provide guidance for fitting of transversely isotropic constitutive models and for interpretation of the predictions of these models. PMID:27136091
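
    For reference, a brief statement of the conventional definitions behind the pseudo-invariants discussed above (notation assumed, not quoted from the paper): I4 is the square of the fiber stretch, while I5 additionally carries information about shear along the fiber direction.

```latex
% Standard pseudo-invariant definitions for a unit fiber direction a_0 and
% right Cauchy-Green tensor C = F^T F (notation assumed for illustration):
I_4 = \mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0, \qquad
I_5 = \mathbf{a}_0 \cdot \mathbf{C}^{2}\,\mathbf{a}_0, \qquad
W = W(I_1, I_2, I_3, I_4, I_5).
```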

  4. How accurate are polymer models in the analysis of Förster resonance energy transfer experiments on proteins?

    NASA Astrophysics Data System (ADS)

    O'Brien, Edward P.; Morrison, Greg; Brooks, Bernard R.; Thirumalai, D.

    2009-03-01

    Single molecule Förster resonance energy transfer (FRET) experiments are used to infer the properties of the denatured state ensemble (DSE) of proteins. From the measured average FRET efficiency, ⟨E⟩, the distance distribution P(R) is inferred by assuming that the DSE can be described as a polymer. The single parameter in the appropriate polymer model (Gaussian chain, wormlike chain, or self-avoiding walk) for P(R) is determined by equating the calculated and measured ⟨E⟩. In order to assess the accuracy of this "standard procedure," we consider the generalized Rouse model (GRM), whose properties [⟨E⟩ and P(R)] can be analytically computed, and the Molecular Transfer Model for protein L, for which accurate simulations can be carried out as a function of guanidinium hydrochloride (GdmCl) concentration. Using the precisely computed ⟨E⟩ for the GRM and protein L, we infer P(R) using the standard procedure. We find that the mean end-to-end distance can be accurately inferred (less than 10% relative error) using ⟨E⟩ and polymer models for P(R). However, the values extracted for the radius of gyration (Rg) and the persistence length (lp) are less accurate. For protein L, the errors in the inferred properties increase as the GdmCl concentration increases for all polymer models. The relative error in the inferred Rg and lp, with respect to the exact values, can be as large as 25% at the highest GdmCl concentration. We propose a self-consistency test, requiring measurements of ⟨E⟩ by attaching dyes to different residues in the protein, to assess the validity of describing the DSE using the Gaussian model. Application of the self-consistency test to the GRM shows that even for this simple model, which exhibits an order→disorder transition, the Gaussian P(R) is inadequate. Analysis of experimental data of FRET efficiencies with dyes at several locations for the cold shock protein, and simulation results for protein L, for which accurate FRET
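
    A sketch of the "standard procedure" critiqued above, under the Gaussian-chain assumption: compute the ensemble-averaged efficiency from P(R) and the Förster transfer law, then solve for the single polymer parameter that reproduces a measured ⟨E⟩. The Förster radius and measured efficiency used below are illustrative.

```python
# Sketch of the "standard procedure": assume a Gaussian-chain end-to-end
# distribution P(R), compute <E> = Int P(R) / (1 + (R/R0)^6) dR, and solve for
# the mean-squared end-to-end distance that reproduces a measured <E>.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def avg_efficiency(r2_mean, R0):
    """<E> for a Gaussian chain with mean-squared end-to-end distance r2_mean (nm^2)."""
    def integrand(R):
        p = 4.0 * np.pi * R**2 * (3.0 / (2.0 * np.pi * r2_mean)) ** 1.5 \
            * np.exp(-3.0 * R**2 / (2.0 * r2_mean))
        return p / (1.0 + (R / R0) ** 6)
    return quad(integrand, 0.0, 10.0 * np.sqrt(r2_mean))[0]

def infer_r2(E_measured, R0=5.4):       # R0 in nm, illustrative Forster radius
    return brentq(lambda r2: avg_efficiency(r2, R0) - E_measured, 0.1, 500.0)

r2 = infer_r2(0.55)                     # inferred <R^2> (nm^2) for a measured <E> = 0.55
```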

  5. Physical resist models and their calibration: their readiness for accurate EUV lithography simulation

    NASA Astrophysics Data System (ADS)

    Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.

    2010-04-01

    In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and the selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at the nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and, where necessary, correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~1.0 nm. We show which elements are important to obtain a well-calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end-of-line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model where EUV tool-specific signatures are taken into account.

  6. Use of human in vitro skin models for accurate and ethical risk assessment: metabolic considerations.

    PubMed

    Hewitt, Nicola J; Edwards, Robert J; Fritsche, Ellen; Goebel, Carsten; Aeby, Pierre; Scheel, Julia; Reisinger, Kerstin; Ouédraogo, Gladys; Duche, Daniel; Eilstein, Joan; Latil, Alain; Kenny, Julia; Moore, Claire; Kuehnl, Jochen; Barroso, Joao; Fautz, Rolf; Pfuhler, Stefan

    2013-06-01

    Several human skin models employing primary cells and immortalized cell lines used as monocultures or combined to produce reconstituted 3D skin constructs have been developed. Furthermore, these models have been included in European genotoxicity and sensitization/irritation assay validation projects. In order to help interpret data, Cosmetics Europe (formerly COLIPA) facilitated research projects that measured a variety of defined phase I and II enzyme activities and created a complete proteomic profile of xenobiotic metabolizing enzymes (XMEs) in native human skin and compared them with data obtained from a number of in vitro models of human skin. Here, we have summarized our findings on the current knowledge of the metabolic capacity of native human skin and in vitro models and made an overall assessment of the metabolic capacity from gene expression, proteomic expression, and substrate metabolism data. The known low expression and function of phase I enzymes in native whole skin were reflected in the in vitro models. Some XMEs in whole skin were not detected in in vitro models and vice versa, and some major hepatic XMEs such as cytochrome P450-monooxygenases were absent or measured only at very low levels in the skin. Conversely, despite varying mRNA and protein levels of phase II enzymes, functional activity of glutathione S-transferases, N-acetyltransferase 1, and UDP-glucuronosyltransferases were all readily measurable in whole skin and in vitro skin models at activity levels similar to those measured in the liver. These projects have enabled a better understanding of the contribution of XMEs to toxicity endpoints. PMID:23539547

  7. Constitutive modeling for isotropic materials

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.

    1984-01-01

    A state-of-the-art review of applicable constitutive models was conducted, with two selected for detailed comparison against a wide range of experimental tests. The experimental matrix contained uniaxial and biaxial tensile, creep, stress relaxation, and cyclic fatigue tests at temperatures up to 1093 C and strain rates from .0000001 to .001/sec. Some nonisothermal cycles will also be run. The constitutive models will be incorporated into the MARC finite element structural analysis program, with a demonstration computation made for an advanced turbine blade configuration. In the code development work, particular emphasis is being placed on developing efficient integration algorithms for the highly nonlinear and stiff constitutive equations. Another area of emphasis is the appropriate and efficient methodology for determining constitutive constants from a minimal amount of experimental data.

  8. Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.

    PubMed

    Qu, Xiaohui; Persson, Kristin A

    2016-09-13

    A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of their solutes, potentially including both salts and redox-active molecules. PMID:27500744

  9. Ceramic materials testing and modeling

    SciTech Connect

    Wilfinger, K. R., LLNL

    1998-04-30

    corrosion by limiting the transport of water and oxygen to the ceramic-metal interface. Thermal spray techniques for ceramic coating metallic structures are currently being explored. The mechanics of thermal spray resembles spray painting in many respects, allowing large surfaces and contours to be covered smoothly. All of the relevant thermal spray processes use a high energy input to melt or partially melt a powdered oxide material, along with a high velocity gas to impinge the molten droplets onto a substrate where they conform, quench, solidify and adhere mechanically. The energy input can be an arc generated plasma, an oxy-fuel flame or an explosion. The appropriate feed material and the resulting coating morphologies vary with technique as well as with application parameters. To date on this project, several versions of arc plasma systems, a detonation coating system and two variations of high velocity oxy-fuel (HVOF) fired processes have been investigated, operating on several different ceramic materials.

  10. Network diffusion accurately models the relationship between structural and functional brain connectivity networks

    PubMed Central

    Abdelnour, Farras; Voss, Henning U.; Raj, Ashish

    2014-01-01

    The relationship between anatomic connectivity of large-scale brain networks and their functional connectivity is of immense importance and an area of active research. Previous attempts have required complex simulations which model the dynamics of each cortical region, and explore the coupling between regions as derived by anatomic connections. While much insight is gained from these non-linear simulations, they can be computationally taxing tools for predicting functional from anatomic connectivities. Little attention has been paid to linear models. Here we show that a properly designed linear model appears to be superior to previous non-linear approaches in capturing the brain’s long-range second order correlation structure that governs the relationship between anatomic and functional connectivities. We derive a linear network of brain dynamics based on graph diffusion, whereby the diffusing quantity undergoes a random walk on a graph. We test our model using subjects who underwent diffusion MRI and resting state fMRI. The network diffusion model applied to the structural networks largely predicts the correlation structures derived from their fMRI data, to a greater extent than other approaches. The utility of the proposed approach is that it can routinely be used to infer functional correlation from anatomic connectivity. And since it is linear, anatomic connectivity can also be inferred from functional data. The success of our model confirms the linearity of ensemble average signals in the brain, and implies that their long-range correlation structure may percolate within the brain via purely mechanistic processes enacted on its structural connectivity pathways. PMID:24384152
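
    A minimal sketch of the graph-diffusion mapping described above (not the authors' code): build the normalized Laplacian of the structural connectivity matrix, predict functional connectivity as a matrix exponential of it, and fit the single diffusion parameter against the empirical functional matrix.

```python
# Sketch: predict functional connectivity from a structural connectivity
# matrix C via F_pred = expm(-beta * L), where L is the normalized graph
# Laplacian, and choose beta to best correlate with the empirical data.
import numpy as np
from scipy.linalg import expm

def laplacian(C):
    d = C.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(C.shape[0]) - d_inv_sqrt @ C @ d_inv_sqrt

def predict_functional(C_struct, beta):
    return expm(-beta * laplacian(C_struct))

def fit_beta(C_struct, F_empirical, betas=np.linspace(0.1, 10.0, 100)):
    iu = np.triu_indices_from(F_empirical, k=1)     # compare off-diagonal entries
    corr = [np.corrcoef(predict_functional(C_struct, b)[iu], F_empirical[iu])[0, 1]
            for b in betas]
    return betas[int(np.argmax(corr))]
```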

  11. Fast and accurate modeling of molecular atomization energies with machine learning.

    PubMed

    Rupp, Matthias; Tkatchenko, Alexandre; Müller, Klaus-Robert; von Lilienfeld, O Anatole

    2012-02-01

    We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a nonlinear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross validation over more than seven thousand organic molecules yields a mean absolute error of ∼10  kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves. PMID:22400967
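
    A sketch of the kind of descriptor-plus-kernel-regression pairing described above: a Coulomb-matrix representation built from nuclear charges and positions (reduced here to its sorted eigenvalue spectrum for permutation invariance) fed to kernel ridge regression. Hyperparameters, padding size, and the eigen-spectrum reduction are illustrative choices, not the paper's exact settings.

```python
# Sketch: Coulomb-matrix eigenspectrum descriptor + kernel ridge regression
# for predicting molecular atomization energies from charges Z and positions R.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def coulomb_eigenspectrum(Z, R, max_atoms=23):
    n = len(Z)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    eig = np.sort(np.linalg.eigvalsh(M))[::-1]
    return np.pad(eig, (0, max_atoms - n))      # zero-pad to a fixed length

# molecules: list of (Z_array, R_array); energies: target atomization energies.
def train_model(molecules, energies):
    X = np.array([coulomb_eigenspectrum(Z, R) for Z, R in molecules])
    model = KernelRidge(kernel="laplacian", alpha=1e-8, gamma=1e-4)
    return model.fit(X, energies)
```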

  12. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  13. The Effects of Video Modeling with Voiceover Instruction on Accurate Implementation of Discrete-Trial Instruction

    ERIC Educational Resources Information Center

    Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.

    2012-01-01

    The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…

  14. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that includes perturbing forces, such as the gravitational effect of multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method that accommodates the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
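
    A minimal sketch of numerically propagating an STM by integrating the variational equations alongside the equations of motion is given below, using SciPy's eighth-order Dormand-Prince integrator (DOP853). Point-mass Earth gravity stands in for the high-fidelity hardware and ephemeris models described in the record; the orbit and tolerances are illustrative assumptions.

```python
# Sketch: propagate a spacecraft state and its 6x6 state transition matrix (STM)
# under two-body gravity, integrating dPhi/dt = A(t) Phi together with the state.
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418  # km^3/s^2, Earth gravitational parameter

def dynamics(t, y):
    r, v = y[:3], y[3:6]
    phi = y[6:].reshape(6, 6)
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3
    # Gravity gradient and linearized dynamics matrix A
    G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3:, :3] = G
    dphi = A @ phi
    return np.concatenate([v, a, dphi.ravel()])

r0 = np.array([7000.0, 0.0, 0.0])          # km
v0 = np.array([0.0, 7.546, 0.0])           # km/s (near-circular LEO)
y0 = np.concatenate([r0, v0, np.eye(6).ravel()])

sol = solve_ivp(dynamics, (0.0, 3600.0), y0, method="DOP853",
                rtol=1e-10, atol=1e-12)
stm = sol.y[6:, -1].reshape(6, 6)
print("STM after one hour:\n", stm)
```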

  15. Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2013-01-01

    The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
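
    The sketch below illustrates the core reduced-order-modeling step named above: building a proper orthogonal decomposition (POD) basis from a snapshot matrix and projecting a new snapshot onto the leading modes. A plain thin SVD is used here rather than the stochastic SVD algorithm mentioned in the record, and random arrays stand in for the CFD pressure-coefficient snapshots.

```python
# Sketch: POD basis from a snapshot matrix via the SVD, plus projection of a
# new snapshot onto the retained modes.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 2000, 200          # hypothetical surface points / time steps
snapshots = rng.normal(size=(n_points, n_snapshots))

mean = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean

# Thin SVD: columns of U are the POD modes, singular values give modal energy.
U, s, _ = np.linalg.svd(fluct, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.99)) + 1   # keep 99% of the energy
modes = U[:, :n_modes]

new_snapshot = rng.normal(size=n_points)
coeffs = modes.T @ (new_snapshot - mean.ravel())   # reduced-order representation
reconstruction = mean.ravel() + modes @ coeffs
print(n_modes, "modes retained; reconstruction error:",
      np.linalg.norm(new_snapshot - reconstruction))
```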

  16. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions.

    PubMed

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
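
    As a simplified illustration of the idea, the sketch below generates discrete Laguerre basis outputs by recursive filtering of the input and fits a linear model that regresses the output on the Laguerre-filtered input plus past outputs. It is a stripped-down stand-in, not the published LEK/ARMA estimator; the pole value, basis size and toy system are assumptions.

```python
# Sketch: Laguerre-filtered inputs (MA part) plus output lags (AR part),
# combined in an ordinary least-squares fit.
import numpy as np
from scipy.signal import lfilter

def laguerre_filtered(u, n_basis, a):
    """Filter input u through a cascade of discrete Laguerre filters (pole a)."""
    outputs = []
    # First-order low-pass stage, then repeated all-pass stages.
    x = lfilter([np.sqrt(1.0 - a**2)], [1.0, -a], u)
    outputs.append(x)
    for _ in range(1, n_basis):
        x = lfilter([-a, 1.0], [1.0, -a], x)
        outputs.append(x)
    return np.column_stack(outputs)

rng = np.random.default_rng(0)
N = 1000
u = rng.normal(size=N)                                                # input
y = lfilter([0.2, 0.1], [1.0, -0.8], u) + 0.01 * rng.normal(size=N)   # toy system

L = laguerre_filtered(u, n_basis=4, a=0.5)                            # MA part
y_past = np.column_stack([np.r_[np.zeros(k), y[:-k]] for k in (1, 2)])  # AR part
X = np.hstack([L, y_past])

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", theta)
```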

  17. A constitutive mechanical model for energetic materials

    SciTech Connect

    Hobbs, M.L.; Baer, M.R.; Gross, R.J.

    1994-06-01

    Cookoff modeling of energetic materials has traditionally addressed reactive heat flow with the goal of defining the onset of runaway combustion behavior. Current modeling efforts are now aimed toward predicting the violence of the event. Combined thermal, chemical, and mechanical response must be modeled, since confinement results in pressure buildup which can breach confinement or enhance gas-phase combustion rates leading to runaway combustion behavior. Thermally induced stresses can also cause gaps which inhibit heat flow. These mechanical effects must also be included in cookoff modeling. A new reactive elastic-plastic constitutive model for micromechanical response has been developed which represents a stress-strain relation for reacting materials such as explosives, propellants, pyrotechnics, or burning foams. This micromechanical model is based on bubble mechanics. A local force balance, with mass continuity constraints, forms the basis of the constitutive model requiring input of temperature and reacted fraction. This constitutive material model has been incorporated into a quasistatic mechanics code, SANTOS. To provide temperature and reacted gas fraction, the thermal-chemical solver, XCHEM, has been coupled to SANTOS. This paper summarizes the development of the micromechanical model with material property estimates for conventional energetic materials. This study shows that large pressures can arise from small reacted fractions which implies that cookoff modeling must consider the strong interaction between thermochemistry and mechanics.

  18. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems.

    PubMed

    Sapsis, Themistoklis P; Majda, Andrew J

    2013-08-20

    A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
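
    For reference, the sketch below integrates the 40-mode Lorenz 96 system used above as a test bed. It reproduces only the test model with the commonly used forcing F = 8, not the ROMQG algorithm itself; the initial condition and solver settings are illustrative.

```python
# Sketch: the Lorenz 96 system, d x_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
# with cyclic indices, integrated with a standard ODE solver.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96(t, x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

n = 40
x0 = 8.0 * np.ones(n)
x0[0] += 0.01                      # small perturbation to trigger chaos

sol = solve_ivp(lorenz96, (0.0, 20.0), x0, method="RK45", max_step=0.01)
print("state at t = 20:", sol.y[:, -1])
```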

  19. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  20. Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?

    NASA Astrophysics Data System (ADS)

    Ramarohetra, J.; Sultan, B.

    2012-04-01

    Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the Sudano-Sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analyses to quantify the impact on yields of different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts, (ii) for early warning systems, and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the Sudano-Sahelian zone, the quality of precipitation estimates is therefore a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as an input for crop models, they determine the performance of the simulated yield; hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based daily rainfall products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and

  1. A Framework for Accurate Geospatial Modeling of Recharge and Discharge Maps using Image Ranking and Machine Learning

    NASA Astrophysics Data System (ADS)

    Yahja, A.; Kim, C.; Lin, Y.; Bajcsy, P.

    2008-12-01

    This paper addresses the problem of accurate estimation of geospatial models from a set of groundwater recharge & discharge (R&D) maps and from auxiliary remote sensing and terrestrial raster measurements. The motivation for our work is driven by the cost of field measurements, and by the limitations of currently available physics-based modeling techniques that do not include all relevant variables and allow accurate predictions only at coarse spatial scales. The goal is to improve our understanding of the underlying physical phenomena and increase the accuracy of geospatial models--with a combination of remote sensing, field measurements and physics-based modeling. Our approach is to process a set of R&D maps generated from interpolated sparse field measurements using existing physics-based models, and identify the R&D map that would be the most suitable for extracting a set of rules between the auxiliary variables of interest and the R&D map labels. We implemented this approach by ranking R&D maps using information entropy and mutual information criteria, and then by deriving a set of rules using a machine learning technique, such as the decision tree method. The novelty of our work is in developing a general framework for building geospatial models with the ultimate goal of minimizing cost and maximizing model accuracy. The framework is demonstrated for groundwater R&D rate models but could be applied to other similar studies, for instance, to understanding hypoxia based on physics-based models and remotely sensed variables. Furthermore, our key contribution is in designing a ranking method for R&D maps that allows us to analyze multiple plausible R&D maps with different numbers of zones, which was not possible in our earlier prototype of the framework, called Spatial Pattern to Learn. We will present experimental results using example R&D and other maps from an area in Wisconsin.
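
    The sketch below illustrates two of the ingredients named above: ranking candidate R&D maps by the Shannon entropy of their zone labels, and extracting rules from auxiliary raster variables with a decision tree. It omits the mutual-information criterion, and all arrays are synthetic placeholders for the real raster layers; the zone counts and tree depth are assumptions.

```python
# Sketch: entropy-based ranking of candidate zone maps, then rule extraction
# with a decision tree on auxiliary variables.
import numpy as np
from scipy.stats import entropy
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_pixels = 5000

# Three candidate R&D maps with different numbers of zones.
candidate_maps = [rng.integers(0, k, size=n_pixels) for k in (3, 5, 8)]

def label_entropy(labels):
    counts = np.bincount(labels)
    return entropy(counts / counts.sum())

ranked = sorted(range(len(candidate_maps)),
                key=lambda i: label_entropy(candidate_maps[i]), reverse=True)
best = candidate_maps[ranked[0]]
print("entropy of each map:", [label_entropy(m) for m in candidate_maps])

# Auxiliary variables (e.g. terrain, remote-sensing bands) -> zone-label rules.
aux = rng.normal(size=(n_pixels, 6))
tree = DecisionTreeClassifier(max_depth=4).fit(aux, best)
print("training accuracy of extracted rules:", tree.score(aux, best))
```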

  2. An Accurate In Vitro Model of the E. coli Envelope

    PubMed Central

    Clifton, Luke A.; Holt, Stephen A.; Hughes, Arwel V.; Daulton, Emma L.; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R.; Webster, John R. P.; Kinane, Christian J.

    2015-01-01

    Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir-Blodgett and Langmuir-Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:27346898

  3. An accurate in vitro model of the E. coli envelope.

    PubMed

    Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H

    2015-10-01

    Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir-Blodgett and Langmuir-Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292

  4. An accurate two-phase approximate solution to the acute viral infection model

    SciTech Connect

    Perelson, Alan S

    2009-01-01

    During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
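
    For orientation, the sketch below integrates the standard target-cell-limited influenza model referred to above, so the log-linear growth and decay phases can be inspected numerically. The parameter values and initial conditions are illustrative literature-style values, not those estimated in the paper.

```python
# Sketch: target-cell-limited model
#   dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V
import numpy as np
from scipy.integrate import solve_ivp

def tcl_model(t, y, beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0):
    T, I, V = y                       # target cells, infected cells, virus
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

y0 = [4e8, 0.0, 7.5e-2]               # initial target cells and inoculum (illustrative)
sol = solve_ivp(tcl_model, (0.0, 12.0), y0, max_step=0.01)

V = np.maximum(sol.y[2], 1e-12)
peak_day = sol.t[np.argmax(V)]
print("viral peak at day %.2f, log10 peak titre %.2f" % (peak_day, np.log10(V.max())))
```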

  5. Constitutive Modeling of Crosslinked Nanotube Materials

    NASA Technical Reports Server (NTRS)

    Odegard, G. M.; Frankland, S. J. V.; Herzog, M. N.; Gates, T. S.; Fay, C. C.

    2004-01-01

    A non-linear, continuum-based constitutive model is developed for carbon nanotube materials in which bundles of aligned carbon nanotubes have varying amounts of crosslinks between the nanotubes. The model accounts for the non-linear elastic constitutive behavior of the material in terms of strain, and is developed using a thermodynamic energy approach. The model is used to examine the effect of crosslinking on the overall mechanical properties of carbon nanotube materials with varying degrees of crosslinking. It is shown that the presence of the crosslinks has significant effects on the mechanical properties of the carbon nanotube materials. An increase in the transverse shear properties is observed when the nanotubes are crosslinked. However, this increase is accompanied by a decrease in the axial mechanical properties of the nanotube material upon crosslinking.

  6. Features of creation of highly accurate models of triumphal pylons for archaeological reconstruction

    NASA Astrophysics Data System (ADS)

    Grishkanich, A. S.; Sidorov, I. S.; Redka, D. N.

    2015-12-01

    Measuring operations were carried out to determine the geometric characteristics of objects in space, together with a geodetic survey of objects on the ground. In the course of the work, data were obtained on the relative positioning of the pylons in space, revealing deviations from verticality. Compared with traditional surveying, this testing method is preferable because it yields, in a semi-automated mode, a high-accuracy CAD model of the object for subsequent analysis, which is also more economically advantageous.

  7. Morphometric analysis of Russian Plain's small lakes on the base of accurate digital bathymetric models

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to the physical factors (shape, size, structure, etc.) that characterize a lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata, and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys have been carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created on a 10 x 10 m spatial grid for small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.

  8. Mathematical model accurately predicts protein release from an affinity-based delivery system.

    PubMed

    Vulic, Katarina; Pakulska, Malgosia M; Sonthalia, Rohit; Ramachandran, Arun; Shoichet, Molly S

    2015-01-10

    Affinity-based controlled release modulates the delivery of protein or small molecule therapeutics through transient dissociation/association. To understand which parameters can be used to tune release, we used a mathematical model based on simple binding kinetics. A comprehensive asymptotic analysis revealed three characteristic regimes for therapeutic release from affinity-based systems. These regimes can be controlled by diffusion or unbinding kinetics, and can exhibit release over either a single stage or two stages. This analysis fundamentally changes the way we think of controlling release from affinity-based systems and thereby explains some of the discrepancies in the literature on which parameters influence affinity-based release. The rate of protein release from affinity-based systems is determined by the balance of diffusion of the therapeutic agent through the hydrogel and the dissociation kinetics of the affinity pair. Equations for tuning protein release rate by altering the strength (KD) of the affinity interaction, the concentration of binding ligand in the system, the rate of dissociation (koff) of the complex, and the hydrogel size and geometry, are provided. We validated our model by collapsing the model simulations and the experimental data from a recently described affinity release system, to a single master curve. Importantly, this mathematical analysis can be applied to any single species affinity-based system to determine the parameters required for a desired release profile. PMID:25449806

  9. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.

    PubMed

    Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L

    2015-01-01

    Migraine is one of the most wide-spread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and pretty variable; hence, these symptoms are almost useless for prediction, and they are not useful to advance the intake of drugs to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  10. Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics

    NASA Astrophysics Data System (ADS)

    Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.

    2014-12-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources, so that precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, wind and photovoltaic (PV) power forecasts shall be improved by combining optimized NWP and enhanced power forecast models. The conducted work focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events like Saharan dust over Germany and the solar eclipse in 2015, are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.

  11. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

    Migraine is one of the most wide-spread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and pretty variable; hence, these symptoms are almost useless for prediction, and they are not useful to advance the intake of drugs to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103

  12. Comparison of four digital PCR platforms for accurate quantification of DNA copy number of a certified plasmid DNA reference material

    PubMed Central

    Dong, Lianhua; Meng, Ying; Sui, Zhiwei; Wang, Jing; Wu, Liqing; Fu, Boqiang

    2015-01-01

    Digital polymerase chain reaction (dPCR) is a unique approach to measurement of the absolute copy number of target DNA without using external standards. However, the comparability of different dPCR platforms with respect to measurement of DNA copy number must be addressed before dPCR can be classified fundamentally as an absolute quantification technique. The comparability of four dPCR platforms with respect to accuracy and measurement uncertainty was investigated by using a certified plasmid reference material. Plasmid conformation was found to have a significant effect on droplet-based dPCR (QX100 and RainDrop) not shared with chip-based QuantStudio 12k or BioMark. The relative uncertainty of partition volume was determined to be 0.7%, 0.8%, 2.3% and 2.9% for BioMark, QX100, QuantStudio 12k and RainDrop, respectively. The measurements of the certified pNIM-001 plasmid made using the four dPCR platforms were corrected for partition volume and were closely consistent with the certified value within the expanded uncertainty. This demonstrated that the four dPCR platforms are of comparable effectiveness in quantifying DNA copy number. These findings provide an independent assessment of this method of determining DNA copy number when using different dPCR platforms and underline important factors that should be taken into consideration in the design of dPCR experiments. PMID:26302947
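
    For context, the sketch below shows the standard Poisson correction used in digital PCR to convert the fraction of positive partitions into a copy-number concentration, which is the quantity whose platform dependence is examined above. The partition counts and partition volume are illustrative numbers, not values from the study.

```python
# Sketch: dPCR copy-number concentration via the Poisson correction
#   lambda = -ln(1 - p), with p the fraction of positive partitions.
import math

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    """Copies per microlitre from a dPCR run."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)          # mean copies per partition
    return lam / partition_volume_ul

# e.g. 4200 positive partitions out of 20000, 0.85 nL (8.5e-4 uL) partitions
conc = dpcr_concentration(4200, 20000, 8.5e-4)
print("estimated concentration: %.0f copies/uL" % conc)
```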

  13. Effects of the inlet conditions and blood models on accurate prediction of hemodynamics in the stented coronary arteries

    NASA Astrophysics Data System (ADS)

    Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua

    2015-05-01

    Hemodynamics altered by stent implantation is well-known to be closely related to in-stent restenosis. Computational fluid dynamics (CFD) method has been used to investigate the hemodynamics in stented arteries in detail and help to analyze the performances of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for the hemodynamics at steady or pulsatile inlet conditions respectively employing CFD based on the finite volume method. The results showed that the blood model with non-Newtonian property decreased the area of low wall shear stress (WSS) compared with the blood model with Newtonian property and the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and blood models are all important for accurately predicting the hemodynamics. This will be beneficial to estimate the performances of stents and also help clinicians to select the proper stents for the patients.
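
    The sketch below evaluates a Carreau viscosity law, one common way to give blood the non-Newtonian, shear-thinning behaviour contrasted with the Newtonian assumption above. The specific rheological model and parameter values are typical literature choices for blood and are illustrative only; the study's exact blood model may differ.

```python
# Sketch: Carreau apparent viscosity as a function of shear rate.
import numpy as np

def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Apparent viscosity [Pa.s] for a given shear rate [1/s]."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

for g in np.logspace(-2, 3, 6):       # 0.01 to 1000 1/s
    print("shear rate %8.2f 1/s -> viscosity %.5f Pa.s" % (g, carreau_viscosity(g)))
```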

  14. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    NASA Astrophysics Data System (ADS)

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that end, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and some other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software made for the automation of various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time consuming, while the use of various software packages presumes the services of a specialist.

  15. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
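
    As a stripped-down illustration of an iterative improvement heuristic of the kind discussed above, the sketch below has a simulated forager repeatedly try a small modification of its current flower-visit order and keep it only when the route gets shorter. This is a deliberately simplified stand-in, not the published bumblebee model; the flower positions and number of bouts are arbitrary.

```python
# Sketch: iterative improvement of a visit order by segment reversal,
# retaining only changes that shorten the closed route.
import numpy as np

rng = np.random.default_rng(0)
flowers = rng.uniform(0, 100, size=(10, 2))      # hypothetical flower positions [m]

def route_length(order):
    pts = flowers[np.r_[order, order[0]]]        # return to the start (nest)
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

order = np.arange(len(flowers))
best_len = route_length(order)

for bout in range(2000):                         # successive foraging bouts
    i, j = sorted(rng.choice(len(flowers), size=2, replace=False))
    trial = order.copy()
    trial[i:j + 1] = trial[i:j + 1][::-1]        # reverse one segment of the route
    trial_len = route_length(trial)
    if trial_len < best_len:                     # reinforce shorter routes
        order, best_len = trial, trial_len

print("final visit order:", order, "route length: %.1f m" % best_len)
```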

  16. Modeling of fatigue for cellular materials

    SciTech Connect

    Huang, J.S.; Lin, J.Y.

    1998-12-31

    Dimensional arguments are used to analyze the fatigue of cellular materials. A model describing the fatigue of foams with or without a macrocrack is derived and compared to existing experimental data for cementitious foams and phenolic foams; the agreement is good.

  17. Computer Model Buildings Contaminated with Radioactive Material

    1998-05-19

    The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

  18. Key Issues for an Accurate Modelling of GaSb TPV Converters

    NASA Astrophysics Data System (ADS)

    Martín, Diego; Algora, Carlos

    2003-01-01

    GaSb TPV devices are commonly manufactured by Zn diffusion from the vapour phase on an n-type substrate, leading to very high doping concentrations in a narrow emitter. This fact emphasizes the need for careful modelling that includes high-doping effects to simulate the optoelectronic behaviour of the devices. In this work, the key parameters that have a strong influence on the performance of GaSb TPV devices are underlined, more reliable values are suggested, and our first results on the study of the dependence of the absorption coefficient on high p-type doping concentration are presented.

  19. Accurate modeling and reconstruction of three-dimensional percolating filamentary microstructures from two-dimensional micrographs via dilation-erosion method

    SciTech Connect

    Guo, En-Yu; Chawla, Nikhilesh; Jing, Tao; Torquato, Salvatore; Jiao, Yang

    2014-03-01

    Heterogeneous materials are ubiquitous in nature and synthetic situations and have a wide range of important engineering applications. Accurate modeling and reconstruction of the three-dimensional (3D) microstructure of topologically complex materials from limited morphological information, such as a two-dimensional (2D) micrograph, is crucial to the assessment and prediction of effective material properties and performance under extreme conditions. Here, we extend a recently developed dilation–erosion method and employ the Yeong–Torquato stochastic reconstruction procedure to model and generate 3D austenitic–ferritic cast duplex stainless steel microstructure containing a percolating filamentary ferrite phase from 2D optical micrographs of the material sample. Specifically, the ferrite phase is dilated to produce a modified target 2D microstructure and the resulting 3D reconstruction is eroded to recover the percolating ferrite filaments. The dilation–erosion reconstruction is compared with the actual 3D microstructure, obtained from serial sectioning (polishing), as well as the standard stochastic reconstructions incorporating topological connectedness information. The fact that the former can achieve the same level of accuracy as the latter suggests that the dilation–erosion procedure is tantamount to incorporating appreciably more topological and geometrical information into the reconstruction while being much more computationally efficient. - Highlights: • Spatial correlation functions used to characterize the filamentary ferrite phase • Clustering information assessed from the 3D experimental structure via serial sectioning • Stochastic reconstruction used to generate a 3D virtual structure from a 2D micrograph • Dilation–erosion method used to improve the accuracy of the 3D reconstruction.
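
    The sketch below shows the dilation-erosion bookkeeping on a binary 2D phase image with standard morphological operators: the target phase is dilated before reconstruction and the result is eroded afterwards to recover thin, connected filaments. The random image and structuring element are placeholders, and the stochastic reconstruction step itself is replaced by a stand-in.

```python
# Sketch: dilate the target phase, (notionally) reconstruct, then erode.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

rng = np.random.default_rng(0)
phase = rng.random((128, 128)) < 0.15          # sparse "ferrite" phase placeholder
selem = np.ones((3, 3), dtype=bool)            # structuring element

dilated_target = binary_dilation(phase, structure=selem)
# A stochastic reconstruction matched to correlation functions of the dilated
# target would be generated here; the dilated image is reused as a stand-in.
reconstruction = dilated_target
recovered = binary_erosion(reconstruction, structure=selem)

print("volume fraction before/after:", phase.mean(), recovered.mean())
```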

  20. Multiconjugate adaptive optics applied to an anatomically accurate human eye model.

    PubMed

    Bedggood, P A; Ashman, R; Smith, G; Metha, A B

    2006-09-01

    Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics. PMID:19529172

  1. Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis

    NASA Astrophysics Data System (ADS)

    Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C.; Matarrese, G.; Del Guerra, A.; Bisogni, M. G.

    2013-08-01

    It has already been shown how the shape of the current pulse produced by a SiPM in response to an incident photon is appreciably affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical time performance of the detection system, the influence of all the parasitics associated with the SiPM-FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend the validity of a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the time accuracy of the detection system when the time pick-off of each single event is extracted by means of a leading-edge discriminator (LED) technique.

  2. Considering mask pellicle effect for more accurate OPC model at 45nm technology node

    NASA Astrophysics Data System (ADS)

    Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo

    2008-11-01

    The 45nm technology node marks the first generation of immersion microlithography. This brand-new lithography tooling means that many optical effects which could be ignored at the 90nm and 65nm nodes now have a significant impact on the pattern transfer process from design to silicon. Among these effects, one that needs attention is the impact of the mask pellicle on critical dimension variation. With the implementation of hyper-NA lithography tools, assuming that light transmits the mask pellicle vertically is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model, and we show that, given the extremely tight critical dimension control specifications of the 45nm generation node, taking the mask pellicle effect into account in the OPC model has become necessary.

  3. ASPH modeling of Material Damage and Failure

    SciTech Connect

    Owen, J M

    2010-04-30

    We describe our new methodology for Adaptive Smoothed Particle Hydrodynamics (ASPH) and its application to problems in modeling material failure. We find that ASPH is often crucial for properly modeling such experiments, since in most cases the strain placed on materials is non-isotropic (such as a stretching rod), and without the directional adaptability of ASPH numerical failure due to SPH nodes losing contact in the straining direction can compete with or exceed the physical process of failure.

  4. Material characterization and modeling with shearography

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Callahan, Virginia

    1993-01-01

    Shearography has emerged as a useful technique for nondestructive evaluation and materials characterization of aerospace materials. A suitable application of the technique is determining the response of debonds at foam-metal interfaces, such as the TPS system on the External Tank. The main thrust is to develop a model which allows valid interpretation of shearographic information on TPS-type systems. Confirmation of the model with shearographic data will be performed.

  5. Modeling of shear localization in materials

    SciTech Connect

    Lesuer, D.; LeBlanc, M.; Riddle, B.; Jorgensen, B.

    1998-02-11

    The deformation response of a Ti alloy, Ti-6Al-4V, has been studied during shear localization. The study has involved well-controlled laboratory tests involving a double-notch shear sample. The results have been used to provide a comparison between experiment and the predicted response using DYNA2D and two material models (the Johnson-Cook model and an isotropic elastic-plastic-hydrodynamic model). The work will serve as the basis for the development of a new material model which represents the different deformation mechanisms active during shear localization.
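
    For reference, the sketch below evaluates the Johnson-Cook flow-stress relation named above, sigma = (A + B*eps^n)(1 + C*ln(eps_dot/eps_dot0))(1 - T*^m), with T* the homologous temperature. The Ti-6Al-4V parameter values are representative published values and are included for illustration only, not as the calibration used in this study.

```python
# Sketch: Johnson-Cook flow stress for a given strain, strain rate and temperature.
import math

def johnson_cook(eps, eps_dot, T,
                 A=1098e6, B=1092e6, n=0.93, C=0.014, m=1.1,
                 eps_dot0=1.0, T_room=298.0, T_melt=1878.0):
    """Flow stress [Pa] at plastic strain eps, strain rate eps_dot [1/s], temperature T [K]."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps**n) * (1.0 + C * math.log(eps_dot / eps_dot0)) * (1.0 - T_star**m)

print("flow stress at 10%% strain, 1000 1/s, 600 K: %.0f MPa"
      % (johnson_cook(0.10, 1000.0, 600.0) / 1e6))
```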

  6. Accurate modeling of light trapping in thin film silicon solar cells

    SciTech Connect

    Abouelsaood, A.A.; Ghannam, M.Y.; Poortmans, J.; Mertens, R.P.

    1997-12-31

    An attempt is made to assess the accuracy of the simplifying assumption of total retransmission of light inside the escape or loss cone which is made in many models of optical confinement in thin-film silicon solar cells. A closed form expression is derived for the absorption enhancement factor as a function of the refractive index in the low-absorption limit for a thin-film cell with a flat front surface and a lambertian back reflector. Numerical calculations are carried out to investigate similar systems with antireflection coatings, and the investigation of cells with a textured front surface is achieved using a modified version of the existing ray-tracing computer simulation program TEXTURE.

  7. Accurate programmable electrocardiogram generator using a dynamical model implemented on a microcontroller

    NASA Astrophysics Data System (ADS)

    Chien Chang, Jia-Ren; Tai, Cheng-Chi

    2006-07-01

    This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289, (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1 BPM (beats per minute), from 20 to 176 BPM, while the amplitude of the ECG signal can be set from 0.1 to 400 mV with a 0.1 mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
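
    The sketch below integrates the three coupled ODEs of the McSharry et al. dynamical model that the generator above builds on, using the commonly quoted default P, Q, R, S, T parameters. The baseline-wander term is simplified to a constant baseline of zero, and the heart rate and solver settings are illustrative; this is a sketch of the underlying model, not of the authors' microcontroller implementation.

```python
# Sketch: McSharry-style dynamical ECG model. Five Gaussian events (P, Q, R, S, T)
# shape the z-coordinate, which serves as the synthetic ECG trace.
import numpy as np
from scipy.integrate import solve_ivp

theta_i = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])  # P Q R S T
a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])

def ecg_model(t, y, heart_rate=60.0):
    x, yv, z = y
    omega = 2.0 * np.pi * heart_rate / 60.0
    alpha = 1.0 - np.sqrt(x**2 + yv**2)
    theta = np.arctan2(yv, x)
    dtheta = np.mod(theta - theta_i + np.pi, 2.0 * np.pi) - np.pi
    dx = alpha * x - omega * yv
    dy = alpha * yv + omega * x
    dz = -np.sum(a_i * dtheta * np.exp(-dtheta**2 / (2.0 * b_i**2))) - z  # baseline at 0
    return [dx, dy, dz]

sol = solve_ivp(ecg_model, (0.0, 5.0), [-1.0, 0.0, 0.0], max_step=0.002)
ecg = sol.y[2]                      # synthetic ECG waveform (arbitrary units)
print("samples:", ecg.size, "peak amplitude:", ecg.max())
```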

  8. TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow

    USGS Publications Warehouse

    Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.

    1993-01-01

    A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results suitable interactive graphics is also an essential tool.

  9. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  10. Biomechanical modeling provides more accurate data for neuronavigation than rigid registration

    PubMed Central

    Garlapati, Revanth Reddy; Roy, Aditi; Joldes, Grand Roman; Wittek, Adam; Mostayed, Ahmed; Doyle, Barry; Warfield, Simon Keith; Kikinis, Ron; Knuckey, Neville; Bunt, Stuart; Miller, Karol

    2015-01-01

    It is possible to improve neuronavigation during image-guided surgery by warping the high-quality preoperative brain images so that they correspond with the current intraoperative configuration of the brain. In this work, the accuracy of registration results obtained using comprehensive biomechanical models is compared to the accuracy of rigid registration, the technology currently available to patients. This comparison allows us to investigate whether biomechanical modeling provides good quality image data for neuronavigation for a larger proportion of patients than rigid registration. Preoperative images for 33 cases of neurosurgery were warped onto their respective intraoperative configurations using both the biomechanics-based method and rigid registration. We used a Hausdorff distance-based evaluation process that measures the difference between images to quantify the performance of both methods of registration. A statistical test for difference in proportions was conducted to evaluate the null hypothesis that the proportion of patients for whom improved neuronavigation can be achieved is the same for rigid and biomechanics-based registration. The null hypothesis was confidently rejected (p-value < 10^-4). Even the modified hypothesis that less than 25% of patients would benefit from the use of biomechanics-based registration was rejected at a significance level of 5% (p-value = 0.02). The biomechanics-based method proved particularly effective for cases experiencing large craniotomy-induced brain deformations. The outcome of this analysis suggests that our nonlinear biomechanics-based methods are beneficial to a large proportion of patients and can be considered for use in the operating theatre as one possible method of improving neuronavigation and surgical outcomes. PMID:24460486
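
    The sketch below illustrates the two evaluation ingredients mentioned above: a symmetric Hausdorff-type distance between two point sets (standing in for features extracted from registered and intraoperative images) and a z-test for a difference in proportions. The point sets, counts and the specific z-test formulation are illustrative assumptions, not the study's exact pipeline.

```python
# Sketch: symmetric Hausdorff distance between point sets plus a
# two-proportion z-test.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.stats import norm

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 3))                 # points from the registered image
B = A + 0.05 * rng.normal(size=(500, 3))      # points from the intraoperative image

h = max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
print("symmetric Hausdorff distance: %.3f" % h)

def two_proportion_z(success1, n1, success2, n2):
    p1, p2 = success1 / n1, success2 / n2
    p = (success1 + success2) / (n1 + n2)
    z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return z, 2.0 * (1.0 - norm.cdf(abs(z)))   # two-sided p-value

# e.g. improvement in 30/33 cases vs 18/33 cases (illustrative counts)
z, p = two_proportion_z(30, 33, 18, 33)
print("z = %.2f, p-value = %.4f" % (z, p))
```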

  11. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the kind of coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit instead of a least-squares method and Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
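
    The sketch below shows the general idea of sub-pixel centre estimation by fitting a 2D Gaussian to pixel intensities. It uses a simple least-squares fit rather than the maximum-likelihood formulation described above, and the image, noise level and starting values are synthetic assumptions.

```python
# Sketch: sub-pixel centroid of a point-like object via a 2-D Gaussian fit.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return offset + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

ny, nx = 15, 15
y, x = np.mgrid[0:ny, 0:nx]
true_centre = (7.37, 6.81)                                     # sub-pixel position
image = gauss2d((x, y), 500.0, *true_centre, 1.6, 20.0)
image = image + np.random.default_rng(2).normal(scale=5.0, size=image.shape)

p0 = (image.max(), nx / 2, ny / 2, 2.0, float(np.median(image)))
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), image.ravel(), p0=p0)
print("estimated centre: (%.3f, %.3f), true centre: %s" % (popt[1], popt[2], true_centre))
```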

  12. An investigation of the material and model parameters for a constitutive model for MSMAs

    NASA Astrophysics Data System (ADS)

    Dikes, Jason; Feigenbaum, Heidi; Ciocanel, Constantin

    2015-04-01

    A two-dimensional constitutive model capable of predicting the magneto-mechanical response of a magnetic shape memory alloy (MSMA) has been developed and calibrated using a zero field-variable stress test [1]. This calibration approach is easy to perform and facilitates a faster evaluation of the three calibration constants required by the model (vs. five calibration constants required by previous models [2,3]). The calibration constants generated with this approach facilitate good model predictions of constant field-variable stress tests for a wide range of loading conditions [1]. However, the same calibration constants yield less accurate model predictions for constant stress-variable field tests. Deployment of a separate calibration method for this type of loading, using a varying field-zero stress calibration test, also did not lead to improved model predictions for this loading case. As a result, a sensitivity analysis was performed on most model and material parameters to identify which of them influence model predictions the most, in both types of loading conditions. The sensitivity analysis revealed that changing most of these parameters did not improve model predictions for all loading types. Only the anisotropy coefficient was found to significantly improve field-controlled model predictions while slightly worsening model predictions for stress-controlled cases. This suggests that either the value of the anisotropy coefficient (which is provided by the manufacturer) is not accurate, or that the model is missing features associated with the magnetic energy of the material.

  13. The Model 9977 Radioactive Material Packaging Primer

    SciTech Connect

    Abramczyk, G.

    2015-10-09

    The Model 9977 Packaging is a single-containment, drum-style radioactive material (RAM) shipping container designed, tested and analyzed to meet the performance requirements of Title 10 of the Code of Federal Regulations, Part 71. A radioactive material shipping package, in combination with its contents, must perform three functions (please note that the performance criteria specified in the Code of Federal Regulations have alternate limits for normal operations and accident conditions): Containment, the package must “contain” the radioactive material within it; Shielding, the packaging must limit its users and the public to radiation doses within specified limits; and Subcriticality, the package must maintain its radioactive material as subcritical.

  14. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    NASA Astrophysics Data System (ADS)

    Tao, Jianmin; Rappe, Andrew M.

    2016-01-01

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.

  15. Structural Modelling of Two Dimensional Amorphous Materials

    NASA Astrophysics Data System (ADS)

    Kumar, Avishek

    The continuous random network (CRN) model of network glasses is widely accepted as a model for materials such as vitreous silica and amorphous silicon. Although it has been more than eighty years since the proposal of the CRN, there has not been conclusive experimental evidence of the structure of glasses and amorphous materials. This has now changed with the advent of two-dimensional amorphous materials. Now, not only the distribution of rings but the actual atomic ring structure can be imaged in real space, allowing for a more complete characterization of these types of networks. This dissertation reports the first work done on the modelling of amorphous graphene and vitreous silica bilayers. Models of amorphous graphene have been created using a Monte Carlo bond-switching method and a molecular dynamics (MD) method. Vitreous silica bilayers have been constructed using models of amorphous graphene, and the ring statistics of the silica bilayers have been studied.

  16. Accurate estimation of retinal vessel width using bagged decision trees and an extended multiresolution Hermite model.

    PubMed

    Lupaşcu, Carmen Alina; Tegolo, Domenico; Trucco, Emanuele

    2013-12-01

    We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy. PMID:24001930
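
    A hedged sketch of the regression step (bagged decision trees mapping surface-fit parameters to a width), using synthetic stand-in data rather than the REVIEW or Tayside sets:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # stand-in for per-profile surface-model parameters
y = 2.0 + X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)  # stand-in widths (pixels)

# BaggingRegressor bags decision trees by default
model = BaggingRegressor(n_estimators=50, random_state=0).fit(X[:400], y[:400])
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"held-out RMSE: {rmse:.3f} pixels")
```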

  17. Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices

    NASA Astrophysics Data System (ADS)

    Bauer, Friedhelm D.

    2009-06-01

    A new, simple and efficient approach is presented to conduct estimations of the cosmic ray induced failure rate for high voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by the reliability to result in a better and overall more efficient design methodology. Starting from an experimental and theoretical background brought forth a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to be combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.

  18. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    PubMed

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059

  19. Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.

    PubMed

    Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M

    2016-08-01

    Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
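
    A minimal sketch of the direct, history-preserving MFPT estimate that such a non-Markovian analysis builds on, assuming a trajectory already discretized into state labels (the state names and time step are illustrative):

```python
import numpy as np

def mean_first_passage_time(labels, source, target, dt=1.0):
    """Direct MFPT estimate: time from each fresh entry into `source` until the
    next visit to `target`, averaged over all such events along the trajectory."""
    fpts, t_enter = [], None
    for t, s in enumerate(labels):
        if s == source and t_enter is None:
            t_enter = t
        elif s == target and t_enter is not None:
            fpts.append((t - t_enter) * dt)
            t_enter = None
    return np.mean(fpts) if fpts else np.nan

# toy usage on a short synthetic label sequence (U = unfolded, I = intermediate, F = folded)
labels = ["U", "U", "I", "U", "F", "F", "U", "F"]
print(mean_first_passage_time(labels, source="U", target="F", dt=0.2), "time units")
```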

  20. Arctic sea ice modeling with the material-point method.

    SciTech Connect

    Peterson, Kara J.; Bochev, Pavel Blagoveston

    2010-04-01

    Arctic sea ice plays an important role in global climate by reflecting solar radiation and insulating the ocean from the atmosphere. Due to feedback effects, the Arctic sea ice cover is changing rapidly. To accurately model this change, high-resolution calculations must incorporate: (1) annual cycle of growth and melt due to radiative forcing; (2) mechanical deformation due to surface winds, ocean currents and Coriolis forces; and (3) localized effects of leads and ridges. We have demonstrated a new mathematical algorithm for solving the sea ice governing equations using the material-point method with an elastic-decohesive constitutive model. An initial comparison with the LANL CICE code indicates that the ice edge is sharper using the material-point method (MPM), but that many of the overall features are similar.

  1. Small pores in soils: Is the physico-chemical environment accurately reflected in biogeochemical models ?

    NASA Astrophysics Data System (ADS)

    Weber, Tobias K. D.; Riedel, Thomas

    2015-04-01

    Free water is a prerequisite for the chemical reactions and biological activity in earth's upper crust that are essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space of soils and sediments is in its liquid state. This is a result of the adhesive forces which reduce the water activity in small pores and at charged mineral surfaces. This water has a lower tendency to react chemically in solution, as the additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from those of liquid water was found to increase systematically with increasing clay content. The significance of this is that grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods of determining water content, traditionally based on bulk density or gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay content. Our findings have consequences for biogeochemical processes in soils; e.g., nutrients may be contained in water that is not free, which could enhance preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain-size dependency of organic matter preservation in sedimentary environments and call for a revised view of the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.
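
    A small hedged illustration of how a measured water activity translates into a matric potential through the Kelvin equation, which underlies the argument that water in small pores is energetically distinct (values are generic, not the study's data):

```python
import math

R, T, V_w = 8.314, 298.15, 1.8e-5      # J/(mol K), K, molar volume of water (m^3/mol)

for a_w in (0.999, 0.99, 0.95):
    psi = R * T / V_w * math.log(a_w)  # matric potential, Pa (negative = suction)
    print(f"a_w = {a_w:5.3f}  ->  matric potential ~ {psi / 1e6:6.2f} MPa")
```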

  2. Accurate blackbodies

    NASA Astrophysics Data System (ADS)

    Latvakoski, Harri M.; Watson, Mike; Topham, Shane; Scott, Deron; Wojcik, Mike; Bingham, Gail

    2010-07-01

    Infrared radiometers and spectrometers generally use blackbodies for calibration, and with the high accuracy needs of upcoming missions, blackbodies capable of meeting strict accuracy requirements are needed. One such mission, the NASA climate science mission Climate Absolute Radiance and Refractivity Observatory (CLARREO), which will measure Earth's emitted spectral radiance from orbit, has an absolute accuracy requirement of 0.1 K (3σ) at 220 K over most of the thermal infrared. Space Dynamics Laboratory (SDL) has a blackbody design capable of meeting strict modern accuracy requirements. This design is relatively simple to build, was developed for use on the ground or on orbit, and is readily scalable for aperture size and required performance. These high-accuracy blackbodies are currently in use as a ground calibration unit and with a high-altitude balloon instrument. SDL is currently building a prototype blackbody to demonstrate the ability to achieve very high accuracy, and we expect it to have an emissivity of ~0.9999 from 1.5 to 50 μm, temperature uncertainties of ~25 mK, and radiance uncertainties of ~10 mK due to temperature gradients. The high emissivity and low thermal-gradient uncertainties are achieved through cavity design, while the low temperature uncertainty is attained by including phase-change materials such as mercury, gallium, and water in the blackbody. Blackbody temperature sensors are calibrated at the melt points of these materials, which are determined by heating through their melt points. This allows absolute temperature calibration traceable to the SI temperature scale.
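
    A short hedged calculation showing why millikelvin-level temperature knowledge matters: propagating a small temperature error through the Planck function near the stated 220 K requirement (constants rounded; not SDL's error budget):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Spectral radiance (W m^-2 sr^-1 m^-1) at wavelength wl (m) and temperature T (K)."""
    return 2 * h * c**2 / wl**5 / np.expm1(h * c / (wl * k * T))

wl, T, dT = 10e-6, 220.0, 0.025          # 10 um, 220 K, 25 mK temperature uncertainty
dL = planck(wl, T + dT) - planck(wl, T)
print(f"relative radiance error for {dT*1e3:.0f} mK at 10 um, 220 K: {dL/planck(wl, T)*100:.3f} %")
```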

  3. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    PubMed

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. PMID:15931680

  4. A dynamic model for material removal in ultrasonic machining

    SciTech Connect

    Wang, Z.Y.; Rajurkar, K.P.

    1995-12-31

    This paper proposes a dynamic model of the material removal mechanism and provides a relationship between material removal rate and operating parameters in ultrasonic machining (USM). The model incorporates the effects of high values of vibration amplitude, frequency, and grit size. The effect of non-uniformity of the abrasive grits is also considered by using a probability distribution for the diameter of the abrasive particles. The model is able to accurately predict the increasing rate of material removal for increasing values of amplitude and frequency. It can also be used to determine the decreasing rate of material removal, after a certain maximum level is attained, for further increments of vibration amplitude and frequency. Equations representing the dynamic normal stress and elastic displacement of the work-piece caused by the impact of an arbitrary grit are used in developing a model that considers the dynamic impact phenomena of grits on the work-piece. The analysis shows that there is an effective speed zone for the tool. Within this range, grits in the cutting zone can obtain the maximum momentum and energy from the tool. During the machining process, only those grits whose sizes are in the range of the effective speed zone can abrade the work-piece most effectively.

  5. Materials and techniques for model construction

    NASA Technical Reports Server (NTRS)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of cryogenic wind tunnel models are discussed, with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between the strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail, together with interpretation of the initial results. The methods used to bond model components are considered, with particular reference to the selection of filler alloys and temperature cycles that avoid microstructural degradation and loss of mechanical properties.

  6. Multiscale Materials Modeling in an Industrial Environment.

    PubMed

    Weiß, Horst; Deglmann, Peter; In 't Veld, Pieter J; Cetinkaya, Murat; Schreiner, Eduard

    2016-06-01

    In this review, we sketch the materials modeling process in industry. We show that predictive and fast modeling is a prerequisite for successful participation in research and development processes in the chemical industry. Stable and highly automated workflows suitable for handling complex systems are a must. In particular, we review approaches to build and parameterize soft matter systems. By satisfying these prerequisites, efficiency for the development of new materials can be significantly improved, as exemplified here for formulation polymer development. This is in fact in line with recent Materials Genome Initiative efforts sponsored by the US government. Valuable contributions to product development are possible today by combining existing modeling techniques in an intelligent fashion, provided modeling and experiment work hand in hand. PMID:26927661

  7. Improvements to constitutive material model for fabrics

    NASA Astrophysics Data System (ADS)

    Morea, Mihai I.

    2011-12-01

    The high strength-to-weight ratio of woven fabric offers a cost-effective solution for use in containment systems for aircraft propulsion engines. Currently, Kevlar is the only Federal Aviation Administration (FAA) approved fabric for use in systems intended to mitigate fan blade-out events. This research builds on an earlier constitutive model of Kevlar 49 fabric developed at Arizona State University (ASU), with the addition of new and improved modeling details. The latest stress-strain experiments provided new and valuable data used to modify the material model's post-peak behavior. These changes reveal an overall improvement in the Finite Element (FE) model's ability to predict experimental results. First, the steel projectile is modeled using the Johnson-Cook material model, which provides more realistic behavior in the FE ballistic models. This is particularly noticeable when comparing FE models with laboratory tests in which large deformations of the projectiles are observed. Second, follow-up analysis of the results obtained through the new picture frame tests conducted at ASU provides new values for the shear moduli and corresponding strains. The new approach for analyzing data from picture frame tests combines digital image analysis with a two-level factorial optimization formulation. Finally, an additional improvement in the material model for Kevlar involves checking convergence under variation of the fabric mesh density. The study performed and described herein shows the converging trend, thereby validating the FE model.
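
    Since the projectile is represented with the Johnson-Cook model, a minimal sketch of that flow-stress law may help; the constants below are placeholders, not the values calibrated in this work:

```python
import numpy as np

def johnson_cook(strain, strain_rate, T, A=350e6, B=275e6, n=0.36, C=0.022, m=1.0,
                 eps0=1.0, T_room=293.0, T_melt=1793.0):
    """Flow stress (Pa) = (A + B*eps^n) * (1 + C*ln(rate/eps0)) * (1 - T*^m)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * strain**n) * (1.0 + C * np.log(strain_rate / eps0)) * (1.0 - T_star**m)

print(johnson_cook(strain=0.1, strain_rate=1e3, T=400.0) / 1e6, "MPa")
```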

  8. Accurate prediction of interference minima in linear molecular harmonic spectra by a modified two-center model

    NASA Astrophysics Data System (ADS)

    Xin, Cui; Di-Yu, Zhang; Gao, Chen; Ji-Gen, Chen; Si-Liang, Zeng; Fu-Ming, Guo; Yu-Jun, Yang

    2016-03-01

    We demonstrate that the interference minima in the linear molecular harmonic spectra can be accurately predicted by a modified two-center model. Based on systematically investigating the interference minima in the linear molecular harmonic spectra by the strong-field approximation (SFA), it is found that the locations of the harmonic minima are related not only to the nuclear distance between the two main atoms contributing to the harmonic generation, but also to the symmetry of the molecular orbital. Therefore, we modify the initial phase difference between the double wave sources in the two-center model, and predict the harmonic minimum positions consistent with those simulated by SFA. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant Nos. 11274001, 11274141, 11304116, 11247024, and 11034003), and the Jilin Provincial Research Foundation for Basic Research, China (Grant Nos. 20130101012JC and 20140101168JC).
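
    In the spirit of the two-center picture described here, the destructive-interference condition with an added orbital-symmetry-dependent phase can be written (a schematic form, not the authors' exact expression):

```latex
k(\omega)\,R\cos\theta + \Delta\varphi = (2m+1)\,\pi , \qquad m = 0, 1, 2, \ldots
```

    where k is the recolliding-electron wave number, R the internuclear distance, θ the alignment angle, and Δφ the initial phase difference between the two emitting centers that the modified model adjusts.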

  9. Accurate prediction model of bead geometry in crimping butt of the laser brazing using generalized regression neural network

    NASA Astrophysics Data System (ADS)

    Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.

    2015-12-01

    Few studies have concentrated on predicting the bead geometry for laser brazing with crimping butt joints. This paper addresses the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, a GRNN model was developed and trained to decrease the prediction error that may be influenced by the sample size. Then the prediction accuracy was demonstrated by comparison with other published results and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were discussed in terms of the average relative error (ARE), mean square error (MSE), and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than those (14.28% and 0.0832) predicted by the BPNN. The prediction accuracy was thus improved by at least a factor of two, and the stability was also considerably increased.
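
    A GRNN is essentially Gaussian-kernel (Nadaraya-Watson) regression; the minimal sketch below uses illustrative process variables and bead widths, not the study's data:

```python
import numpy as np

class GRNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(Xq)
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)  # squared distances
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))                  # Gaussian kernel weights
        return (w @ self.y) / w.sum(axis=1)                        # weighted average of targets

# toy usage: bead width from (laser power, brazing speed, wire feed rate), all illustrative
X = np.array([[1.8, 0.8, 2.0], [2.0, 0.8, 2.2], [2.2, 1.0, 2.4], [2.4, 1.0, 2.6]])
y = np.array([1.9, 2.1, 2.2, 2.4])   # bead widths, mm
print(GRNN(sigma=0.3).fit(X, y).predict([[2.1, 0.9, 2.3]]))
```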

  10. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    PubMed Central

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870

  11. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    NASA Astrophysics Data System (ADS)

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

    The topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and to enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing heterogeneities.
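
    A hedged sketch of the core POD mapping idea (build POD bases from paired coarse/fine snapshots, then learn a linear map between their coefficients); the snapshot data and dimensions are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_coarse, n_fine, n_snap = 200, 2000, 40
coarse = rng.normal(size=(n_coarse, n_snap))          # columns = coarse training snapshots
fine = np.repeat(coarse, n_fine // n_coarse, axis=0)  # toy "fine" fields tied to the coarse ones
fine += 0.01 * rng.normal(size=fine.shape)

Uc, _, _ = np.linalg.svd(coarse, full_matrices=False)
Uf, _, _ = np.linalg.svd(fine, full_matrices=False)
r = 10                                                # retained POD modes
Ac, Af = Uc[:, :r].T @ coarse, Uf[:, :r].T @ fine     # POD coefficients of the snapshots
M, *_ = np.linalg.lstsq(Ac.T, Af.T, rcond=None)       # least-squares coefficient map

# downscale a new coarse field: coarse coefficients -> mapped fine coefficients -> fine field
new_coarse = coarse[:, :1] + 0.01 * rng.normal(size=(n_coarse, 1))
fine_approx = Uf[:, :r] @ (M.T @ (Uc[:, :r].T @ new_coarse))
print("relative error:", np.linalg.norm(fine_approx - fine[:, :1]) / np.linalg.norm(fine[:, :1]))
```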

  12. Modeling heat transfer within porous multiconstituent materials

    NASA Astrophysics Data System (ADS)

    Niezgoda, Mathieu; Rochais, Denis; Enguehard, Franck; Rousseau, Benoit; Echegut, Patrick

    2012-06-01

    The purpose of our work has been to determine the effective thermal properties of materials that are heterogeneous at the microscale but are regarded as homogeneous in the macroscale environment in which they are used. We have developed a calculation code that makes it possible to simulate thermal experiments on complex multiconstituent materials from their numerical microstructural morphology obtained by volume segmentation through tomography. This modeling relies on transient solving of the coupled conductive and radiative heat transfer in these voxelized structures.
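
    A minimal sketch of transient conduction on a voxelized two-phase microstructure (radiative coupling and interface treatments are deliberately omitted; all values are illustrative):

```python
import numpy as np

n, dx, dt, steps = 32, 1e-6, 1e-9, 500               # voxels per side, m, s, time steps
phase = (np.random.default_rng(2).random((n, n, n)) > 0.7).astype(float)  # 1 = second phase
alpha = np.where(phase > 0.5, 2e-5, 1e-7)            # thermal diffusivity per voxel (m^2/s)

T = np.full((n, n, n), 300.0)
T[0, :, :] = 400.0                                   # hot face (Dirichlet)
for _ in range(steps):
    # 7-point Laplacian; np.roll makes the lateral boundaries periodic
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) + np.roll(T, 1, 1) + np.roll(T, -1, 1)
           + np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx ** 2
    T += dt * alpha * lap                            # explicit update (dt below stability limit)
    T[0, :, :], T[-1, :, :] = 400.0, 300.0           # re-impose the Dirichlet faces
print("mean mid-plane temperature:", T[n // 2].mean())
```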

  13. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  14. Modeling ready biodegradability of fragrance materials.

    PubMed

    Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola

    2015-06-01

    In the present study, quantitative structure activity relationships were developed for predicting ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on group contribution method, show that specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. PMID:25663647
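
    A hedged sketch of the two classifier families compared in the study (CART and kNN), using placeholder descriptors and labels rather than the fragrance data set:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))                   # stand-in molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # stand-in ready/not-ready labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("CART", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, "external accuracy:", clf.score(X_te, y_te))
```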

  15. How accurately can subject-specific finite element models predict strains and strength of human femora? Investigation using full-field measurements.

    PubMed

    Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna

    2016-03-21

    Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models in clinical practice. Results from digital image correlation can provide full-field strain distribution over the specimen surface during in vitro test, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R(2)=0.93, RMSE=10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two out of three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response. PMID:26944687

  16. Modeling of Irradiation Hardening of Polycrystalline Materials

    SciTech Connect

    Li, Dongsheng; Zbib, Hussein M.; Garmestani, Hamid; Sun, Xin; Khaleel, Mohammad A.

    2011-09-14

    High-energy particle irradiation of structural polycrystalline materials usually produces irradiation hardening and embrittlement. The development of predictive capability for the influence of irradiation on mechanical behavior is very important in materials design for next-generation reactors. In this work a multiscale approach was implemented to predict irradiation hardening of body-centered cubic (bcc) alpha-iron. The effects of defect density, texture, and grain boundaries were investigated. In the microscale, dislocation dynamics models were used to predict the critical resolved shear stress from the evolution of local dislocations and defects. In the macroscale, a viscoplastic self-consistent model was applied to predict the irradiation hardening in samples with changes in texture and grain boundaries. This multiscale modeling can guide performance evaluation of structural materials used in next-generation nuclear reactors.
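
    A common microscale-to-macroscale hand-off at this point is a dispersed-barrier hardening estimate; the sketch below uses that standard relation with order-of-magnitude placeholder values for bcc iron, not the paper's inputs:

```python
import math

# delta_sigma = M * alpha * mu * b * sqrt(N * d)
M, alpha = 3.06, 0.4          # Taylor factor, barrier strength
mu, b = 80e9, 0.248e-9        # shear modulus (Pa), Burgers vector (m)
N, d = 1e22, 3e-9             # defect cluster density (m^-3), mean cluster size (m)

delta_sigma = M * alpha * mu * b * math.sqrt(N * d)
print(f"irradiation hardening ~ {delta_sigma / 1e6:.0f} MPa")
```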

  17. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests.

    PubMed

    Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z; Chen, Ronald C; Shen, Dinggang

    2016-06-01

    Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531

  18. Extended model of the photoinitiation mechanisms in photopolymer materials

    SciTech Connect

    Liu Shui; Gleeson, Michael R.; Sabol, Dusan; Sheridan, John T.

    2009-11-15

    In order to further improve photopolymer materials for applications such as data storage, a deeper understanding of the photochemical mechanisms which are present during the formation of holographic gratings has become ever more crucial. This is especially true of the photoinitiation processes, since holographic data storage requires multiple sequential short exposures. Previously, models describing the temporal variation in the photosensitizer (dye) concentration as a function of exposure have been presented and applied to two different types of photosensitizer, i.e., Methylene Blue and Erythrosine B, in a polyvinyl alcohol/acrylamide based photopolymer. These models include the effects of photosensitizer recovery and bleaching under certain limiting conditions. In this paper, based on a detailed study of the photochemical reactions, the previous models are further developed to more physically represent these effects. This enables a more accurate description of the time varying dye absorption, recovery, and bleaching, and therefore of the generation of primary radicals in photopolymers containing such dyes.

  19. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    NASA Astrophysics Data System (ADS)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  20. SU-E-T-475: An Accurate Linear Model of Tomotherapy MLC-Detector System for Patient Specific Delivery QA

    SciTech Connect

    Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W; Reeher, M; Galmarini, D

    2014-06-01

    Purpose: An accurate leaf fluence model can be used in applications such as patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences due to leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent the linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed into) a linear combination of the LPB, either pulse by pulse or weighted by dwell time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be calculated inversely from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
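
    A hedged sketch of the forward/inverse idea: if each leaf-pattern-basis (LPB) element has a measured detector response, an arbitrary delivery decomposes by least squares (dimensions and signals below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_basis = 640, 64
B = rng.random((n_channels, n_basis))        # measured detector response to each LPB pattern
w_true = rng.random(n_basis)                 # dwell-time weights of the delivered pattern
signal = B @ w_true + 0.001 * rng.normal(size=n_channels)   # forward model + noise

w_est, *_ = np.linalg.lstsq(B, signal, rcond=None)          # inverse: recover equivalent weights
print("max weight error:", np.abs(w_est - w_true).max())
```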

  1. Toward accurate modelling of the non-linear matter bispectrum: standard perturbation theory and transients from initial conditions

    NASA Astrophysics Data System (ADS)

    McCullagh, Nuala; Jeong, Donghui; Szalay, Alexander S.

    2016-01-01

    Accurate modelling of non-linearities in the galaxy bispectrum, the Fourier transform of the galaxy three-point correlation function, is essential to fully exploit it as a cosmological probe. In this paper, we present numerical and theoretical challenges in modelling the non-linear bispectrum. First, we test the robustness of the matter bispectrum measured from N-body simulations using different initial conditions generators. We run a suite of N-body simulations using the Zel'dovich approximation and second-order Lagrangian perturbation theory (2LPT) at different starting redshifts, and find that transients from initial decaying modes systematically reduce the non-linearities in the matter bispectrum. To achieve 1 per cent accuracy in the matter bispectrum at z ≤ 3 on scales k < 1 h Mpc-1, a 2LPT initial conditions generator with initial redshift z ≳ 100 is required. We then compare various analytical formulas and empirical fitting functions for modelling the non-linear matter bispectrum, and discuss the regimes for which each is valid. We find that the next-to-leading order (one-loop) correction from standard perturbation theory matches N-body results on quasi-linear scales for z ≥ 1. We find that the fitting formula of Gil-Marín et al. accurately predicts the matter bispectrum for z ≤ 1 on a wide range of scales, but at higher redshifts the fitting formula given by Scoccimarro & Couchman gives the best agreement with measurements from N-body simulations.
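
    For orientation, the leading-order (tree-level) standard-perturbation-theory matter bispectrum that the one-loop correction builds on is:

```latex
B(k_1,k_2,k_3) = 2\,F_2(\mathbf{k}_1,\mathbf{k}_2)\,P(k_1)\,P(k_2) + \text{2 cyclic permutations},
\qquad
F_2(\mathbf{k}_1,\mathbf{k}_2) = \frac{5}{7}
 + \frac{1}{2}\,\frac{\mathbf{k}_1\cdot\mathbf{k}_2}{k_1 k_2}\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right)
 + \frac{2}{7}\left(\frac{\mathbf{k}_1\cdot\mathbf{k}_2}{k_1 k_2}\right)^{2}.
```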

  2. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Function often involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one protein architectural context. Using the predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of the domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512

  3. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.

    PubMed

    Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu

    2015-01-01

    Shape priors play an important role in accurate and robust liver segmentation. However, liver shapes have complex variations, and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had high computational efficiency, and its runtime increased very slowly as the repository's capacity and vertex number grew large. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method took merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
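
    A hedged sketch of the sparse shape composition step using a LARS/homotopy-style L1 solver; the shape repository and target here are synthetic, and the regularization strength is arbitrary:

```python
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(5)
n_vertices, n_shapes = 3000, 200
D = rng.normal(size=(n_vertices, n_shapes))          # columns = training shapes (flattened coords)
x_true = np.zeros(n_shapes)
x_true[[3, 17, 42]] = [0.6, 0.3, 0.1]                # the target is built from three shapes
target = D @ x_true + 0.01 * rng.normal(size=n_vertices)

solver = LassoLars(alpha=1e-3)                       # LARS path = homotopy-style L1 solution
solver.fit(D, target)
print("non-zero coefficients:", np.count_nonzero(solver.coef_))
```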

  4. Mathematical Modeling of Ultraporous Nonmetallic Reticulated Materials

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Cherepanov, V. V.; Morzhukhina, A. V.

    2015-01-01

    We have developed an imitation statistical mathematical model reflecting the structure and the thermal, electrophysical, and optical properties of nonmetallic ultraporous reticulated materials. This model, in combination with a nonstationary thermal experiment and methods of the theory of inverse heat transfer problems, permits determining the little-studied characteristics of the above materials such as the radiative and conductive heat conductivities, the spectral scattering and absorption coefficients, the scattering indicatrix, and the dielectric constants, which are of great practical interest but are difficult to investigate.

  5. An Overview of Mesoscale Material Modeling with Eulerian Hydrocodes

    NASA Astrophysics Data System (ADS)

    Benson, David

    2013-06-01

    Eulerian hydrocodes were originally developed for simulating strong shocks in solids and fluids, but their ability to handle arbitrarily large deformations and the formation of new free surfaces makes them attractive for simulating the deformation and failure of materials at the mesoscopic scale. A summary of some of the numerical techniques that have been developed to address common issues for this class of problems is presented with the shock compression of powders used as a model problem. Achieving the correct packing density with the correct statistical distribution of particle sizes and shapes is, in itself, a challenging problem. However, since Eulerian codes permit multiple materials within each element, or cell, the material interfaces do not have to follow the mesh lines. The use of digital image processing to map the pixels of micrographs to the Eulerian mesh has proven to be a popular and useful means of creating accurate models of complex microstructures. Micro CT scans have been used to extend this approach to three dimensions for several classes of materials. The interaction between the particles is of considerable interest. During shock compression, individual particles may melt and form jets, and the voids between them collapse. Dynamic interface ordering has become a necessity, and many codes now have a suite of options for handling multi-material mechanics. True contact algorithms are now replacing multi-material approximations in some cases. At the mesoscale, material properties often vary spatially due to sub-scale effects. Using a large number of material species to represent the variations is usually unattractive. Directly specifying the properties point-wise as history variables has not proven successful because the limiters in the transport algorithms quickly smooth out the variations. Circumventing the limiter problem is shown to be relatively simple with the use of a reference configuration and the transport of the initial coordinates

  6. Modeling and Simulation of Nuclear Fuel Materials

    SciTech Connect

    Devanathan, Ram; Van Brutzel, Laurent; Tikare, Veena; Bartel, Timothy; Besmann, Theodore M; Stan, Marius; Van Uffelen, Paul

    2010-01-01

    We review the state of modeling and simulation of nuclear fuels with emphasis on the most widely used nuclear fuel, UO2. The hierarchical scheme presented represents a science-based approach to modeling nuclear fuels by progressively passing information in several stages from ab initio to continuum levels. Such an approach is essential to overcome the challenges posed by radioactive materials handling, experimental limitations in modeling extreme conditions and accident scenarios and small time and distance scales of fundamental defect processes. When used in conjunction with experimental validation, this multiscale modeling scheme can provide valuable guidance to development of fuel for advanced reactors to meet rising global energy demand.

  7. Modeling and Simulation of Nuclear Fuel Materials

    SciTech Connect

    Devanathan, Ramaswami; Van Brutzel, Laurent; Chartier, Alan; Gueneau, Christine; Mattsson, Ann E.; Tikare, Veena; Bartel, Timothy; Besmann, T. M.; Stan, Marius; Van Uffelen, Paul

    2010-10-01

    We review the state of modeling and simulation of nuclear fuels with emphasis on the most widely used nuclear fuel, UO2. The hierarchical scheme presented represents a science-based approach to modeling nuclear fuels by progressively passing information in several stages from ab initio to continuum levels. Such an approach is essential to overcome the challenges posed by radioactive materials handling, experimental limitations in modeling extreme conditions and accident scenarios, and the small time and distance scales of fundamental defect processes. When used in conjunction with experimental validation, this multiscale modeling scheme can provide valuable guidance to development of fuel for advanced reactors to meet rising global energy demand.

  8. An Accurate Quartic Force Field, Fundamental Frequencies, and Binding Energy for the High Energy Density Material T(d)N4

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Martin, Jan M. L.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The CCSD(T) method has been used to compute a highly accurate quartic force field and fundamental frequencies for all N-14 and N-15 isotopomers of the high energy density material T(sub d)N(sub 4). The computed fundamental frequencies show beyond doubt that the bands observed in a matrix isolation experiment by Radziszewski and coworkers are not due to different isotopomers of T(sub d)N(sub 4). The most sophisticated thermochemical calculations to date yield a N(sub 4) -> 2N(sub 2) heat of reaction of 182.22 +/- 0.5 kcal/mol at 0 K (180.64 +/- 0.5 at 298 K). It is hoped that the data reported herein will aid in the ultimate detection of T(sub d)N(sub 4).

  9. Parameter estimation approach for particle flow model of rockfill materials using response surface method

    NASA Astrophysics Data System (ADS)

    Li, Shouju; Li, De; Cao, Lijuan; Shangguan, Zichang

    2015-02-01

    Particle flow code (PFC) is widely used to model the deformation and stress states of rockfill materials. The accuracy of numerical modeling with PFC depends upon the model parameter values, and how to determine these parameters accurately remains one of the main challenges. In order to determine the parameters of a particle flow model of rockfill materials, triaxial compression experiments were performed, and an inversion procedure for the model parameters based on the response surface method is proposed. Parameters of the particle flow model of rockfill materials are determined from the observed data in triaxial compression tests on rockfill materials. The investigation shows that the normal stiffness, tangent stiffness, and friction coefficient of rockfill materials increase slightly with increasing confining pressure in triaxial compression tests. The laboratory experiments show that the proposed inversion procedure exhibits high computational efficiency and that the forecasted stress-strain relations agree well with observed values.
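
    A hedged sketch of the response-surface inversion idea: fit a quadratic surrogate of the misfit between simulated and measured responses over the model parameters, then minimize it (the misfit function and parameter names are stand-ins for actual PFC runs):

```python
import numpy as np
from scipy.optimize import minimize

def misfit(p):
    """Stand-in for |simulation - triaxial test| over (normal stiffness, friction
    coefficient); its true minimum is near (1.5, 0.6)."""
    kn, mu = p
    return (kn - 1.5) ** 2 + 2.0 * (mu - 0.6) ** 2 + 0.1 * kn * mu

# evaluate the misfit at a small design of experiments, then fit a quadratic surface
pts = np.array([[kn, mu] for kn in (1.0, 1.5, 2.0) for mu in (0.4, 0.6, 0.8)])
z = np.array([misfit(p) for p in pts])
A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

surrogate = lambda p: coef @ np.array([1.0, p[0], p[1], p[0] ** 2, p[1] ** 2, p[0] * p[1]])
print(minimize(surrogate, x0=[1.2, 0.5]).x)   # parameters minimizing the fitted surface
```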

  10. Modeling Bamboo as a Functionally Graded Material

    SciTech Connect

    Silva, Emilio Carlos Nelli; Walters, Matthew C.; Paulino, Glaucio H.

    2008-02-15

    Natural fibers are promising for engineering applications due to their low cost. They are abundantly available in tropical and subtropical regions of the world, and they can be employed as construction materials. Among natural fibers, bamboo has been widely used for housing construction around the world. Bamboo is an optimized composite material which exploits the concept of Functionally Graded Material (FGM). Biological structures, such as bamboo, are composite materials that have complicated shapes and material distribution inside their domain, and thus the use of numerical methods such as the finite element method and multiscale methods such as homogenization, can help to further understanding of the mechanical behavior of these materials. The objective of this work is to explore techniques such as the finite element method and homogenization to investigate the structural behavior of bamboo. The finite element formulation uses graded finite elements to capture the varying material distribution through the bamboo wall. To observe bamboo behavior under applied loads, simulations are conducted considering a spatially-varying Young's modulus, an averaged Young's modulus, and orthotropic constitutive properties obtained from homogenization theory. The homogenization procedure uses effective, axisymmetric properties estimated from the spatially-varying bamboo composite. Three-dimensional models of bamboo cells were built and simulated under tension, torsion, and bending load cases.

  11. Integrated finite element model of composite materials

    NASA Astrophysics Data System (ADS)

    Teply, Jan L.; Herbein, William C.

    1989-05-01

    Two problems traditionally addressed in the area of micromechanics of composite materials can be briefly summarized as follows: (1) for a macroscopically uniform volume of composite material, which is subjected to macroscopically uniform boundary tractions, displacements or heat influx, find the overall thermomechanical properties in terms of the thermomechanical properties of the individual constituents; and (2) for the same material volume and boundary conditions as above, find the local stress, strain, and temperature fields in the constituents and on the interfaces. Two different types of micromechanical models are usually applied to the solutions of these two types of problems. For linear elastic materials, the micromechanical models used to solve problem (1) offer simple solutions for the overall thermomechanical properties, either in terms of bounds derived from periodic or random microstructures, or in terms of single estimates derived from the solution of an isolated inclusion. Finite element variational approaches are applied to integrate the solutions of problems (1) and (2) into one model. The application of displacement and equilibrium variational approaches to the calculation of overall elastic-plastic properties is extended to the solution of the second problem. The integrated model is then applied to calculate the overall properties and local stress and strain fields of boron-aluminum composites subjected to transverse tension, in-plane shear, and bending.

  12. Mathematical and physical modelling of materials processing

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Mathematical and physical modeling of turbulence phenomena in metals processing, electromagnetically driven flows in materials processing, gas-solid reactions, rapid solidification processes, the electroslag casting process, the role of cathodic depolarizers in the corrosion of aluminum in sea water, and predicting viscoelastic flows are described.

  13. Material model for physically based rendering

    NASA Astrophysics Data System (ADS)

    Robart, Mathieu; Paulin, Mathias; Caubet, Rene

    1999-09-01

    In computer graphics, a complete knowledge of the interactions between light and a material is essential to obtain photorealistic pictures. Physical measurements allow us to obtain data on the material response, but are limited to industrial surfaces and depend on measurement conditions. Analytic models do exist, but they are often inadequate for common use: the empirical ones are too simple to be realistic, and the physically-based ones are often too complex or too specialized to be generally useful. Therefore, we have developed a multiresolution virtual material model that not only describes the surface of a material, but also its internal structure, thanks to distribution functions of microelements arranged in layers. Each microelement possesses its own response to incident light, from an elementary reflection to a complex response provided by its inner structure, taking into account the geometry, energy, and polarization of each light ray. This model is virtually illuminated in order to compute its response to an incident radiance. This directional response is stored in a compressed data structure using spherical wavelets, and is intended for use in a rendering model such as directional radiosity.

  14. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
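
    The sketch below shows the simplest possible direct SRP term (a "cannonball" model with a single radiation-pressure coefficient); it only indicates where a detailed macro model would refine the area, reflectivity, and attitude dependence, and the numbers are illustrative rather than TerraSAR-X values.

      # Simplified direct solar radiation pressure acceleration for a cannonball
      # satellite model; a macro model would replace (cr, area) with per-panel
      # optical properties and orientations.
      import numpy as np

      P_SUN_1AU = 4.56e-6    # N/m^2, solar radiation pressure at 1 AU
      AU = 1.495978707e11    # m

      def srp_acceleration(r_sun_to_sat, cr, area, mass):
          """Acceleration (m/s^2) pushing the satellite away from the Sun."""
          r = np.linalg.norm(r_sun_to_sat)
          return cr * P_SUN_1AU * (AU / r) ** 2 * (area / mass) * (r_sun_to_sat / r)

      # Illustrative numbers only.
      a_srp = srp_acceleration(np.array([1.0 * AU, 0.0, 0.0]), cr=1.3, area=10.0, mass=1200.0)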

  15. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    SciTech Connect

    Chang, Chih-Hao, E-mail: chchang@engineering.ucsb.edu; Liou, Meng-Sing, E-mail: meng-sing.liou@grc.nasa.gov

    2007-07-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.
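
    For orientation, the sketch below implements the polynomial Mach-number splitting that AUSM-family schemes are built on; the full AUSM+-up flux additionally uses pressure splitting and the velocity- and pressure-diffusion terms described in the paper, which are not reproduced here.

      # Split Mach-number functions of the AUSM+ family (beta = 1/8 is the usual
      # choice); the interface Mach number is the sum of the left-plus and
      # right-minus contributions.
      def mach_split_plus(M, beta=0.125):
          if abs(M) >= 1.0:
              return 0.5 * (M + abs(M))                     # supersonic branch
          M2p, M2m = 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2
          return M2p * (1.0 - 16.0 * beta * M2m)            # subsonic polynomial branch

      def mach_split_minus(M, beta=0.125):
          if abs(M) >= 1.0:
              return 0.5 * (M - abs(M))
          M2p, M2m = 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2
          return M2m * (1.0 + 16.0 * beta * M2p)

      M_half = mach_split_plus(0.3) + mach_split_minus(-0.1)   # interface Mach number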

  16. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
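
    The kernel of any shortest-path traveltime solver is a Dijkstra-like sweep over a graph whose edge weights are traveltimes; the toy sketch below illustrates only that kernel, not the multistage irregular scheme in spherical coordinates developed in the paper.

      # First-arrival times by Dijkstra's algorithm on a tiny traveltime graph.
      import heapq

      def first_arrivals(graph, source):
          """graph: {node: [(neighbor, traveltime), ...]} -> earliest arrival times."""
          times = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              t, node = heapq.heappop(heap)
              if t > times.get(node, float("inf")):
                  continue                                  # stale heap entry
              for nbr, dt in graph[node]:
                  if t + dt < times.get(nbr, float("inf")):
                      times[nbr] = t + dt
                      heapq.heappush(heap, (t + dt, nbr))
          return times

      toy = {"src": [("a", 2.0), ("b", 5.0)], "a": [("b", 1.0)], "b": []}
      print(first_arrivals(toy, "src"))   # {'src': 0.0, 'a': 2.0, 'b': 3.0}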

  17. Computational modeling of composite material fires.

    SciTech Connect

    Brown, Alexander L.; Erickson, Kenneth L.; Hubbard, Joshua Allen; Dodd, Amanda B.

    2010-10-01

    Composite materials behave differently from conventional fuel sources and have the potential to smolder and burn for extended time periods. As the amount of composite materials on modern aircraft continues to increase, understanding the response of composites in fire environments becomes increasingly important. An effort is ongoing to enhance the capability to simulate composite material response in fires including the decomposition of the composite and the interaction with a fire. To adequately model composite material in a fire, two physical model development tasks are necessary; first, the decomposition model for the composite material and second, the interaction with a fire. A porous media approach for the decomposition model including a time dependent formulation with the effects of heat, mass, species, and momentum transfer of the porous solid and gas phase is being implemented in an engineering code, ARIA. ARIA is a Sandia National Laboratories multiphysics code including a range of capabilities such as incompressible Navier-Stokes equations, energy transport equations, species transport equations, non-Newtonian fluid rheology, linear elastic solid mechanics, and electro-statics. To simulate the fire, FUEGO, also a Sandia National Laboratories code, is coupled to ARIA. FUEGO represents the turbulent, buoyantly driven incompressible flow, heat transfer, mass transfer, and combustion. FUEGO and ARIA are uniquely able to solve this problem because they were designed using a common architecture (SIERRA) that enhances multiphysics coupling and both codes are capable of massively parallel calculations, enhancing performance. The decomposition reaction model is developed from small scale experimental data including thermogravimetric analysis (TGA) and Differential Scanning Calorimetry (DSC) in both nitrogen and air for a range of heating rates and from available data in the literature. The response of the composite material subject to a radiant heat flux boundary
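
    The sketch below integrates the kind of single-step Arrhenius decomposition law that is typically calibrated from TGA data at a fixed heating rate; the rate parameters are illustrative placeholders, not the composite kinetics implemented in ARIA.

      # Extent of decomposition alpha(t) for a single-step Arrhenius reaction under
      # a linear TGA heating ramp; parameters are illustrative only.
      import numpy as np
      from scipy.integrate import solve_ivp

      R = 8.314                        # J/(mol K)
      A, Ea, n = 1.0e12, 1.8e5, 1.0    # pre-exponential [1/s], activation energy [J/mol], order

      def dalpha_dt(t, alpha, heating_rate=10.0 / 60.0, T0=300.0):
          T = T0 + heating_rate * t                         # 10 K/min heating ramp
          return [A * np.exp(-Ea / (R * T)) * (1.0 - alpha[0]) ** n]

      sol = solve_ivp(dalpha_dt, (0.0, 3600.0), [0.0], max_step=1.0)
      # sol.y[0] is the decomposition extent versus time for this heating rate.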

  18. A Hysteresis Model for Piezoceramic Materials

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Ounaies, Zoubeida

    1999-01-01

    This paper addresses the modeling of nonlinear constitutive relations and hysteresis inherent to piezoceramic materials at moderate to high drive levels. Such models are necessary to realize the full potential of the materials in high performance control applications, and a necessary prerequisite is the development of techniques which permit control implementation. The approach employed here is based on the quantification of reversible and irreversible domain wall motion in response to applied electric fields. A comparison with experimental data illustrates that because the resulting ODE model is physics-based, it can be employed for both characterization and prediction of polarization levels throughout the range of actuator operation. Finally, the ODE formulation is amenable to inversion, which facilitates the development of an inverse compensator for linear control design.

  19. Do inverse ecosystem models accurately reconstruct plankton trophic flows? Comparing two solution methods using field data from the California Current

    NASA Astrophysics Data System (ADS)

    Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.

    2012-03-01

    Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
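
    As a minimal illustration of the L2MN idea tested above, the sketch below returns the smallest-norm flow vector satisfying a toy set of mass-balance constraints; the MCMC alternative instead samples the whole feasible solution space (with inequality constraints on every flow), which is not reproduced here.

      # L2 minimum-norm (L2MN) solution of an underdetermined linear inverse
      # problem A x = b via the pseudoinverse; toy data, not the food-web model.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.random((3, 6))                  # 3 mass-balance constraints, 6 unknown flows
      b = A @ rng.random(6)                   # consistent right-hand side

      x_l2mn = np.linalg.pinv(A) @ b          # exact solution of smallest Euclidean norm
      print(np.linalg.norm(A @ x_l2mn - b))   # ~0: the constraints are satisfied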

  20. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    SciTech Connect

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  1. Theory of bi-molecular association dynamics in 2D for accurate model and experimental parameterization of binding rates

    NASA Astrophysics Data System (ADS)

    Yogurtcu, Osman N.; Johnson, Margaret E.

    2015-08-01

    The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate-equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate-behavior predicted from Smoluchowski theory. Using a recently developed single-particle reaction-diffusion algorithm, which we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive concentration dependent rate constant for these chemical kinetics simulations which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute
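
    For contrast with the paper's findings, the sketch below integrates the conventional single-rate-constant (well-mixed, mass-action) description whose validity in 2D is bounded by the ka/D criterion; parameters are illustrative only.

      # Irreversible association A + B -> C with one rate constant, forward Euler.
      def associate(a0, b0, k_on, dt, steps):
          a, b, c = a0, b0, 0.0
          for _ in range(steps):
              rate = k_on * a * b            # mass-action kinetics
              a, b, c = a - rate * dt, b - rate * dt, c + rate * dt
          return a, b, c

      print(associate(a0=1.0, b0=0.5, k_on=2.0, dt=1e-3, steps=5000))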

  2. Mapping of photon distribution and imaging of MR-derived anatomically accurate optical models of the female breast

    NASA Astrophysics Data System (ADS)

    Barbour, San-Lian S.; Barbour, Randall L.; Koo, Ping C.; Graber, Harry L.; Chang, Jenghwa

    1995-05-01

    The results reported are the first to demonstrate that high-quality images of small added inclusions can be obtained from anatomically accurate models of thick tissues having arbitrary boundaries, based on the analysis of diffusely scattered light.

  3. Modeling organohalide perovskites for photovoltaic applications: From materials to interfaces

    NASA Astrophysics Data System (ADS)

    de Angelis, Filippo

    2015-03-01

    The field of hybrid/organic photovoltaics was revolutionized in 2012 by the first reports of solid-state solar cells based on organohalide perovskites, now topping at 20% efficiency. First-principles modeling has been widely applied to the dye-sensitized solar cells field, and more recently to perovskite-based solar cells. The computational design and screening of new materials has played a major role in advancing the DSCs field. Suitable modeling strategies may also offer a view of the crucial heterointerfaces ruling the device operational mechanism. I will illustrate how simulation tools can be employed in the emerging field of perovskite solar cells. The performance of the proposed simulation toolbox, along with the fundamental modeling strategies, is presented using selected examples of relevant materials and interfaces. The main issue with hybrid perovskite modeling is to be able to accurately describe their structural, electronic and optical features. These materials show a degree of short range disorder, due to the presence of mobile organic cations embedded within the inorganic matrix, requiring their properties to be averaged over a molecular dynamics trajectory. Due to the presence of heavy atoms (e.g., Sn and Pb), their electronic structure must take into account spin-orbit coupling (SOC) in an effective way, possibly including GW corrections. The proposed SOC-GW method constitutes the basis for tuning the materials' electronic and optical properties, rationalizing experimental trends. Modeling charge generation in perovskite-sensitized TiO2 interfaces is then approached based on a SOC-DFT scheme, describing alignment of energy levels in a qualitatively correct fashion. The role of interfacial chemistry on the device performance is finally discussed. The research leading to these results has received funding from the European Union Seventh Framework Programme [FP7/2007-2013] under Grant Agreement No. 604032 of the MESO project.

  4. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    SciTech Connect

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  5. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  6. Accurate modeling of fluorescence line narrowing difference spectra: Direct measurement of the single-site fluorescence spectrum

    NASA Astrophysics Data System (ADS)

    Reppert, Mike; Naibo, Virginia; Jankowiak, Ryszard

    2010-07-01

    Accurate lineshape functions for modeling fluorescence line narrowing (FLN) difference spectra (ΔFLN spectra) in the low-fluence limit are derived and examined in terms of the physical interpretation of various contributions, including photoproduct absorption and emission. While in agreement with the earlier results of Jaaniso [Proc. Est. Acad. Sci., Phys., Math. 34, 277 (1985)] and Fünfschilling et al. [J. Lumin. 36, 85 (1986)], the derived formulas differ substantially from functions used recently [e.g., M. Rätsep et al., Chem. Phys. Lett. 479, 140 (2009)] to model ΔFLN spectra. In contrast to traditional FLN spectra, it is demonstrated that for most physically reasonable parameters, the ΔFLN spectrum reduces simply to the single-site fluorescence lineshape function. These results imply that direct measurement of a bulk-averaged single-site fluorescence lineshape function can be accomplished with no complicated extraction process or knowledge of any additional parameters such as site distribution function shape and width. We argue that previous analysis of ΔFLN spectra obtained for many photosynthetic complexes led to strong artificial lowering of apparent electron-phonon coupling strength, especially on the high-energy side of the pigment site distribution function.

  7. Increasingly accurate dynamic molecular models of G-protein coupled receptor oligomers: Panacea or Pandora's box for novel drug discovery?

    PubMed Central

    Filizola, Marta

    2009-01-01

    For years conventional drug design at G-protein coupled receptors (GPCRs) has mainly focused on the inhibition of a single receptor at a usually well-defined ligand-binding site. The recent discovery of more and more physiologically relevant GPCR dimers/oligomers suggests that selectively targeting these complexes or designing small molecules that inhibit receptor-receptor interactions might provide new opportunities for novel drug discovery. To uncover the fundamental mechanisms and dynamics governing GPCR dimerization/oligomerization, it is crucial to understand the dynamic process of receptor-receptor association, and to identify regions that are suitable for selective drug binding. This minireview highlights current progress in the development of increasingly accurate dynamic molecular models of GPCR oligomers based on structural, biochemical, and biophysical information that has recently appeared in the literature. In view of this new information, there has never been a more exciting time for computational research into GPCRs than at present. Information-driven modern molecular models of GPCR complexes are expected to efficiently guide the rational design of GPCR oligomer-specific drugs, possibly allowing researchers to reach for the high-hanging fruits in GPCR drug discovery, i.e. more potent and selective drugs for efficient therapeutic interventions. PMID:19465029

  8. Toward an Accurate Modeling of Hydrodynamic Effects on the Translational and Rotational Dynamics of Biomolecules in Many-Body Systems.

    PubMed

    Długosz, Maciej; Antosiewicz, Jan M

    2015-07-01

    Proper treatment of hydrodynamic interactions is of importance in evaluation of rigid-body mobility tensors of biomolecules in Stokes flow and in simulations of their folding and solution conformation, as well as in simulations of the translational and rotational dynamics of either flexible or rigid molecules in biological systems at low Reynolds numbers. With macromolecules conveniently modeled in calculations or in dynamic simulations as ensembles of spherical frictional elements, various approximations to hydrodynamic interactions, such as the two-body, far-field Rotne-Prager approach, are commonly used, either without concern or as a compromise between the accuracy and the numerical complexity. Strikingly, even though the analytical Rotne-Prager approach fails to describe (both in the qualitative and quantitative sense) mobilities in the simplest system consisting of two spheres, when the distance between their surfaces is of the order of their size, it is commonly applied to model hydrodynamic effects in macromolecular systems. Here, we closely investigate hydrodynamic effects in two- and three-body systems, consisting of bead-shell molecular models, using either the analytical Rotne-Prager approach, or an accurate numerical scheme that correctly accounts for the many-body character of hydrodynamic interactions and their short-range behavior. We analyze mobilities and the translational and rotational velocities of bodies resulting from direct forces acting on them. We show that, with a sufficient number of frictional elements in hydrodynamic models of interacting bodies, the far-field approximation is able to provide a description of hydrodynamic effects that is in a reasonable qualitative as well as quantitative agreement with the description resulting from the application of the virtually exact numerical scheme, even for small separations between bodies. PMID:26068580
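
    For reference, the sketch below evaluates the far-field two-body Rotne-Prager(-Yamakawa) translational mobility block for non-overlapping beads, i.e. the approximation whose limitations the study examines; the viscosity and radii are illustrative, and the overlapping-bead branch is omitted.

      # Pair (off-diagonal) translational mobility block of the Rotne-Prager tensor
      # for beads of radius a separated by r >= 2a in a fluid of viscosity eta.
      import numpy as np

      def rotne_prager_pair(r_vec, a, eta):
          r = np.linalg.norm(r_vec)
          assert r >= 2.0 * a, "only the non-overlapping branch is sketched here"
          rhat_outer = np.outer(r_vec, r_vec) / r**2
          I = np.eye(3)
          return (1.0 / (8.0 * np.pi * eta * r)) * (
              (1.0 + 2.0 * a**2 / (3.0 * r**2)) * I
              + (1.0 - 2.0 * a**2 / r**2) * rhat_outer
          )

      mu_12 = rotne_prager_pair(np.array([3.0e-9, 0.0, 0.0]), a=1.0e-9, eta=1.0e-3)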

  9. X-ray and microwave emissions from the July 19, 2012 solar flare: Highly accurate observations and kinetic models

    NASA Astrophysics Data System (ADS)

    Gritsyk, P. A.; Somov, B. V.

    2016-08-01

    The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolutions in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and the chromospheric one unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is several times lower than the observed one. Allowance for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ~5 × 10¹⁰ erg cm⁻² s⁻¹, which exceeds the values typical of the thick-target model without a reverse current by a factor of ~5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.

  10. Coarse-Grain Modeling of Energetic Materials

    NASA Astrophysics Data System (ADS)

    Brennan, John

    2015-06-01

    Mechanical and thermal loading of energetic materials can incite responses over a wide range of spatial and temporal scales due to inherent nano- and microscale features. Many energy transfer processes within these materials are atomistically governed, yet the material response is manifested at the micro- and mesoscale. The existing state-of-the-art computational methods include continuum level approaches that rely on idealized field-based formulations that are empirically based. Our goal is to bridge the spatial and temporal modeling regimes while ensuring multiscale consistency. However, significant technical challenges exist, including that the multiscale methods linking the atomistic and microscales for molecular crystals are immature or nonexistent. To begin addressing these challenges, we have implemented a bottom-up approach for deriving microscale coarse-grain models directly from quantum mechanics-derived atomistic models. In this talk, a suite of computational tools is described for particle-based microscale simulations of the nonequilibrium response of energetic solids. Our approach builds upon recent advances both in generating coarse-grain models under high strains and in developing a variant of dissipative particle dynamics that includes chemical reactions.

  11. Constitutive modeling for isotropic materials (HOST)

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.; Cassenti, B. N.

    1985-01-01

    This report presents the results of the second year of work on a problem which is part of the NASA HOST Program. Its goals are: (1) to develop and validate unified constitutive models for isotropic materials, and (2) to demonstrate their usefulness for structural analyses of hot section components of gas turbine engines. The unified models selected for development and evaluation are those of Bodner-Partom and Walker. For model evaluation purposes, a large constitutive data base is generated for a B1900 + Hf alloy by performing uniaxial tensile, creep, cyclic, stress relaxation, and thermomechanical fatigue (TMF) tests as well as biaxial (tension/torsion) tests under proportional and nonproportional loading over a wide range of strain rates and temperatures. Systematic approaches for evaluating material constants from a small subset of the data base are developed. Correlations of the uniaxial and biaxial test data with the theories of Bodner-Partom and Walker are performed to establish the accuracy, range of applicability, and integrability of the models. Both models are implemented in the MARC finite element computer code and used for TMF analyses. Benchmark notch round experiments are conducted and the results compared with finite-element analyses using the MARC code and the Walker model.
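
    As a generic illustration of evaluating material constants from a subset of test data (and only that), the sketch below fits a simple Norton power-law creep relation to synthetic uniaxial data by log-log least squares; it is a stand-in, not the Bodner-Partom or Walker unified models used in the report.

      # Fit creep_rate = A * stress**n to synthetic data by linearizing in log space.
      import numpy as np

      stress = np.array([100.0, 150.0, 200.0, 300.0])      # MPa, synthetic creep tests
      noise = 1.0 + 0.05 * np.random.default_rng(0).normal(size=stress.size)
      creep_rate = 1.0e-12 * stress**4.2 * noise           # 1/s, synthetic measurements

      G = np.column_stack([np.ones_like(stress), np.log(stress)])
      logA, n = np.linalg.lstsq(G, np.log(creep_rate), rcond=None)[0]
      print(np.exp(logA), n)                               # recovered A and stress exponent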

  12. How effective are traditional methods of compositional analysis in providing an accurate material balance for a range of softwood derived residues?

    PubMed Central

    2013-01-01

    Background Forest residues represent an abundant and sustainable source of biomass which could be used as a biorefinery feedstock. Due to the heterogeneity of forest residues, such as hog fuel and bark, one of the expected challenges is to obtain an accurate material balance of these feedstocks. Current compositional analytical methods have been standardised for more homogenous feedstocks such as white wood and agricultural residues. The described work assessed the accuracy of existing and modified methods on a variety of forest residues both before and after a typical pretreatment process. Results When “traditional” pulp and paper methods were used, the total amount of material that could be quantified in each of the six softwood-derived residues ranged from 88% to 96%. It was apparent that the extractives present in the substrate were most influential in limiting the accuracy of a more representative material balance. This was particularly evident when trying to determine the lignin content, due to the incomplete removal of the extractives, even after a two stage water-ethanol extraction. Residual extractives likely precipitated with the acid insoluble lignin during analysis, contributing to an overestimation of the lignin content. Despite the minor dissolution of hemicellulosic sugars, extraction with mild alkali removed most of the extractives from the bark and improved the raw material mass closure to 95% in comparison to the 88% value obtained after water-ethanol extraction. After pretreatment, the extent of extractive removal and their reaction/precipitation with lignin was heavily dependent on the pretreatment conditions used. The selective removal of extractives and their quantification after a pretreatment proved to be even more challenging. Regardless of the amount of extractives that were originally present, the analytical methods could be refined to provide reproducible quantification of the carbohydrates present in both the starting material and

  13. Blast-induced biomechanical loading of the rat: an experimental and anatomically accurate computational blast injury model.

    PubMed

    Sundaramurthy, Aravind; Alai, Aaron; Ganpule, Shailesh; Holmberg, Aaron; Plougonven, Erwan; Chandra, Namas

    2012-09-01

    Blast waves generated by improvised explosive devices (IEDs) cause traumatic brain injury (TBI) in soldiers and civilians. In vivo animal models that use shock tubes are extensively used in laboratories to simulate field conditions, to identify mechanisms of injury, and to develop injury thresholds. In this article, we place rats in different locations along the length of the shock tube (i.e., inside, outside, and near the exit), to examine the role of animal placement location (APL) in the biomechanical load experienced by the animal. We found that the biomechanical load on the brain and internal organs in the thoracic cavity (lungs and heart) varied significantly depending on the APL. When the specimen is positioned outside, organs in the thoracic cavity experience a higher pressure for a longer duration, in contrast to APL inside the shock tube. This in turn will possibly alter the injury type, severity, and lethality. We found that the optimal APL is where the Friedlander waveform is first formed inside the shock tube. Once the optimal APL was determined, the effect of the incident blast intensity on the surface and intracranial pressure was measured and analyzed. Noticeably, surface and intracranial pressure increases linearly with the incident peak overpressures, though surface pressures are significantly higher than the other two. Further, we developed and validated an anatomically accurate finite element model of the rat head. With this model, we determined that the main pathway of pressure transmission to the brain was through the skull and not through the snout; however, the snout plays a secondary role in diffracting the incoming blast wave towards the skull. PMID:22620716

  14. Non-targeted screening for contaminants in paper and board food-contact materials using effect-directed analysis and accurate mass spectrometry.

    PubMed

    Bengtström, Linda; Rosenmai, Anna Kjerstine; Trier, Xenia; Jensen, Lisbeth Krüger; Granby, Kit; Vinggaard, Anne Marie; Driffield, Malcolm; Højslev Petersen, Jens

    2016-06-01

    Due to large knowledge gaps in chemical composition and toxicological data for substances involved, paper and board food-contact materials (P&B FCM) have been emerging as a FCM type of particular concern for consumer safety. This study describes the development of a step-by-step strategy, including extraction, high-performance liquid chromatography (HPLC) fractionation, tentative identification of relevant substances and in vitro testing of selected tentatively identified substances. As a case study, we used two fractions from a recycled pizza box sample which exhibited aryl hydrocarbon receptor (AhR) activity. These fractions were analysed by gas chromatography (GC) and ultra-HPLC (UHPLC) coupled to quadrupole time-of-flight mass spectrometers (QTOF MS) in order tentatively to identify substances. The elemental composition was determined for peaks above a threshold, and compared with entries in a commercial mass spectral library for GC-MS (GC-EI-QTOF MS) analysis and an in-house built library of accurate masses for substances known to be used in P&B packaging for UHPLC-QTOF analysis. Of 75 tentatively identified substances, 15 were initially selected for further testing in vitro; however, only seven were commercially available and subsequently tested in vitro and quantified. Of these seven, the identities of three pigments found in printing inks were confirmed by UHPLC tandem mass spectrometry (QqQ MS/MS). Two pigments had entries in the database, meaning that a material relevant accurate mass database can provide a fast tentative identification. Pure standards of the seven tentatively identified substances were tested in vitro but could not explain a significant proportion of the AhR-response in the extract. Targeted analyses of dioxins and PCBs, both well-known AhR agonists, was performed. However, the dioxins could explain approximately 3% of the activity observed in the pizza box extract indicating that some very AhR active substance(s) still remain to be

  15. High-Fidelity Micromechanics Model Developed for the Response of Multiphase Materials

    NASA Technical Reports Server (NTRS)

    Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.

    2002-01-01

    A new high-fidelity micromechanics model has been developed under funding from the NASA Glenn Research Center for predicting the response of multiphase materials with arbitrary periodic microstructures. The model's analytical framework is based on the homogenization technique, but the method of solution for the local displacement and stress fields borrows concepts previously employed in constructing the higher order theory for functionally graded materials. The resulting closed-form macroscopic and microscopic constitutive equations, valid for both uniaxial and multiaxial loading of periodic materials with elastic and inelastic constitutive phases, can be incorporated into a structural analysis computer code. Consequently, this model now provides an alternative, accurate method.

  16. Thermal Ablation Modeling for Silicate Materials

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq

    2016-01-01

    A thermal ablation model for silicates is proposed. The model includes the mass losses through the balance between evaporation and condensation, and through the moving molten layer driven by surface shear force and pressure gradient. This model can be applied in ablation simulations of the meteoroid or glassy Thermal Protection Systems for spacecraft. Time-dependent axi-symmetric computations are performed by coupling the fluid dynamics code, Data-Parallel Line Relaxation program, with the material response code, Two-dimensional Implicit Thermal Ablation simulation program, to predict the mass loss rates and shape change. For model validation, the surface recession of a fused amorphous quartz rod is computed, and the recession predictions reasonably agree with available data. The present parametric studies for two groups of meteoroid earth entry conditions indicate that the mass loss through the moving molten layer is negligibly small for heat-flux conditions at around 1 MW/cm².

  17. Thermal Ablation Modeling for Silicate Materials

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq

    2016-01-01

    A general thermal ablation model for silicates is proposed. The model includes the mass losses through the balance between evaporation and condensation, and through the moving molten layer driven by surface shear force and pressure gradient. This model can be applied in the ablation simulation of the meteoroid and the glassy ablator for spacecraft Thermal Protection Systems. Time-dependent axisymmetric computations are performed by coupling the fluid dynamics code, Data-Parallel Line Relaxation program, with the material response code, Two-dimensional Implicit Thermal Ablation simulation program, to predict the mass loss rates and shape change. The predicted mass loss rates will be compared with available data for model validation, and parametric studies will also be performed for meteoroid earth entry conditions.

  18. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in aerospace, automotive, machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  19. Survey of Multi-Material Closure Models in 1D Lagrangian Hydrodynamics

    SciTech Connect

    Maeng, Jungyeoul Brad; Hyde, David Andrew Bulloch

    2015-07-28

    Accurately treating the coupled sub-cell thermodynamics of computational cells containing multiple materials is an inevitable problem in hydrodynamics simulations, whether due to initial configurations or evolutions of the materials and computational mesh. When solving the hydrodynamics equations within a multi-material cell, we make the assumption of a single velocity field for the entire computational domain, which necessitates the addition of a closure model to attempt to resolve the behavior of the multi-material cells’ constituents. In conjunction with a 1D Lagrangian hydrodynamics code, we present a variety of both popular and more recently proposed multi-material closure models and survey their performance across a spectrum of examples. We consider standard verification tests as well as practical examples using combinations of fluid, solid, and composite constituents within multi-material mixtures. Our survey provides insights into the advantages and disadvantages of various multi-material closure models in different problem configurations.

  20. Multidimensional DDT modeling of energetic materials

    SciTech Connect

    Baer, M.R.; Hertel, E.S.; Bell, R.L.

    1995-07-01

    To model the shock-induced behavior of porous or damaged energetic materials, a nonequilibrium mixture theory has been developed and incorporated into the shock physics code, CTH. The foundation for this multiphase model is based on a continuum mixture formulation given by Baer and Nunziato. This multiphase mixture model provides a thermodynamic and mathematically-consistent description of the self-accelerated combustion processes associated with deflagration-to-detonation and delayed detonation behavior which are key modeling issues in safety assessment of energetic systems. An operator-splitting method is used in the implementation of this model, whereby phase diffusion effects are incorporated using a high resolution transport method. Internal state variables, forming the basis for phase interaction quantities, are resolved during the Lagrangian step requiring the use of a stiff matrix-free solver. Benchmark calculations are presented which simulate low-velocity piston impact on a propellant porous bed and experimentally-measured wave features are well replicated with this model. This mixture model introduces micromechanical models for the initiation and growth of reactive multicomponent flow that are key features to describe shock initiation and self-accelerated deflagration-to-detonation combustion behavior. To complement one-dimensional simulation, two-dimensional numerical calculations are presented which indicate wave curvature effects due to the loss of wall confinement. This study is pertinent for safety analysis of weapon systems.

  1. Modelling the Constraints of Spatial Environment in Fauna Movement Simulations: Comparison of a Boundaries Accurate Function and a Cost Function

    NASA Astrophysics Data System (ADS)

    Jolivet, L.; Cohen, M.; Ruas, A.

    2015-08-01

    Landscape influences fauna movement at different levels, from habitat selection to choices of movement direction. Our goal is to provide a development framework in which to test simulation functions for animal movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movement and those that hinder it. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and the individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry-accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.

  2. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
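
    A minimal sketch of the general approach (an SVM over simple peptide descriptors), assuming scikit-learn is available; the descriptors, toy peptides, and labels below are placeholders and do not reproduce the published 35-property STEPP model.

      # Toy proteotypic-peptide classifier: SVM on three crude sequence descriptors.
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def descriptors(peptide):
          return [
              len(peptide),
              sum(peptide.count(r) for r in "KRH"),        # crude charge-related count
              sum(peptide.count(r) for r in "AILMFWV"),    # crude hydrophobicity count
          ]

      X = [descriptors(p) for p in ["ACDEFGHIK", "LLLKKKRRR", "AAAAAA", "WWFFLLII"]]
      y = [1, 0, 0, 1]                                     # 1 = observed by MS (toy labels)
      model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      model.fit(X, y)
      print(model.predict([descriptors("ILVMAAK")]))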

  3. Importance of housekeeping gene selection for accurate reverse transcription-quantitative polymerase chain reaction in a wound healing model.

    PubMed

    Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A

    2010-01-01

    Studies in the field of wound healing have utilized a variety of different housekeeping genes for reverse transcription-quantitative polymerase chain reaction (RT-qPCR) analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expression in wound healing, we examined all articles published in Wound Repair and Regeneration that cited RT-qPCR during the period of January/February 2008 until July/August 2009. We determined that ACTβ, GAPDH, 18S, and β2M were the most frequently used housekeeping genes in human, mouse, and pig studies. We also investigated nine commonly used housekeeping genes that are not generally used in wound healing models: GUS, TBP, RPLP2, ATP5B, SDHA, UBC, CANX, CYC1, and YWHAZ. We observed that wounded and unwounded tissues have contrasting housekeeping gene expression stability. The results demonstrate that commonly used housekeeping genes must be validated as accurate normalizing genes for each individual experimental condition. PMID:20731795

  4. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
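
    The general pattern (an algebraic multigrid hierarchy used as a preconditioner for a Krylov iteration) can be sketched with off-the-shelf tools as below, assuming pyamg and SciPy are available and using a toy Poisson matrix; the paper's custom AMG preconditioner for nonlinear cardiac elasticity at ~10^8 DOF is far beyond this illustration.

      # AMG-preconditioned conjugate gradients on a toy SPD system.
      import numpy as np
      import pyamg
      import scipy.sparse.linalg as spla

      A = pyamg.gallery.poisson((200, 200), format="csr")   # toy 2D Poisson matrix
      b = A @ np.ones(A.shape[0])                           # manufactured right-hand side

      ml = pyamg.smoothed_aggregation_solver(A)             # build the AMG hierarchy
      M = ml.aspreconditioner(cycle="V")                    # one V-cycle as preconditioner
      x, info = spla.cg(A, b, M=M)                          # info == 0 indicates convergence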

  5. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which is not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  6. Mathematical models for accurate prediction of atmospheric visibility with particular reference to the seasonal and environmental patterns in Hong Kong.

    PubMed

    Mui, K W; Wong, L T; Chung, L Y

    2009-11-01

    Atmospheric visibility impairment has gained increasing concern as it is associated with the existence of a number of aerosols as well as common air pollutants and produces unfavorable conditions for observation, dispersion, and transportation. This study analyzed the atmospheric visibility data measured in urban and suburban Hong Kong (two selected stations) with respect to time-matched mass concentrations of common air pollutants including nitrogen dioxide (NO(2)), nitrogen monoxide (NO), respirable suspended particulates (PM(10)), sulfur dioxide (SO(2)), carbon monoxide (CO), and meteorological parameters including air temperature, relative humidity, and wind speed. No significant difference in atmospheric visibility was reported between the two measurement locations (p ≥ 0.6, t test); and good atmospheric visibility was observed more frequently in summer and autumn than in winter and spring (p < 0.01, t test). It was also found that atmospheric visibility increased with temperature but decreased with the concentrations of SO(2), CO, PM(10), NO, and NO(2). The results showed that atmospheric visibility was season dependent and would have significant correlations with temperature, the mass concentrations of PM(10) and NO(2), and the air pollution index API (correlation coefficients |R| ≥ 0.7, p ≤ 0.0001, t test). Mathematical expressions catering to the seasonal variations of atmospheric visibility were thus proposed. By comparison, the proposed visibility prediction models were more accurate than some existing regional models. In addition to improving visibility prediction accuracy, this study would be useful for understanding the context of low atmospheric visibility, exploring possible remedial measures, and evaluating the impact of air pollution and atmospheric visibility impairment in this region. PMID:18951139
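
    The sketch below fits the kind of multiple linear regression suggested above (visibility against temperature and pollutant concentrations) by ordinary least squares; the data are synthetic placeholders, not the Hong Kong measurements, and the models in the paper are additionally season-specific.

      # Ordinary least-squares fit: visibility ~ temperature + PM10 + NO2.
      import numpy as np

      rng = np.random.default_rng(1)
      temp = rng.uniform(15, 33, 200)        # deg C
      pm10 = rng.uniform(20, 150, 200)       # ug/m^3
      no2 = rng.uniform(20, 120, 200)        # ug/m^3
      vis = 25 + 0.3 * temp - 0.08 * pm10 - 0.05 * no2 + rng.normal(0, 1, 200)  # km, synthetic

      X = np.column_stack([np.ones_like(temp), temp, pm10, no2])
      coef, *_ = np.linalg.lstsq(X, vis, rcond=None)
      print(coef)   # intercept and sensitivities to temperature, PM10, NO2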

  7. Anisotropic Cloth Modeling for Material Fabric

    NASA Astrophysics Data System (ADS)

    Zhang, Mingmin; Pan, Zhigengx; Mi, Qingfeng

    Physically based cloth simulation has challenged the graphics community for more than three decades. With the development of virtual reality and clothing CAD, it has become a key technique for virtual garment and try-on systems. Although cloth simulation has received considerable attention in computer graphics, and textile engineers value both its flexible behavior and its realistic appearance, no methodology has yet achieved cloth simulation that is both visually realistic and physically accurate. We present a new anisotropic textile modeling method based on a physical mass-spring system, which models the warps and wefts separately according to the material fabric. The simulation process includes two main steps: first a rigid-object simulation and second a flexible mass-spring simulation driven toward equilibrium. Multiresolution modeling is applied to improve the trade-off between realistic presentation and computation cost. Finally, examples and analysis results show the efficiency of the proposed method.
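
    A hedged sketch of the core idea, separate warp and weft stiffnesses in a mass-spring grid integrated explicitly, is given below; all constants and the grid size are placeholders, and the rigid-object stage, shear/bend springs, collision handling and multiresolution scheme of the paper are omitted.

        # Sketch: anisotropic mass-spring cloth step with distinct warp/weft stiffness.
        import numpy as np

        ny, nx = 20, 20                      # particle grid
        k_warp, k_weft = 80.0, 30.0          # different stiffness along the two yarn directions
        rest, mass, dt, damping = 0.05, 0.01, 1e-3, 0.02
        g = np.array([0.0, 0.0, -9.81])

        pos = np.zeros((ny, nx, 3))
        pos[..., 0], pos[..., 1] = np.meshgrid(np.arange(nx) * rest, np.arange(ny) * rest)
        vel = np.zeros_like(pos)

        def spring_force(p, q, k):
            d = q - p
            L = np.linalg.norm(d, axis=-1, keepdims=True)
            return k * (L - rest) * d / np.maximum(L, 1e-9)      # pulls p toward q when stretched

        for _ in range(200):                 # a few explicit integration steps
            f = np.broadcast_to(mass * g, pos.shape).copy() - damping * vel
            fw = spring_force(pos[:, :-1], pos[:, 1:], k_weft)   # weft springs (x direction)
            f[:, :-1] += fw
            f[:, 1:] -= fw
            fp = spring_force(pos[:-1, :], pos[1:, :], k_warp)   # warp springs (y direction)
            f[:-1, :] += fp
            f[1:, :] -= fp
            vel += dt * f / mass
            vel[0, :] = 0.0                  # pin the top row of particles
            pos += dt * vel

        print("mean sag of the free edge (m):", float(-pos[-1, :, 2].mean()))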

  8. Dixon sequence with superimposed model-based bone compartment provides highly accurate PET/MR attenuation correction of the brain

    PubMed Central

    Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.

    2016-01-01

    Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC) and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone compartment. Methods: We directly compared SUV estimation for MR-based AC methods to reference CT AC in 16 patients undergoing same-day, single 18FDG dose PET/CT and PET/MR for suspected neurodegeneration. Three Dixon-based MR AC methods were compared to CT – standard Dixon 4-compartment segmentation alone, Dixon with a superimposed model-based bone compartment, and Dixon with a superimposed bone compartment and linear attenuation correction optimized specifically for brain tissue. The brain was segmented using a 3D T1-weighted volumetric MR sequence and SUV estimations compared to CT AC for whole-image, whole-brain and 91 FreeSurfer-based regions-of-interest. Results: Modifying the linear AC value specifically for brain and superimposing a model-based bone compartment reduced whole-brain SUV estimation bias of Dixon-based PET/MR AC by 95% compared to reference CT AC (P < 0.05) – this resulted in a residual −0.3% whole-brain mean SUV bias. Further, brain regional analysis demonstrated only 3 frontal lobe regions with SUV estimation bias of 5% or greater (P < 0.05). These biases appeared to correlate with high individual variability in the frontal bone thickness and pneumatization. Conclusion: Bone compartment and linear AC modifications result in a highly accurate MR AC method in subjects with suspected neurodegeneration. This prototype MR AC solution appears equivalent to other recently proposed solutions, and does not require additional MR sequences and scan time. These

  9. Fire and materials modeling for transportation systems

    SciTech Connect

    Skocypec, R.D.; Gritzo, L.A.; Moya, J.L.; Nicolette, V.F.; Tieszen, S.R.; Thomas, R.

    1994-10-01

    Fire is an important threat to the safety of transportation systems. Therefore, understanding the effects of fire (and its interaction with materials) on transportation systems is crucial to quantifying and mitigating the impact of fire on the safety of those systems. Research and development directed toward improving the fire safety of transportation systems must address a broad range of phenomena and technologies, including: crash dynamics, fuel dispersion, fire environment characterization, material characterization, and system/cargo thermal response modeling. In addition, if the goal of the work is an assessment and/or reduction of risk due to fires, probabilistic risk assessment technology is also required. The research currently underway at Sandia National Laboratories in each of these areas is summarized in this paper.

  10. Simulation of human atherosclerotic femoral plaque tissue: the influence of plaque material model on numerical results

    PubMed Central

    2015-01-01

    discrepancies, future studies should seek to employ vessel-appropriate material models to simulate the response of diseased femoral tissue in order to obtain the most accurate numerical results. PMID:25602515

  11. Computational modeling of multicellular constructs with the material point method.

    PubMed

    Guilkey, James E; Hoying, James B; Weiss, Jeffrey A

    2006-01-01

    Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions that are associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs by adapting the algorithm to accurately handle quasi-static, large deformation mechanics, and to apply the modified MPM algorithm to large-scale simulations using a discretization that was obtained directly from volumetric confocal image data. The standard implicit time integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, by discretizing the construct with over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization was established for the simulations of the mechanics of vascularized scaffolds by examining stress distributions and reaction forces. Sensitivity studies demonstrated that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the volume of the total sample. In contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample. These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter
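
    The core of the material point method is the transfer of particle data to a background grid on which the (here implicit) equations of motion are solved; below is a hedged, one-dimensional sketch of that particle-to-grid step with linear shape functions, using invented numbers and omitting the stress update, the implicit solve and the grid-to-particle transfer back.

        # Sketch: 1D material-point-to-grid mass/momentum transfer (MPM core step).
        import numpy as np

        L, ncells = 1.0, 10
        dx = L / ncells
        nodes = np.linspace(0.0, L, ncells + 1)

        xp = np.random.default_rng(4).uniform(0.2, 0.8, 50)   # material point positions
        mp = np.full(xp.size, 0.002)                          # material point masses
        vp = np.sin(np.pi * xp)                               # material point velocities

        m_grid = np.zeros(nodes.size)
        mv_grid = np.zeros(nodes.size)
        for x, m, v in zip(xp, mp, vp):
            i = int(x / dx)                       # left node of the containing cell
            w_right = (x - nodes[i]) / dx         # linear (tent) shape functions
            w_left = 1.0 - w_right
            m_grid[i] += w_left * m
            m_grid[i + 1] += w_right * m
            mv_grid[i] += w_left * m * v
            mv_grid[i + 1] += w_right * m * v

        v_grid = np.divide(mv_grid, m_grid, out=np.zeros_like(m_grid), where=m_grid > 0)
        print(np.round(v_grid, 3))                # nodal velocities fed to the grid solve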

  12. Comparison of Material Models for Spring Back Prediction in an Automotive Panel Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Peng, Xiongqi; Shi, Shaoqing; Hu, Kangkang

    2013-10-01

    Springback is a crucial factor in the sheet metal forming process, and accurate prediction of springback is the premise for its control. An elasto-plastic constitutive model that can fully reflect the anisotropic character of sheet metal has a crucial influence on the forming simulation. The forming process simulation and springback prediction of an automobile body panel are implemented using JSTAMP/LS-DYNA with the Yoshida-Uemori, the 3-parameter Barlat and the transversely anisotropic elasto-plastic model, respectively. Springback predictions from the three constitutive models are compared with experimental measurements to demonstrate the effectiveness and accuracy of the Yoshida-Uemori model in characterizing the anisotropic material behavior of sheet metal during forming. An accurate prediction of springback provides a design guideline for practical mold design with springback compensation and helps achieve accurate forming.

  13. Geochemistry Model Validation Report: Material Degradation and Release Model

    SciTech Connect

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and in accordance with the technical work plan (Ref. 17).

  14. Material modeling for multistage tube hydroforming process simulation

    NASA Astrophysics Data System (ADS)

    Saboori, Mehdi

    The aerospace industries of the 21st century demand the use of cutting-edge materials and manufacturing technology. Manufacturing methods such as hydroforming are relatively new and are being used to produce commercial vehicles. This process allows for part consolidation, reducing the number of parts in an assembly compared to conventional methods such as stamping, press forming and welding of multiple components. Hydroforming, in particular, provides an endless opportunity to achieve multiple cross-sectional shapes in a single tube. A single tube can be pre-bent and subsequently hydroformed to create an entire component assembly instead of welding many smaller sheet metal sections together. The knowledge of tube hydroforming for aerospace materials is not yet well developed, thus new methods are required to predict and study the formability and the critical forming limits of aerospace materials. In order to better understand the formability and the mechanical properties of aerospace materials, a novel online measurement approach based on the free expansion test is developed using a 3D automated deformation measurement system (Aramis) to extract the coordinates of the bulge profile during the test. These coordinates are used to calculate the circumferential and longitudinal curvatures, which are utilized to determine the effective stresses and effective strains at different stages of the tube hydroforming process. In the second step, two different methods, a weighted average method and a new hardening function, are utilized to accurately define the true stress-strain curve for the post-necking regime of different aerospace alloys, such as Inconel 718 (IN 718), stainless steel 321 (SS 321) and titanium (Ti6Al4V). The flow curves are employed in the simulation of the dome height test, which is utilized for generating the forming limit diagrams (FLDs). Then, the effect of stress triaxiality, the stress concentration factor and the effective plastic
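
    The abstract mentions a weighted-average description of hardening in the post-necking regime; as a hedged sketch of that general strategy (the thesis's actual hardening function and calibrated constants are not reproduced), the snippet below blends Swift and Voce laws with a single weight, a combination commonly used to extrapolate flow curves beyond uniform elongation.

        # Sketch: weighted Swift/Voce flow-curve extrapolation (illustrative constants only,
        # chosen so the two laws roughly agree at small strain).
        import numpy as np

        def swift(ep, K=1500.0, e0=0.05, n=0.25):        # power-law hardening
            return K * (e0 + ep) ** n

        def voce(ep, s0=700.0, Q=500.0, b=8.0):          # saturating hardening
            return s0 + Q * (1.0 - np.exp(-b * ep))

        def flow_stress(ep, w=0.5):                      # weighted average of the two laws
            return w * swift(ep) + (1.0 - w) * voce(ep)

        ep = np.linspace(0.0, 0.8, 9)                    # effective plastic strain, incl. post-necking
        print(np.round(flow_stress(ep), 1))              # flow stress (placeholder MPa values)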

  15. A hierarchical framework for the multiscale modeling of microstructure evolution in heterogeneous materials.

    SciTech Connect

    Luscher, Darby J.

    2010-04-01

    All materials are heterogeneous at various scales of observation. The influence of material heterogeneity on nonuniform response and microstructure evolution can have profound impact on continuum thermomechanical response at macroscopic “engineering” scales. In many cases, it is necessary to treat this behavior as a multiscale process thus integrating the physical understanding of material behavior at various physical (length and time) scales in order to more accurately predict the thermomechanical response of materials as their microstructure evolves. The intent of the dissertation is to provide a formal framework for multiscale hierarchical homogenization to be used in developing constitutive models.

  16. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    NASA Technical Reports Server (NTRS)

    Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam D.; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2015-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material

  17. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    NASA Technical Reports Server (NTRS)

    Hoffarth, Canio; Harrington, Joseph; Subramaniam, D. Rajan; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2014-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material.

  18. Modeling segregation of bidisperse granular materials: A parametric study

    NASA Astrophysics Data System (ADS)

    Schlick, Conor; Fan, Yi; Umbanhowar, Paul; Ottino, Julio; Lueptow, Richard

    2013-11-01

    Predicting segregation and mixing of size bidisperse granular material is a challenging problem with many industrial applications. Using an accurate segregation model based on kinematic properties of the flow that we recently developed, we present a parametric study of segregation of bidisperse granular material in quasi-two-dimensional bounded heaps. The model depends on the Péclet number, Pe, which is the ratio of the advection rate to the diffusion rate, and Λ, which is the ratio of the segregation rate to the advection rate. Both dimensionless parameters depend on the feed rate, the particle size ratio, and the system size. Systematic variation of Λ and Pe demonstrates how the spatial particle configuration depends on the interplay of advection, segregation, and diffusion. At large values of Pe and Λ, segregation dominates and the heap consists of distinct regions of small (upstream) and large (downstream) particles, whereas at low values of Pe and Λ, diffusion dominates which results in a well-mixed heap. Advection plays an important role for large Pe and small Λ and preserves the initial configuration of particles in the feed zone. Y.F. was funded by The Dow Chemical Company. C.S. was supported by NSF Grant CMMI-1000469.
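
    To make the roles of Pe and Λ concrete, here is a hedged one-dimensional caricature, a segregation flux balanced against diffusion across the depth of the flowing layer, written in the spirit of the model rather than as the authors' full advection-segregation-diffusion formulation; large Λ and Pe sharpen the interface between small- and large-particle regions, while small values leave the layer well mixed.

        # Sketch: 1D segregation vs. diffusion across a flowing layer (conservative explicit scheme).
        import numpy as np

        Pe, Lam = 20.0, 1.0                  # illustrative dimensionless parameters
        nz, dt, nsteps = 101, 1.0e-4, 50000
        dz = 1.0 / (nz - 1)
        c = np.full(nz, 0.5)                 # small-particle concentration, initially well mixed

        for _ in range(nsteps):
            F = np.zeros(nz + 1)             # fluxes at interfaces; zero at walls (no flux)
            seg = Lam * c * (1.0 - c)        # segregation flux, drives small particles toward z = 1
            F[1:-1] = 0.5 * (seg[:-1] + seg[1:]) - (1.0 / Pe) * (c[1:] - c[:-1]) / dz
            c -= dt * (F[1:] - F[:-1]) / dz

        # The steady profile approaches a logistic shape of width ~ 1/(Lam * Pe).
        print(np.round(c[::10], 3))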

  19. Rapid Bayesian point source inversion using pattern recognition --- bridging the gap between regional scaling relations and accurate physical modelling

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.

    2014-12-01

    Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain from first observations to potential end users and decision makers is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d) --- the distribution of source parameters m given observations d --- which can be evaluated quickly for new data. Owing to the flexibility of the pattern
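
    A deliberately tiny, hedged illustration of the "learn a mapping from synthetic waveforms to source parameters" step follows; it uses a plain multi-output neural-network regressor on toy damped-sinusoid "waveforms" from an invented forward model, whereas the authors' method targets full posterior densities built on a realistic Green's-function database, which is not attempted here.

        # Toy sketch: regress source parameters from synthetic waveforms (not the real workflow).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 10.0, 200)

        def synth_waveform(depth_km, magnitude):
            # Placeholder "forward model": amplitude and frequency loosely tied to the parameters.
            amp = 10.0 ** (magnitude - 4.0)
            freq = 2.0 / (1.0 + 0.2 * depth_km)
            return amp * np.exp(-0.3 * t) * np.sin(2.0 * np.pi * freq * t)

        params = np.column_stack([rng.uniform(2.0, 20.0, 3000),    # depth (km)
                                  rng.uniform(3.0, 6.0, 3000)])    # magnitude
        X = np.array([synth_waveform(d, m) for d, m in params])
        X += 0.01 * rng.standard_normal(X.shape)                   # observational noise

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        model.fit(X[:2500], params[:2500])                         # training done once, offline
        pred = model.predict(X[2500:])                             # fast evaluation on new data
        print("mean abs errors (depth km, magnitude):",
              np.abs(pred - params[2500:]).mean(axis=0).round(3))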

  20. Modeling spherical explosions with aluminized energetic materials

    NASA Astrophysics Data System (ADS)

    Massoni, J.; Saurel, R.; Lefrançois, A.; Baudin, G.

    2006-11-01

    This paper deals with the numerical solution and validation of a reactive flow model dedicated to the study of spherical explosions with an aluminized energetic material. Situations related to air blast as well as underwater explosions are examined. Such situations involve multiscale phenomena associated with the detonation reaction zone, the aluminium reaction zone, the shock propagation distance and the bubble oscillation period. A detonation tracking method is developed in order to avoid the detonation structure computation. An ALE formulation is combined with the detonation tracking method in order to solve the material interface between detonation products and the environment as well as shock propagation. The model and the algorithm are then validated over a wide range of spherical explosions involving several types of explosives, in both air and liquid water environments. Large-scale experiments have been done in order to determine the blast wave effects with explosive compositions of variable aluminium content. In all situations, the agreement between computed and experimental results is very good.

  1. Adapting Data Processing To Compare Model and Experiment Accurately: A Discrete Element Model and Magnetic Resonance Measurements of a 3D Cylindrical Fluidized Bed.

    PubMed

    Boyce, Christopher M; Holland, Daniel J; Scott, Stuart A; Dennis, John S

    2013-12-18

    Discrete element modeling is being used increasingly to simulate flow in fluidized beds. These models require complex measurement techniques to provide validation for the approximations inherent in the model. This paper introduces the idea of modeling the experiment to ensure that the validation is accurate. Specifically, a 3D, cylindrical gas-fluidized bed was simulated using a discrete element model (DEM) for particle motion coupled with computational fluid dynamics (CFD) to describe the flow of gas. The results for time-averaged, axial velocity during bubbling fluidization were compared with those from magnetic resonance (MR) experiments made on the bed. The DEM-CFD data were postprocessed with various methods to produce time-averaged velocity maps for comparison with the MR results, including a method which closely matched the pulse sequence and data processing procedure used in the MR experiments. The DEM-CFD results processed with the MR-type time-averaging closely matched experimental MR results, validating the DEM-CFD model. Analysis of different averaging procedures confirmed that MR time-averages of dynamic systems correspond to particle-weighted averaging, rather than frame-weighted averaging, and also demonstrated that the use of Gaussian slices in MR imaging of dynamic systems is valid. PMID:24478537
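
    The distinction the paper draws between particle-weighted and frame-weighted time averages is easy to state in code; the numbers below are placeholders, but the two formulas are exactly those two averaging choices.

        # Sketch: particle-weighted vs. frame-weighted time averaging of a velocity signal.
        import numpy as np

        # Per-frame mean particle velocity in one voxel, and the particle count in that voxel.
        v_frame = np.array([0.10, 0.12, 0.30, 0.05, 0.28])   # m/s (placeholder)
        n_frame = np.array([40,   35,   5,    60,   8])      # particles present (placeholder)

        frame_weighted = v_frame.mean()                          # every frame counts equally
        particle_weighted = np.average(v_frame, weights=n_frame) # frames weighted by occupancy

        print(f"frame-weighted:    {frame_weighted:.3f} m/s")
        print(f"particle-weighted: {particle_weighted:.3f} m/s") # what MR time-averages correspond to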

  2. Adapting Data Processing To Compare Model and Experiment Accurately: A Discrete Element Model and Magnetic Resonance Measurements of a 3D Cylindrical Fluidized Bed

    PubMed Central

    2013-01-01

    Discrete element modeling is being used increasingly to simulate flow in fluidized beds. These models require complex measurement techniques to provide validation for the approximations inherent in the model. This paper introduces the idea of modeling the experiment to ensure that the validation is accurate. Specifically, a 3D, cylindrical gas-fluidized bed was simulated using a discrete element model (DEM) for particle motion coupled with computational fluid dynamics (CFD) to describe the flow of gas. The results for time-averaged, axial velocity during bubbling fluidization were compared with those from magnetic resonance (MR) experiments made on the bed. The DEM-CFD data were postprocessed with various methods to produce time-averaged velocity maps for comparison with the MR results, including a method which closely matched the pulse sequence and data processing procedure used in the MR experiments. The DEM-CFD results processed with the MR-type time-averaging closely matched experimental MR results, validating the DEM-CFD model. Analysis of different averaging procedures confirmed that MR time-averages of dynamic systems correspond to particle-weighted averaging, rather than frame-weighted averaging, and also demonstrated that the use of Gaussian slices in MR imaging of dynamic systems is valid. PMID:24478537

  3. Theoretical Development of an Orthotropic Elasto-Plastic Generalized Composite Material Model

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Harrington, Joseph; Subramanian, Rajan; Blankenhorn, Gunther

    2014-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within LS-DYNA (Registered), there are several features that have been identified that could improve the predictive capability of a composite model. To address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed and is being implemented into LS-DYNA as MAT_213. A key feature of the improved material model is the use of tabulated stress-strain data in a variety of coordinate directions to fully define the stress-strain response of the material. To date, the model development efforts have focused on creating the plasticity portion of the model. The Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic material model with a non-associative flow rule. The coefficients of the yield function, and the stresses to be used in both the yield function and the flow rule, are computed based on the input stress-strain curves using the effective plastic strain as the tracking variable. The coefficients in the flow rule are computed based on the obtained stress-strain data. The developed material model is suitable for implementation within LS-DYNA for use in analyzing the nonlinear response of polymer composites.
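
    As a hedged sketch of the tabulated-input idea only, the snippet below interpolates direction-dependent stress-strain tables using the effective plastic strain as the tracking variable; it does not reproduce the MAT_213 yield function, flow rule or damage treatment, and every number is a placeholder.

        # Sketch: look up direction-dependent hardening from tabulated stress-strain input.
        import numpy as np

        # Placeholder tables: effective plastic strain vs. stress (MPa) per material direction.
        tables = {
            "11": (np.array([0.0, 0.01, 0.05, 0.10]), np.array([1800., 1900., 2050., 2150.])),
            "22": (np.array([0.0, 0.01, 0.05, 0.10]), np.array([  60.,   70.,   90.,  100.])),
            "12": (np.array([0.0, 0.01, 0.05, 0.10]), np.array([  80.,   95.,  120.,  130.])),
        }

        def current_stress(direction, eff_plastic_strain):
            eps, sig = tables[direction]
            return float(np.interp(eff_plastic_strain, eps, sig))   # piecewise-linear lookup

        for d in tables:
            print(d, current_stress(d, 0.03))   # stresses at 3% effective plastic strain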

  4. Constitutive modeling for isotropic materials (HOST)

    NASA Technical Reports Server (NTRS)

    Lindholm, Ulric S.; Chan, Kwai S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.; Cassenti, B. N.

    1984-01-01

    The results are presented of the first year of work on a program to validate unified constitutive models for isotropic materials utilized in high temperature regions of gas turbine engines and to demonstrate their usefulness in computing stress-strain-time-temperature histories in complex three-dimensional structural components. The unified theories combine all inelastic strain-rate components in a single term, avoiding, for example, treating plasticity and creep as separate response phenomena. An extensive review of existing unified theories is given, and numerical methods for integrating these stiff time-temperature-dependent constitutive equations are discussed. Two particular models, those developed by Bodner and Partom and by Walker, were selected for more detailed development and evaluation against experimental tensile, creep and cyclic strain tests on specimens of a cast nickel-base alloy, B1900+Hf. Initial results comparing computed and test results for tensile and cyclic straining for temperatures from ambient to 982 °C and strain rates from 10^-7 to 10^-3 s^-1 are given. Some preliminary data correlations are also presented for highly non-proportional biaxial loading which demonstrate an increase in biaxial cyclic hardening rate over uniaxial or proportional loading conditions. Initial work has begun on the implementation of both constitutive models in the MARC finite element computer code.
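
    For reference, the kinetic (flow) equation of the Bodner-Partom model is usually quoted in the form below (notation as commonly used in the literature; the report's calibrated parameter values are not reproduced here):

        \dot{\varepsilon}^{I}_{ij} = D_0 \exp\!\left[-\frac{1}{2}\left(\frac{Z^{2}}{3 J_{2}}\right)^{n}\right]\frac{s_{ij}}{\sqrt{J_{2}}}

    where s_ij is the deviatoric stress, J_2 its second invariant, Z an internal hardening variable that evolves with plastic work, and D_0 and n material constants; the single inelastic strain-rate term is what lets such models treat plasticity and creep in a unified way, as the abstract notes.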

  5. Initial investigation of cryogenic wind tunnel model filler materials

    NASA Technical Reports Server (NTRS)

    Rush, H. F.; Firth, G. C.

    1985-01-01

    Various filler materials are being investigated for applicability to cryogenic wind tunnel models. The filler materials will be used to fill surface grooves, holes and flaws. The severe test environment of cryogenic models precludes usage of filler materials used on conventional wind tunnel models. Coefficients of thermal expansion, finishing characteristics, adhesion and stability of several candidate filler materials were examined. Promising filler materials are identified.

  6. Computational Modeling of Ultrafast Pulse Propagation in Nonlinear Optical Materials

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.; Agrawal, Govind P.; Kwak, Dochan (Technical Monitor)

    1996-01-01

    There is an emerging technology of photonic (or optoelectronic) integrated circuits (PICs or OEICs). In PICs, optical and electronic components are grown together on the same chip. To build such devices and subsystems, one needs to model the entire chip. Accurate computer modeling of electromagnetic wave propagation in semiconductors is necessary for the successful development of PICs. More specifically, these computer codes would enable the modeling of such devices, including their subsystems, such as semiconductor lasers and semiconductor amplifiers in which there is femtosecond pulse propagation. Here, the computer simulations are made by solving the full vector, nonlinear, Maxwell's equations, coupled with the semiconductor Bloch equations, without any approximations. The carrier wave is retained in the description of the optical pulse (i.e., the envelope approximation is not made in the Maxwell's equations), and the rotating wave approximation is not made in the Bloch equations. These coupled equations are solved to simulate the propagation of femtosecond optical pulses in semiconductor materials. The simulations describe the dynamics of the optical pulses, as well as the interband and intraband dynamics.

  7. Numerical modeling of flowing soft materials

    NASA Astrophysics Data System (ADS)

    Toschi, Federico; Benzi, Roberto; Bernaschi, Massimo; Perlekar, Prasad; Sbragaglia, Mauro; Succi, Sauro

    2012-11-01

    The structural properties of soft-flowing and non-ergodic materials, such as emulsions, foams and gels, share similarities with the three basic states of matter (solid, liquid and gas). The macroscopic properties are characterized by non-standard features such as non-Newtonian rheology, long-time relaxation, caging effects, enhanced viscosity, structural arrest, hysteresis, dynamic disorder, aging and related phenomena. Large scale non-homogeneities can develop, even under simple shear conditions, by means of the formation of macroscopic bands of widely different viscosities ("shear banding" phenomena). We employ a numerical model based on the Lattice Boltzmann method to perform numerical simulations of soft matter under flowing conditions. Results of 3D simulations are presented and compared to previous 2D investigations.

  8. Modelling the shock response of a damageable anisotropic composite material

    NASA Astrophysics Data System (ADS)

    Lukyanov, Alexander A.

    2012-09-01

    The purpose of this paper is to investigate the effect of fibre orientation on the shock response of a damageable carbon fibre-epoxy composite (CFEC). The CFEC shock response in the through-thickness orientation differs significantly from that in one of the fibre directions. Modelling the effect of fibre orientation on the shock response of a CFEC has been performed using a generalised decomposition of the stress tensor [A.A. Lukyanov, Int. J. Plasticity 24, 140 (2008)] and an accurate extrapolation of high-pressure shock Hugoniot states to other thermodynamic states for shocked CFEC materials. The analysis of the experimental data subject to the linear relation between shock velocities and particle velocities has shown that the damage softening process produces discontinuities both in value and slope in the generalized bulk shock velocity and particle velocity relation [A.A. Lukyanov, Eur Phys J B 74, 35 (2010)]. Therefore, in order to remove these discontinuities, the three-wave structure (non-linear anisotropic, fracture and isotropic elastic waves) that accompanies the damage softening process is proposed in this work for describing CFEC behavior under shock loading. A numerical calculation shows that Hugoniot Stress Levels (HELs) agree with the experimental data for the selected CFEC material in different directions at low and at high intensities. In the through-thickness orientation, the material behaves similarly to a simple polymer. In the fibre direction, the proposed model explains a pronounced ramp at lower stresses and, at sufficiently high stresses, a much faster-rising shock above it. The results are presented and discussed, and future studies are outlined.

  9. Modeling segregation of bidisperse granular materials: Model development

    NASA Astrophysics Data System (ADS)

    Fan, Yi; Schlick, Conor; Umbanhowar, Paul; Ottino, Julio; Lueptow, Richard

    2013-11-01

    Predicting segregation of size bidisperse granular materials is a challenging problem. In this talk, we present a theoretical model that captures the interplay between advection, segregation, and diffusion. The fluxes associated with these three driving factors depend on the underlying kinematics, whose characteristics play key roles in determining final particle segregation configurations. Unlike previous models of segregation, our model uses parameters based on kinematic measures instead of arbitrarily adjustable fitting parameters. This permits the theoretical prediction of species concentration within the entire flowing layer as particles segregate in the depth direction while they flow downhill. The model achieves quantitative agreement with both experimental and DEM simulation results when applied to quasi-two-dimensional bounded heaps, and can be readily adapted to other flow geometries. Y.F. was funded by The Dow Chemical Company. C.P.S. was supported by NSF Grant CMMI-1000469.
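
    Models of this advection-segregation-diffusion type are typically written, in nondimensional form, as the balance below (a generic statement with symbols as commonly defined in this literature, not necessarily the authors' exact notation):

        \frac{\partial c_i}{\partial \tilde{t}} + \tilde{\nabla}\cdot(\tilde{\mathbf{u}}\, c_i) + \Lambda\,\frac{\partial}{\partial \tilde{z}}\left(\tilde{w}_{p,i}\, c_i\right) = \frac{1}{\mathrm{Pe}}\,\frac{\partial^{2} c_i}{\partial \tilde{z}^{2}}

    where c_i is the concentration of species i, ũ the mean-flow (advection) velocity, w̃_{p,i} the dimensionless segregation velocity of species i, and Λ and Pe the segregation-to-advection and advection-to-diffusion ratios defined in the companion parametric-study abstract earlier in this list.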

  10. Modeling, simulation and experimental verification of constitutive models for energetic materials

    SciTech Connect

    Haberman, K.S.; Bennett, J.G.; Assay, B.W.

    1997-09-01

    Simulation of the complete response of components and systems composed of energetic materials, such as PBX-9501, is important in the determination of the safety of various explosive systems. For example, predicting the correct state of stress, rate of deformation and temperature during penetration is essential in the prediction of ignition. Such simulation requires accurate constitutive models. These models must also be computationally efficient to enable analysis of large-scale three-dimensional problems using explicit Lagrangian finite element codes such as DYNA3D. However, to be of maximum utility, these predictions must be validated against robust dynamic experiments. In this paper, the authors report comparisons between experimental and predicted displacement fields in PBX-9501 during dynamic deformation, and describe the modeling approach. The predictions used Visco-SCRAM and the Generalized Method of Cells, which have been implemented into DYNA3D. The experimental data were obtained using laser-induced fluorescence speckle photography. Results from this study have led to more accurate models and have also guided further experimental work.

  11. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and thus introduces a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting the geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluated against OCT images of the same gauge-block step-height reference. The method may be suitable for industrial application in the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  12. Radioactive materials in biosolids : dose modeling.

    SciTech Connect

    Wolbarst, A. B.; Chiu, W. A; Yu, C.; Aiello, K.; Bachmaier, J. T.; Bastian, R. K.; Cheng, J. -J.; Goodman, J.; Hogan, R.; Jones, A. R.; Kamboj, S.; Lenhartt, T.; Ott, W. R.; Rubin, A.; Salomon, S. N.; Schmidt, D. W.; Setlow, L. W.; Environmental Science Division; U.S. EPA; Middlesex County Utilities Authority; U.S. DOE; U.S. NRC; NE Ohio Regional Sewer District

    2006-01-01

    The Interagency Steering Committee on Radiation Standards (ISCORS) has recently completed a study of the occurrence within the United States of radioactive materials in sewage sludge and sewage incineration ash. One component of that effort was an examination of the possible transport of radioactivity from sludge into the local environment and the subsequent exposure of humans. A stochastic environmental pathway model was applied separately to seven hypothetical, generic sludge-release scenarios, leading to the creation of seven tables of Dose-to-Source Ratios (DSR), which can be used in translating from specific activity in sludge into dose to an individual. These DSR values were then combined with the results of an ISCORS survey of sludge and ash at more than 300 publicly owned treatment works, to explore the potential for radiation exposure of sludge workers and members of the public. This paper provides a brief overview of the pathway modeling methodology employed in the exposure and dose assessments and discusses technical aspects of the results obtained.

  13. Initial Investigation of Cryogenic Wind Tunnel Model Filler Materials

    NASA Technical Reports Server (NTRS)

    Firth, G. C.

    1985-01-01

    Filler materials are used for surface flaws, instrumentation grooves, and fastener holes in wind tunnel models. More stringent surface quality requirements and the more demanding test environment encountered by cryogenic wind tunnels eliminate filler materials such as polyester resins, plaster, and waxes used on conventional wind tunnel models. To provide a material database for cryogenic models, various filler materials are investigated. Surface quality requirements and test temperature extremes require matching of the coefficients of thermal expansion of interfacing materials. Microstrain versus temperature curves are generated for several candidate filler materials for comparison with cryogenically acceptable materials. Matches have been achieved for aluminum alloys and austenitic steels. Simulated model surfaces are filled with candidate filler materials to determine finishing characteristics, adhesion and stability when subjected to cryogenic cycling. Filler material systems are identified which meet requirements for usage with aluminum model components.

  14. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information

    SciTech Connect

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan; Jin, Ke; Du, Yingge; Neeway, James J.; Ryan, Joseph V.; Hu, Dehong; Zhang, Hongliang; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampillai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    For the first time, an argon cluster ion sputtering source has been demonstrated to outperform traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass, SON68, and layered hole-perovskite oxide thin films were selected as model systems due to their fundamental and practical significance. Our study shows that if the size of the analysis areas is the same, the highest sputter rate of argon cluster sputtering can be 2-3 times faster than the highest sputter rates of oxygen or cesium sputtering. More importantly, high-quality data and high sputter rates can be achieved simultaneously for argon cluster sputtering while this is not the case for cesium and oxygen sputtering. Therefore, for deep depth profiling of insulating samples, the measurement efficiency of argon cluster sputtering can be about 6-15 times better than that of traditional cesium and oxygen sputtering. Moreover, for a SrTiO3/SrCrO3 bi-layer thin film on a SrTiO3 substrate, the true 18O/16O isotopic distribution at the interface is better revealed when using the argon cluster sputtering source. Therefore, the implementation of an argon cluster sputtering source can significantly improve the measurement efficiency of insulating materials, and thus can expand the application of ToF-SIMS to the study of glass corrosion, perovskite oxide thin films, and many other potential systems.

  15. Advanced material modelling in numerical simulation of primary acetabular press-fit cup stability.

    PubMed

    Souffrant, R; Zietz, C; Fritsche, A; Kluess, D; Mittelmeier, W; Bader, R

    2012-01-01

    Primary stability of artificial acetabular cups, used for total hip arthroplasty, is required for the subsequent osteointegration and good long-term clinical results of the implant. Although closed-cell polymer foams represent an adequate bone substitute in experimental studies investigating primary stability, correct numerical modelling of this material depends on the parameter selection. Material parameters necessary for crushable foam plasticity behaviour were derived from numerical simulations matched with experimental tests of the polymethacrylimide raw material. Experimental primary stability tests of acetabular press-fit cups, consisting of static shell assembly followed by consecutive pull-out and lever-out testing, were subsequently simulated using finite element analysis. Identified and optimised parameters allowed the accurate numerical reproduction of the raw material tests. Correlation between experimental tests and the numerical simulation of primary implant stability depended on the value of interference fit. However, the validated material model provides the opportunity for subsequent parametric numerical studies. PMID:22817471

  16. Theoretical Development of an Orthotropic Elasto-Plastic Generalized Composite Material Model

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert; Carney, Kelly; DuBois, Paul; Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam; Blankenhorn, Gunther

    2014-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within LSDYNA (Livermore Software Technology Corporation), there are several features that have been identified that could improve the predictive capability of a composite model. To address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed and is being implemented into LS-DYNA as MAT_213. A key feature of the improved material model is the use of tabulated stress-strain data in a variety of coordinate directions to fully define the stress-strain response of the material. To date, the model development efforts have focused on creating the plasticity portion of the model. The Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic yield function with a nonassociative flow rule. The coefficients of the yield function, and the stresses to be used in both the yield function and the flow rule, are computed based on the input stress-strain curves using the effective plastic strain as the tracking variable. The coefficients in the flow rule are computed based on the obtained stress-strain data. The developed material model is suitable for implementation within LS-DYNA for use in analyzing the nonlinear response of polymer composites.

  17. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    NASA Astrophysics Data System (ADS)

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.
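
    The two-layer ONIOM energy that underlies such calculations is a standard extrapolation (general definition, independent of this particular application):

        E_{\mathrm{ONIOM}} = E_{\mathrm{high}}(\mathrm{model}) + E_{\mathrm{low}}(\mathrm{real}) - E_{\mathrm{low}}(\mathrm{model})

    i.e. the inexpensive low-level method is applied to the full ("real") enediyne and corrected by the difference between the high- and low-level treatments of the small "model" core that actually undergoes the Bergman cyclization.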

  18. A robust MRI-compatible system to facilitate highly accurate stereotactic administration of therapeutic agents to targets within the brain of a large animal model

    PubMed Central

    White, E.; Woolley, M.; Bienemann, A.; Johnson, D.E.; Wyatt, M.; Murray, G.; Taylor, H.; Gill, S.S.

    2011-01-01

    Achieving accurate intracranial electrode or catheter placement is critical in clinical practice in order to maximise the efficacy of deep brain stimulation and drug delivery respectively as well as to minimise side-effects. We have developed a highly accurate and robust method for MRI-guided, stereotactic delivery of catheters and electrodes to deep target structures in the brain of pigs. This study outlines the development of this equipment and animal model. Specifically this system enables reliable head immobilisation, acquisition of high-resolution MR images, precise co-registration of MRI and stereotactic spaces and overall rigidity to facilitate accurate burr hole-generation and catheter implantation. To demonstrate the utility of this system, in this study a total of twelve catheters were implanted into the putamen of six Large White Landrace pigs. All implants were accurately placed into the putamen. Target accuracy had a mean Euclidean distance of 0.623 mm (standard deviation of 0.33 mm). This method has allowed us to accurately insert fine cannulae, suitable for the administration of therapeutic agents by convection-enhanced delivery (CED), into the brain of pigs. This study provides summary evidence of a robust system for catheter implantation into the brain of a large animal model. We are currently using this stereotactic system, implantation procedure and animal model to develop catheter-based drug delivery systems that will be translated into human clinical trials, as well as to model the distribution of therapeutic agents administered by CED over large volumes of brain. PMID:21074564

  19. Minimum risk route model for hazardous materials

    SciTech Connect

    Ashtakala, B.; Eno, L.A.

    1996-09-01

    The objective of this study is to determine the minimum risk route for transporting a specific hazardous material (HM) between a point of origin and a point of destination (O-D pair) in the study area which minimizes risk to population and environment. The southern part of Quebec is chosen as the study area and major cities are identified as points of origin and destination on the highway network. Three classes of HM, namely chlorine gas, liquefied petroleum gas (LPG), and sulfuric acid, are chosen. A minimum risk route model has been developed to determine minimum risk routes between an O-D pair by using population or environment risk units as link impedances. The risk units for each link are computed by taking into consideration the probability of an accident and its consequences on that link. The results show that between the same O-D pair, the minimum risk routes are different for various HM. The concept of risk dissipation from origin to destination on the minimum risk route has been developed and dissipation curves are included.
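
    Once per-link risk units are assigned, route selection reduces to a shortest-path problem; a minimal sketch on a toy network follows (the nodes, links and risk values are invented for illustration and are not the Quebec network of the study).

        # Sketch: minimum-risk route as a shortest path over risk-weighted links (toy network).
        import networkx as nx

        G = nx.Graph()
        # (origin, destination, population-risk units per link) -- placeholder values
        G.add_weighted_edges_from([
            ("Montreal", "Laval", 8.0), ("Montreal", "Longueuil", 5.0),
            ("Laval", "Trois-Rivieres", 12.0), ("Longueuil", "Trois-Rivieres", 9.0),
            ("Trois-Rivieres", "Quebec City", 7.0), ("Longueuil", "Sherbrooke", 11.0),
            ("Sherbrooke", "Quebec City", 15.0),
        ], weight="risk")

        route = nx.dijkstra_path(G, "Montreal", "Quebec City", weight="risk")
        total = nx.dijkstra_path_length(G, "Montreal", "Quebec City", weight="risk")
        print(" -> ".join(route), f"(total risk units: {total})")

    Different hazardous-material classes would simply carry different per-link risk attributes, with the same search repeated for each class, which is consistent with the study's finding that the minimum risk routes differ between materials.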

  20. Detection of 15NNH+ in L1544: non-LTE modelling of dyazenilium hyperfine line emission and accurate 14N/15N values

    NASA Astrophysics Data System (ADS)

    Bizzocchi, L.; Caselli, P.; Leonardo, E.; Dore, L.

    2013-07-01

    Context. Samples of pristine solar system material found in meteorites and interplanetary dust particles are highly enriched in 15N. Conspicuous nitrogen isotopic anomalies have also been measured in comets, and the 14N/15N abundance ratio of the Earth is itself higher than the recognised presolar value by almost a factor of two. Low-temperature ion/molecule reactions in the proto-solar nebula have been repeatedly indicated as being responsible for these 15N-enhancements. Aims: We have searched for 15N variants of the N2H+ ion in L1544, a prototypical starless cloud core that is one of the best candidate sources for detection owing to its low central core temperature and high CO depletion. The goal is to evaluate accurate and reliable 14N/15N ratio values for this species in the interstellar gas. Methods: A deep integration of the 15NNH+(1-0) line at 90.4 GHz was obtained with the IRAM 30 m telescope. Non-LTE radiative transfer modelling was performed on the J = 1-0 emissions of the parent and 15N-containing dyazenilium ions, using a Bonnor-Ebert sphere as a model for the source. Results: A high-quality fit of the N2H+(1-0) hyperfine spectrum has allowed us to derive a revised value of the N2H+ column density in L1544. Analysis of the observed N15NH+ and 15NNH+ spectra yielded an abundance ratio N(N15NH+)/N(15NNH+) = 1.1 ± 0.3. The obtained 14N/15N isotopic ratio is ~1000 ± 200, suggestive of a sizeable 15N depletion in this molecular ion. Such a result is not consistent with the prediction of the current nitrogen chemical models. Conclusions: Since chemical models predict high 15N fractionation of N2H+, we suggest that 15N14N, or 15N in some other molecular form, tends to deplete onto dust grains. Based on observations carried out with the IRAM 30 m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). Full Tables B.1-B.6 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http

  1. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation. [PUCSF code

    SciTech Connect

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications.

  2. Modeling ultrashort-pulse laser ablation of dielectric materials

    SciTech Connect

    Christensen, B. H.; Balling, P.

    2009-04-15

    An approach to modeling ablation thresholds and depths in dielectric materials is proposed. The model is based on the multiple-rate-equation description suggested by Rethfeld [Phys. Rev. Lett. 92, 187401 (2004)]. This model has been extended to include a description of the propagation of the light into the dielectric sample. The generic model is based on only a few experimental quantities that characterize the native material. A Drude model describing the evolution of the dielectric constant owing to an excitation of the electrons in the material is applied. The model is compared to experimental ablation data for different dielectric materials from the literature.
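
    The Drude part of such a model is compact enough to sketch directly: given an excited free-electron density, the dielectric function and surface reflectivity follow from textbook expressions. The collision time, native permittivity and wavelength below are placeholders, not fitted values from the paper.

        # Sketch: Drude correction to the dielectric constant of an excited dielectric.
        import numpy as np
        from scipy.constants import e, epsilon_0, m_e, c, pi

        wavelength = 800e-9                         # laser wavelength (m), placeholder
        omega = 2.0 * pi * c / wavelength           # angular frequency of the light
        eps_native = 2.25                           # unexcited dielectric constant, placeholder
        tau = 1.0e-15                               # electron collision time (s), placeholder

        def eps_drude(n_e, m_eff=1.0):
            """Dielectric function with a free-carrier (Drude) contribution."""
            wp2 = n_e * e**2 / (epsilon_0 * m_eff * m_e)      # plasma frequency squared
            return eps_native - wp2 / (omega**2 + 1j * omega / tau)

        for n_e in (1e25, 1e26, 1e27):              # excited electron densities (m^-3)
            eps = complex(eps_drude(n_e))
            nref = np.sqrt(eps)                     # complex refractive index
            R = abs((nref - 1.0) / (nref + 1.0)) ** 2
            print(f"n_e = {n_e:.0e} m^-3: eps = {eps.real:.2f} + {eps.imag:.2f}j, R = {R:.2f}")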

  3. Modelling challenges for battery materials and electrical energy storage

    NASA Astrophysics Data System (ADS)

    Muller, Richard P.; Schultz, Peter A.

    2013-10-01

    Many vital requirements in world-wide energy production, from the electrification of transportation to better utilization of renewable energy production, depend on developing economical, reliable batteries with improved performance characteristics. Batteries reduce the need for gasoline and liquid hydrocarbons in an electrified transportation fleet, but need to be lighter, longer-lived and have higher energy densities, without sacrificing safety. Lighter and higher-capacity batteries make portable electronics more convenient. Less expensive electrical storage accelerates the introduction of renewable energy to electrical grids by buffering intermittent generation from solar or wind. Meeting these needs will probably require dramatic changes in the materials and chemistry used by batteries for electrical energy storage. New simulation capabilities, in both methods and computational resources, promise to fundamentally accelerate and advance the development of improved materials for electric energy storage. To fulfil this promise significant challenges remain, both in accurate simulations at various relevant length scales and in the integration of relevant information across multiple length scales. This focus section of Modelling and Simulation in Materials Science and Engineering surveys the challenges of modelling for energy storage, describes recent successes, identifies remaining challenges, considers various approaches to surmount these challenges and discusses the potential of these methods for future battery development. Zhang et al begin with atoms and electrons, with a review of first-principles studies of the lithiation of silicon electrodes, and then Fan et al examine the development and use of interatomic potentials to the study the mechanical properties of lithiated silicon in larger atomistic simulations. Marrocchelli et al study ionic conduction, an important aspect of lithium-ion battery performance, simulated by molecular dynamics. Emerging high

  4. Exploring the interdependencies between parameters in a material model.

    SciTech Connect

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
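
    Since the demonstration uses the Johnson-Cook plasticity model, its flow-stress expression is worth stating; the snippet evaluates the standard form with parameter values of the order published for steels (placeholders, not the dataset analysed in the report).

        # Sketch: Johnson-Cook flow stress (standard form, placeholder parameters).
        import numpy as np

        def johnson_cook(eps_p, eps_rate, T,
                         A=792e6, B=510e6, n=0.26, C=0.014, m=1.03,
                         eps0=1.0, T_room=293.0, T_melt=1793.0):
            """sigma = (A + B*eps_p**n) * (1 + C*ln(rate/eps0)) * (1 - T_star**m)."""
            T_star = (T - T_room) / (T_melt - T_room)
            return (A + B * eps_p**n) * (1.0 + C * np.log(eps_rate / eps0)) * (1.0 - T_star**m)

        print(f"{johnson_cook(0.1, 1000.0, 600.0) / 1e6:.0f} MPa")

    Detecting interdependencies between parameters, as the abstract describes, would then amount to asking which combinations of A, B, n, C and m can be varied together without changing the predicted flow stress over the regime of interest.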

  5. Estimating proportions of materials using mixture models

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Basu, R.

    1983-01-01

    An approach to proportion estimation based on the notion of a mixture model, appropriate parametric forms for a mixture model that appears to fit observed remotely sensed data, methods for estimating the parameters in these models, methods for labelling proportion determination from the mixture model, and methods which use the mixture model estimates as auxiliary variable values in some proportion estimation schemes are addressed.
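
    A hedged, minimal illustration of proportion estimation with a mixture model follows: a two-component Gaussian mixture fitted by expectation-maximization, with synthetic one-dimensional "spectral" values standing in for the remotely sensed data.

        # Sketch: estimate class proportions by fitting a Gaussian mixture (synthetic data).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Two "materials" with different mean reflectance; true proportions 0.7 / 0.3.
        data = np.concatenate([rng.normal(0.35, 0.05, 700),
                               rng.normal(0.60, 0.04, 300)]).reshape(-1, 1)

        gm = GaussianMixture(n_components=2, random_state=0).fit(data)
        order = np.argsort(gm.means_.ravel())
        print("estimated proportions:", np.round(gm.weights_[order], 3))
        print("estimated means:      ", np.round(gm.means_.ravel()[order], 3))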

  6. Effects of Material Degradation on the Structural Integrity of Composite Materials: Experimental Investigation and Modeling of High Temperature Degradation Mechanisms

    NASA Technical Reports Server (NTRS)

    Cunningham, Ronan A.; McManus, Hugh L.

    1996-01-01

    It has previously been demonstrated that simple coupled reaction-diffusion models can approximate the aging behavior of PMR-15 resin subjected to different oxidative environments. Based on empirically observed phenomena, a model coupling chemical reactions, both thermal and oxidative, with diffusion of oxygen into the material bulk should allow simulation of the aging process. Through preliminary modeling techniques such as this it has become apparent that accurate analytical models cannot be created until the phenomena which cause the aging of these materials are quantified. An experimental program is currently underway to quantify all of the reaction/diffusion related mechanisms involved. The following contains a summary of the experimental data which has been collected through thermogravimetric analyses of neat PMR-15 resin, along with analytical predictions from models based on the empirical data. Thermogravimetric analyses were carried out in a number of different environments - nitrogen, air and oxygen. The nitrogen provides data for the purely thermal degradation mechanisms while those in air provide data for the coupled oxidative-thermal process. The intent here is to effectively subtract the nitrogen atmosphere data (assumed to represent only thermal reactions) from the air and oxygen atmosphere data to back-figure the purely oxidative reactions. Once purely oxidative (concentration dependent) reactions have been quantified it should then be possible to quantify the diffusion of oxygen into the material bulk.

  7. Micromechanical modeling of heterogeneous energetic materials

    SciTech Connect

    Baer, M.R.; Kipp, M.E.; Swol, F. van

    1998-09-01

    In this work, the mesoscale processes of consolidation, deformation and reaction of shocked porous energetic materials are studied using shock physics analysis of impact on a collection of discrete HMX crystals. High resolution three-dimensional CTH simulations indicate that rapid deformation occurs at material contact points causing large amplitude fluctuations of stress states having wavelengths of the order of several particle diameters. Localization of energy produces hot-spots due to shock focusing and plastic work near grain boundaries as material flows to interstitial regions. These numerical experiments demonstrate that hot-spots are strongly influenced by multiple crystal interactions. Chemical reaction processes also produce multiple wave structures associated with particle distribution effects. This study provides new insights into the micromechanical behavior of heterogeneous energetic materials strongly suggesting that initiation and reaction of shocked heterogeneous materials involves states distinctly different than single jump state descriptions.

  8. Process modeling for carbon-phenolic nozzle materials

    NASA Technical Reports Server (NTRS)

    Letson, Mischell A.; Bunker, Robert C.; Remus, Walter M., III; Clinton, R. G.

    1989-01-01

    A thermochemical model based on the SINDA heat transfer program is developed for carbon-phenolic nozzle material processes. The model can be used to optimize cure cycles and to predict material properties based on the types of materials and the process by which these materials are used to make nozzle components. Chemical kinetic constants for Fiberite MX4926 were determined so that optimization of cure cycles for the current Space Shuttle Solid Rocket Motor nozzle rings can be determined.

  9. Development of a mechanism and an accurate and simple mathematical model for the description of drug release: Application to a relevant example of acetazolamide-controlled release from a bio-inspired elastin-based hydrogel.

    PubMed

    Fernández-Colino, A; Bermudez, J M; Arias, F J; Quinteros, D; Gonzo, E

    2016-04-01

    Transversality between mathematical modeling, pharmacology, and materials science is essential in order to achieve controlled-release systems with advanced properties. In this regard, the area of biomaterials provides a platform for the development of depots that are able to achieve controlled release of a drug, whereas pharmacology strives to find new therapeutic molecules and mathematical models have a connecting function, providing a rational understanding by modeling the parameters that influence the release observed. Herein we present a mechanism which, based on reasonable assumptions, explains the experimental data obtained very well. In addition, we have developed a simple and accurate “lumped” kinetics model to correctly fit the experimentally observed drug-release behavior. This lumped model allows us to have simple analytic solutions for the mass and rate of drug release as a function of time without limitations of time or mass of drug released, which represents an important step forward in the area of in vitro drug delivery when compared to the current state of the art in mathematical modeling. As an example, we applied the mechanism and model to the release data for acetazolamide from a recombinant polymer. Both materials were selected because of a need to develop a suitable ophthalmic formulation for the treatment of glaucoma. The in vitro release model proposed herein provides a valuable predictive tool for ensuring product performance and batch-to-batch reproducibility, thus paving the way for the development of further pharmaceutical devices. PMID:26838852

  10. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-07-01

    Routine measurements of the beam irradiance at normal incidence (DNI) include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and that from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates if the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and a collocated Sun and Aureole Measurement (SAM) instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 5 %, a relative bias of +1 % and a coefficient of determination greater than 0.97. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is presented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and a bias of 22 % and -19 %, respectively, and a coefficient of determination of 0.89. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard DNI measurements.
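
    For reference, the Henyey-Greenstein kernel and the two-term combination used to represent the AERONET aerosol phase function are (normalization conventions vary between radiative transfer codes):

        P_{\mathrm{HG}}(g, \theta) = \frac{1 - g^{2}}{\left( 1 + g^{2} - 2 g \cos\theta \right)^{3/2}},
        \qquad
        P_{\mathrm{TTHG}}(\theta) = f\, P_{\mathrm{HG}}(g_1, \theta) + (1 - f)\, P_{\mathrm{HG}}(g_2, \theta)

    with 0 \le f \le 1, a forward-scattering lobe g_1 > 0 and a second lobe g_2 capturing the remaining (typically backward) scattering; the three parameters are fitted to the tabulated AERONET phase function.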

  11. Modelling cohesive, frictional and viscoplastic materials

    NASA Astrophysics Data System (ADS)

    Alehossein, Habib; Qin, Zongyi

    2016-06-01

    Most materials in mining and civil engineering construction are not only viscoplastic but also cohesive-frictional. Fresh concrete, fly ash and mining slurries are all granular, frictional, viscoplastic fluids, although solid concrete is normally considered a cohesive-frictional material. Presented here are both a formulation of the pipe and disc flow rates as functions of pressure and pressure gradient, and a CFD application to fresh concrete flow in L-box tests.

  12. Interatomic Potential Models for Ionic Materials

    NASA Astrophysics Data System (ADS)

    Gale, Julian D.

    Ionic materials are present in many key technological applications of the modern era, from solid state batteries and fuel cells, nuclear waste immobilization, through to industrial heterogeneous catalysis, such as that found in automotive exhaust systems. With the boundless possibilities for their utilization, it is natural that there has been a long history of computer simulation of their structure and properties in order to understand the materials science of these systems at the atomic level.

  13. Fabrication, Characterization and Modeling of Functionally Graded Materials

    NASA Astrophysics Data System (ADS)

    Lee, Po-Hua

    model. This method is initially applied to study the case of one drop moving in a viscous fluid; the solution recovers the closed form classic solution when the drop is spherical. Moreover, this method is general and can be applied to the cases of different drop shapes and the interaction between multiple drops. The translation velocities of the drops depend on the relative position, the center-to-center distance of drops, the viscosity and size of drops. For the case of a pair of identical spherical drops, the present method using a linear approximation of the eigenstrain rate has provided a very close solution to the classic explicit solution. If a higher order of the polynomial form of the eigenstrain rate is used, one can expect a more accurate result. To meet the final goal of mass production of the aforementioned Al-HDPE FGM, a faster and more economical manufacturing method based on vibration is proposed. The particle segregation of larger aluminum particles embedded in the concentrated suspension of smaller high-density polyethylene is investigated under vibration with different frequencies and magnitudes. Altering experimental parameters including time and amplitude of vibration, the suspension exhibits different particle segregation patterns: uniform-like, graded and bi-layered. For material characterization, small cylinder films of Al-HDPE system FGM are obtained after the drying, melting and solidification stages. Solar panel prototypes are fabricated and tested at different water flow rates and solar irradiation intensities. The temperature distribution in the solar panel is measured and simulated to evaluate the performance of the solar panel. Finite element simulation results are very consistent with the experimental data. The understanding of heat transfer in the hybrid solar panel prototypes gained through this study will provide a foundation for future solar panel design and optimization.

  14. Impact Testing of Aluminum 2024 and Titanium 6Al-4V for Material Model Development

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael; Revilock, Duane M.; Lerch, Bradley A.; Ruggeri, Charles R.

    2013-01-01

    One of the difficulties with developing and verifying accurate impact models is that parameters such as high strain rate material properties, failure modes, static properties, and impact test measurements are often obtained from a variety of different sources using different materials, with little control over consistency among the different sources. In addition there is often a lack of quantitative measurements in impact tests to which the models can be compared. To alleviate some of these problems, a project is underway to develop a consistent set of material property, impact test data and failure analysis for a variety of aircraft materials that can be used to develop improved impact failure and deformation models. This project is jointly funded by the NASA Glenn Research Center and the FAA William J. Hughes Technical Center. Unique features of this set of data are that all material property data and impact test data are obtained using identical material, the test methods and procedures are extensively documented and all of the raw data is available. Four parallel efforts are currently underway: Measurement of material deformation and failure response over a wide range of strain rates and temperatures and failure analysis of material property specimens and impact test articles conducted by The Ohio State University; development of improved numerical modeling techniques for deformation and failure conducted by The George Washington University; impact testing of flat panels and substructures conducted by NASA Glenn Research Center. This report describes impact testing which has been done on aluminum (Al) 2024 and titanium (Ti) 6Al-4vanadium (V) sheet and plate samples of different thicknesses and with different types of projectiles, one a regular cylinder and one with a more complex geometry incorporating features representative of a jet engine fan blade. Data from this testing will be used in validating material models developed under this program. The material

  15. Constitutive and damage material modeling in a high pressure hydrogen environment

    NASA Astrophysics Data System (ADS)

    Russell, D. A.; Fritzemeier, L. G.

    1991-05-01

    Numerous components in reusable space propulsion systems such as the SSME are exposed to high pressure gaseous hydrogen environments. Flow areas and passages in the fuel turbopump, fuel and oxidizer preburners, main combustion chamber, and injector assembly contain high pressure hydrogen either high in purity or as hydrogen rich steam. Accurate constitutive and damage material models applicable to high pressure hydrogen environments are therefore needed for engine design and analysis. Existing constitutive and cyclic crack initiation models were evaluated only for conditions of oxidizing environments. The main objective is to evaluate these models for applicability to high pressure hydrogen environments.

  16. Constitutive and damage material modeling in a high pressure hydrogen environment

    NASA Technical Reports Server (NTRS)

    Russell, D. A.; Fritzemeier, L. G.

    1991-01-01

    Numerous components in reusable space propulsion systems such as the SSME are exposed to high pressure gaseous hydrogen environments. Flow areas and passages in the fuel turbopump, fuel and oxidizer preburners, main combustion chamber, and injector assembly contain high pressure hydrogen either high in purity or as hydrogen rich steam. Accurate constitutive and damage material models applicable to high pressure hydrogen environments are therefore needed for engine design and analysis. Existing constitutive and cyclic crack initiation models were evaluated only for conditions of oxidizing environments. The main objective is to evaluate these models for applicability to high pressure hydrogen environments.

  17. Strength and failure models for epoxy mortar polymer concrete materials

    SciTech Connect

    Salami, M.R.; Zhao, S.

    1995-06-01

    Since polymer concrete materials are used in construction, there is a need for developing a fundamental failure and constitutive model for predicting material behavior. The present research is undertaken as an initial step toward developing a fundamental failure and constitutive model for polymer concrete materials, as well as providing benchmark data on the strength and failure characteristics of material specimens for future work. The failure model will be developed by introducing a failure function. This model will predict the changes in constitutive properties and resistance values in aggressive environments.

  18. SRM (Solid Rocket Motor) propellant and polymer materials structural modeling

    NASA Technical Reports Server (NTRS)

    Moore, Carleton J.

    1988-01-01

    The following investigation reviews and evaluates the use of stress relaxation test data for the structural analysis of Solid Rocket Motor (SRM) propellants and other polymer materials used for liners, insulators, inhibitors, and seals. The stress relaxation data is examined and a new mathematical structural model is proposed. This model has potentially wide application to structural analysis of polymer materials and other materials generally characterized as being made of viscoelastic materials. A dynamic modulus is derived from the new model for stress relaxation modulus and is compared to the old viscoelastic model and experimental data.

  19. Coupling 1D Navier Stokes equation with autoregulation lumped parameter networks for accurate cerebral blood flow modeling

    NASA Astrophysics Data System (ADS)

    Ryu, Jaiyoung; Hu, Xiao; Shadden, Shawn C.

    2014-11-01

    The cerebral circulation is unique in its ability to maintain blood flow to the brain under widely varying physiologic conditions. Incorporating this autoregulatory response is critical to cerebral blood flow modeling, as well as investigations into pathological conditions. We discuss a one-dimensional nonlinear model of blood flow in the cerebral arteries that includes coupling of autoregulatory lumped parameter networks. The model is tested to reproduce a common clinical test to assess autoregulatory function - the carotid artery compression test. The change in the flow velocity at the middle cerebral artery (MCA) during carotid compression and release demonstrated strong agreement with published measurements. The model is then used to investigate vasospasm of the MCA, a common clinical concern following subarachnoid hemorrhage. Vasospasm was modeled by prescribing vessel area reduction in the middle portion of the MCA. Our model showed similar increases in velocity for moderate vasospasms; however, for severe vasospasm (~90% area reduction), the blood flow velocity decreased due to blood flow rerouting. This demonstrates a potentially important phenomenon, which otherwise would lead to false-negative decisions on clinical vasospasm if not properly anticipated.
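
    As a sketch of the lumped-parameter outflow coupling (the record does not give the network equations; a three-element Windkessel is shown here as a common building block, with autoregulation typically introduced by letting the distal resistance vary with perfusion pressure or flow):

        C \frac{dP}{dt} + \frac{P}{R_d} = Q \left( 1 + \frac{R_p}{R_d} \right) + C R_p \frac{dQ}{dt}

    where Q and P are the flow and pressure at the 1D terminal, R_p and R_d are the proximal and distal resistances, and C is the compliance.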

  20. Detailed and Highly Accurate 3d Models of High Mountain Areas by the Macs-Himalaya Aerial Camera Platform

    NASA Astrophysics Data System (ADS)

    Brauchle, J.; Hein, D.; Berger, R.

    2015-04-01

    Remote sensing in areas with extreme altitude differences is particularly challenging. In high mountain areas specifically, steep slopes result in reduced ground pixel resolution and degraded quality in the DEM. Exceptionally high brightness differences can in part no longer be imaged by the sensors. Nevertheless, detailed information about mountainous regions is highly relevant: time and again glacier lake outburst floods (GLOFs) and debris avalanches claim dozens of victims. Glaciers are sensitive to climate change and must be carefully monitored. Very detailed and accurate 3D maps provide a basic tool for the analysis of natural hazards and the monitoring of glacier surfaces in high mountain areas. There is a gap here, because the desired accuracies are often not achieved. It is for this reason that the DLR Institute of Optical Sensor Systems has developed a new aerial camera, the MACS-Himalaya. The measuring unit comprises four camera modules with an overall aperture angle of 116° perpendicular to the direction of flight. A High Dynamic Range (HDR) mode was introduced so that within a scene, bright areas such as sun-flooded snow and dark areas such as shaded stone can be imaged. In 2014, a measuring survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at heights up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced. Special challenges and gaps in the investigation of high mountain areas, approaches for resolution of these problems, the camera system and the state of evaluation are presented with examples.

  1. Creep characterization of gels and nonlinear viscoelastic material model

    NASA Astrophysics Data System (ADS)

    Ishikawa, Kiyotaka; Fujikawa, Masaki; Makabe, Chobin; Tanaka, Kou

    2016-07-01

    In this paper, we examine gel creep behavior and develop a material model for useful and simple numerical simulation of this behavior. This study has three stages and aims: (1) gel creep behavior is examined; (2) the material model is determined and the material constants are identified; and (3) the versatility of the material model and the constants are evaluated. The creep behavior is found to be independent of the initial stress level in the present experiment. Thus, the viscoelastic model proposed by Simo is selected, and its material constants are identified using the results of creep tests. Moreover, from the results of numerical calculations and experiments, it is found that the chosen material model has good reproducibility, predictive performance and high versatility.
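
    Simo-type finite-strain viscoelasticity is commonly implemented with a Prony-series relaxation function acting on the deviatoric response; a sketch of that ingredient is given below (the constants identified in the study are not reproduced here):

        g(t) = g_\infty + \sum_{i=1}^{N} g_i\, e^{-t/\tau_i},
        \qquad g_\infty + \sum_{i=1}^{N} g_i = 1

    The relative moduli g_i and relaxation times \tau_i are the material constants typically identified from creep or relaxation tests such as those described above.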

  2. Simplified biased random walk model for RecA-protein-mediated homology recognition offers rapid and accurate self-assembly of long linear arrays of binding sites

    PubMed Central

    Kates-Harbeck, Julian; Tilloy, Antoine; Prentiss, Mara

    2016-01-01

    Inspired by RecA-protein-based homology recognition, we consider the pairing of two long linear arrays of binding sites. We propose a fully reversible, physically realizable biased random walk model for rapid and accurate self-assembly due to the spontaneous pairing of matching binding sites, where the statistics of the searched sample are included. In the model, there are two bound conformations, and the free energy for each conformation is a weakly nonlinear function of the number of contiguous matched bound sites. PMID:23944487

  3. Accurate relativistic adapted Gaussian basis sets for francium through Ununoctium without variational prolapse and to be used with both uniform sphere and Gaussian nucleus models.

    PubMed

    Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade

    2013-10-15

    Accurate relativistic adapted Gaussian basis sets (RAGBSs) for atoms from Fr (Z = 87) up to Uuo (Z = 118) without variational prolapse were developed here with the use of a polynomial version of the Generator Coordinate Dirac-Fock method. Two finite nuclear models have been used, the Gaussian and uniform sphere models. The largest RAGBS error, with respect to numerical Dirac-Fock results, is 15.4 millihartree for ununoctium with a basis set size of 33s30p19d14f functions. PMID:23913741

  4. RCK: accurate and efficient inference of sequence- and structure-based protein–RNA binding models from RNAcompete data

    PubMed Central

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-01-01

    Motivation: Protein–RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein–RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein RNA-binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240 000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state-of-the-art in sequence models, Deepbind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNACompete dataset. Results: We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data is designed to be unstructured, RCK can still learn structural preferences from it. RCK significantly outperforms both RNAcontext and Deepbind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein–RNA structure-based models on an unprecedented scale. Availability and Implementation: Software and models are freely available at http://rck.csail.mit.edu/ Contact: bab@mit.edu Supplementary information

  5. An accurate radiative heating and cooling algorithm for use in a dynamical model of the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1982-01-01

    The circulation of the middle atmosphere of the earth (15-90 km) is driven by the unequal distribution of net radiative heating. Calculations have shown that local radiative heating is nearly balanced by radiative cooling throughout parts of the stratosphere and mesosphere. The 15 micrometer band of CO2 is the dominant component of the infrared cooling. The present investigation is concerned with an algorithm regarding the involved cooling process. The algorithm was designed for the semispectral primitive equation model of the stratosphere and mesosphere described by Holton and Wehrbein (1980). The model consists of 16 layers, each nominally 5 km thick, between the base of the stratosphere at 100 mb (approximately 16 km) and the base of the thermosphere (approximately 96 km). The considered algorithm provides a convenient means of incorporating cooling due to CO2 into dynamical models of the middle atmosphere.

  6. Multiscale Modeling of Carbon/Phenolic Composite Thermal Protection Materials: Atomistic to Effective Properties

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Murthy, Pappu L.; Bednarcyk, Brett A.; Lawson, John W.; Monk, Joshua D.; Bauschlicher, Charles W., Jr.

    2016-01-01

    Next generation ablative thermal protection systems are expected to consist of 3D woven composite architectures. It is well known that composites can be tailored to achieve desired mechanical and thermal properties in various directions and thus can be made fit-for-purpose if the proper combination of constituent materials and microstructures can be realized. In the present work, the first multiscale, atomistically-informed computational analysis of mechanical and thermal properties of a present-day carbon/phenolic composite Thermal Protection System (TPS) material is conducted. Model results are compared to measured in-plane and out-of-plane mechanical and thermal properties to validate the computational approach. Results indicate that given sufficient microstructural fidelity, along with lower-scale, constituent properties derived from molecular dynamics simulations, accurate composite level (effective) thermo-elastic properties can be obtained. This suggests that next generation TPS properties can be accurately estimated via atomistically informed multiscale analysis.

  7. Experiments with a low-cost system for computer graphics material model acquisition

    NASA Astrophysics Data System (ADS)

    Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David

    2015-03-01

    We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.

  8. Finite element implementation of a new model of slight compressibility for transversely isotropic materials.

    PubMed

    Pierrat, B; Murphy, J G; MacManus, D B; Gilchrist, M D

    2016-01-01

    Modelling transversely isotropic materials in finite strain problems is a complex task in biomechanics, and is usually addressed by using finite element (FE) simulations. The standard method developed to account for the quasi-incompressible nature of soft tissues is to decompose the strain energy function (SEF) into volumetric and deviatoric parts. However, this decomposition is only valid for fully incompressible materials, and its use for slightly compressible materials yields an unphysical response during the simulation of hydrostatic tension/compression of a transversely isotropic material. This paper presents the FE implementation as subroutines of a new volumetric model solving this deficiency in two FE codes: Abaqus and FEBio. This model also has the specificity of restoring the compatibility with small strain theory. The stress and elasticity tensors are first derived for a general SEF. This is followed by a successful convergence check using a particular SEF and a suite of single-element tests showing that this new model not only corrects the hydrostatic deficiency but may also affect stresses during shear tests (Poynting effect) and lateral stretches during uniaxial tests (Poisson's effect). These FE subroutines have numerous applications including the modelling of tendons, ligaments, heart tissue, etc. The biomechanics community should be aware of the specificities of the standard model, and the new model should be used when accurate FE results are desired in the case of compressible materials. PMID:26252069
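
    For orientation, the standard volumetric-isochoric split that the paper re-examines is sketched below; the new volumetric term proposed in the paper is not reproduced here.

        \mathbf{F} = J^{1/3} \bar{\mathbf{F}}, \qquad J = \det \mathbf{F},
        \qquad
        \Psi = \Psi_{\mathrm{vol}}(J) + \Psi_{\mathrm{iso}}\left( \bar{\mathbf{C}},\, \mathbf{a}_0 \otimes \mathbf{a}_0 \right)

    Here \bar{\mathbf{C}} = \bar{\mathbf{F}}^{\mathsf{T}} \bar{\mathbf{F}} and \mathbf{a}_0 is the fiber direction; the deficiency discussed above is the unphysical response this split produces under hydrostatic loading when the material is only slightly compressible.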

  9. Advanced Process Model for Polymer Pyrolysis and Uranium Ceramic Material Processing

    SciTech Connect

    Wang, Xiaolin; Zunjarrao, Suraj C.; Zhang, Hui; Singh, Raman P.

    2006-07-01

    Silicon carbide (SiC) based uranium ceramic material can be fabricated as hosts for ultra high temperature applications, such as gas-cooled fast reactor fuels and in-core materials. A pyrolysis-based material processing technique allows for the fabrication of SiC based uranium ceramic materials at a lower temperature compared to the sintering route. Modeling of the process is considered important for optimizing the fabrication and producing material with high uniformity. This study presents a process model describing polymer pyrolysis and uranium ceramic material processing, including heat transfer, polymer pyrolysis, SiC crystallization, chemical reactions, and species transport of a porous uranium oxide mixed polymer. Three key reactions for polymer pyrolysis and one key reaction for the uranium oxide-polymer interaction are established for the processing. Included in the model formulation are the effects of transport processes such as heat-up, polymer decomposition, and volatiles escape. The model is capable of accurately predicting the polymer pyrolysis and chemical reactions of the source material. Processing of a sample with certain geometry is simulated. The effects of heating rate, particle size and volume ratio of uranium oxide and polymer on porosity evolution, species uniformity and reaction rate are investigated. (authors)

  10. Efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere: a space-variant volumetric image blur method

    NASA Astrophysics Data System (ADS)

    Reinhardt, Colin N.; Ritcey, James A.

    2015-09-01

    We present a novel method for efficient and physically-accurate modeling & simulation of anisoplanatic imaging through the atmosphere; in particular we present a new space-variant volumetric image blur algorithm. The method is based on the use of physical atmospheric meteorology models, such as vertical turbulence profiles and aerosol/molecular profiles which can be in general fully spatially-varying in 3 dimensions and also evolving in time. The space-variant modeling method relies on the metadata provided by 3D computer graphics modeling and rendering systems to decompose the image into a set of slices which can be treated in an independent but physically consistent manner to achieve simulated image blur effects which are more accurate and realistic than the homogeneous and stationary blurring methods which are commonly used today. We also present a simple illustrative example of the application of our algorithm, and show its results and performance are in agreement with the expected relative trends and behavior of the prescribed turbulence profile physical model used to define the initial spatially-varying environmental scenario conditions. We present the details of an efficient Fourier-transform-domain formulation of the SV volumetric blur algorithm and detailed algorithm pseudocode description of the method implementation and clarification of some nonobvious technical details.
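
    A minimal sketch of the slice-wise idea (depth slices blurred independently in the Fourier domain with their own point-spread functions, then composited) is given below. The function names and the Gaussian PSFs are illustrative assumptions, not the authors' physically derived, turbulence-profile-dependent kernels, and the simple additive compositing ignores the occlusion handling a renderer would provide.

        import numpy as np

        def gaussian_psf(shape, sigma):
            # Centered, unit-sum Gaussian PSF (illustrative stand-in for a physically
            # derived, depth-dependent atmospheric blur kernel).
            ny, nx = shape
            yy, xx = np.meshgrid(np.arange(ny) - ny // 2,
                                 np.arange(nx) - nx // 2, indexing="ij")
            psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
            return psf / psf.sum()

        def space_variant_blur(depth_slices, sigmas):
            # Blur each depth slice with its own PSF via FFT convolution,
            # then composite additively.
            out = np.zeros_like(depth_slices[0], dtype=float)
            for img, sigma in zip(depth_slices, sigmas):
                otf = np.fft.fft2(np.fft.ifftshift(gaussian_psf(img.shape, sigma)))
                out += np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
            return out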

  11. Developing Interactive Instructional Materials: A Model.

    ERIC Educational Resources Information Center

    Henderson, Craig; And Others

    Many colleges and departments at Tennessee Technological University, as well as most other major universities, are progressing toward more interactive instructional materials. The benefits of implementing instructional technology are numerous and diverse. However, because of increasingly austere budgets, a focused and cost-effective approach to…

  12. RADIOACTIVE MATERIALS IN BIOSOLIDS: DOSE MODELING

    EPA Science Inventory

    The Interagency Steering Committee on Radiation Standards (ISCORS) has recently completed a study of the occurrence within the United States of radioactive materials in sewage sludge and sewage incineration ash. One component of that effort was an examination of the possible tra...

  13. Comparisons of a Constrained Least Squares Model versus Human-in-the-Loop for Spectral Unmixing to Determine Material Type of GEO Debris

    NASA Technical Reports Server (NTRS)

    Abercromby, Kira J.; Rapp, Jason; Bedard, Donald; Seitzer, Patrick; Cardona, Tommaso; Cowardin, Heather; Barker, Ed; Lederer, Susan

    2013-01-01

    The constrained linear least squares model is generally more accurate than the "human-in-the-loop" approach. However, a human in the loop can remove candidate materials that make no physical sense. The speed of the model in determining a "first cut" at the material ID makes it a viable option for spectral unmixing of debris objects.
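
    A minimal sketch of such a fully constrained linear unmixing step is shown below, assuming a library E of laboratory endmember spectra (one column per candidate material) and a measured debris spectrum y; the abundances are constrained to be non-negative and to sum to one. The variable names are hypothetical and this is not the code used in the study.

        import numpy as np
        from scipy.optimize import minimize

        def unmix(E, y):
            # E: (n_wavelengths, n_materials) endmember spectra; y: observed spectrum.
            # Solve min ||E x - y||^2 subject to x >= 0 and sum(x) == 1.
            n = E.shape[1]
            x0 = np.full(n, 1.0 / n)
            res = minimize(lambda x: np.sum((E @ x - y) ** 2), x0,
                           method="SLSQP",
                           bounds=[(0.0, 1.0)] * n,
                           constraints=[{"type": "eq",
                                         "fun": lambda x: np.sum(x) - 1.0}])
            return res.x  # estimated material fractions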

  14. Can Impacts of Climate Change and Agricultural Adaptation Strategies Be Accurately Quantified if Crop Models Are Annually Re-Initialized?

    PubMed Central

    Basso, Bruno; Hyndman, David W.; Kendall, Anthony D.; Grace, Peter R.; Robertson, G. Philip

    2015-01-01

    Estimates of climate change impacts on global food production are generally based on statistical or process-based models. Process-based models can provide robust predictions of agricultural yield responses to changing climate and management. However, applications of these models often suffer from bias due to the common practice of re-initializing soil conditions to the same state for each year of the forecast period. If simulations neglect to include year-to-year changes in initial soil conditions and water content related to agronomic management, adaptation and mitigation strategies designed to maintain stable yields under climate change cannot be properly evaluated. We apply a process-based crop system model that avoids re-initialization bias to demonstrate the importance of simulating both year-to-year and cumulative changes in pre-season soil carbon, nutrient, and water availability. Results are contrasted with simulations using annual re-initialization, and differences are striking. We then demonstrate the potential for the most likely adaptation strategy to offset climate change impacts on yields using continuous simulations through the end of the 21st century. Simulations that annually re-initialize pre-season soil carbon and water contents introduce an inappropriate yield bias that obscures the potential for agricultural management to ameliorate the deleterious effects of rising temperatures and greater rainfall variability. PMID:26043188

  15. The accurate determination of bismuth in lead concentrates and other non-ferrous materials by AAS after separation and preconcentration of the bismuth with mercaptoacetic acid.

    PubMed

    Howell, D J; Dohnt, B R

    1982-05-01

    A method for determining 0.0001% and upwards of bismuth in lead, zinc or copper concentrates, metals or alloys and other smelter residues is described. Bismuth is separated from lead, iron and gangue materials with mercaptoacetic acid after reduction of the iron with hydrazine. Large quantities of tin can be removed during the dissolution. An additional separation is made for materials high in copper and/or sulphate. The separated and concentrated bismuth is determined by atomic-absorption spectrometry using the Bi line at 223.1 nm. The proposed method also allows the simultaneous separation and determination of silver. PMID:18963145

  16. A guide to using material model No. 11 in NIKE2D: An internal variable, viscoplasticity model

    SciTech Connect

    Flower, E.C.; Nikkel, D.J. Jr.

    1990-10-30

    The need to accurately model the superplastic forming process, which is highly rate and temperature dependent, motivated the evaluation of Bammann's internal variable, viscoplasticity material model. The model is based upon the concepts of unified creep plasticity, but employs a yield surface for efficient implementation into large-scale numerical computer codes. It has proven elsewhere to be quite successful in describing large strain, thermal-mechanical behavior of crystalline materials. Features of the model enable it to simulate the apparent strain-rate behavior exhibited by many metals above one half the melt temperature. It is the efficient incorporation of these features that makes the model attractive for use in finite element modeling of metal deformation processes. Although this model was implemented into the Lawrence Livermore National Laboratory's NIKE2D finite element program in 1986, there have been no known reports of successful use by NIKE2D users. The purpose of this report is to provide the user the proper format to input model parameters, a procedure for determining appropriate values for material constants from experimental data, and supplemental information on the model relevant to the implementation in the NIKE2D finite element program. Detailed accounts of the theoretical aspects of the model can be found in the cited references. 4 refs., 8 figs.

  17. Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties

    PubMed Central

    Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon

    2014-01-01

    Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method: Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results: Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions: Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874

  18. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids

    SciTech Connect

    Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.

    2014-12-14

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein–Zernike equations, with results from the Kovalenko–Hirata closure being closest to experiment for the cases studied here.

  19. A two-parameter kinetic model based on a time-dependent activity coefficient accurately describes enzymatic cellulose digestion

    PubMed Central

    Kostylev, Maxim; Wilson, David

    2014-01-01

    Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass degrading enzymes from a cellulolytic bacterium Thermobifida fusca we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates and where a steady state is not achievable. PMID:23837567
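
    For orientation, Kopelman's fractal-like kinetics replaces the constant rate coefficient of classical kinetics with a time-dependent one; a schematic form is shown below (the paper's own two-parameter formulation is not reproduced here):

        k(t) = k_0\, t^{-h}, \qquad 0 \le h < 1

    Setting h = 0 recovers classical kinetics; in the heterogeneous, non-steady-state setting of cellulose digestion, the effective hydrolysis rate is obtained by inserting such a time-dependent coefficient into a Michaelis-Menten-type scheme.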

  20. Establishing Magnetic Resonance Imaging as an Accurate and Reliable Tool to Diagnose and Monitor Esophageal Cancer in a Rat Model

    PubMed Central

    Kosovec, Juliann E.; Zaidi, Ali H.; Komatsu, Yoshihiro; Kasi, Pashtoon M.; Cothron, Kyle; Thompson, Diane V.; Lynch, Edward; Jobe, Blair A.

    2014-01-01

    Objective: To assess the reliability of magnetic resonance imaging (MRI) for detection of esophageal cancer in the Levrat model of end-to-side esophagojejunostomy. Background: The Levrat model has proven utility in terms of its ability to replicate Barrett’s carcinogenesis by inducing gastroduodenoesophageal reflux (GDER). Due to lack of data on the utility of non-invasive methods for detection of esophageal cancer, treatment efficacy studies have been limited, as adenocarcinoma histology has only been validated post-mortem. It would therefore be of great value if the validity and reliability of MRI could be established in this setting. Methods: Chronic GDER reflux was induced in 19 male Sprague-Dawley rats using the modified Levrat model. At 40 weeks post-surgery, all animals underwent endoscopy, MRI scanning, and post-mortem histological analysis of the esophagus and anastomosis. With post-mortem histology serving as the gold standard, assessment of presence of esophageal cancer was made by five esophageal specialists and five radiologists on endoscopy and MRI, respectively. Results: The accuracy of MRI and endoscopic analysis to correctly identify cancer vs. no cancer was 85.3% and 50.5%, respectively. ROC curves demonstrated that MRI rating had an AUC of 0.966 (p<0.001) and endoscopy rating had an AUC of 0.534 (p = 0.804). The sensitivity and specificity of MRI for identifying cancer vs. no-cancer was 89.1% and 80% respectively, as compared to 45.5% and 57.5% for endoscopy. False positive rates of MRI and endoscopy were 20% and 42.5%, respectively. Conclusions: MRI is a more reliable diagnostic method than endoscopy in the Levrat model. The non-invasiveness of the tool and its potential to volumetrically quantify the size and number of tumors likely makes it even more useful in evaluating novel agents and their efficacy in treatment studies of esophageal cancer. PMID:24705451

  1. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.

    2014-12-01

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.

  2. Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data

    SciTech Connect

    Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.; Solberg, Timothy D.; Chetty, Indrin J.

    2009-12-15

    electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were ∼420 (∼253 min on a single processor) and ∼175 (∼58 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of ∼1400 (∼6 min on a single processor) and ∼60 (∼4 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. BEAMnrc PHSP calculations with DBS alone or DBS in combination with charged particle range rejection were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (±1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe-Heitler default case.

  3. Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data

    PubMed Central

    Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.; Solberg, Timothy D.; Chetty, Indrin J.

    2009-01-01

    DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were ∼420 (∼253 min on a single processor) and ∼175 (∼58 min on a single processor) for the 10×10 and 40×40 cm2 field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of ∼1400 (∼6 min on a single processor) and ∼60 (∼4 min on a single processor) for the 10×10 and 40×40 cm2 field sizes, respectively. BEAMnrc PHSP calculations with DBS alone or DBS in combination with charged particle range rejection were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (±1%–3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe–Heitler (BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs∕AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water∕air∕water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe–Heitler default case. PMID:20095258

  4. Compendium of Material Composition Data for Radiation Transport Modeling

    SciTech Connect

    Williams, Ralph G.; Gesh, Christopher J.; Pagh, Richard T.

    2006-10-31

    Computational modeling of radiation transport problems, including homeland security, radiation shielding and protection, and criticality safety, depends upon material definitions. This document has been created to serve two purposes: 1) to provide a quick reference of material compositions for analysts and 2) to provide a standardized reference that reduces the differences between results from two independent analysts. Analysts frequently encounter a variety of materials for which elemental definitions are not readily available or densities are not defined. This document provides a location where unique or hard-to-define materials are collected to reduce duplication in research for modeling purposes. Additionally, having a common set of material definitions helps to standardize modeling across PNNL and provides two separate researchers the ability to compare different modeling results from a common materials basis.

  5. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    NASA Technical Reports Server (NTRS)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  6. Building accurate sequence-to-affinity models from high-throughput in vitro protein-DNA binding data using FeatureREDUCE.

    PubMed

    Riley, Todd R; Lazarovici, Allan; Mann, Richard S; Bussemaker, Harmen J

    2015-01-01

    Transcription factors are crucial regulators of gene expression. Accurate quantitative definition of their intrinsic DNA binding preferences is critical to understanding their biological function. High-throughput in vitro technology has recently been used to deeply probe the DNA binding specificity of hundreds of eukaryotic transcription factors, yet algorithms for analyzing such data have not yet fully matured. Here, we present a general framework (FeatureREDUCE) for building sequence-to-affinity models based on a biophysically interpretable and extensible model of protein-DNA interaction that can account for dependencies between nucleotides within the binding interface or multiple modes of binding. When training on protein binding microarray (PBM) data, we use robust regression and modeling of technology-specific biases to infer specificity models of unprecedented accuracy and precision. We provide quantitative validation of our results by comparing to gold-standard data when available. PMID:26701911
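
    As background, frameworks of this kind build on the standard biophysical mononucleotide (position-specific affinity matrix) model sketched below, extending it with dependencies between neighbouring nucleotides and multiple binding modes; the sketch is the generic baseline, not the full FeatureREDUCE model.

        A(S) \propto \sum_{j} \prod_{i=1}^{L} w_{i,\, S_{j+i}}

    Here w_{i,b} is the relative contribution of base b at position i of a length-L binding site, and the sum runs over all windows j of the probe sequence S; the weights are inferred by regression against the measured binding signal.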

  7. Diffusion in Condensed Matter: Methods, Materials, Models

    NASA Astrophysics Data System (ADS)

    Heitjans, Paul; Kärger, Jög

    This comprehensive, handbook-style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. It is understood and presented as a phenomenon of crucial relevance for a large variety of processes and materials. In this book, all aspects of the theoretical fundamentals, experimental techniques, highlights of current developments and results for solids, liquids and interfaces are presented.

  8. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    NASA Astrophysics Data System (ADS)

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-12-01

    Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and the collocated Sun and Aureole Measurement instrument which offers reference measurements of the monochromatic profile of solar radiance were exploited. Using the AERONET data both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The observed relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is presented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and a bias of respectively 27 and -24 % and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.

  9. Modeling of sorption characteristics of backfill materials

    SciTech Connect

    Chitra, S.; Sasidhar, P.; Lal, K.B.; Ahmed, J.

    1998-06-01

    Sorption data analysis was carried out using the Freundlich, Langmuir, and Modified Freundlich isotherms for the uptake of sodium and potassium in an initial concentration range of 10-100 mg/L on backfill materials, viz., bentonite, vermiculite, and soil samples. The soil samples were collected from a shallow land disposal facility at Kalpakkam. The Freundlich isotherm equation is validated as a preferred general mathematical tool for representing the sorption of K⁺ by all the selected backfill materials. The Modified Freundlich isotherm equation is validated as a preferred mathematical tool for representing the sorption of Na⁺ by the soil samples. Since a negative sorption was observed for the uptake of Na⁺ by the commercial clay minerals (vermiculite and bentonite clay) in the laboratory experiments, sorption analysis could not be carried out using the above-mentioned isotherm equations. Hill plots of the sorption data suggest that in the region of low saturation (10-40 mg/L), sorption of K⁺ by vermiculite is impeded by interaction among sorption sites. In the region of higher saturation (60-100 mg/L), sorption of K⁺ by all three backfill materials is enhanced by interaction among sorption sites. The Hill plot of the sorption data for Na⁺ by soil suggests that irrespective of Na⁺ concentration, sorption of Na⁺ at one exchange site enhances sorption at other exchange sites.
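    A minimal sketch of how the named isotherms can be fitted to equilibrium sorption data is given below (Python, SciPy); q denotes the sorbed amount and C the equilibrium concentration. The data points and starting parameters are hypothetical and are not taken from the study.

    ```python
    # Fit Freundlich (q = Kf * C**(1/n)) and Langmuir (q = qmax*b*C/(1+b*C)) isotherms
    # to invented equilibrium data; parameters below are placeholders only.
    import numpy as np
    from scipy.optimize import curve_fit

    def freundlich(C, Kf, n):
        return Kf * C ** (1.0 / n)

    def langmuir(C, qmax, b):
        return qmax * b * C / (1.0 + b * C)

    C = np.array([10, 20, 40, 60, 80, 100], dtype=float)   # mg/L
    q = np.array([1.8, 2.9, 4.6, 5.9, 7.0, 7.9])           # hypothetical mg/g

    for name, model, p0 in [("Freundlich", freundlich, (1.0, 2.0)),
                            ("Langmuir", langmuir, (10.0, 0.05))]:
        popt, _ = curve_fit(model, C, q, p0=p0)
        resid = q - model(C, *popt)
        print(name, popt, "SSE =", float(resid @ resid))
    ```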

  10. Designing and modeling doubly porous polymeric materials

    NASA Astrophysics Data System (ADS)

    Ly, H.-B.; Le Droumaguet, B.; Monchiet, V.; Grande, D.

    2015-07-01

    Doubly porous organic materials based on poly(2-hydroxyethyl methacrylate) are synthesized through the use of two distinct types of porogen templates, namely a macroporogen and a nanoporogen. Two complementary strategies are implemented by using either sodium chloride particles or fused poly(methyl methacrylate) beads as macroporogens, in conjunction with ethanol as a porogenic solvent. Porogen removal allows for the generation of either non-interconnected or interconnected macropores, respectively, with an average diameter of about 100-200 μm, together with nanopores with sizes on the order of 100 nm, as evidenced by mercury intrusion porosimetry and scanning electron microscopy. Nitrogen sorption measurements evidence the formation of materials with rather high specific surface areas, i.e., higher than 140 m2 g-1. This paper also addresses the development of numerical tools for computing the permeability of such doubly porous materials. Due to the coexistence of well separated scales between nanopores and macropores, a consecutive double homogenization approach is proposed. A nanoscopic scale and a mesoscopic scale are introduced, and the flow is evaluated by means of the Finite Element Method to determine the macroscopic permeability. At the nanoscopic scale, the flow is described by the Stokes equations with an adherence condition at the solid surface. At the mesoscopic scale, the flow obeys the Stokes equations in the macropores and the Darcy equation in the permeable polymer in order to account for the presence of the nanopores.

  11. Model of high-temperature plastic deformation of nanocrystalline materials: Application to yttria tetragonal zirconia

    NASA Astrophysics Data System (ADS)

    Gómez-García, D.; Lorenzo-Martín, C.; Muñoz-Bernabé, A.; Domínguez-Rodríguez, A.

    2003-04-01

    Segregation-induced local electric fields have been suggested to influence the bulk diffusion of the species controlling the plastic deformation of nanocrystalline materials. Until now, only a model applicable to a one-dimensional system has been available; in spite of its simplicity, it predicts a significant influence of a local electric field on creep. Our work develops a different model applicable to three-dimensional systems. It takes the diffusional model as a starting point, and it can be generalized to systems in which grain-boundary sliding accommodated by diffusional processes accurately describes plasticity in the submicron range of grain size. The range of validity, as well as the differences in behavior between nanocrystalline and submicron materials, is discussed. Preliminary results are in good agreement with published data for yttria tetragonal zirconia (YTZP) nanocrystalline ceramics.
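    To illustrate the strong grain-size dependence that diffusion-accommodated mechanisms imply, the back-of-the-envelope Python sketch below evaluates a Coble-type creep rate, strain rate ≈ A·σ·Ω·δDgb/(d³·kB·T). All parameter values are placeholders, not values for YTZP, and the expression is a textbook form rather than the model developed in the paper.

    ```python
    # Coble-type creep rate vs grain size; all material parameters are placeholders.
    kB = 1.380649e-23        # Boltzmann constant, J/K
    A = 50.0                 # dimensionless prefactor (placeholder)
    sigma = 50e6             # applied stress, Pa (placeholder)
    Omega = 2.0e-29          # atomic volume, m^3 (placeholder)
    delta_Dgb = 1.0e-22      # grain-boundary width times diffusivity, m^3/s (placeholder)
    T = 1600.0               # temperature, K

    for d in (50e-9, 100e-9, 500e-9, 1e-6):     # grain sizes from nano to submicron
        rate = A * sigma * Omega * delta_Dgb / (d ** 3 * kB * T)
        print(f"d = {d * 1e9:6.0f} nm  ->  strain rate ~ {rate:.2e} 1/s")
    ```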

  12. Mechanical strength model for plastic bonded granular materials at high strain rates and large strains

    SciTech Connect

    Browning, R.V.; Scammon, R.J.

    1997-07-01

    Modeling impact events on systems containing plastic bonded explosive materials requires accurate models for stress evolution at high strain rates out to large strains. For example, in the Steven test geometry, reactions occur after strains of 0.5 or more are reached for PBX-9501. The morphology of this class of materials and the properties of the constituents are briefly described. We then review the viscoelastic behavior observed at small strains for this class of material, and evaluate large strain models used for granular materials such as cap models. Dilatation under shearing deformations of the PBX is experimentally observed and is one of the key features modeled in cap style plasticity theories, together with bulk plastic flow at high pressures. We propose a model that combines viscoelastic behavior at small strains but adds intergranular stresses at larger strains. A procedure using numerical simulations and comparisons with results from flyer plate tests and low rate uniaxial stress tests is used to develop a rough set of constants for PBX-9501. Comparisons with the high rate flyer plate tests show that the observed characteristic behavior is captured by the viscoelastic-based model.

  13. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    NASA Astrophysics Data System (ADS)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques in relation to traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. Nonetheless, it was revealed that the k-fold test is a practical but high-cost technique for completely scanning the applied data and avoiding over-fitting.
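    The k-fold assessment mentioned above can be sketched as follows (Python, scikit-learn); a generic regressor stands in for the GEP and ANFIS models, and the inputs and target are synthetic placeholders rather than the Qotur River data.

    ```python
    # 5-fold cross-validation of a stand-in regressor on synthetic hydraulic inputs.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.random((120, 3))                      # e.g. scaled discharge, slope, width (invented)
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(120)

    rmses = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        rmses.append(mean_squared_error(y[test_idx], pred) ** 0.5)

    print("mean RMSE over folds:", np.mean(rmses))
    ```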

  14. On the Use of Biaxial Properties in Modeling Annulus as a Holzapfel–Gasser–Ogden Material

    PubMed Central

    Momeni Shahraki, Narjes; Fatemi, Ali; Goel, Vijay K.; Agarwal, Anand

    2015-01-01

    Besides the biology, stresses and strains within the tissue greatly influence the location of damage initiation and the mode of failure in an intervertebral disk. Finite element models of a functional spinal unit (FSU) that incorporate reasonably accurate geometry and appropriate material properties are suitable to investigate such issues. Different material models and techniques have been used to model the anisotropic annulus fibrosus, but the abilities of these models to predict damage initiation in the annulus and to explain clinically observed phenomena are unclear. In this study, a hyperelastic anisotropic material model for the annulus with two different sets of material constants, experimentally determined using uniaxial and biaxial loading conditions, was incorporated in a 3D finite element model of a ligamentous FSU. The purpose of the study was to highlight the biomechanical differences (e.g., intradiscal pressure, motion, forces, stresses, strains, etc.) due to the dissimilarity between the two sets of material properties (uniaxial and biaxial). Based on the analyses, simulations with the biaxial constants resulted in better agreement with the in vitro and in vivo data, and these constants are thus more suitable for future damage analysis and failure prediction of the annulus under complex multiaxial loading conditions. PMID:26090359
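    For reference, the sketch below evaluates a Holzapfel-Gasser-Ogden strain-energy density for an incompressible uniaxial stretch with two symmetric fibre families, which is the kind of constitutive form discussed above. The material constants and fibre angle are placeholders, not the uniaxial or biaxial sets identified in the study.

    ```python
    # HGO strain-energy density for incompressible uniaxial stretch; placeholder constants.
    import numpy as np

    def hgo_energy(lam, c10, k1, k2, kappa, fibre_angle_deg):
        # Incompressible uniaxial stretch: F = diag(lam, 1/sqrt(lam), 1/sqrt(lam))
        F = np.diag([lam, lam ** -0.5, lam ** -0.5])
        C = F.T @ F                                   # right Cauchy-Green tensor
        I1 = np.trace(C)
        W = c10 * (I1 - 3.0)                          # isotropic ground matrix
        a = np.radians(fibre_angle_deg)
        for direction in (np.array([np.cos(a), np.sin(a), 0.0]),
                          np.array([np.cos(a), -np.sin(a), 0.0])):
            I4 = direction @ C @ direction            # squared fibre stretch
            E = kappa * (I1 - 3.0) + (1.0 - 3.0 * kappa) * (I4 - 1.0)
            E = max(E, 0.0)                           # fibres carry load only in tension
            W += k1 / (2.0 * k2) * (np.exp(k2 * E ** 2) - 1.0)
        return W

    print(hgo_energy(lam=1.1, c10=0.1, k1=1.0, k2=5.0, kappa=0.1, fibre_angle_deg=30.0))
    ```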

  15. Verification and Validation of EnergyPlus Conduction Finite Difference and Phase Change Material Models for Opaque Wall Assemblies

    SciTech Connect

    Tabares-Velasco, Paulo Cesar; Christensen, Craig; Bianchi, Marcus; Booten, Chuck

    2012-07-01

    Phase change materials (PCMs) represent a potential technology to reduce peak loads and HVAC energy consumption in buildings. Few building energy simulation programs have the capability to simulate PCMs, and their accuracy has not been completely tested. This report summarizes NREL efforts to develop diagnostic test cases for obtaining accurate energy simulations when PCMs are modeled in residential buildings.
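    As a rough illustration of how a conduction finite-difference scheme can be coupled with a phase change material, the Python sketch below advances a 1D explicit scheme in which the latent heat is smeared over a melting range via an effective heat capacity. The wall construction, material properties, and boundary conditions are invented and do not reproduce the EnergyPlus conduction finite difference or PCM models.

    ```python
    # 1-D explicit conduction with an effective-heat-capacity PCM treatment (placeholder values).
    import numpy as np

    L, N = 0.05, 51                # 5 cm slice, 51 nodes
    dx = L / (N - 1)
    k, rho = 0.2, 900.0            # W/m-K, kg/m3 (placeholder PCM properties)
    cp_solid, latent, T_m, dT = 2000.0, 180e3, 25.0, 1.0   # J/kg-K, J/kg, degC, half-range

    def cp_eff(T):
        """Sensible heat plus latent heat smeared over the range [T_m - dT, T_m + dT]."""
        return cp_solid + np.where(np.abs(T - T_m) < dT, latent / (2.0 * dT), 0.0)

    T = np.full(N, 20.0)
    T[0] = 35.0                    # hot boundary; right boundary held at 20 degC
    dt = 0.2 * rho * cp_solid * dx ** 2 / k   # conservative explicit time step

    for _ in range(20000):
        alpha = k / (rho * cp_eff(T[1:-1]))
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

    print("mid-wall temperature:", round(float(T[N // 2]), 2), "degC")
    ```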

  16. Toward Accurate Modelling of Enzymatic Reactions: All Electron Quantum Chemical Analysis combined with QM/MM Calculation of Chorismate Mutase

    SciTech Connect

    Ishida, Toyokazu

    2008-09-17

    To further understand the catalytic role of the protein environment in the enzymatic process, the author has analyzed the reaction mechanism of the Claisen rearrangement catalyzed by Bacillus subtilis chorismate mutase (BsCM). By introducing a new computational strategy that combines all-electron QM calculations with ab initio QM/MM modeling, it was possible to simulate the molecular interactions between the substrate and the protein environment. The electrostatic nature of the transition state stabilization was characterized by performing all-electron QM calculations based on the fragment molecular orbital technique for the entire enzyme.

  17. Model of plasticity of amorphous materials

    NASA Astrophysics Data System (ADS)

    Marchenko, V. I.; Misbah, Chaouqi

    2011-08-01

    Starting from the classical Kröner-Rieder kinematic picture of plasticity, we derive a set of dynamical equations describing plastic flow in a Lagrangian formulation. Our derivation is a natural and straightforward extension of the theories of simple fluids and of elastic and viscous solids. These equations contain the Maxwell model as a special limit. This paper is inspired by the particularly important work of Langer and coworkers; we show that our equations bear some resemblance to the shear-transformation-zone model developed by Langer and coworkers, and we point out some important differences. We discuss some results of plasticity that can be described by the present model, and we exploit the model equations for two simple examples: the straining of a slab and of a rod. We find that necking always manifests itself (not as a result of an instability), except when the very special constant-velocity stretching process is imposed.
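    The Maxwell limit mentioned above can be illustrated with a few lines of Python: the stress in a Maxwell element driven at constant strain rate relaxes toward E·τ·(strain rate). The modulus, relaxation time, and strain rate below are arbitrary placeholder values.

    ```python
    # Maxwell element under constant strain rate: d(sigma)/dt = E*d(eps)/dt - sigma/tau.
    E, tau, strain_rate = 1.0e9, 2.0, 1.0e-3   # Pa, s, 1/s (placeholders)
    dt, steps = 1.0e-3, 10000

    sigma = 0.0
    for _ in range(steps):
        sigma += dt * (E * strain_rate - sigma / tau)

    # Analytical steady state is E * tau * strain_rate.
    print(sigma, E * tau * strain_rate)
    ```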

  18. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    NASA Astrophysics Data System (ADS)

    Bouneb, I.; Kerrour, F.

    2016-03-01

    Semiconductor components have become the privileged support of information and communication, particularly thanks to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing the transistor gate length is no longer enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed, the most effective components in this area being High Electron Mobility Transistors (HEMTs) on III-V substrates. This work investigates an approach contributing to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We have developed a calculation using projective methods that allows integration of the Hamiltonian via Green functions in the Schrödinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach for charge control in the quantum well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain the ns-Vg characteristics, is mainly based on a new linear expression for the variation of the Fermi level with the two-dimensional electron gas density, on the notion of effective doping, and on a new expression for ΔEc.
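    For orientation, the sketch below evaluates a conventional 2-DEG charge-control relation of the kind discussed above, ns = ε/(q(d+Δd))·(Vg − Voff − EF), closed with a linear EF(ns) so that ns follows in closed form. All numerical values are placeholders and the expressions are textbook forms, not the new ns-EF expression proposed by the authors.

    ```python
    # Textbook 2-DEG charge-control relation with a linear EF(ns) closure; placeholder values.
    q = 1.602e-19                     # elementary charge, C
    eps = 12.2 * 8.854e-12            # AlGaAs permittivity, F/m (approximate)
    d, dd = 30e-9, 8e-9               # barrier thickness and effective 2-DEG distance, m (placeholders)
    Voff = -0.8                       # threshold voltage, V (placeholder)
    EF0, a = 0.0, 0.125e-16           # linear EF(ns) = EF0 + a*ns, in V and V*m^2 (placeholders)

    def ns_of_vg(Vg):
        """Sheet density from ns = c*(Vg - Voff - EF(ns)) with EF linear in ns."""
        c = eps / (q * (d + dd))
        return max(c * (Vg - Voff - EF0) / (1.0 + c * a), 0.0)

    for Vg in (-0.5, 0.0, 0.5):
        print(f"Vg = {Vg:+.1f} V  ->  ns = {ns_of_vg(Vg):.2e} m^-2")
    ```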

  19. Development of an accurate molecular mechanics model for buckling behavior of multi-walled carbon nanotubes under axial compression.

    PubMed

    Safaei, B; Naseradinmousavi, P; Rahmani, A

    2016-04-01

    In the present paper, an analytical solution based on a molecular mechanics model is developed to evaluate the elastic critical axial buckling strain of chiral multi-walled carbon nanotubes (MWCNTs). To this end, the total potential energy of the system is calculated with consideration of both bond stretching and bond angle variations. Density functional theory (DFT) in the form of the generalized gradient approximation (GGA) is implemented to evaluate the force constants used in the molecular mechanics model. After that, based on the principles of molecular mechanics, explicit expressions are proposed to obtain the elastic surface Young's modulus and Poisson's ratio of single-walled carbon nanotubes corresponding to different types of chirality. Selected numerical results are presented to indicate the influence of the type of chirality, tube diameter, and number of tube walls in detail. An excellent agreement is found between the present numerical results and those found in the literature, which confirms the validity as well as the accuracy of the present closed-form solution. It is found that the value of the critical axial buckling strain exhibits significant dependence on the type of chirality and the number of tube walls. PMID:26930445

  20. Charge Central Interpretation of the Full Nonlinear PB Equation: Implications for Accurate and Scalable Modeling of Solvation Interactions.

    PubMed

    Xiao, Li; Wang, Changhao; Ye, Xiang; Luo, Ray

    2016-08-25

    Continuum solvation modeling based upon the Poisson-Boltzmann equation (PBE) is widely used in structural and functional analysis of biomolecules. In this work, we propose a charge-central interpretation of the full nonlinear PBE electrostatic interactions. The validity of the charge-central view, or simply the charge view, formulated as a vacuum Poisson equation with effective charges, was first demonstrated by reproducing both electrostatic potentials and energies from the original solvated full nonlinear PBE. There are at least two benefits when the charge-central framework is applied. First, the convergence analyses show that the use of polarization charges allows a much faster converging numerical procedure for electrostatic energy and force calculations for the full nonlinear PBE. Second, the formulation of the solvated electrostatic interactions as effective charges in vacuum allows scalable algorithms to be deployed for large biomolecular systems. Here, we exploited the charge-view interpretation and developed a particle-particle particle-mesh (P3M) strategy for full nonlinear PBE systems. We also studied the accuracy and convergence of solvation forces with the charge-view and the P3M methods. It is interesting to note that the convergence of both the charge-view and the P3M methods is more rapid than that of the original full nonlinear PBE method. Given the developments and validations documented here, we are working to adapt the P3M treatment of the full nonlinear PBE model to molecular dynamics simulations. PMID:27146097

  1. Identification of fractional-derivative-model parameters of viscoelastic materials from measured FRFs

    NASA Astrophysics Data System (ADS)

    Kim, Sun-Yong; Lee, Doo-Ho

    2009-07-01

    The dynamic properties of viscoelastic damping materials are highly frequency- and temperature-dependent. Numerical methods for structural and acoustic systems require a mathematical model for these dependencies. The fractional-derivative model of damping materials has become a powerful means of describing the frequency-dependent dynamic characteristics of damping materials. The model parameters of a damping material are very important information both for describing the responses of damped structures and in the design of damped structures. The authors proposed an efficient method for identifying the material parameters using an optimization technique, showing its applicability through numerical studies in a previous work. In this study, the proposed procedure is applied to a damping material to identify its fractional-derivative-model parameters. In the proposed method, frequency response functions (FRFs) are measured via a cantilever beam impact test. The FRFs at points identical to those measured are calculated using an FE model with the equivalent stiffness approach. The differences between the measured and the calculated FRFs are minimized using a gradient-based optimization algorithm in order to estimate the true values of the parameters. The FRFs of a damped beam structure are measured in an environmental chamber at different temperatures and used as reference responses. A light impact hammer and a laser vibrometer are used to measure the reference responses. Both linear and nonlinear relationships between the logarithmically scaled shift factors and temperatures are examined during the identification of the material parameters. The applied results show that the proposed method accurately identifies the fractional-derivative-model parameters of a viscoelastic material.
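    The four-parameter fractional-derivative model referred to above is often written as E*(ω) = (E0 + E∞(iωτ)^α) / (1 + (iωτ)^α). The Python sketch below evaluates this complex modulus and runs an illustrative least-squares identification on synthetic data; it is not the paper's FRF-based procedure, which minimizes differences between measured and FE-computed beam FRFs.

    ```python
    # Fractional-derivative complex modulus and a toy least-squares identification.
    import numpy as np
    from scipy.optimize import least_squares

    def complex_modulus(w, E0, Einf, tau, alpha):
        s = (1j * w * tau) ** alpha
        return (E0 + Einf * s) / (1.0 + s)

    w = np.logspace(0, 4, 40)                            # rad/s
    true = complex_modulus(w, 2e6, 2e8, 1e-3, 0.55)      # synthetic "measurement" (placeholders)

    def residual(p):
        model = complex_modulus(w, *p)
        return np.concatenate([(model.real - true.real) / np.abs(true),
                               (model.imag - true.imag) / np.abs(true)])

    fit = least_squares(residual, x0=[1e6, 1e8, 5e-4, 0.5],
                        bounds=(1e-6, np.inf), x_scale=[1e6, 1e8, 1e-3, 1.0])
    print(fit.x)
    ```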

  2. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  3. Large-strain quasi-static compression materials tests in support of penetration modeling research

    SciTech Connect

    Brandon, S.L.; Totten, J.J.

    1990-09-01

    Target penetration by projectiles typically generates large strains, at least locally. Hence, accurate analytic modeling of penetration demands that constitutive models be calibrated using large strain material test data. Tensile test data is limited by specimen necking (the Considere criterion), restricting attainable strains. Linear extrapolation of tensile data to target strains can seriously overestimate the material flow stress, resulting in erroneously stiff analytical predictions. That is, other tests which can attain larger strains often reveal a continually decreasing tangent modulus at large strains. We report quasistatic room temperature compression tests approaching true strains of ε = −1. A few tensile tests are included to illustrate the previous point. Materials tested are 7075-T651, 5083-H131, and 6061-T651 aluminum alloys, 4340 steel and X21-C tungsten alloy. 7 refs., 6 figs.
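    The large-strain bookkeeping behind such tests can be sketched as follows: engineering stress-strain data are converted to true stress-strain, and the Considere criterion (hardening rate no greater than the true stress) flags the onset of diffuse necking. The data points in the Python snippet are invented for illustration.

    ```python
    # Engineering-to-true conversion and a Considere necking check on invented data.
    import numpy as np

    eng_strain = np.array([0.00, 0.02, 0.05, 0.10, 0.15, 0.20])
    eng_stress = np.array([0.0, 310.0, 360.0, 400.0, 420.0, 430.0])   # MPa, hypothetical

    true_strain = np.log(1.0 + eng_strain)
    true_stress = eng_stress * (1.0 + eng_strain)   # valid up to diffuse necking

    hardening = np.gradient(true_stress, true_strain)
    necking = hardening <= true_stress              # Considere criterion
    print(np.column_stack([true_strain.round(3), true_stress.round(1), necking]))
    ```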

  4. The CPA Equation of State and an Activity Coefficient Model for Accurate Molar Enthalpy Calculations of Mixtures with Carbon Dioxide and Water/Brine

    SciTech Connect

    Myint, P. C.; Hao, Y.; Firoozabadi, A.

    2015-03-27

    Thermodynamic property calculations of mixtures containing carbon dioxide (CO2) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO2 activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO2, pure water, and both CO2-rich and aqueous (H2O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO2. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H2O-CO2-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.

  5. Highly Accurate Infrared Line Lists of SO2 Isotopologues Computed for Atmospheric Modeling on Venus and Exoplanets

    NASA Astrophysics Data System (ADS)

    Huang, X.; Schwenke, D.; Lee, T. J.

    2014-12-01

    Last year we reported a semi-empirical 32S16O2 spectroscopic line list (denoted Ames-296K) for atmospheric characterization of Venus and other exoplanetary environments. In order to facilitate the determination of sulfur isotopic ratios and sulfur chemistry models, we now present Ames-296K line lists for both 626 (upgraded) and four other symmetric isotopologues: 636, 646, 666 and 828. The line lists are computed on an ab initio potential energy surface refined with the most reliable high-resolution experimental data, using a high quality CCSD(T)/aug-cc-pV(Q+d)Z dipole moment surface. The most valuable part of our approach is to provide "truly reliable" predictions (and alternatives) for unknown or hard-to-measure/analyze spectra. This strategy guarantees that the lists are the best available alternative for those wide spectral regions missing from spectroscopic databases such as HITRAN and GEISA, where only very limited data exist for 626/646 and no infrared data at all for 636/666 or other minor isotopologues. Our general line position accuracy up to 5000 cm-1 is 0.01 - 0.02 cm-1 or better. Most transition intensity deviations are less than 5%, compared to experimentally measured quantities. Note that we have solved a convergence issue and further improved the quality and completeness of the main isotopologue 626 list at 296 K. We will compare the lists to available models in CDMS/JPL/HITRAN and discuss future mutually beneficial interactions between theoretical and experimental efforts.

  6. Validation of an Advanced Material Model for Simulating the Impact and Shock Response of Composite Materials

    NASA Astrophysics Data System (ADS)

    Clegg, Richard A.; Hayhurst, Colin J.; Nahme, Hartwig

    2002-07-01

    Composite materials are now commonly used as ballistic and hypervelocity protection materials and the demand for simulation of impact on these materials is increasing. A new material model specifically designed for the shock response of anisotropic materials has been developed and implemented in the hydrocode AUTODYN. The model allows for the representation of non-linear shock effects in combination with anisotropic material stiffness and damage. The coupling of the equation of state and anisotropic response is based on the methodology proposed by Anderson et al. [2]. An overview of the coupled formulation is described