Sample records for mass computation model

  1. Vehicle - Bridge interaction, comparison of two computing models

    NASA Astrophysics Data System (ADS)

    Melcer, Jozef; Kuchárová, Daniela

    2017-07-01

    The paper presents the calculation of the bridge response to the effect of a moving vehicle travelling along the bridge with various velocities. A multi-body plane computing model of the vehicle is adopted. The bridge computing models are created in two variants: one represents the bridge as a Bernoulli-Euler beam with continuously distributed mass, and the second represents the bridge as a lumped-mass model with one degree of freedom. The mid-span bridge dynamic deflections are calculated for both computing models. The results are mutually compared and quantitatively evaluated.
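
    A minimal sketch of the second (lumped-mass) variant is given below: a single-degree-of-freedom oscillator representing the first bending mode of a simply supported span, excited by a constant axle force crossing at speed v. All parameter values are illustrative placeholders, not taken from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Lumped-mass (SDOF) bridge model excited by a constant force P crossing a
    # simply supported span of length L at speed v.  For the first bending mode
    # the generalized force is P*sin(pi*v*t/L) while the load is on the span.
    L, v, P = 30.0, 20.0, 1.0e5        # span [m], vehicle speed [m/s], axle force [N]
    m, f1, zeta = 1.0e5, 3.0, 0.02     # modal mass [kg], natural frequency [Hz], damping ratio
    w1 = 2.0 * np.pi * f1
    k, c = m * w1**2, 2.0 * zeta * m * w1

    def rhs(t, y):
        q, dq = y
        force = P * np.sin(np.pi * v * t / L) if t < L / v else 0.0
        return [dq, (force - c * dq - k * q) / m]

    sol = solve_ivp(rhs, (0.0, 2.0 * L / v), [0.0, 0.0], max_step=1e-3)
    print("peak mid-span deflection [m]:", sol.y[0].max())
    ```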

  2. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
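
    The kind of bookkeeping such a tool performs can be sketched as follows: each simple element is reduced to a mass at its centroid, and the composite mass and centre of mass follow by summation. The class and function names are illustrative only, not part of ACTOMP.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Element:
        """A structural element reduced to a point mass at its centroid."""
        mass: float                    # [kg]
        centroid: tuple                # (x, y, z) position [m]

    def mass_properties(elements):
        """Return total mass and composite centre of mass of the elements."""
        total = sum(e.mass for e in elements)
        cg = tuple(sum(e.mass * e.centroid[i] for e in elements) / total
                   for i in range(3))
        return total, cg

    # three elements standing in for members of a small truss
    truss = [Element(2.5, (0.0, 0.0, 0.0)),
             Element(2.5, (1.0, 0.0, 0.0)),
             Element(4.0, (0.5, 0.5, 0.0))]
    print(mass_properties(truss))
    ```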

  3. Improving the XAJ Model on the Basis of Mass-Energy Balance

    NASA Astrophysics Data System (ADS)

    Fang, Yuanhao; Corbari, Chiara; Zhang, Xingnan; Mancini, Marco

    2014-11-01

    The Xin'anjiang (XAJ) model is a conceptual model developed by the group led by Prof. Ren-Jun Zhao. It takes pan evaporation as one of its inputs and then computes the effective evapotranspiration (ET) of the catchment by mass balance. Such a scheme ensures good performance in discharge simulation but has obvious defects, one of which is that the effective ET is taken as spatially constant over the computation unit, neglecting the spatial variation of the variables that influence the effective ET; as a result, the simulation of ET and soil moisture (SM) by the XAJ model is, compared with discharge, less reliable. In this study, the XAJ model was improved to employ both energy and mass balance to compute the ET, following the energy-mass balance scheme of the FEST-EWB model.

  5. Structural characterisation of medically relevant protein assemblies by integrating mass spectrometry with computational modelling.

    PubMed

    Politis, Argyris; Schmidt, Carla

    2018-03-20

    Structural mass spectrometry with its various techniques is a powerful tool for the structural elucidation of medically relevant protein assemblies. It delivers information on the composition, stoichiometries, interactions and topologies of these assemblies. Most importantly it can deal with heterogeneous mixtures and assemblies which makes it universal among the conventional structural techniques. In this review we summarise recent advances and challenges in structural mass spectrometric techniques. We describe how the combination of the different mass spectrometry-based methods with computational strategies enables structural models at molecular levels of resolution. These models hold significant potential for helping us in characterizing the function of protein assemblies related to human health and disease. In this review we summarise the techniques of structural mass spectrometry often applied when studying protein-ligand complexes. We exemplify these techniques through recent examples from literature that helped in the understanding of medically relevant protein assemblies. We further provide a detailed introduction into various computational approaches that can be integrated with these mass spectrometric techniques. Last but not least we discuss case studies that integrated mass spectrometry and computational modelling approaches and yielded models of medically important protein assembly states such as fibrils and amyloids. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  6. ELEMENT MASSES IN THE CRAB NEBULA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.

    Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni II] λ7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.

  7. An Isopycnal Box Model with predictive deep-ocean structure for biogeochemical cycling applications

    NASA Astrophysics Data System (ADS)

    Goodwin, Philip

    2012-07-01

    To simulate global ocean biogeochemical tracer budgets a model must accurately determine both the volume and surface origins of each water-mass. Water-mass volumes are dynamically linked to the ocean circulation in General Circulation Models, but at the cost of high computational load. In computationally efficient Box Models the water-mass volumes are simply prescribed and do not vary when the circulation transport rates or water mass densities are perturbed. A new computationally efficient Isopycnal Box Model is presented in which the sub-surface box volumes are internally calculated from the prescribed circulation using a diffusive conceptual model of the thermocline, in which upwelling of cold dense water is balanced by a downward diffusion of heat. The volumes of the sub-surface boxes are set so that the density stratification satisfies an assumed link between diapycnal diffusivity, κd, and buoyancy frequency, N: κd = c/N^α, where c and α are user-prescribed parameters. In contrast to conventional Box Models, the volumes of the sub-surface ocean boxes in the Isopycnal Box Model are dynamically linked to circulation, and automatically respond to circulation perturbations. This dynamical link allows an important facet of ocean biogeochemical cycling to be simulated in a highly computationally efficient model framework.
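
    A minimal sketch of the stratification closure described above, κd = c/N^α: given a diapycnal diffusivity implied by the prescribed circulation, the buoyancy frequency and hence the vertical density gradient follow by inverting the closure. All numbers below are illustrative, not values from the paper.

    ```python
    import numpy as np

    # Invert the closure kappa_d = c / N**alpha to get the buoyancy frequency N,
    # then the vertical density gradient from N^2 = -(g/rho0) * d(rho)/dz.
    c, alpha = 1.0e-7, 1.0                         # user-prescribed closure parameters
    g, rho0 = 9.81, 1027.0                         # gravity [m/s^2], reference density [kg/m^3]

    kappa_d = np.array([1.0e-5, 3.0e-5, 1.0e-4])   # example diffusivities [m^2/s]
    N = (c / kappa_d) ** (1.0 / alpha)             # buoyancy frequency [1/s]
    drho_dz = -rho0 * N**2 / g                     # density gradient [kg/m^4]

    for kd, n, grad in zip(kappa_d, N, drho_dz):
        print(f"kappa_d={kd:.1e} m^2/s -> N={n:.1e} 1/s, drho/dz={grad:.2e} kg/m^4")
    ```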

  8. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretic framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (like in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
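
    The standard part of this computation can be sketched as below: natural frequencies, damping ratios and (unscaled) mode shapes follow from the eigenvalues and eigenvectors of the identified discrete-time state matrix; scaling the mode shapes to obtain modal masses additionally requires the input-side matrices, which is the step the abstract highlights. The function is a generic illustration, not the paper's code.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def modal_parameters(A, C, dt):
        """Frequencies [rad/s], damping ratios and output-observed mode shapes
        from a discrete-time state space model (A, C) sampled at interval dt."""
        mu, psi = np.linalg.eig(A)          # discrete-time poles and eigenvectors
        lam = np.log(mu) / dt               # continuous-time poles
        wn = np.abs(lam)                    # natural frequencies
        zeta = -lam.real / wn               # damping ratios
        phi = C @ psi                       # mode shapes at the output locations
        return wn, zeta, phi

    # toy 1-DOF system: 2 Hz, 1% damping, sampled at 100 Hz
    w, z, dt = 2 * np.pi * 2.0, 0.01, 0.01
    Ac = np.array([[0.0, 1.0], [-w**2, -2 * z * w]])
    wn, zeta, _ = modal_parameters(expm(Ac * dt), np.array([[1.0, 0.0]]), dt)
    print(wn / (2 * np.pi), zeta)           # ~[2, 2] Hz and ~[0.01, 0.01]
    ```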

  9. Model implementation for dynamic computation of system cost

    NASA Astrophysics Data System (ADS)

    Levri, J.; Vaccari, D.

    The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in MATLAB/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed, based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
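
    The ESM bookkeeping described here can be sketched as a simple weighted sum in which mission-specific equivalency factors convert volume, power, cooling and crewtime into mass units; a dynamic simulation supplies peak rather than nominal requirements. The factor values and subsystem numbers below are illustrative placeholders, not the paper's data.

    ```python
    # Equivalent system mass (ESM) of one subsystem: mass plus volume, power,
    # cooling and crewtime requirements converted to mass-equivalent units.
    EQUIV = {"volume": 9.16,     # kg per m^3   (illustrative equivalency factors)
             "power": 237.0,     # kg per kW
             "cooling": 60.0,    # kg per kW
             "crewtime": 1.0}    # kg per crew-hour

    def esm(mass, volume, power, cooling, crewtime, factors=EQUIV):
        return (mass
                + volume * factors["volume"]
                + power * factors["power"]
                + cooling * factors["cooling"]
                + crewtime * factors["crewtime"])

    # peak requirements from a dynamic simulation would be inserted here
    print(esm(mass=1200.0, volume=10.0, power=3.5, cooling=3.5, crewtime=120.0), "kg")
    ```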

  10. Higgs boson mass in the standard model at two-loop order and beyond

    DOE PAGES

    Martin, Stephen P.; Robertson, David G.

    2014-10-01

    We calculate the mass of the Higgs boson in the standard model in terms of the underlying Lagrangian parameters at complete 2-loop order with leading 3-loop corrections. A computer program implementing the results is provided. The program also computes and minimizes the standard model effective potential in Landau gauge at 2-loop order with leading 3-loop corrections.

  11. Validation of a numerical method for interface-resolving simulation of multicomponent gas-liquid mass transfer and evaluation of multicomponent diffusion models

    NASA Astrophysics Data System (ADS)

    Woo, Mino; Wörner, Martin; Tischer, Steffen; Deutschmann, Olaf

    2018-03-01

    The multicomponent model and the effective diffusivity model are well-established diffusion models for the numerical simulation of single-phase flows consisting of several components, but have so far seldom been used for two-phase flows. In this paper, a specific numerical model for interfacial mass transfer by means of a continuous single-field concentration formulation is combined with the multicomponent model and the effective diffusivity model and is validated for multicomponent mass transfer. For this purpose, several test cases for one-dimensional physical or reactive mass transfer of ternary mixtures are considered. The numerical results are compared with analytical or numerical solutions of the Maxwell-Stefan equations and/or experimental data. The composition-dependent elements of the diffusivity matrix of the multicomponent and effective diffusivity model are found to substantially differ for non-dilute conditions. The species mole fraction or concentration profiles computed with both diffusion models are, however, for all test cases very similar and in good agreement with the analytical/numerical solutions or measurements. For practical computations, the effective diffusivity model is recommended due to its simplicity and lower computational costs.
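
    One common effective diffusivity formulation of the kind compared here is the Wilke-type form D_i,eff = (1 - x_i) / Σ_{j≠i} x_j / D_ij, sketched below; the paper's exact formulation may differ in detail, and the compositions and binary diffusivities are illustrative.

    ```python
    import numpy as np

    def effective_diffusivity(x, D):
        """Composition-dependent effective diffusivities (Wilke-type form).
        x: mole fractions of the n species; D: n x n binary diffusivities D_ij."""
        n = len(x)
        Deff = np.zeros(n)
        for i in range(n):
            s = sum(x[j] / D[i, j] for j in range(n) if j != i)
            Deff[i] = (1.0 - x[i]) / s
        return Deff

    # ternary gas mixture with illustrative binary diffusivities in m^2/s
    x = np.array([0.2, 0.3, 0.5])
    D = np.array([[0.0,    6.8e-5, 7.8e-5],
                  [6.8e-5, 0.0,    1.7e-5],
                  [7.8e-5, 1.7e-5, 0.0]])
    print(effective_diffusivity(x, D))
    ```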

  12. Bulk refrigeration of fruits and vegetables. Part 2: Computer algorithm for heat loads and moisture loss

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, B.; Misra, A.; Fricke, B.A.

    1997-12-31

    A computer algorithm was developed that estimates the latent and sensible heat loads due to the bulk refrigeration of fruits and vegetables. The algorithm also predicts the commodity moisture loss and temperature distribution which occurs during refrigeration. Part 1 focused upon the thermophysical properties of commodities and the flowfield parameters which govern the heat and mass transfer from fresh fruits and vegetables. This paper, Part 2, discusses the modeling methodology utilized in the current computer algorithm and describes the development of the heat and mass transfer models. Part 2 also compares the results of the computer algorithm to experimental data taken from the literature and describes a parametric study which was performed with the algorithm. In addition, this paper also reviews existing numerical models for determining the heat and mass transfer in bulk loads of fruits and vegetables.

  13. Experimental validation of convection-diffusion discretisation scheme employed for computational modelling of biological mass transport

    PubMed Central

    2010-01-01

    Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Due to numerous discretisation schemes available when developing a mass-transport numerical model, the results obtained should either be validated against benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user, including the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10⁻¹⁰ m²/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. An average error of 140% and 116% was demonstrated between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high Peclet number, convection-dominated flow conditions. Conclusion Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Furthermore, either the Second-Order or QUICK discretisation schemes should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to utilize computationally inexpensive discretisation schemes at the cost of accuracy in resultant species concentration. PMID:20642816
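
    The quoted Peclet number follows from Pe = U·L/D with a characteristic velocity U and length L of the aneurysm model; since U and L are not stated in this abstract, the values below are back-calculated, illustrative choices that reproduce the quoted order of magnitude.

    ```python
    # Peclet number: ratio of convective to diffusive transport, Pe = U * L / D.
    D = 3.125e-10          # species diffusivity in water [m^2/s]
    U, L = 0.08, 0.01      # illustrative characteristic velocity [m/s] and length [m]
    Pe = U * L / D
    print(f"Pe = {Pe:.2e}")  # ~2.56e6, strongly convection-dominated
    ```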

  14. Radiation-driven winds of hot stars. V - Wind models for central stars of planetary nebulae

    NASA Technical Reports Server (NTRS)

    Pauldrach, A.; Puls, J.; Kudritzki, R. P.; Mendez, R. H.; Heap, S. R.

    1988-01-01

    Wind models using the recent improvements of radiation driven wind theory by Pauldrach et al. (1986) and Pauldrach (1987) are presented for central stars of planetary nebulae. The models are computed along evolutionary tracks evolving with different stellar mass from the Asymptotic Giant Branch. We show that the calculated terminal wind velocities are in agreement with the observations and allow in principle an independent determination of stellar masses and radii. The computed mass-loss rates are in qualitative agreement with the occurrence of spectroscopic stellar wind features as a function of stellar effective temperature and gravity.

  15. An Improved Computing Method for 3D Mechanical Connectivity Rates Based on a Polyhedral Simulation Model of Discrete Fracture Network in Rock Masses

    NASA Astrophysics Data System (ADS)

    Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye

    2018-06-01

    Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectivity and validity of the improved method were verified indirectly.

  16. [Study on the dynamic model with supercritical CO2 fluid extracting the lipophilic components in Panax notoginseng].

    PubMed

    Duan, Xian-Chun; Wang, Yong-Zhong; Zhang, Jun-Ru; Luo, Huan; Zhang, Heng; Xia, Lun-Zhu

    2011-08-01

    The aim was to establish a dynamics model for extracting the lipophilic components of Panax notoginseng with supercritical carbon dioxide (CO2). Based on the theory of counter-flow mass transfer and the molecular mass transfer between the material and the supercritical CO2 fluid, expressed through a differential mass-conservation equation, a dynamics model was established and computed so that its predictions could be compared with the experimental process. A dynamics model was thus established for supercritical CO2 extraction of the lipophilic components of Panax notoginseng, and the computed results of this model were basically consistent with the experimental process. The supercritical fluid extraction dynamics model established in this research can describe the mass transfer mechanism by which the lipophilic components of Panax notoginseng dissolve during the extraction process and agrees with the actual extraction process. This provides some guidance for the industrial scale-up of supercritical CO2 fluid extraction.

  17. A fast mass spring model solver for high-resolution elastic objects

    NASA Astrophysics Data System (ADS)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages through the mean value coordinates method to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which can make the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has great potential for applications in computer animation.
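
    The global solve that replaces the Cholesky factorisation can be sketched with a plain conjugate gradient iteration for the symmetric positive definite system of the mass spring global step; the paper's GPU variant parallelises the matrix-vector products. The code below is a generic sketch, not the authors' implementation.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
        """Solve Ax = b for symmetric positive definite A."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # toy SPD system standing in for the (mass + h^2 * stiffness) global matrix
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))   # close to np.linalg.solve(A, b)
    ```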

  18. Two-dimensional CFD modeling of the heat and mass transfer process during sewage sludge drying in a solar dryer

    NASA Astrophysics Data System (ADS)

    Krawczyk, Piotr; Badyda, Krzysztof

    2011-12-01

    The paper presents key assumptions of the mathematical model which describes heat and mass transfer phenomena in a solar sewage drying process, as well as techniques used for solving this model with the Fluent computational fluid dynamics (CFD) software. Special attention was paid to implementation of boundary conditions on the sludge surface, which is a physical boundary between the gaseous phase - air, and solid phase - dried matter. Those conditions allow to model heat and mass transfer between the media during first and second drying stages. Selection of the computational geometry is also discussed - it is a fragment of the entire drying facility. Selected modelling results are presented in the final part of the paper.

  19. Beyond the standard two-film theory: Computational fluid dynamics simulations for carbon dioxide capture in a wetted wall column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.

  20. Beyond the standard two-film theory: Computational fluid dynamics simulations for carbon dioxide capture in a wetted wall column

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Canhai; ...

    2018-03-27

    The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.

  1. VizieR Online Data Catalog: Stellar yields and the initial mass function (Molla+, 2015)

    NASA Astrophysics Data System (ADS)

    Molla, M.; Cavichia, O.; Gavilan, M.; Gibson, B. K.

    2017-10-01

    These tables give the theoretical chemical evolution models applied to the Milky Way Galaxy (MWG) in the cited paper: essentially, tables 2 and 4 of the stellar yields used, and the results of table 6 for the 144 models computed in that work. Tables 2 and 4 give the stellar yields q_i(m) and remnant mass for low- and intermediate-mass stars and for massive stars, respectively, in a similar format for all authors. Table 6 gives the value of χ² for the 144 models computed for the MWG using those stellar yields and different Initial Mass Functions (see paper). Moreover, we give the table with results of the present-time state of the Galactic disk for these 144 models. (12 data files).

  2. Masses and Regge trajectories of triply heavy Ω_{ccc} and Ω_{bbb} baryons

    NASA Astrophysics Data System (ADS)

    Shah, Zalak; Rai, Ajay Kumar

    2017-10-01

    The excited state masses of triply charm and triply bottom Ω baryons are exhibited in the present study. The masses are computed for the 1S-5S, 1P-5P, 1D-4D and 1F-2F states in the Hypercentral Constituent Quark Model (hCQM) with the hyper Coulomb plus linear potential. The triply charm/bottom baryon masses are experimentally unknown, so the Regge trajectories are plotted using the computed masses to assign the quantum numbers of these unknown states.
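
    The Regge-trajectory step can be sketched as a linear fit of the radial quantum number against the squared masses of successive excitations, against which a candidate state's quantum numbers are assigned. The masses below are made-up placeholders, not the hCQM results.

    ```python
    import numpy as np

    n = np.array([1, 2, 3, 4, 5])                  # radial quantum number (nS states)
    M = np.array([4.81, 5.30, 5.72, 6.10, 6.45])   # illustrative baryon masses [GeV]

    # linear Regge-like relation n = a * M^2 + b fitted to the computed spectrum
    a, b = np.polyfit(M**2, n, 1)
    print(f"n ~= {a:.3f} * M^2 + {b:.3f}")
    ```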

  3. Modeling hazardous mass flows Geoflows09: Mathematical and computational aspects of modeling hazardous geophysical mass flows; Seattle, Washington, 9–11 March 2009

    USGS Publications Warehouse

    Iverson, Richard M.; LeVeque, Randall J.

    2009-01-01

    A recent workshop at the University of Washington focused on mathematical and computational aspects of modeling the dynamics of dense, gravity-driven mass movements such as rock avalanches and debris flows. About 30 participants came from seven countries and brought diverse backgrounds in geophysics; geology; physics; applied and computational mathematics; and civil, mechanical, and geotechnical engineering. The workshop was cosponsored by the U.S. Geological Survey Volcano Hazards Program, by the U.S. National Science Foundation through a Vertical Integration of Research and Education (VIGRE) in the Mathematical Sciences grant to the University of Washington, and by the Pacific Institute for the Mathematical Sciences. It began with a day of lectures open to the academic community at large and concluded with 2 days of focused discussions and collaborative work among the participants.

  4. The numerical approach adopted in the TOBA computer code for mass and heat transfer dynamic analysis of metal hydride hydrogen storage beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Osery, I.A.

    1983-12-01

    Modelling studies of metal hydride hydrogen storage beds are part of an extensive R&D program conducted in Egypt on hydrogen energy. In this context two computer programs, namely RET and RET1, have been developed. In the RET computer program, a cylindrical conduction bed model is considered and an approximate analytical solution is used for the associated mass and heat transfer problem. This problem is solved numerically in the RET1 computer program, allowing more flexibility in operating conditions but still limited to a cylindrical configuration with only two alternatives for heat exchange: either fluid passing through tubes imbedded in the solid alloy matrix, or solid rods surrounded by annular fluid tubes. The present computer code TOBA is more flexible and realistic. It performs the mass and heat transfer dynamic analysis of metal hydride storage beds using a variety of geometrical and operating alternatives.

  5. Advanced subgrid-scale modeling for convection-dominated species transport at fluid interfaces with application to mass transfer from rising bubbles

    NASA Astrophysics Data System (ADS)

    Weiner, Andre; Bothe, Dieter

    2017-10-01

    This paper presents a novel subgrid scale (SGS) model for simulating convection-dominated species transport at deformable fluid interfaces. One possible application is the Direct Numerical Simulation (DNS) of mass transfer from rising bubbles. The transport of a dissolving gas along the bubble-liquid interface is determined by two transport phenomena: convection in the streamwise direction and diffusion in the interface-normal direction. The convective transport for technical bubble sizes is several orders of magnitude higher, leading to a thin concentration boundary layer around the bubble. A true DNS, fully resolving hydrodynamic and mass transfer length scales, results in infeasible computational costs. Our approach is therefore a DNS of the flow field combined with an SGS model to compute the mass transfer between bubble and liquid. An appropriate model-function is used to compute the numerical fluxes on all cell faces of an interface cell. This allows the mass transfer to be predicted correctly even if the concentration boundary layer is fully contained in a single cell layer around the interface. We show that the SGS-model reduces the resolution requirements at the interface by a factor of ten and more. The integral flux correction is also applicable to other thin boundary layer problems. Two flow regimes are investigated to validate the model. A semi-analytical solution for creeping flow is used to assess local and global mass transfer quantities. For higher Reynolds numbers ranging from Re = 100 to Re = 460 and Péclet numbers between Pe = 10⁴ and Pe = 4·10⁶ we compare the global Sherwood number against correlations from literature. In terms of accuracy, the predicted mass transfer never deviates more than 4% from the reference values.

  6. Calculating distributed glacier mass balance for the Swiss Alps from RCM output: Development and testing of downscaling and validation methods

    NASA Astrophysics Data System (ADS)

    Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.

    2009-04-01

    Climate model output has been applied in several studies on glacier mass balance calculation. Hereby, computation of mass balance has mostly been performed at the native resolution of the climate model output or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. On the one hand, summer melt measured at stakes on several glaciers is well reproduced by the model, on the other hand, observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated to regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of pursuing studies.

  7. Glacier modeling in support of field observations of mass balance at South Cascade Glacier, Washington, USA

    USGS Publications Warehouse

    Josberger, Edward G.; Bidlake, William R.

    2010-01-01

    The long-term USGS measurement and reporting of mass balance at South Cascade Glacier was assisted in balance years 2006 and 2007 by a new mass balance model. The model incorporates a temperature-index melt computation and accumulation is modeled from glacier air temperature and gaged precipitation at a remote site. Mass balance modeling was used with glaciological measurements to estimate dates and magnitudes of critical mass balance phenomena. In support of the modeling, a detailed analysis was made of the "glacier cooling effect" that reduces summer air temperature near the ice surface as compared to that predicted on the basis of a spatially uniform temperature lapse rate. The analysis was based on several years of data from measurements of near-surface air temperature on the glacier. The 2006 and 2007 winter balances of South Cascade Glacier, computed with this new, model-augmented methodology, were 2.61 and 3.41 mWE, respectively. The 2006 and 2007 summer balances were -4.20 and -3.63 mWE, respectively, and the 2006 and 2007 net balances were -1.59 and -0.22 mWE. PDF version of a presentation on the mass balance of South Cascade Glacier in Washington state. Presented at the American Geophysical Union Fall Meeting 2010.
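
    A minimal sketch of the temperature-index melt step referred to above: daily melt is proportional to the positive part of the near-surface air temperature through a degree-day factor. The factor and temperatures are illustrative, not the South Cascade Glacier values.

    ```python
    # Degree-day (temperature-index) melt model.
    DDF = 0.006                               # degree-day factor [m w.e. per degC per day]
    T_daily = [2.3, 4.1, -0.5, 6.0, 3.2]      # daily mean air temperature on the glacier [degC]

    melt = sum(DDF * max(T, 0.0) for T in T_daily)
    print(f"cumulative melt over {len(T_daily)} days: {melt:.3f} m w.e.")
    ```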

  9. An overview of recent applications of computational modelling in neonatology

    PubMed Central

    Wrobel, Luiz C.; Ginalski, Maciej K.; Nowak, Andrzej J.; Ingham, Derek B.; Fic, Anna M.

    2010-01-01

    This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass-transfer mechanisms taking place in medical devices, such as incubators, radiant warmers and oxygen hoods. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and improving the design of medical devices. PMID:20439275

  10. Light curves for bump Cepheids computed with a dynamically zoned pulsation code

    NASA Technical Reports Server (NTRS)

    Adams, T. F.; Castor, J. I.; Davis, C. G.

    1980-01-01

    The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.

  11. Computing Models of M-type Host Stars and their Panchromatic Spectral Output

    NASA Astrophysics Data System (ADS)

    Linsky, Jeffrey; Tilipman, Dennis; France, Kevin

    2018-06-01

    We have begun a program of computing state-of-the-art model atmospheres from the photospheres to the coronae of M stars that are the host stars of known exoplanets. For each model we are computing the emergent radiation at all wavelengths that are critical for assessing photochemistry and mass-loss from exoplanet atmospheres. In particular, we are computing the stellar extreme ultraviolet radiation that drives hydrodynamic mass loss from exoplanet atmospheres and is essential for determining whether an exoplanet is habitable. The model atmospheres are computed with the SSRPM radiative transfer/statistical equilibrium code developed by Dr. Juan Fontenla. The code solves for the non-LTE statistical equilibrium populations of 18,538 levels of 52 atomic and ion species and computes the radiation from all species (435,986 spectral lines) and about 20,000,000 spectral lines of 20 diatomic species. The first model computed in this program was for the modestly active M1.5 V star GJ 832 by Fontenla et al. (ApJ 830, 152 (2016)). We will report on a preliminary model for the more active M5 V star GJ 876 and compare this model and its emergent spectrum with GJ 832. In the future, we will compute and intercompare semi-empirical models and spectra for all of the stars observed with the HST MUSCLES Treasury Survey, the Mega-MUSCLES Treasury Survey, and additional stars including Proxima Cen and Trappist-1. This multiyear theory program is supported by a grant from the Space Telescope Science Institute.

  12. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  13. Path Integral Computation of Quantum Free Energy Differences Due to Alchemical Transformations Involving Mass and Potential.

    PubMed

    Pérez, Alejandro; von Lilienfeld, O Anatole

    2011-08-09

    Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.
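
    The thermodynamic integration step can be sketched as a quadrature over the coupling parameter, ΔF = ∫₀¹ ⟨∂H/∂λ⟩_λ dλ, applied to the ensemble averages produced by the path integral simulations. The averages below are made-up numbers standing in for estimator output.

    ```python
    import numpy as np

    lam = np.linspace(0.0, 1.0, 11)                    # alchemical coupling parameter grid
    dH_dlam = np.array([5.2, 4.9, 4.5, 4.0, 3.6, 3.1,  # <dH/dlambda> at each lambda [kJ/mol]
                        2.8, 2.4, 2.1, 1.9, 1.8])

    delta_F = np.trapz(dH_dlam, lam)                   # trapezoidal quadrature over lambda
    print(f"Delta F = {delta_F:.2f} kJ/mol")
    ```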

  14. Thermal Ablation Modeling for Silicate Materials

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq

    2016-01-01

    A thermal ablation model for silicates is proposed. The model includes the mass losses through the balance between evaporation and condensation, and through the moving molten layer driven by surface shear force and pressure gradient. This model can be applied in ablation simulations of meteoroids or glassy Thermal Protection Systems for spacecraft. Time-dependent axi-symmetric computations are performed by coupling the fluid dynamics code, Data-Parallel Line Relaxation program, with the material response code, Two-dimensional Implicit Thermal Ablation simulation program, to predict the mass loss rates and shape change. For model validation, the surface recession of a fused amorphous quartz rod is computed, and the recession predictions reasonably agree with available data. The present parametric studies for two groups of meteoroid earth entry conditions indicate that the mass loss through the moving molten layer is negligibly small for heat-flux conditions at around 1 MW/cm².

  15. Turbulent reacting flow computations including turbulence-chemistry interactions

    NASA Technical Reports Server (NTRS)

    Narayan, J. R.; Girimaji, S. S.

    1992-01-01

    A two-equation (k-epsilon) turbulence model has been extended to be applicable for compressible reacting flows. A compressibility correction model based on modeling the dilatational terms in the Reynolds stress equations has been used. A turbulence-chemistry interaction model is outlined. In this model, the effects of temperature and species mass concentrations fluctuations on the species mass production rates are decoupled. The effect of temperature fluctuations is modeled via a moment model, and the effect of concentration fluctuations is included using an assumed beta-pdf model. Preliminary results obtained using this model are presented. A two-dimensional reacting mixing layer has been used as a test case. Computations are carried out using the Navier-Stokes solver SPARK using a finite rate chemistry model for hydrogen-air combustion.

  16. A Computer Model for Analyzing Volatile Removal Assembly

    NASA Technical Reports Server (NTRS)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which is dissolved in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and composition and flow rate of the influent.

  17. ISSM-SESAW v1.0: mesh-based computation of gravitationally consistent sea-level and geodetic signatures caused by cryosphere and climate driven mass change

    NASA Astrophysics Data System (ADS)

    Adhikari, Surendra; Ivins, Erik R.; Larour, Eric

    2016-03-01

    A classical Green's function approach for computing gravitationally consistent sea-level variations associated with mass redistribution on the earth's surface employed in contemporary sea-level models naturally suits the spectral methods for numerical evaluation. The capability of these methods to resolve high wave number features such as small glaciers is limited by the need for large numbers of pixels and high-degree (associated Legendre) series truncation. Incorporating a spectral model into (components of) earth system models that generally operate on a mesh system also requires repetitive forward and inverse transforms. In order to overcome these limitations, we present a method that functions efficiently on an unstructured mesh, thus capturing the physics operating at kilometer scale yet capable of simulating geophysical observables that are inherently of global scale with minimal computational cost. The goal of the current version of this model is to provide high-resolution solid-earth, gravitational, sea-level and rotational responses for earth system models operating in the domain of the earth's outer fluid envelope on timescales less than about 1 century when viscous effects can largely be ignored over most of the globe. The model has numerous important geophysical applications. For example, we present time-varying computations of global geodetic and sea-level signatures associated with recent ice-sheet changes that are derived from space gravimetry observations. We also demonstrate the capability of our model to simultaneously resolve kilometer-scale sources of the earth's time-varying surface mass transport, derived from high-resolution modeling of polar ice sheets, and predict the corresponding local and global geodetic signatures.

  18. Effective lepton flavor violating H ℓiℓj vertex from right-handed neutrinos within the mass insertion approximation

    NASA Astrophysics Data System (ADS)

    Arganda, E.; Herrero, M. J.; Marcano, X.; Morales, R.; Szynkman, A.

    2017-05-01

    In this work we present a new computation of the lepton flavor violating Higgs boson decays that are generated radiatively at one loop from heavy right-handed neutrinos. We work within the context of the inverse seesaw model with three νR and three extra singlets X, but the results could be generalized to other low scale seesaw models. The novelty of our computation is that it uses a completely different method by means of the mass insertion approximation, which works with the electroweak interaction states instead of the usual 9 physical neutrino mass eigenstates of the inverse seesaw model. This method also allows us to write the analytical results explicitly in terms of the most relevant model parameters, which are the neutrino Yukawa coupling matrix Yν and the right-handed mass matrix MR, which is very convenient for a phenomenological analysis. This Yν matrix, being generically nondiagonal in flavor space, is the only one responsible for the induced charged lepton flavor violating processes of our interest. We perform the calculation of the decay amplitude up to order O(Yν² + Yν⁴). We also study numerically the goodness of the mass insertion approximation results. In the last part we present the computation of the relevant one-loop effective vertex Hℓiℓj for the lepton flavor violating Higgs decay which is derived from a large MR mass expansion of the form factors. We believe that our simple formula found for this effective vertex can be of interest for other researchers who wish to estimate the H → ℓiℓ̄j rates in a fast way in terms of their own preferred input values for the relevant model parameters Yν and MR.

  19. Analytical and computational approaches to define the Aspergillus niger secretome.

    PubMed

    Tsang, Adrian; Butler, Gregory; Powlowski, Justin; Panisko, Ellen A; Baker, Scott E

    2009-03-01

    We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.

  20. Reducing software mass through behavior control. [of planetary roving robots

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1992-01-01

    Attention is given to the tradeoff between communication and computation as regards a planetary rover (both these subsystems are very power-intensive, and both can be the major driver of the rover's power subsystem, and therefore the minimum mass and size of the rover). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques, while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.

  1. Analytical and computational approaches to define the Aspergillus niger secretome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsang, Adrian; Butler, Gregory D.; Powlowski, Justin

    2009-03-01

    We used computational and mass spectrometric approaches to characterize the Aspergillus niger secretome. The 11,200 gene models predicted in the genome of A. niger strain ATCC 1015 were the data source for the analysis. Depending on the computational methods used, 691 to 881 proteins were predicted to be secreted proteins. We cultured A. niger in six different media and analyzed the extracellular proteins produced using mass spectrometry. A total of 222 proteins were identified, with 39 proteins expressed under all six conditions and 74 proteins expressed under only one condition. The secreted proteins identified by mass spectrometry were used to guide the correction of about 20 gene models. Additional analysis focused on extracellular enzymes of interest for biomass processing. Of the 63 glycoside hydrolases predicted to be capable of hydrolyzing cellulose, hemicellulose or pectin, 94% of the exo-acting enzymes and only 18% of the endo-acting enzymes were experimentally detected.

  2. Calculating Mass Diffusion in High-Pressure Binary Fluids

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Harstad, Kenneth

    2004-01-01

    A comprehensive mathematical model of mass diffusion has been developed for binary fluids at high pressures, including critical and supercritical pressures. Heretofore, diverse expressions, valid for limited parameter ranges, have been used to correlate high-pressure binary mass-diffusion-coefficient data. This model will likely be especially useful in the computational simulation and analysis of combustion phenomena in diesel engines, gas turbines, and liquid rocket engines, wherein mass diffusion at high pressure plays a major role.

  3. Mass Conservation and Inference of Metabolic Networks from High-Throughput Mass Spectrometry Data

    PubMed Central

    Bandaru, Pradeep; Bansal, Mukesh

    2011-01-01

    We present a step towards the metabolome-wide computational inference of cellular metabolic reaction networks from metabolic profiling data, such as mass spectrometry. The reconstruction is based on identification of irreducible statistical interactions among the metabolite activities using the ARACNE reverse-engineering algorithm and on constraining possible metabolic transformations to satisfy the conservation of mass. The resulting algorithms are validated on synthetic data from an abridged computational model of Escherichia coli metabolism. Precision rates upwards of 50% are routinely observed for identification of full metabolic reactions, and recalls upwards of 20% are also seen. PMID:21314454

  4. A mass-conserving multiphase lattice Boltzmann model for simulation of multiphase flows

    NASA Astrophysics Data System (ADS)

    Niu, Xiao-Dong; Li, You; Ma, Yi-Ren; Chen, Mu-Feng; Li, Xiang; Li, Qiao-Zhong

    2018-01-01

    In this study, a mass-conserving multiphase lattice Boltzmann (LB) model is proposed for simulating multiphase flows. The proposed model improves the model of Shao et al. ["Free-energy-based lattice Boltzmann model for simulation of multiphase flows with density contrast," Phys. Rev. E 89, 033309 (2014)] by introducing a mass correction term in the lattice Boltzmann model for the interface. The model of Shao et al. [the improved Zheng-Shu-Chew (Z-S-C) model] correctly considers the effect of the local density variation in the momentum equation and has an obvious improvement over the Zheng-Shu-Chew (Z-S-C) model ["A lattice Boltzmann model for multiphase flows with large density ratio," J. Comput. Phys. 218(1), 353-371 (2006)] in terms of solution accuracy. However, due to the physical diffusion and numerical dissipation, the total mass of each fluid phase cannot be conserved correctly. To solve this problem, a mass correction term, which is similar to the one proposed by Wang et al. ["A mass-conserved diffuse interface method and its application for incompressible multiphase flows with large density ratio," J. Comput. Phys. 290, 336-351 (2015)], is introduced into the lattice Boltzmann equation for the interface to compensate for the mass losses or offset the mass increase. Meanwhile, to implement the wetting boundary condition and the contact angle, a geometric formulation and a local force are incorporated into the present mass-conserving LB model. The proposed model is validated by verifying the Laplace law, simulating both one and two aligned droplets splashing onto a liquid film, droplets standing on an ideal wall, droplets with different wettability splashing onto smooth wax, and bubbles rising under buoyancy. Numerical results show that the proposed model can correctly simulate multiphase flows. It was found that the mass is well conserved in all cases considered by the model developed in the present study. The developed model has been found to perform better than the improved Z-S-C model in this aspect.

  5. Computational approach to seasonal changes of living leaves.

    PubMed

    Tang, Ying; Wu, Dong-Yan; Fan, Jing

    2013-01-01

    This paper proposes a computational approach to seasonal changes of living leaves by combining the geometric deformations and textural color changes. The geometric model of a leaf is generated by triangulating the scanned image of a leaf using an optimized mesh. The triangular mesh of the leaf is deformed by the improved mass-spring model, while the deformation is controlled by setting different mass values for the vertices on the leaf model. In order to adaptively control the deformation of different regions in the leaf, the mass values of vertices are set to be in proportion to the pixels' intensities of the corresponding user-specified grayscale mask map. The geometric deformations as well as the textural color changes of a leaf are used to simulate the seasonal changing process of leaves based on Markov chain model with different environmental parameters including temperature, humidness, and time. Experimental results show that the method successfully simulates the seasonal changes of leaves.
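
    A minimal sketch of the mass-assignment step described above: vertex masses are set in proportion to the pixel intensities of a user-specified grayscale mask, so that heavier vertices deform less under the mass-spring model. The function names and the mass range are illustrative assumptions.

      import numpy as np

      def assign_vertex_masses(vertices_uv, mask, m_min=0.1, m_max=1.0):
          """vertices_uv: (N, 2) texture coordinates in [0, 1]; mask: 2-D grayscale array."""
          h, w = mask.shape
          px = np.clip((vertices_uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
          py = np.clip((vertices_uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
          intensity = mask[py, px] / mask.max()          # normalise intensities to [0, 1]
          return m_min + (m_max - m_min) * intensity     # brighter mask -> heavier vertex

      # Example with a synthetic mask that is bright at the leaf base and dark at the tip.
      mask = np.tile(np.linspace(1.0, 0.1, 256), (256, 1))
      uv = np.random.default_rng(1).random((500, 2))
      masses = assign_vertex_masses(uv, mask)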

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jing, E-mail: jing.zhang2@duke.edu; Ghate, Sujata V.; Yoon, Sora C.

    Purpose: Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models. Those models relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms to predict likelihood of missing each mass by the trainee. This computer vision-based approach to trainee modeling will allow for automatically searching large databases of mammograms in order to identify challenging cases for each trainee. Methods: The authors’ algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of trainees missing the abnormality based on the extracted features. The models are developed individually for each trainee using his/her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. Results: The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5 (p < 0.0001). For the 7 residents only, the AUC performance of the models was 0.590 (95% CI, 0.537-0.642) and was also significantly higher than 0.5 (p = 0.0009). Therefore, generally the authors’ models were able to predict which masses were detected and which were missed better than chance. Conclusions: The authors proposed an algorithm that was able to predict which masses will be detected and which will be missed by each individual trainee. This confirms existence of error-making patterns in the detection of masses among radiology trainees. Furthermore, the proposed methodology will allow for the optimized selection of difficult cases for the trainees in an automatic and efficient manner.

  7. A two-layer multiple-time-scale turbulence model and grid independence study

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1989-01-01

    A two-layer multiple-time-scale turbulence model is presented. The near-wall model is based on the classical Kolmogorov-Prandtl turbulence hypothesis and the semi-empirical logarithmic law of the wall. In the two-layer model presented, the computational domain of the conservation of mass equation and the mean momentum equation extends to the wall, where the no-slip boundary condition is prescribed; the near-wall boundary of the turbulence equations is located in the fully turbulent region, yet very close to the wall, where the standard wall function method is applied. Thus, the conservation of mass constraint can be satisfied more rigorously in the two-layer model than in the standard wall function method. In most two-layer turbulence models, the number of grid points required inside the near-wall layer raises the issue of computational efficiency. The present finite element computations showed that grid-independent solutions were obtained with as few as two grid points, i.e., one quadratic element, inside the near-wall layer. A comparison of the computational results obtained with the two-layer model and those obtained with the wall function method is also presented.

  8. A comparison of homogeneous equilibrium and relaxation model for CO2 expansion inside the two-phase ejector

    NASA Astrophysics Data System (ADS)

    Palacz, M.; Haida, M.; Smolka, J.; Nowak, A. J.; Hafner, A.

    2016-09-01

    In this study, the accuracies of the homogeneous equilibrium model (HEM) and the homogeneous relaxation model (HRM) are compared. Both models were applied to simulate CO2 expansion inside two-phase ejectors and were implemented in the robust and efficient computational tool ejectorPL, which provides a fully automated and repeatable computational process for various ejector shapes and operating conditions. The simulated motive nozzle mass flow rates were compared with experimentally measured mass flow rates for both the HEM and the HRM. The results showed unsatisfactory fidelity of the HEM for operating regimes far from the carbon dioxide critical point, whereas the HRM accuracy under such conditions was slightly higher. The approach presented in this paper shows the limits of applicability of both two-phase models for the expansion phenomena inside ejectors.

  9. Thermal Aspects of Lithium Ion Cells

    NASA Technical Reports Server (NTRS)

    Frank, H.; Shakkottai, P.; Bugga, R.; Smart, M.; Huang, C. K.; Timmerman, P.; Surampudi, S.

    2000-01-01

    This viewgraph presentation outlines the development of a thermal model of Li-ion cells in terms of heat generation, thermal mass, and thermal resistance, intended for incorporation into a battery model. The approach was to estimate heat generation with a semi-theoretical model and then check its accuracy against efficiency measurements. Further objectives were to compute the thermal mass from component weights and specific heats, and the thermal resistance from component dimensions and conductivities. Two lithium cells are compared: a cylindrical lithium battery and a prismatic lithium cell. The presentation reviews the methodology for estimating the heat generation rate and gives graphs of the open-circuit curves of the cells and the heat evolution during discharge.
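
    The "thermal mass from component weights and specific heats" step reduces to a lumped heat capacity, the sum of m_i*c_i over the cell components. The short calculation below illustrates this; the component masses and specific heats are assumed placeholders, not the data from the presentation.

      # Illustrative lumped thermal mass of a cell (placeholder component data).
      components = {
          # name: (mass in kg, specific heat in J/(kg*K))
          "cathode":     (0.040, 900.0),
          "anode":       (0.030, 1000.0),
          "electrolyte": (0.020, 2000.0),
          "separator":   (0.005, 1900.0),
          "can":         (0.025, 500.0),
      }

      thermal_mass = sum(m * c for m, c in components.values())   # J/K
      print(f"Lumped thermal mass: {thermal_mass:.1f} J/K")

      # With an assumed heat generation rate Q, the adiabatic heating rate follows:
      Q = 2.0  # W
      print(f"Adiabatic dT/dt: {3600 * Q / thermal_mass:.2f} K/hour")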

  10. Pressure Loss Predictions of the Reactor Simulator Subsystem at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.

    2016-01-01

    Testing of the Fission Power System (FPS) Technology Demonstration Unit (TDU) is being conducted at NASA Glenn Research Center. The TDU consists of three subsystems: the reactor simulator (RxSim), the Stirling Power Conversion Unit (PCU), and the heat exchanger manifold (HXM). An annular linear induction pump (ALIP) is used to drive the working fluid. A preliminary version of the TDU system (which excludes the PCU for now) is referred to as the "RxSim subsystem" and was used to conduct flow tests in Vacuum Facility 6 (VF 6). In parallel, a computational model of the RxSim subsystem was created based on the computer-aided-design (CAD) model and was used to predict loop pressure losses over a range of mass flows. This was done to assess the ability of the pump to meet the design intent mass flow demand. Measured data indicates that the pump can produce 2.333 kg/sec of flow, which is enough to supply the RxSim subsystem with a nominal flow of 1.75 kg/sec. Computational predictions indicated that the pump could provide 2.157 kg/sec (using the Spalart-Allmaras (S-A) turbulence model) and 2.223 kg/sec (using the k-ε turbulence model). The computational error of the predictions for the available mass flow is -0.176 kg/sec (with the S-A turbulence model) and -0.110 kg/sec (with the k-ε turbulence model) when compared to measured data.

  11. New Estimates of Hydrological and Oceanic Excitations of Variations of Earth's Rotation, Geocenter and Gravitational Field

    NASA Technical Reports Server (NTRS)

    Chao, Benjamin F.; Chen, J. L.; Johnson, T.; Au, A. Y.

    1998-01-01

    Hydrological mass transport in the geophysical fluids of the atmosphere-hydrosphere-solid Earth surface system can excite Earth's rotational variations in both length-of-day and polar motion. These effects can be computed in terms of the hydrological angular momentum by proper integration of global meteorological data. We do so using the 40-year NCEP data and the 18-year NASA GEOS-1 data, where the precipitation and evapotranspiration budgets are computed via the water mass balance of the atmosphere based on Oki et al.'s (1995) algorithm. This hydrological mass redistribution will also cause geocenter motion and changes in Earth's gravitational field, which are similarly computed using the same data sets. Corresponding geodynamic effects due to the oceanic mass transports (i.e. oceanic angular momentum and ocean-induced geocenter/gravity changes) have also been computed in a similar manner. We compare two independent sets of results from: (1) non-steric ocean surface topography observations based on Topex/Poseidon, and (2) the model output of the mass field by the Parallel Ocean Climate Model. Finally, the hydrological and the oceanic time series are combined in an effort to better explain the observed non-atmospheric effects. The latter are obtained by subtracting the atmospheric angular momentum from Earth rotation observations, and the atmosphere-induced geocenter/gravity effects from corresponding geodetic observations, both using the above-mentioned atmospheric data sets.

  12. On the mass and thermodynamics of the Higgs boson

    NASA Astrophysics Data System (ADS)

    Fokas, A. S.; Vayenas, C. G.; Grigoriou, D. P.

    2018-02-01

    In two recent works we have shown that the masses of the W± and Z0 bosons can be computed from first principles by modeling these bosons as bound relativistic gravitationally confined rotational states consisting of e±-νe pairs in the case of W± bosons and of an e+-νe-e- triplet in the case of the Z0 boson. Here, we present similar calculations for the Higgs boson, which we model as a bound rotational state consisting of a positron, an electron, a neutrino and an antineutrino. The model contains no adjustable parameters, and the computed boson mass of 125.7 GeV/c² is in very good agreement with the experimental value of 125.1 ± 1 GeV/c². The thermodynamics and potential connection of this particle with the Higgs field are also briefly addressed.

  13. A computer test of holographic flavour dynamics. Part II

    NASA Astrophysics Data System (ADS)

    Asano, Yuhma; Filev, Veselin G.; Kováčik, Samuel; O'Connor, Denjoe

    2018-03-01

    We study the second derivative of the free energy with respect to the fundamental mass (the mass susceptibility) for the Berkooz-Douglas model as a function of temperature and at zero mass. The model is believed to be holographically dual to a D0/D4 intersection. We perform a lattice simulation of the system at finite temperature and find excellent agreement with predictions from the gravity dual.

  14. ITFITS model for vibration-translation energy partitioning in atom-polyatomic molecule collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shobatake, K.; Rice, S.A.; Lee, Y.T.

    1973-09-01

    A model for vibration-translation energy partitioning in the collinear collision of an atom and an axially symmetric polyatomic molecule is proposed. The model is based on an extension of the ideas of Mahan and of Heidrich, Wilson, and Rapp. Comparison of energy transfers computed from classical trajectory calculations with those from the proposed model indicates good agreement when the mass of the free atom is small relative to the mass of the bound atom it strikes. The agreement is less satisfactory when that mass ratio becomes large. (auth)

  15. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-04-01

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  16. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  17. Irradiation-driven Mass Transfer Cycles in Compact Binaries

    NASA Astrophysics Data System (ADS)

    Büning, A.; Ritter, H.

    2005-08-01

    We elaborate on the analytical model of Ritter, Zhang, & Kolb (2000) which describes the basic physics of irradiation-driven mass transfer cycles in semi-detached compact binary systems. In particular, we take into account a contribution to the thermal relaxation of the donor star which is unrelated to irradiation and which was neglected in previous studies. We present results of simulations of the evolution of compact binaries undergoing mass transfer cycles, in particular also of systems with a nuclear evolved donor star. These computations have been carried out with a stellar evolution code which computes mass transfer implicitly and models irradiation of the donor star in a point source approximation, thereby allowing for much more realistic simulations than were hitherto possible. We find that low-mass X-ray binaries (LMXBs) and cataclysmic variables (CVs) with orbital periods ≲ 6 hr can undergo mass transfer cycles only for low angular momentum loss rates. CVs containing a giant donor or one near the terminal age main sequence are more stable than previously thought, but can possibly also undergo mass transfer cycles.

  18. Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features

    PubMed Central

    Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin

    2017-01-01

    Purpose: To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists, builds a machine learning classifier, and distinguishes between malignant and benign breast masses detected on digital mammograms. Methods: An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features from the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show the lesion segmentation, the computed feature values, and the classification score. Results: Areas under the ROC curve (AUC) were 0.786±0.026 and 0.758±0.027 when classifying mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806±0.025. Conclusion: This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a “visual aid” interface, CAD results may be much more easily explained to observers, increasing their confidence in CAD-generated classification results compared with conventional CAD approaches that involve many complicated and visually insensitive texture features. PMID:27911353
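
    A hedged sketch of the classification-and-fusion step described above: one logistic regression per view on five features, with the final score taken as the average of the two view scores. The data are synthetic placeholders, and the models are evaluated by resubstitution, unlike the study's protocol.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 301
      y = rng.integers(0, 2, n)                             # 0 = benign, 1 = malignant
      X_cc  = rng.normal(size=(n, 5)) + 0.5 * y[:, None]    # 5 features, view 1
      X_mlo = rng.normal(size=(n, 5)) + 0.5 * y[:, None]    # 5 features, view 2

      clf_cc  = LogisticRegression().fit(X_cc, y)
      clf_mlo = LogisticRegression().fit(X_mlo, y)

      score_cc  = clf_cc.predict_proba(X_cc)[:, 1]
      score_mlo = clf_mlo.predict_proba(X_mlo)[:, 1]
      fused = 0.5 * (score_cc + score_mlo)                  # fuse the two view scores

      for name, s in [("view 1", score_cc), ("view 2", score_mlo), ("fused", fused)]:
          print(name, round(roc_auc_score(y, s), 3))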

  19. Mass balance computation in SAGUARO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, B.L.; Eaton, R.R.

    1986-12-01

    This report describes the development of the mass balance subroutines used with the finite-element code, SAGUARO, which models fluid flow in partially saturated porous media. Derivation of the basic mass storage and mass flux equations is included. The results of the SAGUARO mass-balance subroutine, MASS, are shown to compare favorably with the linked results of FEMTRAN. Implementation of the MASS option in SAGUARO is described. Instructions for use of the MASS option are demonstrated with the three sample cases.

  20. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-07-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalized waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalized model for extreme-mass-ratio inspirals constructed on deformed black hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.
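
    The evidence-combination step described above can be sketched directly: given (log-)evidences for the null (general relativity) model and for the 2^N - 1 submodels with some deformation parameters free, the posterior odds for modified gravity are a prior-weighted sum of the submodel evidences divided by the null evidence. The numbers below are placeholders, and the equal prior weighting over submodels is an assumption.

      import itertools
      import numpy as np

      def modified_gravity_odds(log_Z, prior_mg=0.5):
          """log_Z maps a tuple of 'free' parameter indices to log-evidence; () is the null model."""
          submodels = [k for k in log_Z if k != ()]
          log_weights = np.array([log_Z[k] for k in submodels]) - np.log(len(submodels))
          log_Z_mg = np.logaddexp.reduce(log_weights)        # evidence of the modified-gravity class
          prior_odds = prior_mg / (1.0 - prior_mg)
          return prior_odds * np.exp(log_Z_mg - log_Z[()])

      # Placeholder example with N = 2 deformation parameters.
      N = 2
      log_Z = {(): -100.0}
      for r in range(1, N + 1):
          for combo in itertools.combinations(range(N), r):
              log_Z[combo] = -101.0 + 0.5 * len(combo)
      print("posterior odds (modified gravity : GR):", modified_gravity_odds(log_Z))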

  1. Hypersonic Combustor Model Inlet CFD Simulations and Experimental Comparisons

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; TokarcikPolsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    Numerous two- and three-dimensional computational simulations were performed for the inlet associated with the combustor model for the hypersonic propulsion experiment in the NASA Ames 16-Inch Shock Tunnel. The inlet was designed to produce a combustor-inlet flow that is nearly two-dimensional and of sufficient mass flow rate for large scale combustor testing. The three-dimensional simulations demonstrated that the inlet design met all the design objectives and that the inlet produced a very nearly two-dimensional combustor inflow profile. Numerous two-dimensional simulations were performed with various levels of approximations such as in the choice of chemical and physical models, as well as numerical approximations. Parametric studies were conducted to better understand and to characterize the inlet flow. Results from the two- and three-dimensional simulations were used to predict the mass flux entering the combustor and a mass flux correlation as a function of facility stagnation pressure was developed. Surface heat flux and pressure measurements were compared with the computed results and good agreement was found. The computational simulations helped determine the inlet flow characteristics in the high enthalpy environment, the important parameters that affect the combustor-inlet flow, and the sensitivity of the inlet flow to various modeling assumptions.

  2. Towards a framework for testing general relativity with extreme-mass-ratio-inspiral observations

    NASA Astrophysics Data System (ADS)

    Chua, A. J. K.; Hee, S.; Handley, W. J.; Higson, E.; Moore, C. J.; Gair, J. R.; Hobson, M. P.; Lasenby, A. N.

    2018-04-01

    Extreme-mass-ratio-inspiral observations from future space-based gravitational-wave detectors such as LISA will enable strong-field tests of general relativity with unprecedented precision, but at prohibitive computational cost if existing statistical techniques are used. In one such test that is currently employed for LIGO black-hole binary mergers, generic deviations from relativity are represented by N deformation parameters in a generalised waveform model; the Bayesian evidence for each of its 2N combinatorial submodels is then combined into a posterior odds ratio for modified gravity over relativity in a null-hypothesis test. We adapt and apply this test to a generalised model for extreme-mass-ratio inspirals constructed on deformed black-hole spacetimes, and focus our investigation on how computational efficiency can be increased through an evidence-free method of model selection. This method is akin to the algorithm known as product-space Markov chain Monte Carlo, but uses nested sampling and improved error estimates from a rethreading technique. We perform benchmarking and robustness checks for the method, and find order-of-magnitude computational gains over regular nested sampling in the case of synthetic data generated from the null model.

  3. Monte Carlo Computational Modeling of the Energy Dependence of Atomic Oxygen Undercutting of Protected Polymers

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Stueber, Thomas J.; Norris, Mary Jo

    1998-01-01

    A Monte Carlo computational model has been developed which simulates atomic oxygen attack of protected polymers at defect sites in the protective coatings. The parameters defining how atomic oxygen interacts with polymers and protective coatings as well as the scattering processes which occur have been optimized to replicate experimental results observed from protected polyimide Kapton on the Long Duration Exposure Facility (LDEF) mission. Computational prediction of atomic oxygen undercutting at defect sites in protective coatings for various arrival energies was investigated. The atomic oxygen undercutting energy dependence predictions enable one to predict mass loss that would occur in low Earth orbit, based on lower energy ground laboratory atomic oxygen beam systems. Results of computational model prediction of undercut cavity size as a function of energy and defect size will be presented to provide insight into expected in-space mass loss of protected polymers with protective coating defects based on lower energy ground laboratory testing.

  4. On the computational aspects of comminution in discrete element method

    NASA Astrophysics Data System (ADS)

    Chaudry, Mohsin Ali; Wriggers, Peter

    2018-04-01

    In this paper, computational aspects of crushing/comminution of granular materials are addressed. For crushing, a maximum-tensile-stress-based criterion is used. The crushing model in the discrete element method (DEM) is prone to problems of mass conservation and reduction in the critical time step. The first problem is addressed by using an iterative scheme which, depending on geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed, which tends to alleviate the locally unstable motion of particles and increases computational efficiency.

  5. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
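
    In its most reduced form, the monopole assumption behind the inversion relates pressure to the time derivative of the mass flow rate, p(r, t) ≈ q̇(t − r/c)/(2πr) for a compact source radiating into a half space, so the flow rate follows from integrating the range-corrected pressure and the erupted mass from a second integration. The snippet below illustrates only this reduced relation on a synthetic signal; the paper's actual method replaces it with full-waveform inversion using 3-D numerical Green's functions over topography.

      import numpy as np

      fs, r = 100.0, 4000.0                               # sample rate [Hz], source-receiver range [m]
      t = np.arange(0.0, 60.0, 1.0 / fs)
      # Synthetic "true" mass flow rate: a Gaussian pulse peaking at 1e4 kg/s.
      q_true = 1e4 * np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2)
      p = np.gradient(q_true, t) / (2.0 * np.pi * r)      # monopole pressure at range r (delay ignored)

      q_est = 2.0 * np.pi * r * np.cumsum(p) / fs         # invert: integrate range-corrected pressure
      mass_est = np.trapz(q_est, t)                       # integrate again for total erupted mass
      print(f"estimated mass: {mass_est:.3e} kg (true {np.trapz(q_true, t):.3e} kg)")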

  6. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the plates per stack. The nonlinear programming code COMPUTE was used to solve this model; a mixed penalty function method combined with Hooke and Jeeves pattern search was chosen for this specific optimization problem.
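
    As a rough illustration of the optimisation machinery named above, the sketch below wraps a quadratic exterior penalty around a basic Hooke and Jeeves pattern search. The objective and constraint are toy placeholders, not the fuel cell power plant model.

      import numpy as np

      def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5, max_iter=10_000):
          """Basic Hooke-Jeeves pattern search minimising f from starting point x0."""
          x = np.asarray(x0, dtype=float)
          fx = f(x)
          for _ in range(max_iter):
              xe, fe = x.copy(), fx            # exploratory moves along each coordinate
              for i in range(len(x)):
                  for d in (+step, -step):
                      trial = xe.copy()
                      trial[i] += d
                      ft = f(trial)
                      if ft < fe:
                          xe, fe = trial, ft
                          break
              if fe < fx:                      # pattern move through the improved point
                  cand = xe + (xe - x)
                  fc = f(cand)
                  x, fx = (cand, fc) if fc < fe else (xe, fe)
              else:
                  step *= shrink               # no improvement: shrink the step size
                  if step < tol:
                      break
          return x, fx

      # Toy problem: minimise (x-1)^2 + (y-2)^2 subject to x + y <= 2 via a quadratic penalty.
      def penalised(z, mu=100.0):
          x, y = z
          violation = max(0.0, x + y - 2.0)
          return (x - 1.0) ** 2 + (y - 2.0) ** 2 + mu * violation ** 2

      print(hooke_jeeves(penalised, [0.0, 0.0]))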

  7. Swinging Atwood's Machine

    NASA Astrophysics Data System (ADS)

    Tufillaro, Nicholas B.; Abbott, Tyler A.; Griffiths, David J.

    1984-10-01

    We examine the motion of an Atwood's Machine in which one of the masses is allowed to swing in a plane. Computer studies reveal a rich variety of trajectories. The orbits are classified (bounded, periodic, singular, and terminating), and formulas for the critical mass ratios are developed. Perturbative techniques yield good approximations to the computer-generated trajectories. The model constitutes a simple example of a nonlinear dynamical system with two degrees of freedom.
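
    For readers who want to reproduce the kinds of trajectories the computer studies explore, a compact sketch of the equations of motion (mass M hangs straight down, mass m swings in a plane; r is the swinging pendulum length, θ its angle) is integrated below. The parameters and initial conditions are arbitrary choices, not those of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      g = 9.81

      def swinging_atwood(t, y, M=3.0, m=1.0):
          """State y = [r, r', theta, theta'] for the swinging Atwood's machine."""
          r, rdot, th, w = y
          rddot = (m * r * w**2 - g * (M - m * np.cos(th))) / (M + m)
          wdot = (-2.0 * rdot * w - g * np.sin(th)) / r
          return [rdot, rddot, w, wdot]

      sol = solve_ivp(swinging_atwood, (0.0, 20.0), [1.0, 0.0, np.pi / 4, 0.0],
                      rtol=1e-9, atol=1e-9, dense_output=True)
      r, th = sol.y[0], sol.y[2]
      x, z = r * np.sin(th), -r * np.cos(th)   # planar trajectory of the swinging mass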

  8. DDA Computations of Porous Aggregates with Forsterite Crystals: Effects of Crystal Shape and Crystal Mass Fraction

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lindsay, Sean S.; Harker, David; Woodward, Charles; Kelley, Michael S.; Kolokolova, Ludmilla

    2015-01-01

    Porous aggregate grains are commonly found in cometary dust samples and are needed to model cometary IR spectral energy distributions (SEDs). Models for thermal emission from comets require two forms of silicates: amorphous and crystalline. The dominant crystal resonances observed in comet SEDs are from forsterite (Mg2SiO4). The crystalline mass fractions span a large range, 0.0 ≤ fcrystal ≤ 0.74. Radial transport models that predict the enrichment of the outer disk (>25 AU at 1E6 yr) by inner-disk materials (crystals) are challenged to yield the high end of the range of cometary crystal mass fractions. However, in current thermal models, forsterite crystals are not incorporated into larger aggregate grains but instead are considered only as discrete crystals. A complicating factor is that forsterite crystals with rectangular shapes better fit the observed spectral resonances in wavelength (11.0-11.15 microns, 16, 19, 23.5, 27, and 33 microns), feature asymmetry, and relative height (Lindsay et al. 2013) than spherically or elliptically shaped crystals. We present DDA-DDSCAT computations of IR absorptivities (Qabs) of 3 micron-radii porous aggregates with 0.13 ≤ fcrystal ≤ 0.35 and with polyhedral-shaped forsterite crystals. We can produce crystal resonances similar in appearance to the observed resonances of comet Hale-Bopp. Also, a lower mass fraction of crystals in aggregates can produce the same spectral contrast as a higher mass fraction of discrete crystals; the 11 micron and 23 micron crystalline resonances appear amplified when crystals are incorporated into aggregates otherwise composed of spherically shaped amorphous Fe-Mg olivines and pyroxenes. We show that the optical properties of a porous aggregate are not a linear combination of those of its monomers, so aggregates need to be computed. We discuss the consequences of lowering comet crystal mass fractions when modeling IR SEDs with crystal-bearing aggregates, and the implications for radial transport models of our protoplanetary disk.

  9. Finite Element Aircraft Simulation of Turbulence

    NASA Technical Reports Server (NTRS)

    McFarland, R. E.

    1997-01-01

    A turbulence model has been developed for realtime aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to conventional aircraft, V/STOL aircraft, and disc rotor model helicopter simulations. Vehicle angular activity in response to turbulence is computed from geometrical and temporal relationships rather than by using the conventional continuum approximations that assume uniform gust immersion and low frequency responses. By using techniques similar to those recently developed for blade-element rotor models, the angular-rate filters of conventional turbulence models are not required. The model produces rotational rates as well as air mass translational velocities in response to both stochastic and deterministic disturbances, where the discrete gusts and turbulence magnitudes may be correlated with significant terrain features or ship models. Assuming isotropy, a two-dimensional vertical turbulence field is created. A novel Gaussian interpolation technique is used to distribute vertical turbulence on the wing span or lateral rotor disc, and this distribution is used to compute roll responses. Air mass velocities are applied at significant centers of pressure in the computation of the aircraft's pitch and roll responses.

  10. Accretion flow dynamics during 1999 outburst of XTE J1859+226—modeling of broadband spectra and constraining the source mass

    NASA Astrophysics Data System (ADS)

    Nandi, Anuj; Mandal, S.; Sreehari, H.; Radhika, D.; Das, Santabrata; Chattopadhyay, I.; Iyer, N.; Agrawal, V. K.; Aktar, R.

    2018-05-01

    We examine the dynamical behavior of accretion flow around XTE J1859+226 during the 1999 outburst by analyzing the entire outburst data (~166 days) from the RXTE satellite. Towards this, we study the hysteresis behavior in the hardness-intensity diagram (HID) based on broadband (3-150 keV) spectral modeling, the spectral signature of jet ejection, and the evolution of quasi-periodic oscillation (QPO) frequencies using the two-component advective flow model around a black hole. We compute the flow parameters, namely the Keplerian accretion rate (ṁ_d), the sub-Keplerian accretion rate (ṁ_h), the shock location (r_s), and the black hole mass (M_bh), from the spectral modeling and study their evolution along the q-diagram. Subsequently, the kinetic jet power is computed as L_jet^obs ~ 3-6 × 10^37 erg/s during one of the observed radio flares, which indicates that the jet power corresponds to an 8-16% mass outflow rate from the disc. This estimate of the mass outflow rate is in close agreement with the change in total accretion rate (~14%) required for spectral modeling before and during the flare. Finally, we provide a mass estimate of the source XTE J1859+226 based on the spectral modeling that lies in the range of 5.2-7.9 M_⊙ with 90% confidence.

  11. Mathematical modeling and computer simulation of isoelectric focusing with electrochemically defined ampholytes

    NASA Technical Reports Server (NTRS)

    Palusinski, O. A.; Allgyer, T. T.; Mosher, R. A.; Bier, M.; Saville, D. A.

    1981-01-01

    A mathematical model of isoelectric focusing at the steady state has been developed for an M-component system of electrochemically defined ampholytes. The model is formulated from fundamental principles describing the components' chemical equilibria, mass transfer resulting from diffusion and electromigration, and electroneutrality. The model consists of ordinary differential equations coupled with a system of algebraic equations. The model is implemented on a digital computer using FORTRAN-based simulation software. Computer simulation data are presented for several two-component systems showing the effects of varying the isoelectric points and dissociation constants of the constituents.

  12. A MacCormack-TVD finite difference method to simulate the mass flow in mountainous terrain with variable computational domain

    NASA Astrophysics Data System (ADS)

    Ouyang, Chaojun; He, Siming; Xu, Qiang; Luo, Yu; Zhang, Wencheng

    2013-03-01

    A two-dimensional mountainous mass flow dynamic procedure solver (Massflow-2D) using the MacCormack-TVD finite difference scheme is proposed. The solver is implemented in Matlab on structured meshes with a variable computational domain. To verify the model, a variety of numerical test scenarios, namely the classical one-dimensional and two-dimensional dam break, the landslide in Hong Kong in 1993, and the Nora debris flow in the Italian Alps in 2000, are executed, and the model outputs are compared with published results. It is established that the model predictions agree well with both the analytical solutions and the field observations.
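
    The predictor-corrector idea underlying the scheme can be shown on the classical one-dimensional dam break for the shallow-water equations. The sketch below is a plain MacCormack step without the TVD limiter (and without the second dimension and variable domain of Massflow-2D), so mild oscillations near the shock front are expected; all parameters are illustrative.

      import numpy as np

      g = 9.81
      nx, L, t_end = 400, 1000.0, 30.0
      dx = L / nx
      x = (np.arange(nx) + 0.5) * dx

      h = np.where(x < L / 2, 10.0, 1.0)          # dam-break initial depths [m]
      hu = np.zeros(nx)                           # initial discharge h*u

      def flux(h, hu):
          u = hu / h
          return np.array([hu, hu * u + 0.5 * g * h * h])

      t = 0.0
      while t < t_end:
          dt = 0.4 * dx / (np.abs(hu / h) + np.sqrt(g * h)).max()   # CFL-limited time step
          lam = dt / dx
          U, F = np.array([h, hu]), flux(h, hu)
          Up = U - lam * (np.roll(F, -1, axis=1) - F)               # predictor (forward differences)
          Up[:, -1] = U[:, -1]                                      # crude fixed boundaries
          Fp = flux(Up[0], Up[1])
          Un = 0.5 * (U + Up - lam * (Fp - np.roll(Fp, 1, axis=1))) # corrector (backward differences)
          Un[:, 0] = U[:, 0]
          h, hu = Un
          t += dt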

  13. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
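
    The emulator idea can be sketched compactly: fit a Gaussian-process response surface to an ensemble of model runs, then let a simple Metropolis sampler query the emulator instead of the expensive model. The "simulator" below is a cheap stand-in so the example runs end to end; this is an illustration of the general approach, not the density functional theory calibration.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(1)

      def simulator(theta):                      # stand-in for the expensive physics model eta(theta)
          return theta ** 3 + np.sin(theta)

      design = np.linspace(-2.0, 2.0, 25)[:, None]          # ensemble of design points
      runs = simulator(design[:, 0])                        # ensemble of model runs
      gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True).fit(design, runs)

      theta_true, sigma = 0.7, 0.05
      y_obs = simulator(theta_true) + rng.normal(0.0, sigma)   # synthetic measurement y

      def log_post(theta):
          if not -2.0 < theta < 2.0:
              return -np.inf
          mu = gp.predict(np.array([[theta]]))[0]            # emulator replaces eta(theta)
          return -0.5 * ((y_obs - mu) / sigma) ** 2

      theta, lp, chain = 0.0, log_post(0.0), []
      for _ in range(5000):                                  # simple Metropolis sampler
          prop = theta + 0.2 * rng.normal()
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta)
      print("posterior mean of theta:", np.mean(chain[1000:]))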

  14. Computer Simulation of an Electric Trolley Bus

    DOT National Transportation Integrated Search

    1979-12-01

    This report describes a computer model developed at the Transportation Systems Center (TSC) to simulate power/propulsion characteristics of an urban trolley bus. The work conducted in this area is sponsored by the Urban Mass Transportation Administra...

  15. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    ERIC Educational Resources Information Center

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLABTM. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…

  16. Efficient Conservative Reformulation Schemes for Lithium Intercalation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urisanga, PC; Rife, D; De, S

    Porous electrode theory coupled with transport and reaction mechanisms is a widely used technique to model Li-ion batteries, employing an appropriate discretization or approximation for solid-phase diffusion within electrode particles. One of the major difficulties in simulating Li-ion battery models is the need to account for solid-phase diffusion in a second, radial dimension r, which greatly increases the computation time/cost. Various methods that reduce the computational cost have been introduced to treat this phenomenon, but most of them do not guarantee mass conservation. The aim of this paper is to introduce an inherently mass-conserving yet computationally efficient method for solid-phase diffusion based on Lobatto IIIA quadrature. This paper also presents coupling of the new solid-phase reformulation scheme with a macro-homogeneous porous-electrode-theory-based pseudo-2D model for the Li-ion battery. (C) The Author(s) 2015. Published by ECS. All rights reserved.

  17. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.; Brewe, David E.

    1988-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  18. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, C. M.; Brewe, D. E.

    1989-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  19. Use of a Computer-Mediated Delphi Process to Validate a Mass Casualty Conceptual Model

    PubMed Central

    CULLEY, JOAN M.

    2012-01-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters. PMID:21076283

  20. Use of a computer-mediated Delphi process to validate a mass casualty conceptual model.

    PubMed

    Culley, Joan M

    2011-05-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters.

  1. VizieR Online Data Catalog: Low-mass helium white dwarfs evolutionary models (Istrate+, 2016)

    NASA Astrophysics Data System (ADS)

    Istrate, A.; Marchant, P.; Tauris, T. M.; Langer, N.; Stancliffe, R. J.; Grassitelli, L.

    2016-07-01

    Evolutionary models of low-mass helium white dwarfs including element diffusion and rotational mixing. The WDs are produced considering binary evolution through the LMXB channel, with final WD masses between ~0.16 and ~0.44 M_⊙. The models are computed using MESA for different metallicities: Z=0.02, 0.01, 0.001, and 0.0002. For each metallicity, the models are divided into three categories: (1) basic (neither diffusion nor rotation is considered), (2) diffusion (element diffusion is considered), and (3) rotation+diffusion (both element diffusion and rotational mixing are considered) (4 data files).

  2. Mass transfer effect of the stalk contraction-relaxation cycle of Vorticella convallaria

    NASA Astrophysics Data System (ADS)

    Zhou, Jiazhong; Admiraal, David; Ryu, Sangjin

    2014-11-01

    Vorticella convallaria is a genus of protozoa living in freshwater. Its stalk contracts and coils, pulling the cell body towards the substrate at a remarkable speed, and then relaxes to its extended state much more slowly than it contracts. However, the reason for Vorticella's stalk contraction is still unknown. It is presumed that the water flow induced by the stalk contraction-relaxation cycle may augment mass transfer near the substrate. We investigated this hypothesis using an experimental model with particle tracking velocimetry and a computational fluid dynamics model. In both approaches, Vorticella was modeled as a solid sphere translating perpendicular to a solid surface in water. After having been validated against the experimental model and verified by a grid convergence index test, the computational model simulated water flow during the cycle based on the measured time course of stalk length changes of Vorticella. Based on the simulated flow field, we calculated trajectories of particles near the model Vorticella and then evaluated the mass transfer effect of Vorticella's stalk contraction based on the particles' motion. We acknowledge support from a Layman Seed Grant of the University of Nebraska-Lincoln.

  3. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

    r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) provides built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  4. Computational Approach to Seasonal Changes of Living Leaves

    PubMed Central

    Wu, Dong-Yan

    2013-01-01

    This paper proposes a computational approach to seasonal changes of living leaves by combining the geometric deformations and textural color changes. The geometric model of a leaf is generated by triangulating the scanned image of a leaf using an optimized mesh. The triangular mesh of the leaf is deformed by the improved mass-spring model, while the deformation is controlled by setting different mass values for the vertices on the leaf model. In order to adaptively control the deformation of different regions in the leaf, the mass values of vertices are set to be in proportion to the pixels' intensities of the corresponding user-specified grayscale mask map. The geometric deformations as well as the textural color changes of a leaf are used to simulate the seasonal changing process of leaves based on Markov chain model with different environmental parameters including temperature, humidness, and time. Experimental results show that the method successfully simulates the seasonal changes of leaves. PMID:23533545

  5. Analysis of ALTAIR 1998 Meteor Radar Data

    NASA Technical Reports Server (NTRS)

    Zinn, J.; Close, S.; Colestock, P. L.; MacDonell, A.; Loveland, R.

    2011-01-01

    We describe a new analysis of a set of 32 UHF meteor radar traces recorded with the 422 MHz ALTAIR radar facility in November 1998. Emphasis is on the velocity measurements and on inferences that can be drawn from them regarding the meteor masses and mass densities. We find that the velocity vs. altitude data can be fitted as quadratic functions of the path integrals of the atmospheric densities vs. distance, and deceleration rates derived from those fits all show the expected behavior of increasing with decreasing altitude. We also describe a computer model of the coupled processes of collisional heating, radiative cooling, evaporative cooling and ablation, and deceleration, for meteors composed of defined mixtures of mineral constituents. For each of the cases in the data set we ran the model starting with the measured initial velocity and trajectory inclination and with various trial values of the quantity mρ_s² (the initial mass times the mass density squared), and then compared the computed deceleration vs. altitude curves with the measured ones. In this way we arrived at the best-fit values of mρ_s² for each of the measured meteor traces. Then, assuming various trial values of the density ρ_s, we compared the computed mass vs. altitude curves with similar curves for the same set of meteors determined previously from the measured radar cross sections and an electrostatic scattering model. In this way we arrived at estimates of the best-fit mass densities ρ_s for each of the cases.
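
    The fitting step described above (velocity as a quadratic in the path-integrated atmospheric density, differentiated to give deceleration) can be sketched in a few lines. The profiles below are synthetic placeholders for a single radar trace, not the ALTAIR data.

      import numpy as np

      s = np.linspace(0.0, 40e3, 200)                     # distance along the track [m]
      rho = 1e-7 * np.exp(s / 12e3)                       # assumed air density along the track [kg/m^3]
      col = np.concatenate(([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(s))))

      rng = np.random.default_rng(2)
      v_meas = 22e3 - 1e5 * col - 3e6 * col**2 + rng.normal(0.0, 20.0, s.size)   # synthetic velocities [m/s]

      a, b, c = np.polyfit(col, v_meas, 2)                # fit v(col) = a*col^2 + b*col + c
      v_fit = np.polyval([a, b, c], col)
      # Chain rule: dv/dt = (dv/dcol) * (dcol/ds) * (ds/dt) = (2*a*col + b) * rho * v
      decel = (2.0 * a * col + b) * rho * v_fit
      print("peak deceleration [m/s^2]:", decel.min())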

  6. Pressure Loss Predictions of the Reactor Simulator Subsystem at NASA GRC

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.

    2015-01-01

    Testing of the Fission Power System (FPS) Technology Demonstration Unit (TDU) is being conducted at NASA GRC. The TDU consists of three subsystems: the Reactor Simulator (RxSim), the Stirling Power Conversion Unit (PCU), and the Heat Exchanger Manifold (HXM). An Annular Linear Induction Pump (ALIP) is used to drive the working fluid. A preliminary version of the TDU system (which excludes the PCU for now), is referred to as the RxSim subsystem and was used to conduct flow tests in Vacuum Facility 6 (VF 6). In parallel, a computational model of the RxSim subsystem was created based on the CAD model and was used to predict loop pressure losses over a range of mass flows. This was done to assess the ability of the pump to meet the design intent mass flow demand. Measured data indicates that the pump can produce 2.333 kg/sec of flow, which is enough to supply the RxSim subsystem with a nominal flow of 1.75 kg/sec. Computational predictions indicated that the pump could provide 2.157 kg/sec (using the Spalart-Allmaras turbulence model), and 2.223 kg/sec (using the k-ε turbulence model). The computational error of the predictions for the available mass flow is -0.176 kg/sec (with the S-A turbulence model) and -0.110 kg/sec (with the k-ε turbulence model) when compared to measured data.

  7. Manual of phosphoric acid fuel cell stack three-dimensional model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    A detailed, distributed mathematical model of a phosphoric acid fuel cell stack has been developed, with the accompanying FORTRAN computer program, for analyzing the temperature distribution in the stack and the associated current density distribution on the cell plates. Energy, mass, and electrochemical analyses in the stack were combined to develop the model. Several reasonable assumptions were made to solve this mathematical model by means of the finite difference numerical method.

  8. Vectors into the Future of Mass and Interpersonal Communication Research: Big Data, Social Media, and Computational Social Science.

    PubMed

    Cappella, Joseph N

    2017-10-01

    Simultaneous developments in big data, social media, and computational social science have set the stage for how we think about and understand interpersonal and mass communication. This article explores some of the ways that these developments generate 4 hypothetical "vectors" - directions - into the next generation of communication research. These vectors include developments in network analysis, modeling interpersonal and social influence, recommendation systems, and the blurring of distinctions between interpersonal and mass audiences through narrowcasting and broadcasting. The methods and research in these arenas are occurring in areas outside the typical boundaries of the communication discipline but engage classic, substantive questions in mass and interpersonal communication.

  9. Hyper-scaling relations in the conformal window from dynamic AdS/QCD

    NASA Astrophysics Data System (ADS)

    Evans, Nick; Scott, Marc

    2014-09-01

    Dynamic AdS/QCD is a holographic model of strongly coupled gauge theories with the dynamics included through the running anomalous dimension of the quark bilinear, γ. We apply it to describe the physics of massive quarks in the conformal window of SU(Nc) gauge theories with Nf fundamental flavors, assuming the perturbative two-loop running for γ. We show that to find regular, holographic renormalization group flows in the infrared, the decoupling of the quark flavors at the scale of the mass is important, and enact it through suitable boundary conditions when the flavors become on shell. We can then compute the quark condensate and the mesonic spectrum (Mρ,Mπ,Mσ) and decay constants. We compute their scaling dependence on the quark mass for a number of examples. The model matches perturbative expectations for large quark mass and naïve dimensional analysis (including the anomalous dimensions) for small quark mass. The model allows study of the intermediate regime where there is an additional scale from the running of the coupling, and we present results for the deviation of scalings from assuming only the single scale of the mass.

  10. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  11. COSMIC REIONIZATION ON COMPUTERS: NUMERICAL AND PHYSICAL CONVERGENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov; Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637; Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite-resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ∼20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, such as stellar masses and metallicities. Yet other properties of model galaxies, for example, their H i masses, are recovered in the weakly converged runs only within a factor of 2.

  12. Simultaneous solution of the geoid and the surface density anomalies

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.; Karimi, R.; AllahTavakoli, Y.

    2012-04-01

    The main application of land gravity data in geodesy is "local geoid" or "local gravity field" modeling, whereas the same data could play a vital role in anomalous mass-density modeling for geophysical exploration. In the realm of local geoid computations based on Geodetic Boundary Value Problems (GBVP), the effect of the topographic (or residual terrain) masses must be removed via application of the Newton integral in order to perform the downward continuation in a harmonic space. However, harmonization of the downward continuation domain may not be perfectly possible unless accurate information about the mass-density of the topographic masses is available. On the other hand, from the exploration point of view the unwanted topographic masses within the aforementioned procedure could be regarded as the signal. In order to overcome the effect of the remaining masses within the remove step of the GBVP, which cause uncertainties in the mathematical modeling of the problem, here we propose a methodology for simultaneous solution of the geoid and residual surface density modeling. In other words, a new mathematical model is offered which both provides the needed harmonic space for downward continuation and at the same time accounts for the non-harmonic terms of the gravitational field and makes use of them for residual mass-density modeling within the topographic region. The presented model enjoys uniqueness of the solution, in contrast to the inverse application of the Newton integral for mass-density modeling, which is non-unique, and it only needs regularization to remove its instability problem. In this way, the solution of the model provides both the incremental harmonic gravitational potential on the surface of the reference ellipsoid as the gravity field model and the lateral surface mass-density variations via the second derivatives of the non-harmonic terms of the gravitational field. As a case study and accuracy verification, the proposed methodology is applied to identification of salt geological structures as well as geoid computation along the northern coasts of the Persian Gulf.

  13. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    NASA Astrophysics Data System (ADS)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
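
    A minimal 1-D sketch (not the Telemac-Mascaret implementation) of the avalanche-type failure step described above: wherever the local bed slope exceeds the internal friction angle, material is moved to the downslope neighbour so that the corrected slope equals the friction angle while total sediment mass is conserved. Grid size, friction angle and the example bank geometry are illustrative.

        import numpy as np

        def apply_bank_failure(z, dx, phi_deg, max_sweeps=100):
            """z: bed elevations (m), dx: cell size (m), phi_deg: internal friction angle."""
            tan_phi = np.tan(np.radians(phi_deg))
            z = z.copy()
            for _ in range(max_sweeps):
                slopes = np.diff(z) / dx              # slope between cell i and i+1
                unstable = np.abs(slopes) > tan_phi
                if not unstable.any():
                    break
                for i in np.where(unstable)[0]:
                    excess = (np.abs(slopes[i]) - tan_phi) * dx / 2.0
                    hi, lo = (i, i + 1) if z[i] > z[i + 1] else (i + 1, i)
                    z[hi] -= excess                   # remove from the high cell ...
                    z[lo] += excess                   # ... and deposit in the low cell (mass conserved)
            return z

        # Example: a 2 m high bank with a near-vertical face and a 30 degree friction angle.
        bed = np.array([2.0, 2.0, 2.0, 0.1, 0.0, 0.0])
        relaxed = apply_bank_failure(bed, dx=1.0, phi_deg=30.0)
        print(relaxed, relaxed.sum(), bed.sum())      # the sum of elevations is unchanged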

  14. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is computed and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  15. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Kevin; ...

    2017-10-24

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is computed and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  16. Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source

    NASA Astrophysics Data System (ADS)

    Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.

    2014-06-01

    To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure that transfers the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for computing a realistic value of C2, so that a representative force-limited random vibration test can be performed when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and damping of the test item (load), for the case where no description of the supporting structure (source) is available [13]. Marchand also discussed a formal procedure for obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the spectra of the PSD of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), in which simplified asparagus-patch models (parallel-oscillator representations) of load and source are connected, consisting of the modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus-patch model of the source can be approximated. The computation of the value of C2 can then be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as given in ECSS standards and handbooks, launch vehicle user's manuals, papers, books, etc., are applied, and a probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
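
    A minimal sketch (not from the paper) of the semi-empirical force limit that the factor C2 feeds into: below a break frequency f0 the force specification is commonly taken as S_FF(f) = C2 * M0^2 * S_AA(f), rolled off above f0. The values of C2, M0, f0, the roll-off exponent and the acceleration PSD below are illustrative assumptions only.

        import numpy as np

        def force_limit_psd(freq, s_aa, c2, m0, f0, rolloff=2.0):
            """freq [Hz], s_aa acceleration PSD [(m/s^2)^2/Hz], m0 total load mass [kg] -> N^2/Hz."""
            s_ff = c2 * m0**2 * s_aa
            above = freq > f0
            s_ff[above] *= (f0 / freq[above])**rolloff    # roll-off above the turnover frequency
            return s_ff

        freq = np.linspace(20.0, 2000.0, 500)
        s_aa = np.full_like(freq, 0.04 * 9.81**2)         # flat 0.04 g^2/Hz converted to (m/s^2)^2/Hz
        limit = force_limit_psd(freq, s_aa, c2=4.0, m0=25.0, f0=120.0)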

  17. Comparison of nonmesonic hypernuclear decay rates computed in laboratory and center-of-mass coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Conti, C.; Barbero, C.; Galeão, A. P.

    In this work we compute the one-nucleon-induced nonmesonic hypernuclear decay rates of ⁵ΛHe, ¹²ΛC and ¹³ΛC using a formalism based on the independent-particle shell model in terms of laboratory coordinates. To ascertain the correctness and precision of the method, these results are compared with those obtained using a formalism in terms of center-of-mass coordinates, which has been previously reported in the literature. The formalism in terms of laboratory coordinates will be useful in the shell-model approach to two-nucleon-induced transitions.

  18. Lunar PMAD technology assessment

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    1992-01-01

    This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
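
    A minimal sketch (not the report's actual models) of the kind of mass roll-up described above: a PMAD component mass is the sum of its stage masses plus control/monitoring, enclosure and thermal-management allocations. The scaling fractions used below are illustrative assumptions, not values from the report.

        from dataclasses import dataclass

        @dataclass
        class PmadComponent:
            name: str
            stage_masses_kg: tuple          # power-stage masses, e.g. per conversion stage
            control_fraction: float = 0.10  # control & monitoring as a fraction of stage mass
            enclosure_fraction: float = 0.15
            thermal_fraction: float = 0.20

            def total_mass_kg(self) -> float:
                stages = sum(self.stage_masses_kg)
                overhead = stages * (self.control_fraction
                                     + self.enclosure_fraction
                                     + self.thermal_fraction)
                return stages + overhead

        # Example: a two-stage converter with 6 kg and 4 kg power stages.
        converter = PmadComponent("dc-dc converter", (6.0, 4.0))
        print(f"{converter.name}: {converter.total_mass_kg():.1f} kg")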

  19. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Maynard, Matthew R.; Geyer, John W.; Aris, John P.; Shifrin, Roger Y.; Bolch, Wesley

    2011-08-01

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR™ and then imported to the 3D modeling software package Rhinoceros™ for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations in skeletal size, individual organ masses and total fetal masses. The resulting series of fetal hybrid computational phantoms is applicable to organ-level and bone-level internal and external radiation dosimetry for human fetuses of various ages and weight percentiles

  20. VizieR Online Data Catalog: Comparison of evolutionary tracks (Martins+, 2013)

    NASA Astrophysics Data System (ADS)

    Martins, F.; Palacios, A.

    2013-11-01

    Tables of evolutionary models for massive stars. The files m*_stol.dat correspond to models computed with the code STAREVOL. The files m*_mesa.dat correspond to models computed with the code MESA. For each code, models with initial masses equal to 7, 9, 15, 20, 25, 40 and 60M⊙ are provided. No rotation is included. The overshooting parameter f is equal to 0.01. The metallicity is solar. (14 data files).

  1. Phosphoric acid fuel cell power plant system performance model and computer program

    NASA Technical Reports Server (NTRS)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses of the reformer, the shift converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions, and for several commercial fuels.

  2. Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling

    PubMed Central

    Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models. PMID:19225569
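
    A minimal sketch (not the authors' code) of the segment-based mass-property bookkeeping described above: each body segment is treated as a volume with an assumed density, and whole-body mass and centre of mass follow from mass-weighted sums; a sensitivity analysis would simply rerun this with perturbed volumes and densities. The segment volumes, densities and centroid positions below are illustrative only.

        import numpy as np

        def body_mass_properties(volumes_m3, densities_kg_m3, centroids_m):
            """Return total mass (kg) and centre of mass (m) from per-segment data."""
            masses = np.asarray(volumes_m3) * np.asarray(densities_kg_m3)
            com = (masses[:, None] * np.asarray(centroids_m)).sum(axis=0) / masses.sum()
            return masses.sum(), com

        # Example: torso, head+neck, tail, legs, plus a low-density respiratory volume.
        volumes   = [3.5, 0.6, 1.2, 0.9, 0.4]            # m^3
        densities = [1000, 950, 1050, 1050, 300]          # kg/m^3 (last entry: lungs/air sacs)
        centroids = [[0.0, 0, 1.5], [1.8, 0, 2.2], [-2.0, 0, 1.4], [0.2, 0, 0.8], [0.3, 0, 1.6]]
        mass, com = body_mass_properties(volumes, densities, centroids)
        print(f"total mass {mass:.0f} kg, centre of mass {com}")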

  3. Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.

    PubMed

    Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I

    2009-01-01

    Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high-resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models.

  4. Application research of computational mass-transfer differential equation in MBR concentration field simulation.

    PubMed

    Li, Chunqing; Tie, Xiaobo; Liang, Kai; Ji, Chanjuan

    2016-01-01

    After conducting intensive research on the distribution of the fluid velocity and the biochemical reactions in the membrane bioreactor (MBR), this paper introduces the use of the mass-transfer differential equation to simulate the distribution of the chemical oxygen demand (COD) concentration in the MBR membrane pool. The approach is as follows: first, use computational fluid dynamics to establish a flow control equation model of the fluid in the MBR membrane pool; second, solve this model by direct numerical simulation to obtain the velocity field of the fluid in the membrane pool; third, combine the velocity field data to establish a mass-transfer differential equation model for the concentration field in the MBR membrane pool, and use the Seidel iteration method to solve the equation model; finally, substitute real plant data into the velocity and concentration field models to calculate simulation results, and use the visualization software Tecplot to display the results. By analyzing the contour plot of the COD concentration distribution, it can be seen that the simulation result conforms to the distribution of the COD concentration in the real membrane pool, and that the mass-transfer phenomenon is affected by the velocity field of the fluid in the membrane pool. The simulation results of this paper have reference value for the design optimization of real MBR systems.
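
    A minimal sketch (not the paper's code) of the third step above: a steady 1-D advection-diffusion equation for COD concentration, u*dC/dx = D*d²C/dx², discretised with upwind advection and solved by Gauss-Seidel ("Seidel iteration") sweeps. The velocity u, diffusivity D and boundary concentrations are illustrative values, not plant data.

        import numpy as np

        def gauss_seidel_adv_diff(n=101, length=1.0, u=0.01, D=1e-3,
                                  c_in=100.0, c_out=20.0, tol=1e-10, max_iter=50_000):
            dx = length / (n - 1)
            c = np.linspace(c_in, c_out, n)        # initial guess holding the boundary values
            a_w = D / dx**2 + u / dx               # upwind (west) coefficient, u > 0 assumed
            a_e = D / dx**2
            a_p = a_w + a_e
            for _ in range(max_iter):
                max_change = 0.0
                for i in range(1, n - 1):          # Gauss-Seidel: uses freshly updated neighbours
                    new = (a_w * c[i - 1] + a_e * c[i + 1]) / a_p
                    max_change = max(max_change, abs(new - c[i]))
                    c[i] = new
                if max_change < tol:
                    break
            return c

        profile = gauss_seidel_adv_diff()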

  5. Spherical and ellipsoidal arrangement of the topography and its impact on gravity gradients in the GOCE mission

    NASA Astrophysics Data System (ADS)

    Grombein, Thomas; Seitz, Kurt; Heck, Bernhard

    2010-05-01

    The basic observables of the recently launched satellite gravity gradiometry mission GOCE are the second derivatives of the Earth's gravitational potential (components of the full Marussi tensor). These gravity gradients are highly sensitive to mass anomalies and mass transports in the Earth system. The high- and mid-frequency components of the gradients are mainly affected by the topographic and isostatic masses, which makes the downward continuation of the gradients a rather difficult task. In order to stabilize this process the gradients have to be smoothed by applying topographic and isostatic reductions. In the space domain the modelling of topographic effects is based on the evaluation of functionals of the Newton integral; in the case of GOCE the second-order derivatives are required. Practical numerical computations rely on a discretisation of the Earth's topography and a subdivision into different mass elements. Considering geographical grid lines, tesseroids (spherical prisms) are well suited for modelling the topographic masses. Since the respective volume integrals cannot be solved in elementary form in the case of tesseroids, numerical approaches such as a Taylor series expansion, Gauss-Legendre cubature or a point-mass approximation have to be applied. In this paper the topography is represented by the global Digital Terrain Model DTM2006.0, which was also used for the compilation of the Earth Gravitational Model EGM2008. In addition, each grid element of the DTM is classified as land, sea or ice, providing further information on the density within the evaluation of topographic effects. The computation points are located on a GOCE-like circular orbit. The mass elements are arranged on a spherical Earth of constant radius and, in a more realistic configuration, on the surface of an ellipsoid of revolution. The results of the modelling for each version are presented and compared to each other with regard to computation time and accuracy. Acknowledgements: This research has been financially supported by the German Federal Ministry of Education and Research (BMBF) within the REAL-GOCE project of the GEOTECHNOLOGIEN Programme.
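
    A minimal sketch (not the paper's software) of the point-mass approximation mentioned above: a tesseroid bounded by (r1,r2) x (lat1,lat2) x (lon1,lon2) is replaced by a point mass of the same total mass at its geometric centre, and its contribution to the potential at a satellite point is G*m/l. The density and geometry below are illustrative.

        import numpy as np

        G = 6.674e-11  # m^3 kg^-1 s^-2

        def sph_to_cart(r, lat, lon):
            return np.array([r * np.cos(lat) * np.cos(lon),
                             r * np.cos(lat) * np.sin(lon),
                             r * np.sin(lat)])

        def tesseroid_pointmass_potential(r1, r2, lat1, lat2, lon1, lon2, rho, p_cart):
            """Potential (m^2/s^2) of a tesseroid at the computation point p_cart (metres)."""
            volume = (r2**3 - r1**3) / 3.0 * (np.sin(lat2) - np.sin(lat1)) * (lon2 - lon1)
            centre = sph_to_cart(0.5 * (r1 + r2), 0.5 * (lat1 + lat2), 0.5 * (lon1 + lon2))
            return G * rho * volume / np.linalg.norm(p_cart - centre)

        # Example: a 5' x 5' topographic column of 1000 m height and density 2670 kg/m^3,
        # evaluated at a point on a roughly 250 km GOCE-like orbit above its centre.
        R = 6371e3
        d = np.radians(5.0 / 60.0)
        V = tesseroid_pointmass_potential(R, R + 1000.0, 0.0, d, 0.0, d, 2670.0,
                                          sph_to_cart(R + 250e3, d / 2, d / 2))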

  6. Computational model of in vivo human energy metabolism during semi-starvation and re-feeding

    PubMed Central

    Hall, Kevin D.

    2008-01-01

    Changes of body weight and composition are the result of complex interactions among metabolic fluxes contributing to macronutrient balances. To better understand these interactions, a mathematical model was constructed that used the measured dietary macronutrient intake during semi-starvation and re-feeding as model inputs and computed whole-body energy expenditure, de novo lipogenesis, gluconeogenesis, as well as turnover and oxidation of carbohydrate, fat and protein. Published in vivo human data provided the basis for the model components which were integrated by fitting a few unknown parameters to the classic Minnesota human starvation experiment. The model simulated the measured body weight and fat mass changes during semi-starvation and re-feeding and predicted the unmeasured metabolic fluxes underlying the body composition changes. The resting metabolic rate matched the experimental measurements and required a model of adaptive thermogenesis. Re-feeding caused an elevation of de novo lipogenesis which, along with increased fat intake, resulted in a rapid repletion and overshoot of body fat. By continuing the computer simulation with the pre-starvation diet and physical activity, the original body weight and composition was eventually restored, but body fat mass was predicted to take more than one additional year to return to within 5% of its original value. The model was validated by simulating a recently published short-term caloric restriction experiment without changing the model parameters. The predicted changes of body weight, fat mass, resting metabolic rate, and nitrogen balance matched the experimental measurements thereby providing support for the validity of the model. PMID:16449298

  7. A model for the formation of the Local Group

    NASA Technical Reports Server (NTRS)

    Peebles, P. J. E.; Melott, A. L.; Holmes, M. R.; Jiang, L. R.

    1989-01-01

    Observational tests of a model for the formation of the Local Group are presented and analyzed in which the mass concentration grows by gravitational accretion of local-pressure matter onto two seed masses in an otherwise homogeneous initial mass distribution. The evolution of the mass distribution is studied in an analytic approximation and a numerical computation. The initial seed mass and separation are adjusted to produce the observed present separation and relative velocity of the Andromeda Nebula and the Galaxy. If H(0) is adjusted to about 80 km/s/Mpc with density parameter Omega = 1, then the model gives a good fit to the motions of the outer members of the Local Group. The same model gives particle orbits at radius of about 100 kpc that reasonably approximate the observed distribution of redshifts of the Galactic satellites.

  8. Theoretical Study of White Dwarf Double Stars

    NASA Astrophysics Data System (ADS)

    Hira, Ajit; Koetter, Ted; Rivera, Ruben; Diaz, Juan

    2015-04-01

    We continue our interest in the computational simulation of astrophysical phenomena with a study of gravitationally bound binary stars composed of at least one white dwarf star. Of particular interest to astrophysicists are the conditions inside a white dwarf star in the time frame leading up to its explosive end as a Type Ia supernova, for an understanding of these massive stellar explosions. In addition, studies of the evolution of white dwarfs could serve as promising probes of theories of gravitation. We developed FORTRAN computer programs to implement our models for white dwarfs and other stars. These codes allow for different sizes and masses of stars. Simulations were done in the mass interval from 0.1 to 2.0 solar masses. Our goal was to obtain both atmospheric and orbital parameters. The computational results thus obtained are compared with relevant observational data, and the data are further analyzed to identify trends in terms of the sizes and masses of the stars. We hope to extend our computational studies to blue giant stars in the future. Research supported by the National Science Foundation.

  9. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-04-01

    Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Centre for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for the newborn, 1, 2, 5, 10 and 15 years-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.

  10. Thermal Ablation Modeling for Silicate Materials

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq

    2016-01-01

    A general thermal ablation model for silicates is proposed. The model includes the mass losses through the balance between evaporation and condensation, and through the moving molten layer driven by surface shear force and pressure gradient. This model can be applied in ablation simulations of meteoroids and of glassy ablators for spacecraft Thermal Protection Systems. Time-dependent axisymmetric computations are performed by coupling the fluid dynamics code, the Data-Parallel Line Relaxation program, with the material response code, the Two-dimensional Implicit Thermal Ablation simulation program, to predict the mass loss rates and shape change. The predicted mass loss rates will be compared with available data for model validation, and parametric studies will also be performed for meteoroid Earth entry conditions.
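
    A minimal sketch (not the paper's model) of the evaporation/condensation mass-flux balance such ablation models are built on: the net flux follows a Hertz-Knudsen type expression, m_dot = alpha*(p_sat(T) - p_amb)*sqrt(M/(2*pi*R*T)). The vapour species, evaporation coefficient and saturation-pressure coefficients below are illustrative placeholders, not silicate data from the paper.

        import numpy as np

        R_GAS = 8.314          # J/(mol K)
        M_VAP = 0.0601         # kg/mol, assumed representative silicate vapour species
        ALPHA = 0.1            # assumed evaporation/condensation coefficient

        def p_sat(T):
            """Clausius-Clapeyron-type saturation pressure (Pa); A and B are placeholders."""
            A, B = 17.4, 57000.0
            return 1.0e5 * np.exp(A - B / T)

        def net_evaporation_flux(T_surface, p_ambient):
            """Net mass flux leaving the surface (kg m^-2 s^-1); negative means condensation."""
            return ALPHA * (p_sat(T_surface) - p_ambient) * np.sqrt(
                M_VAP / (2.0 * np.pi * R_GAS * T_surface))

        # Example: molten silicate surface at 2800 K under a 100 Pa ambient vapour pressure.
        print(net_evaporation_flux(2800.0, 100.0))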

  11. Non-perturbative quark mass renormalisation and running in N_{f}=3 QCD

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fritzsch, P.; Pena, C.; Preti, D.; Ramos, A.; Vladikas, A.

    2018-05-01

    We determine from first principles the quark mass anomalous dimension in N_{f}=3 QCD between the electroweak and hadronic scales. This allows for a fully non-perturbative connection of the perturbative and non-perturbative regimes of the Standard Model in the hadronic sector. The computation is carried out to high accuracy, employing massless O(a)-improved Wilson quarks and finite-size scaling techniques. We also provide the matching factors required in the renormalisation of light quark masses from lattice computations with O(a)-improved Wilson fermions and a tree-level Symanzik-improved gauge action. The total uncertainty due to renormalisation and running in the determination of light quark masses in the SM is thus reduced to about 1%.

  12. Advanced earth observation spacecraft computer-aided design software: Technical, user and programmer guide

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.; Krauze, L. D.

    1983-01-01

    NASA's IDEAS computer-aided design system is a tool for interactive preliminary design and analysis of Large Space Systems (LSS). Nine analysis modules were either modified or created. These modules include the capabilities of automatic model generation, model mass properties calculation, model area calculation, nonkinematic deployment modeling, rigid-body controls analysis, RF performance prediction, subsystem properties definition, and EOS science sensor selection. For each module, a section is provided that contains technical information, user instructions, and programmer documentation.

  13. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

    In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
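
    A minimal sketch (not the paper's DELTA implementation) of a computed-torque control law for a rigid manipulator model M(q)*qdd + C(q,qd)*qd + g(q) = tau: the commanded torque linearises the dynamics and imposes PD error dynamics on the tracking error. The toy two-joint model terms and gains below are illustrative stand-ins, not the DELTA robot model.

        import numpy as np

        def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
            """tau = M(q)*(qdd_des + Kd*(qd_des-qd) + Kp*(q_des-q)) + C(q,qd)*qd + g(q)."""
            e, ed = q_des - q, qd_des - qd
            return M(q) @ (qdd_des + Kd @ ed + Kp @ e) + C(q, qd) @ qd + g(q)

        # Toy 2-DOF example with a constant mass matrix, no Coriolis term and a gravity torque.
        M = lambda q: np.array([[2.0, 0.3], [0.3, 1.0]])
        C = lambda q, qd: np.zeros((2, 2))
        g = lambda q: np.array([9.81 * np.cos(q[0]), 0.0])
        Kp, Kd = np.diag([100.0, 100.0]), np.diag([20.0, 20.0])

        tau = computed_torque(q=np.array([0.1, -0.2]), qd=np.zeros(2),
                              q_des=np.array([0.5, 0.0]), qd_des=np.zeros(2),
                              qdd_des=np.zeros(2), M=M, C=C, g=g, Kp=Kp, Kd=Kd)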

  14. A mass weighted chemical elastic network model elucidates closed form domain motions in proteins

    PubMed Central

    Kim, Min Hyeok; Seo, Sangjae; Jeong, Jay Il; Kim, Bum Joon; Liu, Wing Kam; Lim, Byeong Soo; Choi, Jae Boong; Kim, Moon Ki

    2013-01-01

    An elastic network model (ENM), usually a Cα coarse-grained one, has been widely used to study protein dynamics as an alternative to classical molecular dynamics simulation. This simple approach dramatically reduces the computational cost, but sometimes fails to describe a feasible conformational change due to unrealistically excessive spring connections. To overcome this limitation, we propose a mass-weighted chemical elastic network model (MWCENM) in which the total mass of each residue is assumed to be concentrated on the representative alpha carbon atom and various stiffness values are precisely assigned according to the types of chemical interactions. We test MWCENM on several well-known proteins for which both closed and open conformations are available, as well as on three α-helix-rich proteins. Their normal mode analysis reveals that MWCENM not only generates more plausible conformational changes, especially for closed forms of proteins, but also preserves protein secondary structures, thus distinguishing MWCENM from traditional ENMs. In addition, MWCENM reduces the computational burden by using a sparser stiffness matrix. PMID:23456820
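
    A minimal sketch (not the authors' MWCENM code) of mass-weighted normal mode analysis for an isotropic Cα elastic network: springs connect residues within a cutoff, the Hessian is mass-weighted as M^(-1/2) H M^(-1/2), and its eigenvectors give the normal modes. A uniform spring constant is used here instead of the chemistry-based stiffness assignment described in the abstract, and the coordinates are random placeholders.

        import numpy as np

        def mass_weighted_enm_modes(coords, masses, cutoff=10.0, k_spring=1.0):
            """coords: (N,3) Calpha positions (A), masses: (N,) residue masses (a.m.u.)."""
            n = len(coords)
            hessian = np.zeros((3 * n, 3 * n))
            for i in range(n):
                for j in range(i + 1, n):
                    d = coords[j] - coords[i]
                    r = np.linalg.norm(d)
                    if r > cutoff:
                        continue
                    block = -k_spring * np.outer(d, d) / r**2    # 3x3 off-diagonal super-element
                    hessian[3*i:3*i+3, 3*j:3*j+3] = block
                    hessian[3*j:3*j+3, 3*i:3*i+3] = block
                    hessian[3*i:3*i+3, 3*i:3*i+3] -= block       # diagonal blocks balance the row
                    hessian[3*j:3*j+3, 3*j:3*j+3] -= block
            inv_sqrt_m = np.repeat(1.0 / np.sqrt(masses), 3)
            mw_hessian = inv_sqrt_m[:, None] * hessian * inv_sqrt_m[None, :]
            freqs_sq, modes = np.linalg.eigh(mw_hessian)         # ~6 near-zero rigid-body modes first
            return freqs_sq, modes

        # Example with random stand-in "Calpha" coordinates and uniform residue masses.
        rng = np.random.default_rng(0)
        w2, vecs = mass_weighted_enm_modes(rng.uniform(0, 20, size=(30, 3)), np.full(30, 110.0))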

  15. A model of transverse fuel injection applied to the computation of supersonic combustor flow

    NASA Technical Reports Server (NTRS)

    Rogers, R. C.

    1979-01-01

    A two-dimensional, nonreacting flow model of the aerodynamic interaction of a transverse hydrogen jet within a supersonic mainstream has been developed. The model assumes profile shapes of mass flux, pressure, flow angle, and hydrogen concentration and produces downstream profiles of the other flow parameters under the constraints of the integrated conservation equations. These profiles are used as starting conditions for an existing finite difference parabolic computer code for the turbulent supersonic combustion of hydrogen. Integrated mixing and flow profile results obtained from the computer code compare favorably with existing data for the supersonic combustion of hydrogen.

  16. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  17. A unified framework for heat and mass transport at the atomic scale

    NASA Astrophysics Data System (ADS)

    Ponga, Mauricio; Sun, Dingyi

    2018-04-01

    We present a unified framework to simulate heat and mass transport in systems of particles. The proposed framework is based on kinematic mean field theory and uses a phenomenological master equation to compute effective transport rates between particles without the need to evaluate operators. We exploit this advantage and apply the model to simulate transport phenomena at the nanoscale. We demonstrate that, when calibrated to experimentally-measured transport coefficients, the model can accurately predict transient and steady state temperature and concentration profiles even in scenarios where the length of the device is comparable to the mean free path of the carriers. Through several example applications, we demonstrate the validity of our model for all classes of materials, including ones that, until now, would have been outside the domain of computational feasibility.
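
    A minimal sketch (not the authors' framework) of a phenomenological master equation for transport between particles: each particle i exchanges the transported quantity (temperature here) with its neighbours at an effective rate k_ij, dT_i/dt = sum_j k_ij*(T_j - T_i), integrated explicitly. The exchange rates, geometry and time step are illustrative.

        import numpy as np

        def evolve_master_equation(T0, rates, dt, n_steps):
            """T0: (N,) initial values; rates: (N,N) symmetric exchange rates (1/s)."""
            T = T0.astype(float).copy()
            history = [T.copy()]
            for _ in range(n_steps):
                flux = rates * (T[None, :] - T[:, None])   # flux[i, j] = k_ij * (T_j - T_i)
                T += dt * flux.sum(axis=1)                 # conserves sum(T) for symmetric rates
                history.append(T.copy())
            return np.array(history)

        # Example: a 1-D chain of 20 particles, hot at one end, nearest-neighbour coupling.
        n = 20
        k = np.zeros((n, n))
        idx = np.arange(n - 1)
        k[idx, idx + 1] = k[idx + 1, idx] = 5.0            # 1/s between neighbours
        T_init = np.zeros(n); T_init[0] = 300.0
        profile = evolve_master_equation(T_init, k, dt=0.01, n_steps=500)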

  18. Bone Mass in Boys with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Calarge, Chadi A.; Schlechte, Janet A.

    2017-01-01

    To examine bone mass in children and adolescents with autism spectrum disorders (ASD). Risperidone-treated 5 to 17 year-old males underwent anthropometric and bone measurements, using dual-energy X-ray absorptiometry and peripheral quantitative computed tomography. Multivariable linear regression analysis models examined whether skeletal outcomes…

  19. Computational analysis of liquid chromatography-tandem mass spectrometric steroid profiling in NCI H295R cells following angiotensin II, forskolin and abiraterone treatment.

    PubMed

    Mangelis, Anastasios; Dieterich, Peter; Peitzsch, Mirko; Richter, Susan; Jühlen, Ramona; Hübner, Angela; Willenberg, Holger S; Deussen, Andreas; Lenders, Jacques W M; Eisenhofer, Graeme

    2016-01-01

    Adrenal steroid hormones, which regulate a plethora of physiological functions, are produced via tightly controlled pathways. Investigations of these pathways, based on experimental data, can be facilitated by computational modeling for calculations of metabolic rate alterations. We therefore used a model system, based on mass balance and mass reaction equations, to kinetically evaluate adrenal steroidogenesis in human adrenal cortex-derived NCI H295R cells. For this purpose a panel of 10 steroids was measured by liquid chromatographic-tandem mass spectrometry. Time-dependent changes in cell incubate concentrations of steroids - including cortisol, aldosterone, dehydroepiandrosterone and their precursors - were measured after incubation with angiotensin II, forskolin and abiraterone. Model parameters were estimated based on experimental data using weighted least square fitting. Time-dependent angiotensin II- and forskolin-induced changes were observed for incubate concentrations of precursor steroids with peaks that preceded maximal increases in aldosterone and cortisol. Inhibition of 17-alpha-hydroxylase/17,20-lyase with abiraterone resulted in increases in upstream precursor steroids and decreases in downstream products. Derived model parameters, including rate constants of enzymatic processes, appropriately quantified observed and expected changes in metabolic pathways at multiple conversion steps. Our data demonstrate limitations of single time point measurements and the importance of assessing pathway dynamics in studies of adrenal cortical cell line steroidogenesis. Our analysis provides a framework for evaluation of steroidogenesis in adrenal cortical cell culture systems and demonstrates that computational modeling-derived estimates of kinetic parameters are an effective tool for describing perturbations in associated metabolic pathways. Copyright © 2015 Elsevier Ltd. All rights reserved.
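
    A minimal sketch (not the authors' model) of the kind of mass-balance kinetic scheme described above: a linear precursor -> intermediate -> product pathway with first-order rate constants, fitted to measured incubate concentrations by weighted least squares. The species, rate constants, uncertainties and the synthetic "measurements" below are illustrative only.

        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import least_squares

        def pathway(y, t, k1, k2):
            precursor, intermediate, product = y
            return [-k1 * precursor,
                    k1 * precursor - k2 * intermediate,
                    k2 * intermediate]

        t_obs = np.array([0.0, 6.0, 12.0, 24.0, 48.0])          # h
        y0 = [10.0, 0.0, 0.0]                                   # nmol/L initial concentrations
        obs = odeint(pathway, y0, t_obs, args=(0.12, 0.05))     # synthetic data, "true" k = (0.12, 0.05)
        sigma = 0.05 * obs + 0.01                               # assumed measurement uncertainties

        def residuals(log_k):
            k1, k2 = np.exp(log_k)                              # fit in log-space to keep rates positive
            pred = odeint(pathway, y0, t_obs, args=(k1, k2))
            return ((pred - obs) / sigma).ravel()               # weighted residuals

        fit = least_squares(residuals, x0=np.log([0.3, 0.3]))
        print("estimated rate constants:", np.exp(fit.x))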

  20. The use of gravimetric data from GRACE mission in the understanding of polar motion variations

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Nastula, J.; Bizouard, C.; Gambis, D.

    2009-08-01

    Tesseral coefficients C21 and S21 derived from Gravity Recovery and Climate Experiment (GRACE) observations allow computation of the mass term of the polar-motion excitation function. This independent estimate can improve the geophysical models and, in addition, help determine unmodelled phenomena. In this paper, we intend to validate the polar motion excitation derived from the latest GRACE release (Release 4) as computed by different institutes: GeoForschungsZentrum (GFZ), Potsdam, Germany; Center for Space Research (CSR), Austin, USA; Jet Propulsion Laboratory (JPL), Pasadena, USA; and the Groupe de Recherche en Géodésie Spatiale (GRGS), Toulouse, France. For this purpose, we compare these excitation functions first to the mass term obtained from observed Earth rotation variations free of the motion term and, second, to the mass term estimated from geophysical fluid models. We confirm the large improvement of the CSR solution, and we show that the GRGS estimate is also well correlated with the geodetic observations. Significant discrepancies exist between the solutions of each centre; the source of these differences is probably related to the data processing strategy. We also consider residuals computed after removing the geophysical models or the gravimetric solutions from the geodetic mass term. We show that the residual excitation based on models is smoother than that based on the gravimetric data, which are still noisy; still, they are comparable for the χ2 component. It appears that the χ2 residual signals using GFZ and JPL data have less variability. Finally, to assess the impact of the choice of geophysical fluid models on our results, we checked two different oceanic excitation series. We show significant differences in the residual correlations, especially for χ1, which is more sensitive to the oceanic signals.

  1. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

    The first part of this paper (Part 1) presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work has the ability to account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is computed and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  2. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Kevin

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is computed and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.

  3. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  4. Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations for solvent-based carbon capture. Part 2: Chemical absorption across a wetted wall column: Original Research Article: Hierarchical calibration and validation framework of bench-scale computational fluid dynamics simulations

    DOE PAGES

    Wang, Chao; Xu, Zhijie; Lai, Kevin; ...

    2017-10-24

    Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient is also predicted using traditional/empirical correlations and compared with the CFD predictions for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants for chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of the CO2 reaction rate constants via the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict CO2 mass transfer in a WWC over a wider range of operating conditions.
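
    As a rough illustration of the calibration step described above (a Bayesian update of a reaction rate constant, with the prior carried over from the non-reactive Part 1 experiment), the sketch below runs a minimal Metropolis sampler on synthetic flux data. It is not the authors' CFD-coupled calibration: the forward model, noise level, and prior parameters are all assumed for illustration.

        # Minimal Metropolis sketch of Bayesian calibration of a single rate constant k.
        # The "forward model" is a stand-in for the CFD-predicted CO2 flux; the prior on
        # k is taken as log-normal, mimicking a posterior carried over from Part 1.
        import numpy as np

        rng = np.random.default_rng(1)
        k_true = 5.9e3                                   # hypothetical rate constant (1/s)
        forward = lambda k: 1e-4 * np.sqrt(k)            # toy flux model, mol/(m^2 s)

        obs = forward(k_true) + rng.normal(0, 2e-4, size=8)   # synthetic measurements
        sigma = 2e-4                                          # assumed measurement noise

        def log_post(log_k):
            log_prior = -0.5 * ((log_k - np.log(5e3)) / 0.5) ** 2   # log-normal prior
            resid = obs - forward(np.exp(log_k))
            log_like = -0.5 * np.sum((resid / sigma) ** 2)
            return log_prior + log_like

        samples, log_k = [], np.log(5e3)
        lp = log_post(log_k)
        for _ in range(20000):
            prop = log_k + rng.normal(0, 0.05)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                log_k, lp = prop, lp_prop
            samples.append(log_k)

        post = np.exp(np.array(samples[5000:]))          # discard burn-in
        print(f"posterior k: {post.mean():.3g} +/- {post.std():.3g}")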

  5. Baryon magnetic moments: Symmetries and relations

    NASA Astrophysics Data System (ADS)

    Parreño, Assumpta; Savage, Martin J.; Tiburzi, Brian C.; Wilhelm, Jonas; Chang, Emmanuel; Detmold, William; Orginos, Kostas

    2018-03-01

    Magnetic moments of the octet baryons are computed using lattice QCD in background magnetic fields, including the first treatment of the magnetically coupled Σ0-Λ system. Although the computations are performed for relatively large values of the up and down quark masses, we gain new insight into the symmetries and relations between magnetic moments by working at a three-flavor mass-symmetric point. While the spin-flavor symmetry in the large Nc limit of QCD is shared by the naïve constituent quark model, we find instances where quark model predictions are considerably favored over those emerging in the large Nc limit. We suggest further calculations that would shed light on the curious patterns of baryon magnetic moments.

  6. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modeling) based on application of closed-form solution of the Newton integral in terms of Cartesian coordinates in multi-cylindrical equal-area map projection of the reference ellipsoid is presented. Multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ,ϕ,h}. Four base-edge points of the ellipsoidal mass elements are transformed into a multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements to the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and closed-form solution of the Newton integral in terms of Cartesian coordinates, in a multi-cylindrical equal-area map projection, is less than 1.6×10^-8 m^2/s^2 for a mass element with a cross-section area of 10×10 m and a height of 10,000 m. For a mass element with a cross-section area of 1×1 km and a height of 10,000 m the difference is less than 1.5×10^-4 m^2/s^2. Since 1.5×10^-4 m^2/s^2 is equivalent to 1.5×10^-5 m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on closed-form solution of the Newton integral in terms of Cartesian coordinates of a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed which has the accuracy of terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.
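
    The quantity being checked in this abstract, the Newton integral for the gravitational potential of a single mass element, can be evaluated by brute-force quadrature in a few lines. The sketch below does a midpoint-rule integration over a small rectangular element; the element dimensions, density, observation point, and grid resolution are illustrative assumptions, not the paper's multi-cylindrical projection setup.

        # Brute-force midpoint-rule evaluation of the Newton integral
        #   V(P) = G * rho * integral dV / |r - r_P|
        # for a small rectangular (Cartesian) mass element, the kind of quantity the
        # paper compares against a closed-form solution. Dimensions, density and the
        # observation point below are illustrative assumptions, not the paper's setup.
        import numpy as np

        G = 6.674e-11            # m^3 kg^-1 s^-2
        rho = 2670.0             # crustal density, kg/m^3 (assumed)
        dx = dy = 10.0           # element cross-section 10 m x 10 m
        dz = 10000.0             # element height 10 km
        obs = np.array([0.0, 0.0, dz + 100.0])   # observation point 100 m above the element

        def potential_midpoint(n=(10, 10, 400)):
            """Midpoint-rule quadrature of the Newton integral over the prism."""
            xs = (np.arange(n[0]) + 0.5) * dx / n[0] - dx / 2
            ys = (np.arange(n[1]) + 0.5) * dy / n[1] - dy / 2
            zs = (np.arange(n[2]) + 0.5) * dz / n[2]
            X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
            r = np.sqrt((X - obs[0])**2 + (Y - obs[1])**2 + (Z - obs[2])**2)
            cell_vol = (dx / n[0]) * (dy / n[1]) * (dz / n[2])
            return G * rho * np.sum(cell_vol / r)

        print(f"V at observation point: {potential_midpoint():.6e} m^2/s^2")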

  7. Applying a Particle-only Model to the HL Tau Disk

    NASA Astrophysics Data System (ADS)

    Tabeshian, Maryam; Wiegert, Paul A.

    2018-04-01

    Observations have revealed rich structures in protoplanetary disks, offering clues about their embedded planets. Due to the complexities introduced by the abundance of gas in these disks, modeling their structure in detail is computationally intensive, requiring complex hydrodynamic codes and substantial computing power. It would be advantageous if computationally simpler models could provide some preliminary information on these disks. Here we apply a particle-only model (that we developed for gas-poor debris disks) to the gas-rich disk, HL Tauri, to address the question of whether such simple models can inform the study of these systems. Assuming three potentially embedded planets, we match HL Tau's radial profile fairly well and derive best-fit planetary masses and orbital radii (0.40, 0.02, 0.21 Jupiter masses for the planets orbiting a 0.55 M⊙ star at 11.22, 29.67, 64.23 au). Our derived parameters are comparable to those estimated by others, except for the mass of the second planet. Our simulations also reproduce some narrower gaps seen in the ALMA image away from the orbits of the planets. The nature of these gaps is debated but, based on our simulations, we argue they could result from planet–disk interactions via mean-motion resonances, and need not contain planets. Our results suggest that a simple particle-only model can be used as a first step to understanding dynamical structures in gas disks, particularly those formed by planets, and determine some parameters of their hidden planets, serving as useful initial inputs to hydrodynamic models which are needed to investigate disk and planet properties more thoroughly.
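
    The mean-motion-resonance argument for the narrower gaps can be reproduced with elementary Kepler's-third-law arithmetic: a particle locked in a p:q resonance with a planet at semi-major axis a_p sits at a = a_p (q/p)^(2/3). The snippet below tabulates a few candidate resonance locations using the best-fit orbital radii quoted above; it is simple arithmetic, not the particle simulation used in the paper.

        # Locations of low-order mean-motion resonances with the three best-fit planets,
        # from Kepler's third law: a particle in a p:q resonance (p particle orbits per q
        # planet orbits) sits at a = a_p * (q/p)**(2/3). Purely illustrative arithmetic,
        # not the particle simulation used in the paper.
        planet_radii_au = [11.22, 29.67, 64.23]

        for a_p in planet_radii_au:
            print(f"planet at {a_p:6.2f} au:")
            for p, q in [(2, 1), (3, 2), (1, 2), (2, 3)]:     # interior and exterior pairs
                a_res = a_p * (q / p) ** (2.0 / 3.0)
                kind = "interior" if a_res < a_p else "exterior"
                print(f"  {p}:{q} resonance ({kind}) at {a_res:6.2f} au")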

  8. Rapid Ice-Sheet Changes and Mechanical Coupling to Solid-Earth/Sea-Level and Space Geodetic Observation

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Ivins, E. R.; Larour, E. Y.

    2015-12-01

    Perturbations in gravitational and rotational potentials caused by climate driven mass redistribution on the earth's surface, such as ice sheet melting and terrestrial water storage, affect the spatiotemporal variability in global and regional sea level. Here we present a numerically accurate, computationally efficient, high-resolution model for sea level. Unlike contemporary models that are based on spherical-harmonic formulation, the model can operate efficiently in a flexible embedded finite-element mesh system, thus capturing the physics operating at km-scale yet capable of simulating geophysical quantities that are inherently of global scale with minimal computational cost. One obvious application is to compute evolution of sea level fingerprints and associated geodetic and astronomical observables (e.g., geoid height, gravity anomaly, solid-earth deformation, polar motion, and geocentric motion) as a companion to a numerical 3-D thermo-mechanical ice sheet simulation, thus capturing global signatures of climate driven mass redistribution. We evaluate some important time-varying signatures of GRACE inferred ice sheet mass balance and continental hydrological budget; for example, we identify dominant sources of ongoing sea-level change at the selected tide gauge stations, and explain the relative contribution of different sources to the observed polar drift. We also report our progress on ice-sheet/solid-earth/sea-level model coupling efforts toward realistic simulation of Pine Island Glacier over the past several hundred years.

  9. CIELO-A GIS integrated model for climatic and water balance simulation in islands environments

    NASA Astrophysics Data System (ADS)

    Azevedo, E. B.; Pereira, L. S.

    2003-04-01

    The model CIELO (acronym for "Clima Insular à Escala Local") is a physically based model that simulates the climatic variables in an island using data from a single synoptic reference meteorological station. The reference station "knows" its position in the orographic and dynamic regime context. The domain of computation is a GIS raster grid parameterised with a digital elevation model (DEM). The grid is oriented following the direction of the air masses circulation through a specific algorithm named rotational terrain model (RTM). The model consists of two main sub-models. One, relative to the advective component simulation, assumes the Foehn effect to reproduce the dynamic and thermodynamic processes occurring when an air mass moves over the island orographic obstacle. This makes it possible to simulate the air temperature, air humidity, cloudiness and precipitation as influenced by the orography along the air displacement. The second concerns the radiative component as affected by the clouds of orographic origin and by the shadow produced by the relief. The initial state parameters are computed starting from the reference meteorological station across the DEM transect down to sea level on the windward side. Then, starting from sea level, the model computes the local-scale meteorological parameters according to the direction of the air displacement, which is adjusted with the RTM. The air pressure, temperature and humidity are directly calculated for each cell in the computational grid, while several algorithms are used to compute the cloudiness, net radiation, evapotranspiration, and precipitation. The model presented in this paper has been calibrated and validated using data from some meteorological stations and a larger number of rainfall stations located at various elevations in the Azores Islands.
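
    The advective (Foehn-effect) sub-model can be caricatured with constant lapse rates: air cools dry-adiabatically up to the condensation level, moist-adiabatically above it on the windward slope, and warms dry-adiabatically on the leeward descent. The sketch below is that caricature only; the lapse rates and condensation level are assumed constants, and none of the CIELO algorithms (RTM rotation, cloudiness, radiation) are reproduced.

        # Minimal Foehn-effect sketch for the advective component described above:
        # dry-adiabatic cooling below an assumed lifting condensation level (LCL),
        # moist-adiabatic cooling above it on the windward slope, and dry-adiabatic
        # warming on the leeward descent. Constant lapse rates and LCL are assumptions.
        DRY_LAPSE = 9.8e-3     # K per m
        MOIST_LAPSE = 6.0e-3   # K per m (typical value, assumed constant)

        def leeward_temperature(t_sea, lcl, crest, target_elev):
            """Temperature after crossing a ridge of height `crest`, ending at `target_elev`."""
            # windward ascent
            t_crest = t_sea - DRY_LAPSE * min(lcl, crest)
            if crest > lcl:
                t_crest -= MOIST_LAPSE * (crest - lcl)
            # leeward descent (condensate removed, so dry lapse rate all the way down)
            return t_crest + DRY_LAPSE * (crest - target_elev)

        # Example: 20 degC air at sea level, LCL at 800 m, 1500 m ridge, back to sea level
        print(f"{leeward_temperature(20.0, 800.0, 1500.0, 0.0):.1f} degC on the lee side")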

  10. Stellar mass and age determinations . I. Grids of stellar models from Z = 0.006 to 0.04 and M = 0.5 to 3.5 M⊙

    NASA Astrophysics Data System (ADS)

    Mowlavi, N.; Eggenberger, P.; Meynet, G.; Ekström, S.; Georgy, C.; Maeder, A.; Charbonnel, C.; Eyer, L.

    2012-05-01

    Aims: We present dense grids of stellar models suitable for comparison with observable quantities measured with great precision, such as those derived from binary systems or planet-hosting stars. Methods: We computed new Geneva models without rotation at metallicities Z = 0.006, 0.01, 0.014, 0.02, 0.03, and 0.04 (i.e. [Fe/H] from -0.33 to +0.54) and with mass in small steps from 0.5 to 3.5 M⊙. Great care was taken in the procedure for interpolating between tracks in order to compute isochrones. Results: Several properties of our grids are presented as a function of stellar mass and metallicity. Those include surface properties in the Hertzsprung-Russell diagram, internal properties including mean stellar density, sizes of the convective cores, and global asteroseismic properties. Conclusions: We checked our interpolation procedure and compared interpolated tracks with computed tracks. The deviations are less than 1% in radius and effective temperatures for most of the cases considered. We also checked that the present isochrones provide good fits to four pairs of observed detached binaries and to the observed sequences of the open clusters NGC 3532 and M 67. Including atomic diffusion in our models with M < 1.1 M⊙ leads to variations in the surface abundances that should be taken into account when comparing with observational data of stars with measured metallicities. For that purpose, iso-Zsurf lines are computed. These can be requested for download from a dedicated web page, together with tracks at masses and metallicities within the limits covered by the grids. The validity of the relations linking Z and [Fe/H] is also re-assessed in light of the surface abundance variations in low-mass stars. Table D.1 for the basic tracks is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/541/A41, and on our web site http://obswww.unige.ch/Recherche/evol/-Database-. Tables for interpolated tracks, iso-Zsurf lines and isochrones can be computed, on demand, from our web site. Appendices are available in electronic form at http://www.aanda.org

  11. Combining Experiments and Simulation of Gas Absorption for Teaching Mass Transfer Fundamentals: Removing CO2 from Air Using Water and NaOH

    ERIC Educational Resources Information Center

    Clark, William M.; Jackson, Yaminah Z.; Morin, Michael T.; Ferraro, Giacomo P.

    2011-01-01

    Laboratory experiments and computer models for studying the mass transfer process of removing CO2 from air using water or dilute NaOH solution as absorbent are presented. Models tie experiment to theory and give a visual representation of concentration profiles and also illustrate the two-film theory and the relative importance of various…

  12. Noncommutative Jackiw-Pi model: One-loop renormalization

    NASA Astrophysics Data System (ADS)

    Bufalo, R.; Ghasemkhani, M.; Alipour, M.

    2018-06-01

    In this paper, we study the quantum behavior of the noncommutative Jackiw-Pi model. After establishing the Becchi-Rouet-Stora-Tyutin (BRST) invariant action, the perturbative renormalizability is discussed, allowing us to introduce the renormalized mass and gauge coupling. We then proceed to compute the one-loop correction to the basic 1PI functions, necessary to determine the renormalized parameters (mass and charge), and finally discuss the physical behavior of these parameters.

  13. The new semi-analytic code GalICS 2.0 - reproducing the galaxy stellar mass function and the Tully-Fisher relation simultaneously

    NASA Astrophysics Data System (ADS)

    Cattaneo, A.; Blaizot, J.; Devriendt, J. E. G.; Mamon, G. A.; Tollet, E.; Dekel, A.; Guiderdoni, B.; Kucukbas, M.; Thob, A. C. R.

    2017-10-01

    GalICS 2.0 is a new semi-analytic code to model the formation and evolution of galaxies in a cosmological context. N-body simulations based on a Planck cosmology are used to construct halo merger trees, track subhaloes, compute spins and measure concentrations. The accretion of gas on to galaxies and the morphological evolution of galaxies are modelled with prescriptions derived from hydrodynamic simulations. Star formation and stellar feedback are described with phenomenological models (as in other semi-analytic codes). GalICS 2.0 computes rotation speeds from the gravitational potential of the dark matter, the disc and the central bulge. As the rotation speed depends not only on the virial velocity but also on the ratio of baryons to dark matter within a galaxy, our calculation predicts a different Tully-Fisher relation from models in which vrot ∝ vvir. This is why GalICS 2.0 is able to reproduce the galaxy stellar mass function and the Tully-Fisher relation simultaneously. Our results are also in agreement with halo masses from weak lensing and satellite kinematics, gas fractions, the relation between star formation rate (SFR) and stellar mass, the evolution of the cosmic SFR density, bulge-to-disc ratios, disc sizes and the Faber-Jackson relation.

  14. The Effects of Racket Inertia Tensor on Elbow Loadings and Racket Behavior for Central and Eccentric Impacts

    PubMed Central

    Nesbit, Steven M.; Elzinga, Michael; Herchenroder, Catherine; Serrano, Monika

    2006-01-01

    This paper discusses the inertia tensors of tennis rackets and their influence on the elbow swing torques in a forehand motion, the loadings transmitted to the elbow from central and eccentric impacts, and the racket acceleration responses from central and eccentric impacts. Inertia tensors of various rackets with similar mass and mass center location were determined by an inertia pendulum and were found to vary considerably in all three orthogonal directions. Tennis swing mechanics and impact analyses were performed using a computer model comprised of a full-body model of a human, a parametric model of the racket, and an impact function. The swing mechanics analysis of a forehand motion determined that inertia values had a moderate linear effect on the pronation-supination elbow torques required to twist the racket, and a minor effect on the flexion-extension and valgus-varus torques. The impact analysis found that mass center inertia values had a considerable effect on the transmitted torques for both longitudinal and latitudinal eccentric impacts and significantly affected all elbow torque components. Racket acceleration responses to central and eccentric impacts were measured experimentally and found to be notably sensitive to impact location and mass center inertia values. Key points: tennis biomechanics; racket inertia tensor; impact analysis; full-body computer model. PMID:24260004
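
    The inertia-pendulum measurement mentioned above rests on the standard compound-pendulum relation: the period of small oscillations gives the moment of inertia about the pivot, and the parallel-axis theorem then gives the value about the mass centre. The numbers in the sketch below are hypothetical, not data from the study.

        # Compound ("inertia") pendulum estimate of a racket's moment of inertia:
        #   I_pivot = m * g * d * T^2 / (4 * pi^2),   I_cm = I_pivot - m * d^2
        # where T is the small-oscillation period and d the pivot-to-mass-centre distance.
        # The racket numbers below are hypothetical, not measurements from the paper.
        import math

        def inertia_from_pendulum(mass_kg, d_m, period_s, g=9.81):
            i_pivot = mass_kg * g * d_m * period_s**2 / (4.0 * math.pi**2)
            i_cm = i_pivot - mass_kg * d_m**2          # parallel-axis theorem
            return i_cm

        # e.g. 0.32 kg racket, mass centre 0.33 m below the pivot, period 1.38 s
        print(f"I_cm ~ {inertia_from_pendulum(0.32, 0.33, 1.38):.4f} kg m^2")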

  15. Numerical modeling of landslides and generated seismic waves: The Bingham Canyon Mine landslides

    NASA Astrophysics Data System (ADS)

    Miallot, H.; Mangeney, A.; Capdeville, Y.; Hibert, C.

    2016-12-01

    Landslides are important natural hazards and key erosion processes. They create long-period surface waves that can be recorded by regional and global seismic networks. The seismic signals are generated by acceleration/deceleration of the mass sliding over the topography. They constitute a unique and powerful tool to detect, characterize and quantify landslide dynamics. We investigate here the processes at work during the two massive landslides that struck the Bingham Canyon Mine on 10 April 2013. We carry out a combined analysis of the generated seismic signals and of the landslide processes computed with 3D modeling on a complex topography. Forces computed by broadband seismic waveform inversion are used to constrain the study, in particular the force source and the bulk dynamics. The source time function is obtained with a 3D model (Shaltop) in which rheological parameters can be adjusted. We first investigate the influence of the initial shape of the sliding mass, which strongly affects the whole landslide dynamics. We also find that the initial shape of the source mass of the first landslide constrains the source mass of the second landslide fairly well. We then investigate the effect of a rheological parameter, the friction angle, which strongly influences the resulting computed seismic source function. We test several friction laws, including the Coulomb friction law and a velocity-weakening friction law. Our results show that how well the force waveform fits the observed data depends strongly on these choices.

  16. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of the EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in the EFM, the need to compute and store some large sparse matrices, such as the mass matrix and the stiffness matrix, makes the method difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and simplify the operations by solving the linear equations with the CULA solver. To improve the computation efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
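
    Two of the ingredients mentioned above, compressed sparse row (CSR) storage and the lumped mass matrix, are generic and easy to show in miniature. The SciPy sketch below applies row-sum lumping to a small random sparse matrix; it is an illustration of the idea only and is not tied to the authors' GPU/CULA implementation.

        # Two of the ingredients mentioned above, in miniature: storing a large sparse
        # matrix in compressed sparse row (CSR) format, and replacing the consistent mass
        # matrix by a diagonal "lumped" mass matrix (row-sum lumping). Illustrative only;
        # the paper's GPU/CULA implementation is not reproduced here.
        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(0)
        n = 2000
        mass = sp.random(n, n, density=1e-3, random_state=0, format="csr")
        mass = mass + mass.T + sp.identity(n, format="csr")   # symmetric, nonzero diagonal

        print("stored nonzeros:", mass.nnz, "of", n * n, "entries")

        # Row-sum lumping: diagonal matrix carrying the same total "mass" per row
        lumped_diag = np.asarray(mass.sum(axis=1)).ravel()
        lumped = sp.diags(lumped_diag, format="csr")

        # With a diagonal (lumped) mass matrix, M^-1 * f is a cheap element-wise division,
        # which is what makes explicit time stepping attractive.
        f = rng.normal(size=n)
        accel = f / lumped_diag
        print("max |a|:", np.abs(accel).max())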

  17. Finite temperature corrections to tachyon mass in intersecting D-branes

    NASA Astrophysics Data System (ADS)

    Sethi, Varun; Chowdhury, Sudipto Paul; Sarkar, Swarnendu

    2017-04-01

    We continue the analysis of finite temperature corrections to the tachyon mass in intersecting branes initiated in [1]. In this paper we extend the computation to the case of intersecting D3 branes by considering a setup of two intersecting branes in a flat-space background. A holographic model dual to a BCS superconductor, consisting of intersecting D8 branes in a D4 brane background, was proposed in [2]. The background considered here is a simplified configuration of this dual model. We compute the one-loop tachyon amplitude in the Yang-Mills approximation and show that the result is finite. Analyzing the amplitudes further, we numerically compute the transition temperature at which the tachyon becomes massless. The analytic expressions for the one-loop amplitudes obtained here reduce to those for intersecting D1 branes obtained in [1] as well as those for intersecting D2 branes.

  18. VizieR Online Data Catalog: Evolution of rotating very massive LMC stars (Kohler, 2015)

    NASA Astrophysics Data System (ADS)

    Kohler, K.; Langer, N.; de Koter, A.; de Mink, S. E.; Crowther, P. A.; Evans, C. J.; Grafener, G.; Sana, H.; Sanyal, D.; Schneider, F. R. N.; Vink, J. S.

    2014-11-01

    A dense model grid with chemical composition appropriate for the Large Magellanic Cloud is presented. A one-dimensional hydrodynamic stellar evolution code was used to compute our models on the main sequence, taking into account rotation, transport of angular momentum by magnetic fields and stellar wind mass loss. We present stellar evolution models with initial masses of 70-500M⊙ and with initial surface rotational velocities of 0-550km/s. (2 data files).

  19. Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell

    NASA Astrophysics Data System (ADS)

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.

    2017-04-01

    This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Overall, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.

  20. Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Improving radiologists’ performance in classification between malignant and benign breast lesions is important to increase cancer detection sensitivity and reduce false-positive recalls. For this purpose, developing computer-aided diagnosis (CAD) schemes has been attracting research interest in recent years. In this study, we investigated a new feature selection method for the task of breast mass classification. Methods: We initially computed 181 image features based on mass shape, spiculation, contrast, presence of fat or calcifications, texture, isodensity, and other morphological features. From this large image feature pool, we used a sequential forward floating selection (SFFS)-based feature selection method to select relevant features, and analyzed their performance using a support vector machine (SVM) model trained for the classification task. On a database of 600 benign and 600 malignant mass regions of interest (ROIs), we performed the study using a ten-fold cross-validation method. Feature selection and optimization of the SVM parameters were conducted on the training subsets only. Results: The area under the receiver operating characteristic curve (AUC) = 0.805±0.012 was obtained for the classification task. The results also showed that the features most frequently selected by the SFFS-based algorithm over the 10-fold iterations were those related to mass shape, isodensity and presence of fat, which are consistent with the image features frequently used by radiologists in the clinical environment for mass classification. The study also indicated that accurately computing mass spiculation features from the projection mammograms was difficult, and that these features failed to perform well for the mass classification task due to tissue overlap within the benign mass regions. Conclusions: In conclusion, this comprehensive feature analysis study provided new and valuable information for optimizing computerized mass classification schemes that may have potential to be useful as a “second reader” in future clinical practice. PMID:24664267
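
    The selection-plus-classification loop described here can be prototyped with off-the-shelf tools. The sketch below wraps an SVM in the mlxtend implementation of sequential forward floating selection, with synthetic data standing in for the 181 mammographic features; it illustrates the workflow only and is not the authors' CAD scheme (the choice of mlxtend, the number of retained features, and the synthetic data are all assumptions).

        # Workflow sketch of sequential forward floating selection (SFFS) wrapped around
        # an SVM, as described above. Synthetic data replaces the 181 mammographic
        # features; mlxtend's SequentialFeatureSelector (floating=True) is one readily
        # available SFFS implementation, not the code used in the paper.
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC
        from mlxtend.feature_selection import SequentialFeatureSelector as SFS

        X, y = make_classification(n_samples=400, n_features=60, n_informative=8,
                                   random_state=0)

        svm = SVC(kernel="rbf", C=1.0, gamma="scale")
        sffs = SFS(svm,
                   k_features=10,          # number of features to retain (assumed)
                   forward=True,
                   floating=True,          # the "floating" backtracking step of SFFS
                   scoring="roc_auc",
                   cv=10)                  # ten-fold cross-validation, as in the study
        sffs = sffs.fit(X, y)

        print("selected feature indices:", sffs.k_feature_idx_)
        print("cross-validated AUC with selected features: %.3f" % sffs.k_score_)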

  1. Ab initio results for intermediate-mass, open-shell nuclei

    NASA Astrophysics Data System (ADS)

    Baker, Robert B.; Dytrych, Tomas; Launey, Kristina D.; Draayer, Jerry P.

    2017-01-01

    A theoretical understanding of nuclei in the intermediate-mass region is vital to astrophysical models, especially for nucleosynthesis. Here, we employ the ab initio symmetry-adapted no-core shell model (SA-NCSM) in an effort to push first-principle calculations across the sd-shell region. The ab initio SA-NCSM's advantages come from its ability to control the growth of model spaces by including only physically relevant subspaces, which allows us to explore ultra-large model spaces beyond the reach of other methods. We report on calculations for 19Ne and 20Ne up through 13 harmonic oscillator shells using realistic interactions and discuss the underlying structure as well as implications for various astrophysical reactions. This work was supported by the U.S. NSF (OCI-0904874 and ACI-1516338) and the U.S. DOE (DE-SC0005248), and also benefitted from the Blue Waters sustained-petascale computing project and high performance computing resources provided by LSU.

  2. Modeling of Non-Isothermal Cryogenic Fluid Sloshing

    NASA Technical Reports Server (NTRS)

    Agui, Juan H.; Moder, Jeffrey P.

    2015-01-01

    A computational fluid dynamic model was used to simulate the thermal destratification in an upright self-pressurized cryostat approximately half-filled with liquid nitrogen and subjected to forced sinusoidal lateral shaking. A full three-dimensional computational grid was used to model the tank dynamics, fluid flow and thermodynamics using the ANSYS Fluent code. A non-inertial grid was used which required the addition of momentum and energy source terms to account for the inertial forces, energy transfer and wall reaction forces produced by the shaken tank. The kinetics-based Schrage mass transfer model provided the interfacial mass transfer due to evaporation and condensation at the sloshing interface. The dynamic behavior of the sloshing interface, its amplitude and transition to different wave modes, provided insight into the fluid process at the interface. The tank pressure evolution and temperature profiles compared relatively well with the shaken cryostat experimental test data provided by the Centre National D'Etudes Spatiales.

  3. Vertically-Integrated Dual-Continuum Models for CO2 Injection in Fractured Aquifers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Guo, B.; Bandilla, K.; Celia, M. A.

    2017-12-01

    Injection of CO2 into a saline aquifer leads to a two-phase flow system, with supercritical CO2 and brine being the two fluid phases. Various modeling approaches, including fully three-dimensional (3D) models and vertical-equilibrium (VE) models, have been used to study the system. Almost all of that work has focused on unfractured formations. 3D models solve the governing equations in three dimensions and are applicable to generic geological formations. VE models assume rapid and complete buoyant segregation of the two fluid phases, resulting in vertical pressure equilibrium and allowing integration of the governing equations in the vertical dimension. This reduction in dimensionality makes VE models computationally more efficient, but the associated assumptions restrict the applicability of VE model to formations with moderate to high permeability. In this presentation, we extend the VE and 3D models for CO2 injection in fractured aquifers. This is done in the context of dual-continuum modeling, where the fractured formation is modeled as an overlap of two continuous domains, one representing the fractures and the other representing the rock matrix. Both domains are treated as porous media continua and can be modeled by either a VE or a 3D formulation. The transfer of fluid mass between rock matrix and fractures is represented by a mass transfer function connecting the two domains. We have developed a computational model that combines the VE and 3D models, where we use the VE model in the fractures, which typically have high permeability, and the 3D model in the less permeable rock matrix. A new mass transfer function is derived, which couples the VE and 3D models. The coupled VE-3D model can simulate CO2 injection and migration in fractured aquifers. Results from this model compare well with a full-3D model in which both the fractures and rock matrix are modeled with 3D models, with the hybrid VE-3D model having significantly reduced computational cost. In addition to the VE-3D model, we explore simplifications of the rock matrix domain by using sugar-cube and matchstick conceptualizations and develop VE-dual porosity and VE-matchstick models. These vertically-integrated dual-permeability and dual-porosity models provide a range of computationally efficient tools to model CO2 storage in fractured saline aquifers.

  4. A COMPUTATIONALLY EFFICIENT HYBRID APPROACH FOR DYNAMIC GAS/AEROSOL TRANSFER IN AIR QUALITY MODELS. (R826371C005)

    EPA Science Inventory

    Dynamic mass transfer methods have been developed to better describe the interaction of the aerosol population with semi-volatile species such as nitrate, ammonia, and chloride. Unfortunately, these dynamic methods are computationally expensive. Assumptions are often made to r...

  5. Computational Prediction of Electron Ionization Mass Spectra to Assist in GC/MS Compound Identification.

    PubMed

    Allen, Felicity; Pon, Allison; Greiner, Russ; Wishart, David

    2016-08-02

    We describe a tool, competitive fragmentation modeling for electron ionization (CFM-EI) that, given a chemical structure (e.g., in SMILES or InChI format), computationally predicts an electron ionization mass spectrum (EI-MS) (i.e., the type of mass spectrum commonly generated by gas chromatography mass spectrometry). The predicted spectra produced by this tool can be used for putative compound identification, complementing measured spectra in reference databases by expanding the range of compounds able to be considered when availability of measured spectra is limited. The tool extends CFM-ESI, a recently developed method for computational prediction of electrospray tandem mass spectra (ESI-MS/MS), but unlike CFM-ESI, CFM-EI can handle odd-electron ions and isotopes and incorporates an artificial neural network. Tests on EI-MS data from the NIST database demonstrate that CFM-EI is able to model fragmentation likelihoods in low-resolution EI-MS data, producing predicted spectra whose dot product scores are significantly better than full enumeration "bar-code" spectra. CFM-EI also outperformed previously reported results for MetFrag, MOLGEN-MS, and Mass Frontier on one compound identification task. It also outperformed MetFrag in a range of other compound identification tasks involving a much larger data set, containing both derivatized and nonderivatized compounds. While replicate EI-MS measurements of chemical standards are still a more accurate point of comparison, CFM-EI's predictions provide a much-needed alternative when no reference standard is available for measurement. CFM-EI is available at https://sourceforge.net/projects/cfm-id/ for download and http://cfmid.wishartlab.com as a web service.
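
    The "dot product score" used above to compare predicted and measured EI spectra is essentially a cosine similarity between intensity vectors binned on the m/z axis. The sketch below computes that score for two tiny made-up peak lists; the peak lists are not CFM-EI output, and weighting conventions (such as NIST's m/z weighting) are omitted.

        # Cosine ("dot product") similarity between two unit-mass-binned EI spectra, the
        # kind of score referred to above. The peak lists are made up for illustration
        # and are not CFM-EI output; m/z weighting conventions are omitted.
        import numpy as np

        def spectrum_vector(peaks, max_mz=200):
            """peaks: list of (mz, intensity); returns a dense intensity vector."""
            v = np.zeros(max_mz + 1)
            for mz, inten in peaks:
                v[int(round(mz))] += inten
            return v

        def dot_product_score(peaks_a, peaks_b):
            a, b = spectrum_vector(peaks_a), spectrum_vector(peaks_b)
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        measured  = [(41, 30.0), (43, 100.0), (58, 45.0), (71, 12.0)]
        predicted = [(41, 25.0), (43, 100.0), (58, 60.0), (85, 5.0)]
        print(f"dot product score: {dot_product_score(measured, predicted):.3f}")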

  6. Model simulation and experiments of flow and mass transport through a nano-material gas filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Zheng, Zhongquan C.; Winecki, Slawomir

    2013-11-01

    A computational model for evaluating the performance of nano-material packed-bed filters was developed. The porous effects of the momentum and mass transport within the filter bed were simulated. For the momentum transport, an extended Ergun-type model was employed and the energy loss (pressure drop) along the packed bed was simulated and compared with measurement. For the mass transport, a bulk adsorption model was developed to study the adsorption process (breakthrough behavior). Various types of porous materials and gas flows were tested in the filter system where the mathematical models used in the porous substrate were implemented and validated by comparing with experimental data and analytical solutions under similar conditions. Good agreement was obtained between experiments and model predictions.
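
    The "extended Ergun-type model" for the packed-bed pressure drop builds on the classic Ergun relation, which combines a viscous and an inertial loss term. The sketch below evaluates the plain textbook form with assumed bed properties, just to show the shape of the correlation; the paper's extensions are not reproduced.

        # Plain Ergun-equation estimate of pressure drop per unit length through a packed
        # bed (the paper's momentum model is an *extended* Ergun-type model; this is the
        # textbook form only). Bed properties and gas conditions below are assumptions.
        def ergun_dp_per_length(u, eps, d_p, mu, rho):
            """u: superficial velocity (m/s), eps: void fraction, d_p: particle dia. (m),
            mu: gas viscosity (Pa s), rho: gas density (kg/m^3). Returns Pa/m."""
            viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
            inertial = 1.75 * (1.0 - eps) * rho * u ** 2 / (eps ** 3 * d_p)
            return viscous + inertial

        # Example: air at room conditions through a bed of 1 mm particles
        dp = ergun_dp_per_length(u=0.2, eps=0.4, d_p=1e-3, mu=1.8e-5, rho=1.2)
        print(f"pressure gradient ~ {dp:.0f} Pa/m")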

  7. Sorption Modeling and Verification for Off-Gas Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavlarides, Lawrence; Yiacoumi, Sotira; Tsouris, Costas

    2016-12-20

    This project was successfully executed to provide valuable adsorption data and improve a comprehensive model developed in previous work by the authors. Data obtained were used in an integrated computer program to predict the behavior of adsorption columns. The model is supported by experimental data and has been shown to predict capture of off gas similar to that evolving during the reprocessing of nuclear waste. The computer program structure contains (a) equilibrium models of off-gases with the adsorbate; (b) mass-transfer models to describe off-gas mass transfer to a particle, diffusion through the pores of the particle, and adsorption on the active sites of the particle; and (c) incorporation of these models into fixed-bed adsorption modeling, which includes advection through the bed. These models are being connected with the MOOSE (Multiphysics Object-Oriented Simulation Environment) software developed at the Idaho National Laboratory through the DGOSPREY (Discontinuous Galerkin Off-gas SeParation and REcoverY) computer codes developed in this project. Experiments for iodine and water adsorption have been conducted on reduced silver mordenite (Ag0Z) for single-layered particles. Adsorption apparatuses have been constructed to execute these experiments over a useful range of conditions, for temperatures ranging from ambient to 250°C and water dew points ranging from -69 to 19°C. Experimental results were analyzed to determine mass transfer and diffusion of these gases into the particles and to determine which models best describe the single- and binary-component mass transfer and diffusion processes. The experimental results were also used to demonstrate the capabilities of the comprehensive models developed to predict single-particle adsorption and transients of the adsorption-desorption processes in fixed beds. Models for adsorption and mass transfer have been developed to mathematically describe adsorption kinetics and transport via diffusion and advection processes. These models were built on a numerical framework for solving conservation law problems in one-dimensional geometries such as spheres, cylinders, and lines. Coupled with the framework are specific models for adsorption in commercial adsorbents, such as zeolites and mordenites. Utilizing this modeling approach, the authors were able to accurately describe and predict adsorption kinetic data obtained from experiments at a variety of different temperatures and gas phase concentrations. A demonstration of how these models and the framework can be used to simulate adsorption in fixed-bed columns is provided. The CO2 absorption work involved modeling with supportive experimental information. A dynamic model was developed to simulate CO2 absorption using high-alkaline-content water solutions. The model is based upon transient mass and energy balances for chemical species commonly present in CO2 absorption. A computer code was developed to implement CO2 absorption with a chemical reaction model. Experiments were conducted in a laboratory-scale column to determine the model parameters. The influence of geometric parameters and operating variables on CO2 absorption was studied over a wide range of conditions. Continuing work could employ the model to control column operation and predict the absorption behavior under various input conditions and other prescribed experimental perturbations.
The value of the validated models and numerical frameworks developed in this project is that they can be used to predict the sorption behavior of off-gas evolved during the reprocessing of nuclear waste and thus reduce the cost of the experiments. They can also be used to design sorption processes based on concentration limits and flow-rates determined at the plant level.

  8. Early Universe synthesis of asymmetric dark matter nuggets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.

    We compute the mass function of bound states of asymmetric dark matter - nuggets - synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.

  9. Early Universe synthesis of asymmetric dark matter nuggets

    DOE PAGES

    Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.

    2018-02-12

    We compute the mass function of bound states of asymmetric dark matter - nuggets - synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.

  10. Early Universe synthesis of asymmetric dark matter nuggets

    NASA Astrophysics Data System (ADS)

    Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.

    2018-02-01

    We compute the mass function of bound states of asymmetric dark matter—nuggets—synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.
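
    The structure of the two-to-two fusion network can be illustrated with a discrete Smoluchowski-type coagulation system, in which the number density of k-constituent nuggets evolves through pairwise fusions. The sketch below is only that schematic: the rate coefficient is an arbitrary constant, and cosmic expansion, binding-energy dependence, and the bottleneck physics discussed in the paper are all omitted.

        # Schematic two-to-two fusion (Smoluchowski coagulation) network: n_k is the
        # number density of bound states with k constituents, and pairs (i, j) fuse into
        # i+j with an assumed constant rate coefficient. This omits cosmic expansion,
        # binding-energy dependence and bottlenecks; it only shows the problem structure.
        import numpy as np
        from scipy.integrate import solve_ivp

        K_MAX = 32
        RATE = 1.0        # fusion rate coefficient, arbitrary units (assumed equal for all pairs)

        def dndt(t, n):
            dn = np.zeros_like(n)
            for i in range(1, K_MAX + 1):
                for j in range(i, K_MAX + 1 - i):          # unordered pairs with i + j <= K_MAX
                    rate = RATE * n[i - 1] * n[j - 1] * (0.5 if i == j else 1.0)
                    dn[i - 1] -= rate
                    dn[j - 1] -= rate
                    dn[i + j - 1] += rate
            return dn

        n0 = np.zeros(K_MAX)
        n0[0] = 1.0          # start with free constituents only
        sol = solve_ivp(dndt, (0.0, 10.0), n0, method="LSODA")

        final = sol.y[:, -1]
        sizes = np.arange(1, K_MAX + 1)
        print("mean bound-state size at end:", (sizes * final).sum() / final.sum())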

  11. Gravitational Interactions of White Dwarf Double Stars

    NASA Astrophysics Data System (ADS)

    McKeough, James; Robinson, Chloe; Ortiz, Bridget; Hira, Ajit

    2016-03-01

    In the light of the possible role of White Dwarf stars as progenitors of Type Ia supernovas, we present computational simulations of some astrophysical phenomena associated with a study of gravitationally-bound binary stars composed of at least one white dwarf star. Of particular interest to astrophysicists are the conditions inside a white dwarf star in the time frame leading up to its explosive end as a Type Ia supernova, for an understanding of these massive stellar explosions. In addition, studies of the evolution of white dwarfs could serve as promising probes of theories of gravitation. We developed FORTRAN computer programs to implement our models for white dwarfs and other stars. These codes allow for different sizes and masses of stars. Simulations were done in the mass interval from 0.1 to 2.5 solar masses. Our goal was to obtain both atmospheric and orbital parameters. The computational results thus obtained are compared with relevant observational data. The data are further analyzed to identify trends in terms of sizes and masses of stars. We will extend our computational studies to blue giant and red giant stars in the future. Funding from the National Science Foundation.

  12. Characterizing and modeling organic binder burnout from green ceramic compacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewsuk, K.G.; Cesarano, J. III; Cochran, R.J.

    New characterization and computational techniques have been developed to evaluate and simulate binder burnout from pressed powder compacts. Using engineering data and a control volume finite element method (CVFEM) thermal model, a nominally one-dimensional (1-D) furnace has been designed to test, refine, and validate computer models that simulate binder burnout assuming a 1-D thermal gradient across the ceramic body during heating. Experimentally, 1-D radial heat flow was achieved using a rod-shaped heater that directly heats the inside surface of a stack of ceramic annuli surrounded by thermal insulation. The computational modeling effort focused on producing a macroscopic model for binder burnout based on continuum approaches to heat and mass conservation for porous media. Two increasingly complex models have been developed that predict the temperature and mass of a porous powder compact as a function of time during binder burnout. The more complex model also predicts the pressure within a powder compact during binder burnout. Model predictions are in reasonably good agreement with experimental data on binder burnout from a 57-65% relative density pressed powder compact of a 94 wt% alumina body containing ~3 wt% binder. In conjunction with the detailed experimental data from the prototype binder burnout furnace, the models have also proven useful for conducting parametric studies to elucidate critical material property data required to support model development.

  13. A stochastic model for density-dependent microwave Snow- and Graupel scattering coefficients of the NOAA JCSDA community radiative transfer model

    NASA Astrophysics Data System (ADS)

    Stegmann, Patrick G.; Tang, Guanglin; Yang, Ping; Johnson, Benjamin T.

    2018-05-01

    A structural model is developed for the single-scattering properties of snow and graupel particles with a strongly heterogeneous morphology and an arbitrary variable mass density. This effort aims to provide a mechanism to consider particle mass density variation in the microwave scattering coefficients implemented in the Community Radiative Transfer Model (CRTM). The stochastic model applies a bicontinuous random medium algorithm to a simple base shape and uses the Finite-Difference-Time-Domain (FDTD) method to compute the single-scattering properties of the resulting complex morphology.

  14. A novel JEAnS analysis of the Fornax dwarf using evolutionary algorithms: mass follows light with signs of an off-centre merger

    NASA Astrophysics Data System (ADS)

    Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris

    2017-09-01

    Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion', with parametric mass models, where the radial velocity dispersion profile, σ_rr^2, is modelled as a B-spline, and the optimization is a three-step process that consists of (I) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_rr^2; (II) an optimization of the smoothing parameters and (III) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced into mass model inference, irrespective of kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_{tot} = 1.613^{+0.050}_{-0.075} × 10^8 M_{⊙}, and a mass-to-light ratio of Υ_V = 8.93^{+0.32}_{-0.47} (M_{⊙}/L_{⊙}). The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.
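
    The B-spline parameterization of the radial velocity-dispersion profile can be sketched with SciPy: an assumed smooth σ_rr^2(r) profile is fitted in a least-squares sense on a chosen knot vector, with the knots and coefficients standing in for the quantities the evolutionary algorithm would optimize. The profile, knot placement, and noise level below are illustrative assumptions.

        # Sketch of representing a radial velocity-dispersion profile sigma_rr^2(r) as a
        # cubic B-spline, the parameterization optimized in the paper. The "data"
        # profile, knot placement and noise level are illustrative assumptions only.
        import numpy as np
        from scipy.interpolate import make_lsq_spline

        rng = np.random.default_rng(0)
        r = np.linspace(0.05, 2.0, 80)                     # radius, kpc (synthetic)
        sigma2 = 120.0 * np.exp(-r / 0.9) + 20.0           # assumed sigma_rr^2 profile, km^2/s^2
        sigma2_obs = sigma2 + rng.normal(0, 3.0, r.size)   # add noise

        k = 3                                              # cubic spline
        interior_knots = np.linspace(0.3, 1.7, 6)          # knot placement (assumed)
        t = np.concatenate([[r[0]] * (k + 1), interior_knots, [r[-1]] * (k + 1)])

        spline = make_lsq_spline(r, sigma2_obs, t, k=k)    # least-squares B-spline fit
        print("B-spline coefficients:", np.round(spline.c, 1))
        print("sigma_rr^2 at r = 1 kpc:", float(spline(1.0)))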

  15. A Mass Computation Model for Lightweight Brayton Cycle Regenerator Heat Exchangers

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2010-01-01

    Based on a theoretical analysis of convective heat transfer across large internal surface areas, this paper discusses the design implications for generating lightweight gas-gas heat exchanger designs by packaging such areas into compact three-dimensional shapes. Allowances are made for hot and cold inlet and outlet headers for assembly of completed regenerator (or recuperator) heat exchanger units into closed cycle gas turbine flow ducting. Surface area and resulting volume and mass requirements are computed for a range of heat exchanger effectiveness values and internal heat transfer coefficients. Benefit cost curves show the effect of increasing heat exchanger effectiveness on Brayton cycle thermodynamic efficiency on the plus side, while also illustrating the cost in heat exchanger required surface area, volume, and mass requirements as effectiveness is increased. The equations derived for counterflow and crossflow configurations show that as effectiveness values approach unity, or 100 percent, the required surface area, and hence heat exchanger volume and mass tend toward infinity, since the implication is that heat is transferred at a zero temperature difference. To verify the dimensional accuracy of the regenerator mass computational procedure, calculation of a regenerator specific mass, that is, heat exchanger weight per unit working fluid mass flow, is performed in both English and SI units. Identical numerical values for the specific mass parameter, whether expressed in lb/(lb/sec) or kg/(kg/sec), show the dimensional consistency of overall results.
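
    The statement that the required surface area diverges as effectiveness approaches unity follows directly from the counterflow effectiveness-NTU relation for balanced capacity rates: eps = NTU/(1 + NTU), so NTU = eps/(1 - eps) and the area is A = NTU * mdot * cp / U. The sketch below tabulates that growth with assumed working-fluid properties, not the paper's design values.

        # Counterflow effectiveness-NTU relation for balanced capacity rates (C_r = 1):
        #   eps = NTU / (1 + NTU)  =>  NTU = eps / (1 - eps),  A = NTU * mdot * cp / U.
        # Shows the required surface area diverging as effectiveness -> 1. The gas
        # properties and heat-transfer coefficient below are assumed, not the paper's.
        mdot = 1.0        # working-fluid mass flow, kg/s
        cp = 5193.0       # helium specific heat, J/(kg K) (assumed working fluid)
        U = 300.0         # overall heat-transfer coefficient, W/(m^2 K) (assumed)

        for eps in (0.80, 0.90, 0.95, 0.975, 0.99):
            ntu = eps / (1.0 - eps)
            area = ntu * mdot * cp / U
            print(f"effectiveness {eps:5.3f} -> NTU {ntu:6.1f}, area {area:8.1f} m^2")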

  16. A Mass Computation Model for Lightweight Brayton Cycle Regenerator Heat Exchangers

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    2010-01-01

    Based on a theoretical analysis of convective heat transfer across large internal surface areas, this paper discusses the design implications for generating lightweight gas-gas heat exchanger designs by packaging such areas into compact three-dimensional shapes. Allowances are made for hot and cold inlet and outlet headers for assembly of completed regenerator (or recuperator) heat exchanger units into closed cycle gas turbine flow ducting. Surface area and resulting volume and mass requirements are computed for a range of heat exchanger effectiveness values and internal heat transfer coefficients. Benefit cost curves show the effect of increasing heat exchanger effectiveness on Brayton cycle thermodynamic efficiency on the plus side, while also illustrating the cost in heat exchanger required surface area, volume, and mass requirements as effectiveness is increased. The equations derived for counterflow and crossflow configurations show that as effectiveness values approach unity, or 100 percent, the required surface area, and hence heat exchanger volume and mass tend toward infinity, since the implication is that heat is transferred at a zero temperature difference. To verify the dimensional accuracy of the regenerator mass computational procedure, calculation of a regenerator specific mass, that is, heat exchanger weight per unit working fluid mass flow, is performed in both English and SI units. Identical numerical values for the specific mass parameter, whether expressed in lb/(lb/sec) or kg/(kg/sec), show the dimensional consistency of overall results.

  17. Baryon magnetic moments: Symmetries and relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parreno, Assumpta; Savage, Martin; Tiburzi, Brian

    Magnetic moments of the octet baryons are computed using lattice QCD in background magnetic fields, including the first treatment of the magnetically coupled Σ0-Λ system. Although the computations are performed for relatively large values of the up and down quark masses, we gain new insight into the symmetries and relations between magnetic moments by working at a three-flavor mass-symmetric point. While the spin-flavor symmetry in the large Nc limit of QCD is shared by the naïve constituent quark model, we find instances where quark model predictions are considerably favored over those emerging in the large Nc limit. We suggest further calculations that would shed light on the curious patterns of baryon magnetic moments.

  18. PURDU-WINCOF: A computer code for establishing the performance of a fan-compressor unit with water ingestion

    NASA Technical Reports Server (NTRS)

    Leonardo, M.; Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A model for predicting the performance of a multi-spool axial-flow compressor with a fan during operation with water ingestion was developed incorporating several two-phase fluid flow effects as follows: (1) ingestion of water, (2) droplet interaction with blades and resulting changes in blade characteristics, (3) redistribution of water and water vapor due to centrifugal action, (4) heat and mass transfer processes, and (5) droplet size adjustment due to mass transfer and mechanical stability considerations. A computer program, called the PURDU-WINCOF code, was generated based on the model utilizing a one-dimensional formulation. An illustrative case serves to show the manner in which the code can be utilized and the nature of the results obtained.

  19. Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben

    2003-01-01

    A method has been developed to reduce numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step. However, this results in large computer CPU requirements. In the method described in the paper, the need for using a mass as the confluence point has been eliminated. Instead, the confluence point is modeled using an "equilibrium point". This point is calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this "equilibrium point" has the advantage of both reducing the numerical stiffness of the simulations, and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
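
    The "equilibrium point" idea, solving at each step for the point where the vector sum of line forces vanishes rather than integrating a tiny lumped mass, reduces to a small root-finding problem. In the sketch below the lines are modeled as tension-only linear springs anchored at fixed attachment points; the geometry, stiffnesses, and rest lengths are assumptions for illustration, not the simulation's line model.

        # Sketch of the "equilibrium point" idea: instead of integrating a tiny lumped
        # mass at the line confluence, solve at each step for the point where the sum of
        # line forces is zero. Lines are modeled here as tension-only linear springs
        # between the confluence point and fixed attachments; all numbers are assumed.
        import numpy as np
        from scipy.optimize import least_squares

        attach = np.array([[-2.0, 0.0, 0.0],     # suspension-line attachment points (m)
                           [ 2.0, 0.0, 0.0],
                           [ 0.0, 2.0, 0.0],
                           [ 0.0, 0.0, 6.0]])    # e.g. riser up to the canopy
        rest_len = np.array([2.0, 2.0, 2.0, 3.0])   # unstretched line lengths (m)
        stiffness = np.array([5e4, 5e4, 5e4, 8e4])  # N/m

        def net_force(p):
            """Vector sum of tension-only spring forces on the confluence point p."""
            total = np.zeros(3)
            for a, l0, k in zip(attach, rest_len, stiffness):
                d = a - p
                dist = np.linalg.norm(d)
                stretch = max(dist - l0, 0.0)            # lines cannot push
                total += k * stretch * d / dist
            return total

        sol = least_squares(net_force, x0=np.array([0.0, 0.5, 2.0]))
        print("equilibrium confluence point:", np.round(sol.x, 3))
        print("residual force (N):", np.round(net_force(sol.x), 3))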

  20. Extratropical Stratosphere-Troposphere Mass Exchange

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark R.

    2004-01-01

    Understanding the exchange of gases between the stratosphere and the troposphere is important for determining how pollutants enter the stratosphere and how they leave. This study provides a global analysis of the exchange of mass between the stratosphere and the troposphere. While the exchange of mass is not the same as the exchange of constituents, you can't get the constituent exchange right if you have the mass exchange wrong. Thus this kind of calculation is an important test for models which also compute trace gas transport. In this study I computed the mass exchange for two assimilated data sets and a GCM. The models all agree that the amount of mass descending from the stratosphere to the troposphere in the Northern Hemisphere extratropics is approx. 10(exp 10) kg/s averaged over a year. The value for the Southern Hemisphere is smaller by about a factor of two. (10(exp 10) kg of air is the amount of air in a 100 km x 100 km area with a depth of 100 m - roughly the size of the D.C. metro area to a depth of 300 feet.) Most people have the idea that most of the mass enters the stratosphere through the tropics. But this study shows that almost 5 times more mass enters the stratosphere through the extratropics. This mass, however, is quickly recycled out again. Thus the lowermost stratosphere is a mixture of upper stratospheric air and tropospheric air. This is an important result for understanding the chemistry of the lower stratosphere.

  1. Calculation of total cross sections for charge exchange in molecular collisions

    NASA Technical Reports Server (NTRS)

    Ioup, J.

    1979-01-01

    Areas of investigation summarized include nitrogen ion-nitrogen molecule collisions; molecular collisions with surfaces; molecular identification from analysis of cracking patterns of selected gases; computer modelling of a quadrupole mass spectrometer; study of space charge in a quadrupole; transmission of the 127 deg cylindrical electrostatic analyzer; and mass spectrometer data deconvolution.

  2. The minimal SUSY B - L model: from the unification scale to the LHC

    DOE PAGES

    Ovrut, Burt A.; Purves, Austin; Spinner, Sogee

    2015-06-26

    This paper introduces a random statistical scan over the high-energy initial parameter space of the minimal SUSY B - L model — denoted as the B - L MSSM. Each initial set of points is renormalization group evolved to the electroweak scale — being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of initial parameters that satisfies all such constraints is presented, shown to be robust, and to contain a wide range of different configurations of soft supersymmetry breaking masses. The low-energy predictions of each such “valid” point — such as the sparticle mass spectrum and, in particular, the LSP — are computed and then statistically analyzed over the full subspace of valid points. Finally, the amount of fine-tuning required is quantified and compared to the MSSM computed using an identical random scan. The B - L MSSM is shown to generically require less fine-tuning.

  3. Classification of breast tissue in mammograms using efficient coding.

    PubMed

    Costa, Daniel D; Campos, Lúcio F; Barros, Allan K

    2011-06-24

    Female breast cancer is the major cause of death by cancer in western countries. Efforts in Computer Vision have been made in order to improve the diagnostic accuracy of radiologists. Some methods of lesion diagnosis in mammogram images were developed based on the technique of principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used in computer vision applications and in modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5,090 regions of interest from mammograms. The results show that the best rates of success reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the model of efficient coding presented here reached up to 90.07%. Altogether, the results presented demonstrate that independent component analysis performed the efficient coding successfully in order to discriminate mass from non-mass tissues. In addition, we have observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.
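    As a rough illustration of this kind of pipeline (not the authors' code), the sketch below chains an ICA-based efficient-coding step with linear discriminant analysis using scikit-learn; the ROI data here are random stand-ins, so the reported accuracy is meaningless beyond demonstrating the workflow.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in data: rows are vectorized ROI patches, labels are mass / non-mass.
# A real experiment would use the mammogram ROIs described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32 * 32))          # 200 ROIs, 32x32 pixels each
y = rng.integers(0, 2, size=200)             # 0 = non-mass, 1 = mass

# Efficient coding via ICA, followed by linear discriminant analysis.
clf = make_pipeline(FastICA(n_components=40, random_state=0, max_iter=500),
                    LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```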

  4. An Integrated Experimental and Computational Study of Heating due to Surface Catalysis under Hypersonic Conditions

    DTIC Science & Technology

    2012-08-01

    [Fragmented report excerpt: section headings on mass production/destruction terms and energy exchange terms; notes that US3D does not currently model electronic energy, that solutions can be post-processed to account for electronic energy modes, and that a nonequilibrium model for electronic energy is discussed; Figure 9 (left) shows a jet profile solution. Distribution A: Approved for public release; distribution is unlimited.]

  5. Integrated model of the shallow and deep hydrothermal systems in the East Mesa area, Imperial Valley, California

    USGS Publications Warehouse

    Riney, T. David; Pritchett, J.W.; Rice, L.F.

    1982-01-01

    Geological, geophysical, thermal, petrophysical and hydrological data available for the East Mesa hydrothermal system that are pertinent to the construction of a computer model of the natural flow of heat and fluid mass within the system are assembled and correlated. A conceptual model of the full system is developed and a subregion selected for quantitative modeling. By invoking the Boussinesq approximation, valid for describing the natural flow of heat and mass in a liquid hydrothermal system, it is found practical to carry computer simulations far enough in time to ensure that steady-state conditions are obtained. Initial calculations for an axisymmetric model approximating the system demonstrate that the vertical formation permeability of the deep East Mesa system must be very low (kv ~ 0.25 to 0.5 md). Since subsurface temperature and surface heat flow data exhibit major deviations from the axisymmetric approximation, exploratory three-dimensional calculations are performed to assess the effects of various mechanisms which might operate to produce such observed asymmetries. A three-dimensional model evolves from this iterative data synthesis and computer analysis which includes a hot fluid convective source distributed along a leaky fault radiating northward from the center of the hot spot and realistic variations in the reservoir formation properties.

  6. ON THE EVOLUTIONARY AND PULSATION MASS OF CLASSICAL CEPHEIDS. III. THE CASE OF THE ECLIPSING BINARY CEPHEID CEP0227 IN THE LARGE MAGELLANIC CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prada Moroni, P. G.; Gennaro, M.; Bono, G.

    2012-04-20

    We present a new Bayesian approach to constrain the intrinsic parameters (stellar mass and age) of the eclipsing binary system CEP0227 in the Large Magellanic Cloud (LMC). We computed several sets of evolutionary models covering a broad range in chemical compositions and in stellar mass. Independent sets of models were also constructed either by neglecting or by including a moderate convective core overshooting (β_ov = 0.2) during central hydrogen-burning phases. Sets of models were also constructed either by neglecting or by assuming a canonical (η = 0.4, 0.8) or an enhanced (η = 4) mass-loss rate. The most probable solutions were computed in three different planes: luminosity-temperature, mass-radius, and gravity-temperature. By using the Bayes factor, we found that the most probable solutions were obtained in the gravity-temperature plane with a Gaussian mass prior distribution. The evolutionary models constructed by assuming a moderate convective core overshooting (β_ov = 0.2) and a canonical mass-loss rate (η = 0.4) give stellar masses for the primary (Cepheid) of M = 4.14^{+0.04}_{-0.05} M_Sun and for the secondary of M = 4.15^{+0.04}_{-0.05} M_Sun that agree at the 1% level with dynamical measurements. Moreover, we found ages for the two components and for the combined system of t = 151^{+4}_{-3} Myr that agree at the 5% level. The solutions based on evolutionary models that neglect the mass loss attain similar parameters, while those based on models that either account for an enhanced mass loss or neglect convective core overshooting have lower Bayes factors and larger confidence intervals. The dependence on the mass-loss rate might be the consequence of the crude approximation we use to mimic this phenomenon. By using the isochrone of the most probable solution and a Gaussian prior on the LMC distance, we found a true distance modulus of 18.53^{+0.02}_{-0.02} mag and a reddening value of E(B - V) = 0.142^{+0.005}_{-0.010} mag that agree quite well with similar estimates in the literature.

  7. Numerical Simulation of Convective Heat and Mass Transfer in a Two-Layer System

    NASA Astrophysics Data System (ADS)

    Myznikova, B. I.; Kazaryan, V. A.; Tarunin, E. L.; Wertgeim, I. I.

    Results are presented of mathematical and computer modeling of natural convection in a "liquid-gas" two-layer system filling a vertical cylinder surrounded by a solid, heat-conducting tract. The model approximately describes the conjugate heat and mass transfer in an underground oil product storage facility, filled partially by a hydrocarbon liquid, with a natural gas layer above the liquid surface. The geothermal gradient in the rock mass gives rise to intensive convection in the liquid-gas system. The analysis covers laminar flows, laminar-turbulent transitional regimes, and developed turbulent flows.

  8. Numerical investigation of the vortex-induced vibration of an elastically mounted circular cylinder at high Reynolds number (Re = 10^4) and low mass ratio using the RANS code.

    PubMed

    Khan, Niaz Bahadur; Ibrahim, Zainah; Nguyen, Linh Tuan The; Javed, Muhammad Faisal; Jameel, Mohammed

    2017-01-01

    This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier-Stokes (RANS) equations with computational fluid dynamic (CFD) tools. CFD analysis is performed for a fixed-cylinder case with Reynolds number (Re) = 10^4 and for a cylinder that is free to oscillate in the transverse direction and possesses a low mass-damping ratio and Re = 10^4. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulent models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV.

  9. CP-odd sector and θ dynamics in holographic QCD

    NASA Astrophysics Data System (ADS)

    Areán, Daniel; Iatrakis, Ioannis; Järvinen, Matti; Kiritsis, Elias

    2017-07-01

    The holographic model of V-QCD is used to analyze the physics of QCD in the Veneziano large-N limit. An unprecedented analysis of the CP-odd physics is performed, going beyond the level of effective field theories. The structure of holographic saddle points at finite θ is determined, as well as its interplay with chiral symmetry breaking. Many observables (vacuum energy and higher-order susceptibilities, singlet and nonsinglet masses and mixings) are computed as functions of θ and the quark mass m. Wherever applicable the results are compared to those of chiral Lagrangians, finding agreement. In particular, we recover the Witten-Veneziano formula in the small-x limit (x → 0), we compute the θ dependence of the pion mass, and we derive the hyperscaling relation for the topological susceptibility in the conformal window in terms of the quark mass.

  10. Evolution models of helium white dwarf-main-sequence star merger remnants: the mass distribution of single low-mass white dwarfs

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfei; Hall, Philip D.; Jeffery, C. Simon; Bi, Shaolan

    2018-02-01

    It is not known how single white dwarfs with masses less than 0.5 M_solar (low-mass white dwarfs) are formed. One way in which such a white dwarf might be formed is after the merger of a helium-core white dwarf with a main-sequence star that produces a red giant branch star and fails to ignite helium. We use a stellar-evolution code to compute models of the remnants of these mergers and find a relation between the pre-merger masses and the final white dwarf mass. Combining our results with a model population, we predict that the mass distribution of single low-mass white dwarfs formed through this channel spans the range 0.37 to 0.5 M_solar and peaks between 0.45 and 0.46 M_solar. Helium white dwarf-main-sequence star mergers can also lead to the formation of single helium white dwarfs with masses up to 0.51 M_solar. In our model the Galactic formation rate of single low-mass white dwarfs through this channel is about 8.7 x 10^-3 yr^-1. Comparing our models with observations, we find that the majority of single low-mass white dwarfs (< 0.5 M_solar) are formed from helium white dwarf-main-sequence star mergers, at a rate which is about 2 per cent of the total white dwarf formation rate.

  11. Improved analytic extreme-mass-ratio inspiral model for scoping out eLISA data analysis

    NASA Astrophysics Data System (ADS)

    Chua, Alvin J. K.; Gair, Jonathan R.

    2015-12-01

    The space-based gravitational-wave detector eLISA has been selected as the ESA L3 mission, and the mission design will be finalized by the end of this decade. To prepare for mission formulation over the next few years, several outstanding and urgent questions in data analysis will be addressed using mock data challenges, informed by instrument measurements from the LISA Pathfinder satellite launching at the end of 2015. These data challenges will require accurate and computationally affordable waveform models for anticipated sources such as the extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes. Previous data challenges have made use of the well-known analytic EMRI waveforms of Barack and Cutler, which are extremely quick to generate but dephase relative to more accurate waveforms within hours, due to their mismatched radial, polar and azimuthal frequencies. In this paper, we describe an augmented Barack-Cutler model that uses a frequency map to the correct Kerr frequencies, along with updated evolution equations and a simple fit to a more accurate model. The augmented waveforms stay in phase for months and may be generated with virtually no additional computational cost.

  12. Modeling and validation of heat and mass transfer in individual coffee beans during the coffee roasting process using computational fluid dynamics (CFD).

    PubMed

    Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan

    2013-01-01

    Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.

  13. Models and Simulations as a Service: Exploring the Use of Galaxy for Delivering Computational Models

    PubMed Central

    Walker, Mark A.; Madduri, Ravi; Rodriguez, Alex; Greenstein, Joseph L.; Winslow, Raimond L.

    2016-01-01

    We describe the ways in which Galaxy, a web-based reproducible research platform, can be used for web-based sharing of complex computational models. Galaxy allows users to seamlessly customize and run simulations on cloud computing resources, a concept we refer to as Models and Simulations as a Service (MaSS). To illustrate this application of Galaxy, we have developed a tool suite for simulating a high spatial-resolution model of the cardiac Ca2+ spark that requires supercomputing resources for execution. We also present tools for simulating models encoded in the SBML and CellML model description languages, thus demonstrating how Galaxy’s reproducible research features can be leveraged by existing technologies. Finally, we demonstrate how the Galaxy workflow editor can be used to compose integrative models from constituent submodules. This work represents an important novel approach, to our knowledge, to making computational simulations more accessible to the broader scientific community. PMID:26958881

  14. Evaluations of tropospheric aerosol properties simulated by the community earth system model with a sectional aerosol microphysics scheme

    PubMed Central

    Toon, Owen B.; Bardeen, Charles G.; Mills, Michael J.; Fan, Tianyi; English, Jason M.; Neely, Ryan R.

    2015-01-01

    A sectional aerosol model (CARMA) has been developed and coupled with the Community Earth System Model (CESM1). Aerosol microphysics, radiative properties, and interactions with clouds are simulated in the size-resolving model. The model described here uses 20 particle size bins for each aerosol component, including freshly nucleated sulfate particles as well as mixed particles containing sulfate, primary organics, black carbon, dust, and sea salt. The model also includes five types of bulk secondary organic aerosols with four volatility bins. The overall cost of CESM1-CARMA is approximately 2.6 times as much computer time as the standard three-mode aerosol model in CESM1 (CESM1-MAM3) and twice as much computer time as the seven-mode aerosol model in CESM1 (CESM1-MAM7) using similar gas phase chemistry codes. Aerosol spatial-temporal distributions are simulated and compared with a large set of observations from satellites, ground-based measurements, and airborne field campaigns. Simulated annual average aerosol optical depths are lower than MODIS/MISR satellite observations and AERONET observations by ~32%. This difference is within the uncertainty of the satellite observations. CESM1/CARMA reproduces sulfate aerosol mass within 8%, organic aerosol mass within 20%, and black carbon aerosol mass within 50% compared with a multiyear average of the IMPROVE/EPA data over the United States, but differences vary considerably at individual locations. Other data sets show similar levels of comparison with model simulations. The model suggests that in addition to sulfate, organic aerosols also significantly contribute to aerosol mass in the tropical UTLS, which is consistent with limited data. PMID:27668039

  15. Model-Based Systems Engineering Approach to Managing Mass Margin

    NASA Technical Reports Server (NTRS)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and the mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single source of truth. In this paper we describe the modeling patterns used to capture the single source of truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).

  16. Computation of hypersonic flows with finite rate condensation and evaporation of water

    NASA Technical Reports Server (NTRS)

    Perrell, Eric R.; Candler, Graham V.; Erickson, Wayne D.; Wieting, Alan R.

    1993-01-01

    A computer program for modelling 2D hypersonic flows of gases containing water vapor and liquid water droplets is presented. The effects of interphase mass, momentum and energy transfer are studied. Computations are compared with existing quasi-1D calculations on the nozzle of the NASA Langley Eight Foot High Temperature Tunnel, a hypersonic wind tunnel driven by combustion of natural gas in oxygen enriched air.

  17. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied various parameter settings of image enhancement techniques to each suspicious mass (region of interest (ROI)) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. Then, after shape features are computed from the segmented contours, the final classification model was built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from the ensemble of weak segmentors. For our purpose, optimal segmentors are those in the ensemble which contribute the most to the overall classification rather than the ones that produce the most precise segmentation. To measure the segmentors' contribution, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.
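    The contribution measure described above can be sketched as follows, assuming the features from each weak segmentor occupy a contiguous block of the feature vector; the data, block sizes, and labels are synthetic placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: each of 4 weak segmentors contributes 6 shape features,
# concatenated into one feature vector per ROI.  Data here is synthetic.
rng = np.random.default_rng(1)
n_segmentors, n_feats = 4, 6
X = rng.normal(size=(300, n_segmentors * n_feats))
y = rng.integers(0, 2, size=300)              # benign / malignant labels

model = LogisticRegression(max_iter=1000).fit(X, y)
weights = np.abs(model.coef_.ravel())

# Average absolute feature weight per segmentor: a proxy for how much each
# segmentor's contour features contribute to the final classification.
for s in range(n_segmentors):
    block = weights[s * n_feats:(s + 1) * n_feats]
    print(f"segmentor {s}: mean |weight| = {block.mean():.3f}")
```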

  18. Research approaches to mass casualty incidents response: development from routine perspectives to complexity science.

    PubMed

    Shen, Weifeng; Jiang, Libing; Zhang, Mao; Ma, Yuefeng; Jiang, Guanyu; He, Xiaojun

    2014-01-01

    To review the research methods for mass casualty incidents (MCI) systematically and to introduce the concept and characteristics of complexity science and of the artificial systems, computational experiments and parallel execution (ACP) method. We searched PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Articles were searched using the above keywords and only those involving the research methods of mass casualty incidents (MCI) were enrolled. Research methods for MCI have increased markedly over the past few decades. At present, the dominant research methods for MCI are the theory-based approach, the empirical approach, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, the scenario approach and complexity science. This article provides an overview of the development of research methodology for MCI. The progress of routine research approaches and complexity science is briefly presented in this paper. Furthermore, the authors conclude that the reductionism underlying exact science is not suitable for complex MCI systems, and that the only feasible alternative is complexity science. Finally, this summary is followed by a review arguing that the ACP method, combining artificial systems, computational experiments and parallel execution, provides a new way to address research on complex MCI.

  19. The Mass Distribution of Companions to Low-mass White Dwarfs

    NASA Astrophysics Data System (ADS)

    Andrews, Jeff J.; Price-Whelan, Adrian M.; Agüeros, Marcel A.

    2014-12-01

    Measuring the masses of companions to single-line spectroscopic binary stars is (in general) not possible because of the unknown orbital plane inclination. Even when the mass of the visible star can be measured, only a lower limit can be placed on the mass of the unseen companion. However, since these inclination angles should be isotropically distributed, for a large enough, unbiased sample, the companion mass distribution can be deconvolved from the distribution of observables. In this work, we construct a hierarchical probabilistic model to infer properties of unseen companion stars given observations of the orbital period and projected radial velocity of the primary star. We apply this model to three mock samples of low-mass white dwarfs (LMWDs; M ≲ 0.45 M_⊙) and a sample of post-common-envelope binaries. We use a mixture of two Gaussians to model the WD and neutron star (NS) companion mass distributions. Our model successfully recovers the initial parameters of these test data sets. We then apply our model to 55 WDs in the extremely low-mass (ELM) WD Survey. Our maximum a posteriori model for the WD companion population has a mean mass μ_WD = 0.74 M_⊙, with a standard deviation σ_WD = 0.24 M_⊙. Our model constrains the NS companion fraction f_NS to be <16% at 68% confidence. We make samples from the posterior distribution publicly available so that future observational efforts may compute the NS probability for newly discovered LMWDs.
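    The inclination degeneracy the authors exploit can be made concrete with the binary mass function, f(M) = P K^3 / (2πG) = (M2 sin i)^3 / (M1 + M2)^2. A minimal sketch (with hypothetical orbital parameters, not the paper's hierarchical model) solves this relation for the companion mass under isotropically sampled inclinations:

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11          # SI gravitational constant
M_sun = 1.989e30       # kg

# Hypothetical single-lined system: primary (LMWD) mass, orbital period,
# and radial-velocity semi-amplitude of the primary.
M1 = 0.30 * M_sun
P = 2.0 * 86400.0      # s
K = 150.0e3            # m/s

# Binary mass function, measurable from P and K alone.
f_m = P * K**3 / (2.0 * np.pi * G)

def companion_mass(sin_i):
    """Solve (M2 sin i)^3 / (M1 + M2)^2 = f_m for M2."""
    g = lambda M2: (M2 * sin_i)**3 / (M1 + M2)**2 - f_m
    return brentq(g, 1e-3 * M_sun, 1e4 * M_sun)

# Isotropic orbital orientations: cos(i) is uniform on [0, 1).
rng = np.random.default_rng(2)
cos_i = rng.uniform(0.0, 1.0, size=5000)
sin_i = np.sqrt(1.0 - cos_i**2)
M2 = np.array([companion_mass(s) for s in sin_i if s > 0.1])  # skip nearly face-on orbits

print("minimum (edge-on) companion mass: %.2f Msun" % (companion_mass(1.0) / M_sun))
print("median sampled companion mass:    %.2f Msun" % (np.median(M2) / M_sun))
```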

  20. Modeling error in assessment of mammographic image features for improved computer-aided mammography training: initial experience

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Tourassi, Georgia D.

    2011-03-01

    In this study we investigate the hypothesis that there exist patterns in the erroneous assessment of BI-RADS image features among radiology trainees when performing diagnostic interpretation of mammograms. We also investigate whether these error-making patterns can be captured by individual user models. To test our hypothesis we propose a user modeling algorithm that uses the previous readings of a trainee to identify whether certain BI-RADS feature values (e.g. the "spiculated" value for the "margin" feature) are associated with a higher than usual likelihood that the feature will be assessed incorrectly. In our experiments we used readings of 3 radiology residents and 7 breast imaging experts for 33 breast masses for the following BI-RADS features: parenchyma density, mass margin, mass shape and mass density. The expert readings were considered as the gold standard. Rule-based individual user models were developed and tested using the leave-one-out cross-validation scheme. Our experimental evaluation showed that the individual user models are accurate in identifying cases for which errors are more likely to be made. The user models captured regularities in error making for all 3 residents. This finding supports our hypothesis about the existence of individual error-making patterns in the assessment of mammographic image features using the BI-RADS lexicon. Explicit user models identifying the weaknesses of each resident could be of great use when developing and adapting a personalized training plan to meet the resident's individual needs. Such an approach fits well with the framework of adaptive computer-aided educational systems in mammography we have proposed before.

  1. A glacier runoff extension to the Precipitation Runoff Modeling System

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; Viger, Roland

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, the Wolverine and Gulkana Glacier basins, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period of time covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash-Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking into account measurement error, the values are still within the range achieved by the more computationally expensive codes tested over shorter time periods.
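    For reference, the two skill scores quoted above can be computed as sketched below. The Nash-Sutcliffe efficiency is standard; the "bias fraction" here is assumed to be the absolute total bias normalized by the observed total, which may differ in detail from the paper's definition. The streamflow numbers are illustrative only.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bias_fraction(obs, sim):
    """Absolute total bias of the simulation as a fraction of the observed total (assumed definition)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return abs(np.sum(sim - obs)) / abs(np.sum(obs))

# Illustrative observed and simulated daily streamflow (m^3/s); not PRMSglacier output.
obs = np.array([12.0, 15.0, 40.0, 33.0, 20.0, 14.0])
sim = np.array([11.0, 16.0, 37.0, 35.0, 22.0, 13.0])
print("NSE           =", round(nash_sutcliffe(obs, sim), 3))
print("bias fraction =", round(bias_fraction(obs, sim), 3))
```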

  2. Model-Independent Bounds on Kinetic Mixing

    DOE PAGES

    Hook, Anson; Izaguirre, Eder; Wacker, Jay G.

    2011-01-01

    New Abelian vector bosons can kinetically mix with the hypercharge gauge boson of the Standard Model. This letter computes the model-independent limits on vector bosons with masses from 1 GeV to 1 TeV. The limits arise from the numerous e+e− experiments that have been performed in this energy range and bound the kinetic mixing by ϵ ≲ 0.03 for most of the mass range studied, regardless of any additional interactions that the new vector boson may have.

  3. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
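    The marginalization problem that such algorithms accelerate can be shown at toy scale by brute-force enumeration over protein presence states; the generative probabilities and peptide-to-protein map below are invented for illustration and are not the authors' model or parameters.

```python
import itertools

# Toy setup: 3 candidate proteins, 4 peptides; peptide "PEP2" is degenerate
# (it maps to both A and B).  All probabilities are illustrative.
proteins = ["A", "B", "C"]
peptide_parents = {"PEP1": {"A"}, "PEP2": {"A", "B"}, "PEP3": {"B"}, "PEP4": {"C"}}
observed = {"PEP1": True, "PEP2": True, "PEP3": False, "PEP4": False}

prior = 0.5   # prior probability that any given protein is present
q = 0.9       # P(peptide observed | some parent protein present)
r = 0.05      # P(peptide observed | no parent present), i.e. noise

def likelihood(present):
    L = 1.0
    for pep, parents in peptide_parents.items():
        p_obs = q if parents & present else r
        L *= p_obs if observed[pep] else (1.0 - p_obs)
    return L

# Exact marginalization by enumerating every subset of proteins.
posterior = {p: 0.0 for p in proteins}
evidence = 0.0
for k in range(len(proteins) + 1):
    for subset in itertools.combinations(proteins, k):
        present = set(subset)
        w = likelihood(present) * prior**len(present) * (1 - prior)**(len(proteins) - len(present))
        evidence += w
        for p in present:
            posterior[p] += w

for p in proteins:
    print(f"P({p} present | data) = {posterior[p] / evidence:.3f}")
```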

  4. Multi-dimensional computer simulation of MHD combustor hydrodynamics

    NASA Astrophysics Data System (ADS)

    Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.

    1991-04-01

    Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.

  5. Star formation in a hierarchical model for Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sanchez, N.; Parravano, A.

    The effects of the external and initial conditions on the star formation processes in Molecular Cloud Complexes are examined in the context of a schematic model. The model considers a hierarchical system with five predefined phases: warm gas, neutral gas, low density molecular gas, high density molecular gas and protostars. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange depends on the radiation density at the interphase, which is produced by the radiation coming from the stars that form at the end of the hierarchical structure, and by the external radiation field. The system is chaotic in the sense that its temporal evolution is very sensitive to small changes in the initial or external conditions. However, global features such as the star formation efficiency and the Initial Mass Function are less affected by those variations.

  6. Regional scale landslide risk assessment with a dynamic physical model - development, application and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Luna, Byron Quan; Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Nadim, Farrokh

    2013-04-01

    Landslide risk must be assessed at the appropriate scale in order to allow effective risk management. At the moment, few deterministic models exist that can do all the computations required for a complete landslide risk assessment at a regional scale. This arises from the difficulty of precisely defining the location and volume of the released mass and from the inability of the models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). This paper presents a medium-scale, dynamic physical model for rapid mass movements in mountainous and volcanic areas. The deterministic nature of the approach makes it possible to apply it to other sites, since it considers the frictional equilibrium conditions for the initiation process, the rheological resistance of the displaced flow for the run-out process, and a fragility curve that links intensity to economic loss for each building. The model takes into account the triggering effect of an earthquake, intense rainfall and a combination of both (spatial and temporal). The run-out module of the model considers the flow as a 2-D continuum medium, solving the equations of mass balance and momentum conservation. The model is embedded in an open-source geographical information system (GIS) environment, it is computationally efficient and it is transparent (understandable and comprehensible) for the end-user. The model was applied to a virtual region, assessing landslide hazard, vulnerability and risk. A Monte Carlo simulation scheme was applied to quantify, propagate and communicate the effects of uncertainty in input parameters on the final results. In this technique, the input distributions are recreated through sampling and the failure criteria are calculated for each stochastic realisation of the site properties. The model is able to identify the released volumes of the critical slopes and the areas threatened by the run-out intensity. The final outcome is an estimate of individual building damage and total economic risk. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No 265138 New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
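    The Monte Carlo uncertainty-propagation step can be sketched with a classical infinite-slope stability criterion standing in for the full initiation model; all distributions and parameter values below are assumptions for illustration, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative infinite-slope inputs (not the MATRIX model itself):
# cohesion c' (kPa), friction angle phi' (deg), soil depth z (m),
# saturation ratio m, and fixed slope angle beta (deg).
c = rng.normal(8.0, 2.0, n).clip(min=0.1)          # kPa
phi = np.deg2rad(rng.normal(30.0, 3.0, n))
z = rng.uniform(1.0, 3.0, n)                        # m
m = rng.uniform(0.0, 1.0, n)                        # water table / soil depth ratio
beta = np.deg2rad(34.0)
gamma, gamma_w = 19.0, 9.81                         # unit weights (kN/m^3)

# Infinite-slope factor of safety for each stochastic realisation.
fs = (c + (gamma - m * gamma_w) * z * np.cos(beta)**2 * np.tan(phi)) \
     / (gamma * z * np.sin(beta) * np.cos(beta))

p_failure = np.mean(fs < 1.0)
print(f"probability of failure (FS < 1): {p_failure:.3f}")
```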

  7. A program for mass spectrometer control and data processing analyses in isotope geology; written in BASIC for an 8K Nova 1120 computer

    USGS Publications Warehouse

    Stacey, J.S.; Hope, J.

    1975-01-01

    A system is described which uses a minicomputer to control a surface ionization mass spectrometer in the peak switching mode, with the object of computing isotopic abundance ratios of elements of geologic interest. The program uses the BASIC language and is sufficiently flexible to be used for multiblock analyses of any spectrum containing from two to five peaks. In the case of strontium analyses, ratios are corrected for rubidium content and normalized for mass spectrometer fractionation. Although almost any minicomputer would be suitable, the model used was the Data General Nova 1210 with 8K memory. Assembly language driver program and interface hardware-descriptions for the Nova 1210 are included.

  8. Space-Shuttle Emulator Software

    NASA Technical Reports Server (NTRS)

    Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram

    2007-01-01

    A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.

  9. Modelling non-adiabatic effects in H_3^+: Solution of the rovibrational Schrödinger equation with motion-dependent masses and mass surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mátyus, Edit, E-mail: matyus@chem.elte.hu; Szidarovszky, Tamás; Császár, Attila G., E-mail: csaszar@chem.elte.hu

    2014-10-21

    Introducing different rotational and vibrational masses in the nuclear-motion Hamiltonian is a simple phenomenological way to model rovibrational non-adiabaticity. It is shown on the example of the molecular ion H_3^+, for which a global adiabatic potential energy surface accurate to better than 0.1 cm^-1 exists [M. Pavanello, L. Adamowicz, A. Alijah, N. F. Zobov, I. I. Mizus, O. L. Polyansky, J. Tennyson, T. Szidarovszky, A. G. Császár, M. Berg et al., Phys. Rev. Lett. 108, 023002 (2012)], that the motion-dependent mass concept yields much more accurate rovibrational energy levels but, unusually, the results are dependent upon the choice of the embedding of the molecule-fixed frame. Correct degeneracies and an improved agreement with experimental data are obtained if an Eckart embedding corresponding to a reference structure of D_3h point-group symmetry is employed. The vibrational mass of the proton in H_3^+ is optimized by minimizing the root-mean-square (rms) deviation between the computed and recent high-accuracy experimental transitions. The best vibrational mass obtained is larger than the nuclear mass of the proton by approximately one third of an electron mass, m^(v)_opt,p = m_nuc,p + 0.31224 m_e. This optimized vibrational mass, along with a nuclear rotational mass, reduces the rms deviation of the experimental and computed rovibrational transitions by an order of magnitude. Finally, it is shown that an extension of the algorithm allowing the use of motion-dependent masses can deal with coordinate-dependent mass surfaces in the rovibrational Hamiltonian, as well.

  10. Modelling non-adiabatic effects in H_3^+: Solution of the rovibrational Schrödinger equation with motion-dependent masses and mass surfaces

    NASA Astrophysics Data System (ADS)

    Mátyus, Edit; Szidarovszky, Tamás; Császár, Attila G.

    2014-10-01

    Introducing different rotational and vibrational masses in the nuclear-motion Hamiltonian is a simple phenomenological way to model rovibrational non-adiabaticity. It is shown on the example of the molecular ion H_3^+, for which a global adiabatic potential energy surface accurate to better than 0.1 cm-1 exists [M. Pavanello, L. Adamowicz, A. Alijah, N. F. Zobov, I. I. Mizus, O. L. Polyansky, J. Tennyson, T. Szidarovszky, A. G. Császár, M. Berg et al., Phys. Rev. Lett. 108, 023002 (2012)], that the motion-dependent mass concept yields much more accurate rovibrational energy levels but, unusually, the results are dependent upon the choice of the embedding of the molecule-fixed frame. Correct degeneracies and an improved agreement with experimental data are obtained if an Eckart embedding corresponding to a reference structure of D3h point-group symmetry is employed. The vibrational mass of the proton in H_3^+ is optimized by minimizing the root-mean-square (rms) deviation between the computed and recent high-accuracy experimental transitions. The best vibrational mass obtained is larger than the nuclear mass of the proton by approximately one third of an electron mass, m^(v)_opt,p=m_nuc,p+0.31224 m_e. This optimized vibrational mass, along with a nuclear rotational mass, reduces the rms deviation of the experimental and computed rovibrational transitions by an order of magnitude. Finally, it is shown that an extension of the algorithm allowing the use of motion-dependent masses can deal with coordinate-dependent mass surfaces in the rovibrational Hamiltonian, as well.

  11. CELSS scenario analysis: Breakeven calculations

    NASA Technical Reports Server (NTRS)

    Mason, R. M.

    1980-01-01

    A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.

  12. A simple Lagrangian forecast system with aviation forecast potential

    NASA Technical Reports Server (NTRS)

    Petersen, R. A.; Homan, J. H.

    1983-01-01

    A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.

  13. 2D modeling of direct laser metal deposition process using a finite particle method

    NASA Astrophysics Data System (ADS)

    Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.

    2018-05-01

    Direct laser metal deposition is one of the additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial laser direct deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate a single-layer cladding.

  14. A stochastic-dynamic model for global atmospheric mass field statistics

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.

    1981-01-01

    A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. In one approach, the solution was expanded in spherical harmonics and the correlation function was computed analytically from the expansion coefficients. In a second approach, the finite-difference equivalent was solved using a fast Poisson solver and the correlation function was computed using stratified sampling of the individual realizations of F(omega) and hence of phi(omega). In a third approach, a higher-order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c_o which gives the best fit to the data is close to the value expected from dynamical considerations.

  15. Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.

    This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH− transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Altogether, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and to guide optimization of the MFC system.

  16. Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell

    DOE PAGES

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; ...

    2017-02-23

    This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH− transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Altogether, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and to guide optimization of the MFC system.

  17. Sherwood correlation for dissolution of pooled NAPL in porous media

    NASA Astrophysics Data System (ADS)

    Aydin Sarikurt, Derya; Gokdemir, Cagri; Copty, Nadim K.

    2017-11-01

    The rate of interphase mass transfer from non-aqueous phase liquids (NAPLs) entrapped in the subsurface into the surrounding mobile aqueous phase is commonly expressed in terms of Sherwood (Sh) correlations that are written as a function of flow and porous media properties. Because of the lack of precise methods for estimating the interfacial area separating the NAPL and aqueous phases, most studies have opted to use modified Sherwood expressions that lump the interfacial area into the interphase mass transfer coefficient. To date, only two studies in the literature have developed non-lumped Sherwood correlations; however, these correlations have undergone limited validation. In this paper, controlled dissolution experiments on pooled NAPL were conducted. The immobile NAPL mass is placed at the bottom of a flow cell filled with porous media, with water flowing horizontally on top. Effluent aqueous phase concentrations were measured for a wide range of aqueous phase velocities and for two different porous media. To interpret the experimental results, a two-dimensional pore network model of the NAPL dissolution kinetics and aqueous phase transport was developed. The observed effluent concentrations were then used to compute best-fit mass transfer coefficients. Comparison of the effluent concentrations computed with the two-dimensional pore network model to those estimated with one-dimensional analytical solutions indicates that the analytical model, which ignores transport in the lateral direction, can lead to underestimation of the mass transfer coefficient. Based on system parameters and the estimated mass transfer coefficients, non-lumped Sherwood correlations were developed and compared to previously published data. The developed correlations, which are a significant improvement over currently available correlations associated with large uncertainties, can be incorporated into future modeling studies requiring non-lumped Sh expressions.

  18. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, Mafalda; Seery, David; Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.

  19. Immersogeometric cardiovascular fluid–structure interaction analysis with divergence-conforming B-splines

    PubMed Central

    Kamensky, David; Hsu, Ming-Chen; Yu, Yue; Evans, John A.; Sacks, Michael S.; Hughes, Thomas J. R.

    2016-01-01

    This paper uses a divergence-conforming B-spline fluid discretization to address the long-standing issue of poor mass conservation in immersed methods for computational fluid–structure interaction (FSI) that represent the influence of the structure as a forcing term in the fluid subproblem. We focus, in particular, on the immersogeometric method developed in our earlier work, analyze its convergence for linear model problems, then apply it to FSI analysis of heart valves, using divergence-conforming B-splines to discretize the fluid subproblem. Poor mass conservation can manifest as effective leakage of fluid through thin solid barriers. This leakage disrupts the qualitative behavior of FSI systems such as heart valves, which exist specifically to block flow. Divergence-conforming discretizations can enforce mass conservation exactly, avoiding this problem. To demonstrate the practical utility of immersogeometric FSI analysis with divergence-conforming B-splines, we use the methods described in this paper to construct and evaluate a computational model of an in vitro experiment that pumps water through an artificial valve. PMID:28239201

  20. Computational models of epileptiform activity.

    PubMed

    Wendling, Fabrice; Benquet, Pascal; Bartolomei, Fabrice; Jirsa, Viktor

    2016-02-15

    We reviewed computer models that have been developed to reproduce and explain epileptiform activity. Unlike other already-published reviews on computer models of epilepsy, the proposed overview starts from the various types of epileptiform activity encountered during both interictal and ictal periods. Computational models proposed so far in the context of partial and generalized epilepsies are classified according to the following taxonomy: neural mass, neural field, detailed network and formal mathematical models. Insights gained about interictal epileptic spikes and high-frequency oscillations, about fast oscillations at seizure onset, about seizure initiation and propagation, about spike-wave discharges and about status epilepticus are described. This review shows the richness and complementarity of the various modeling approaches as well as the fruitful contribution of the computational neuroscience community in the field of epilepsy research. It shows that models have progressively gained acceptance and are now considered as an efficient way of integrating structural, functional and pathophysiological data about neural systems into "coherent and interpretable views". The advantages, limitations and future of modeling approaches are discussed. Perspectives in epilepsy research and clinical epileptology indicate that very promising directions are foreseen, like model-guided experiments or model-guided therapeutic strategy, among others. Copyright © 2015 Elsevier B.V. All rights reserved.
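    As a minimal example of the neural mass class of models mentioned above, the sketch below integrates a generic Wilson-Cowan-style excitatory-inhibitory pair with forward Euler; the parameters are illustrative and are not taken from any specific model in the review.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generic Wilson-Cowan-style neural mass: one excitatory (E) and one inhibitory (I)
# population.  Oscillatory, epileptiform-like activity appears for some drives P.
def simulate(P=1.5, T=2.0, dt=1e-3,
             wEE=16.0, wEI=12.0, wIE=15.0, wII=3.0, tauE=0.01, tauI=0.02):
    n = int(T / dt)
    E, I = 0.1, 0.1
    trace = np.empty(n)
    for k in range(n):
        dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tauE
        dI = (-I + sigmoid(wIE * E - wII * I)) / tauI
        E += dt * dE
        I += dt * dI
        trace[k] = E                      # record excitatory population activity
    return trace

activity = simulate()
print("mean excitatory activity:", activity.mean().round(3))
```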

  1. Improving the Spacelab mass memory unit tape layout with a simulation model

    NASA Technical Reports Server (NTRS)

    Noneman, S. R.

    1984-01-01

    A tape drive called the Mass Memory Unit (MMU) stores software used by Spacelab computers. MMU tape motion must be minimized during typical flight operations to avoid a loss of scientific data. A projection of the tape motion is needed for evaluation of candidate tape layouts. A computer simulation of the scheduled and unscheduled MMU tape accesses is developed for this purpose. This simulation permits evaluations of candidate tape layouts by tracking and summarizing tape movements. The factors that affect tape travel are investigated and a heuristic is developed to find a good tape layout. An improved tape layout for Spacelab I is selected after the evaluation of fourteen candidates. The simulation model will provide the ability to determine MMU layouts that substantially decrease the tape travel on future Spacelab flights.
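    The core of such a layout evaluation can be sketched as a simple head-movement count over an access sequence; the file names, lengths, and access pattern below are hypothetical rather than actual Spacelab MMU data.

```python
import random

# Hypothetical tape model: files laid out end to end; each access moves the
# read head from its current position to the start of the requested file.
file_lengths = {"EXP1": 120, "EXP2": 80, "CORE": 200, "EXP3": 60}   # tape units

def tape_travel(layout, accesses):
    """Total head movement for an access sequence under a given file layout."""
    start, pos = {}, 0
    for name in layout:                   # compute each file's start position
        start[name] = pos
        pos += file_lengths[name]
    head, travel = 0, 0
    for name in accesses:
        travel += abs(start[name] - head)
        head = start[name]
    return travel

random.seed(4)
accesses = [random.choice(list(file_lengths)) for _ in range(500)]
for layout in (["CORE", "EXP1", "EXP2", "EXP3"], ["EXP1", "EXP3", "CORE", "EXP2"]):
    print(layout, "->", tape_travel(layout, accesses))
```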

  2. Investigation of Molecule-Surface Interactions With Overtone Absorption Spectroscopy and Computational Methods

    DTIC Science & Technology

    2010-11-01

    [Fragmented report excerpt: the overtone frequency serves as the bridge between the molecule-surface interaction model and higher-level electronic-structure calculations such as MP2, at a fraction of the computational cost; a second task is the calculation of absorption frequencies as a function of properties of the methyl C-H bonds and the carbon and hydrogen atomic masses, including the fundamental and overtone frequencies.]

  3. Bayesian prediction of future ice sheet volume using local approximation Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.

  4. Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.

    PubMed

    Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W

    2016-08-18

    A fundamental challenge in quantitation of biomolecules for cancer biomarker discovery arises from the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation in data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis, as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios (<7%) between estimation and ground truth. By applying the topic model-based purification to mass spectrometric data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under the ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed that incorporation of scan-level features has the potential to lead to more accurate purification results by alleviating the loss in information as a result of integrating peaks. We believe cancer biomarker discovery studies that use mass spectrometric analysis of human biospecimens can greatly benefit from topic model-based purification of the data prior to statistical and pathway analyses.
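
    A hedged sketch of the purification idea follows: infer per-sample mixture proportions of two latent sources from non-negative intensity data. scikit-learn's LatentDirichletAllocation is used here purely as a stand-in for the paper's generative models, and the synthetic two-source mixture is an illustrative assumption.

    # Sketch of "purifying" mixed intensity profiles with a topic model.
    # LDA is a stand-in for the paper's models; all data below are synthetic.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(1)
    n_features = 50
    cancer = rng.gamma(2.0, 5.0, n_features)        # assumed "pure" cancer profile
    contaminant = rng.gamma(2.0, 5.0, n_features)   # assumed "pure" contaminant profile

    # each sample is a mixture of the two sources with a sample-specific proportion
    props = rng.uniform(0.2, 0.8, size=20)
    X = np.array([p * cancer + (1 - p) * contaminant for p in props])
    X = rng.poisson(X)                              # count-like peak intensities

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    mix = lda.fit_transform(X)                      # rows: inferred mixture proportions
    print(np.round(mix[:5], 2))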

  5. Modeling Jet and Outflow Feedback during Star Cluster Formation

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Schrön, Martin; Banerjee, Robi; Klessen, Ralf S.

    2014-08-01

    Powerful jets and outflows are launched from the protostellar disks around newborn stars. These outflows carry enough mass and momentum to transform the structure of their parent molecular cloud and to potentially control star formation itself. Despite their importance, we have not been able to fully quantify the impact of jets and outflows during the formation of a star cluster. The main problem lies in limited computing power. We would have to resolve the magnetic jet-launching mechanism close to the protostar and at the same time follow the evolution of a parsec-size cloud for a million years. Current computer power and codes fall orders of magnitude short of achieving this. In order to overcome this problem, we implement a subgrid-scale (SGS) model for launching jets and outflows, which demonstrably converges and reproduces the mass, linear and angular momentum transfer, and the speed of real jets, with ~1000 times lower resolution than would be required without the SGS model. We apply the new SGS model to turbulent, magnetized star cluster formation and show that jets and outflows (1) eject about one-fourth of their parent molecular clump in high-speed jets, quickly reaching distances of more than a parsec, (2) reduce the star formation rate by about a factor of two, and (3) lead to the formation of ~1.5 times as many stars compared to the no-outflow case. Most importantly, we find that jets and outflows reduce the average star mass by a factor of ~ three and may thus be essential for understanding the characteristic mass of the stellar initial mass function.

  6. N-BODY SIMULATION OF PLANETESIMAL FORMATION THROUGH GRAVITATIONAL INSTABILITY AND COAGULATION. II. ACCRETION MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michikoshi, Shugo; Kokubo, Eiichiro; Inutsuka, Shu-ichiro, E-mail: michikoshi@cfca.j, E-mail: kokubo@th.nao.ac.j, E-mail: inutsuka@tap.scphys.kyoto-u.ac.j

    2009-10-01

    The gravitational instability of a dust layer is one of the scenarios for planetesimal formation. If the density of a dust layer becomes sufficiently high as a result of the sedimentation of dust grains toward the midplane of a protoplanetary disk, the layer becomes gravitationally unstable and spontaneously fragments into planetesimals. Using a shearing box method, we performed local N-body simulations of gravitational instability of a dust layer and subsequent coagulation without gas and investigated the basic formation process of planetesimals. In this paper, we adopted the accretion model as a collision model. A gravitationally bound pair of particles is replaced by a single particle with the total mass of the pair. This accretion model enables us to perform long-term and large-scale calculations. We confirmed that the formation process of planetesimals is the same as that in the previous paper with the rubble pile models. The formation process is divided into three stages: the formation of nonaxisymmetric structures; the creation of planetesimal seeds; and their collisional growth. We investigated the dependence of the planetesimal mass on the simulation domain size. We found that the mean mass of planetesimals formed in simulations is proportional to L_y^{3/2}, where L_y is the size of the computational domain in the direction of rotation. However, the mean mass of planetesimals is independent of L_x, where L_x is the size of the computational domain in the radial direction, if L_x is sufficiently large. We presented the estimation formula of the planetesimal mass taking into account the simulation domain size.

  7. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column: Original Research Article: Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where the solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to the parallel bench-scale experimental data. Two unit problems with increasing level of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems, separating the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e. the CO2 mass transfer across a falling monoethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, i.e., Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.

  8. Unsteady solute-transport simulation in streamflow using a finite-difference model

    USGS Publications Warehouse

    Land, Larry F.

    1978-01-01

    This report documents a rather simple, general purpose, one-dimensional, one-parameter, mass-transport model for field use. The model assumes a well-mixed conservative solute that may be coming from an unsteady source and is moving in unsteady streamflow. The quantity of solute being transported is in the units of concentration. Results are reported as such. An implicit finite-difference technique is used to solve the mass transport equation. It consists of creating a tridiagonal matrix and using the Thomas algorithm to solve the matrix for the unknown concentrations at the new time step. The computer program presented is designed to compute the concentration of a water-quality constituent at any point and at any preselected time in a one-dimensional stream. The model is driven by the inflowing concentration of solute at the upstream boundary and is influenced by the solute entering the stream from tributaries and lateral ground-water inflow and from a source or sink. (Woodard-USGS)
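
    The Thomas algorithm mentioned above is short enough to show in full. The sketch below (with an arbitrary example system, not the report's transport matrix) solves a tridiagonal system by one forward elimination sweep and one back-substitution sweep.

    # A minimal Thomas-algorithm solver of the kind used for the implicit
    # finite-difference step; the example system is an illustrative assumption.

    def thomas(a, b, c, d):
        """Solve a tridiagonal system.

        a: sub-diagonal   (length n, a[0] unused)
        b: main diagonal  (length n)
        c: super-diagonal (length n, c[-1] unused)
        d: right-hand side (length n)
        """
        n = len(b)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # concentrations at the new time step for a small illustrative system
    print(thomas([0, -1, -1, -1], [4, 4, 4, 4], [-1, -1, -1, 0], [5, 5, 5, 5]))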

  9. Application of Laser Scanning Confocal Microscopy to Heat and Mass Transport Modeling in Porous Microstructures

    NASA Technical Reports Server (NTRS)

    Marshall, Jochen; Milos, Frank; Fredrich, Joanne; Rasky, Daniel J. (Technical Monitor)

    1997-01-01

    Laser Scanning Confocal Microscopy (LSCM) has been used to obtain digital images of the complicated 3-D (three-dimensional) microstructures of rigid, fibrous thermal protection system (TPS) materials. These orthotropic materials are composed of refractory ceramic fibers with diameters in the range of 1 to 10 microns and have open porosities of 0.8 or more. Algorithms are being constructed to extract quantitative microstructural information from the digital data so that it may be applied to specific heat and mass transport modeling efforts; such information includes, for example, the solid and pore volume fractions, the internal surface area per volume, fiber diameter distributions, and fiber orientation distributions. This type of information is difficult to obtain in general, yet it is directly relevant to many computational efforts which seek to model macroscopic thermophysical phenomena in terms of microscopic mechanisms or interactions. Two such computational efforts for fibrous TPS materials are: i) the calculation of radiative transport properties; ii) the modeling of gas permeabilities.

  10. Computational Flow Modeling of Hydrodynamics in Multiphase Trickle-Bed Reactors

    NASA Astrophysics Data System (ADS)

    Lopes, Rodrigo J. G.; Quinta-Ferreira, Rosa M.

    2008-05-01

    This study aims to incorporate the most recent multiphase models in order to investigate the hydrodynamic behavior of a TBR in terms of pressure drop and liquid holdup. Taking into account transport phenomena such as mass and heat transfer, an Eulerian k-fluid model was developed resulting from the volume averaging of the continuity and momentum equations and solved for a 3D representation of the catalytic bed. The computational fluid dynamics (CFD) model predicts hydrodynamic parameters quite well if good closures for fluid/fluid and fluid/particle interactions are incorporated in the multiphase model. Moreover, catalytic performance is investigated with the catalytic wet oxidation of a phenolic pollutant.

  11. Merger of a Neutron Star with a Newtonian Black Hole

    NASA Technical Reports Server (NTRS)

    Lee, William H.; Kluzniak, Wlodzimierz

    1995-01-01

    Newtonian smoothed particle hydrodynamics (SPH) simulations are presented of the merger of a 1.4 solar mass neutron star with a black hole of equal mass. The initial state of the system is modeled with a stiff polytrope orbiting a point mass. Dynamical instability sets in when the orbital separation is equal to about three stellar radii. The ensuing mass transfer occurs on the dynamical timescale. No accretion torus is formed. At the end of the computation a corona of large extent shrouds an apparently stable binary system of a 0.25 solar mass star orbiting a 2.3 solar mass black hole.

  12. Modelling Mass Casualty Decontamination Systems Informed by Field Exercise Data

    PubMed Central

    Egan, Joseph R.; Amlôt, Richard

    2012-01-01

    In the event of a large-scale chemical release in the UK decontamination of ambulant casualties would be undertaken by the Fire and Rescue Service (FRS). The aim of this study was to track the movement of volunteer casualties at two mass decontamination field exercises using passive Radio Frequency Identification tags and detection mats that were placed at pre-defined locations. The exercise data were then used to inform a computer model of the FRS component of the mass decontamination process. Having removed all clothing and having showered, the re-dressing (termed re-robing) of casualties was found to be a bottleneck in the mass decontamination process during both exercises. Computer simulations showed that increasing the capacity of each lane of the re-robe section to accommodate 10 rather than five casualties would be optimal in general, but that a capacity of 15 might be required to accommodate vulnerable individuals. If the duration of the shower was decreased from three minutes to one minute then a per lane re-robe capacity of 20 might be necessary to maximise the throughput of casualties. In conclusion, one practical enhancement to the FRS response may be to provide at least one additional re-robe section per mass decontamination unit. PMID:23202768
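
    A minimal time-stepped simulation can reproduce the kind of bottleneck analysis described above. In the Python sketch below, the shower releases a fixed number of casualties per minute and the re-robe section has a finite number of places; all rates, durations, lane counts, and casualty numbers are illustrative assumptions rather than the exercise data.

    # Time-stepped sketch of the re-robe bottleneck; all numbers are assumed.

    def time_to_clear(casualties=200, shower_rate=4, rerobe_min=10,
                      lanes=2, capacity=5):
        """Minutes to process everyone, in 1-minute steps (all values assumed)."""
        showered = 0      # casualties that have finished showering so far
        finished = 0      # casualties that have finished re-robing
        rerobe = []       # minutes remaining for casualties currently re-robing
        t = 0
        while finished < casualties:
            t += 1
            finished += sum(1 for m in rerobe if m <= 1)
            rerobe = [m - 1 for m in rerobe if m > 1]
            showered = min(casualties, showered + shower_rate)
            free_places = lanes * capacity - len(rerobe)
            queued = showered - finished - len(rerobe)   # waiting for a re-robe place
            entering = max(0, min(free_places, queued))
            rerobe += [rerobe_min] * entering
        return t

    for cap in (5, 10, 15, 20):
        print(f"re-robe capacity {cap:2d} per lane: {time_to_clear(capacity=cap)} minutes")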

  13. Enforcing elemental mass and energy balances for reduced order models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J.; Agarwal, K.; Sharma, P.

    2012-01-01

    Development of economically feasible gasification and carbon capture, utilization and storage (CCUS) technologies requires a variety of software tools to optimize the designs of not only the key devices involved (e.g., gasifier, CO2 adsorber) but also the entire power generation system. High-fidelity models such as Computational Fluid Dynamics (CFD) models are capable of accurately simulating the detailed flow dynamics, heat transfer, and chemistry inside the key devices. However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges due to the scale differences in both time and length, as well as the high computational cost. A reduced order model (ROM) generated from a high-fidelity model can serve as a bridge between the models of different scales. While high-fidelity models are built upon the principles of mass, momentum, and energy conservation, ROMs are usually developed based on regression-type equations and hence their predictions may violate the mass and energy conservation laws. A high-fidelity model may also have a mass and energy balance problem if it is not tightly converged. Conservation of mass and energy is important when a ROM is integrated into a flowsheet for the process simulation of the entire chemical or power generation system, especially when recycle streams are connected to the modeled device. As a part of the Carbon Capture Simulation Initiative (CCSI) project supported by the U.S. Department of Energy, we developed a software framework for generating ROMs from CFD simulations and integrating them with Process Modeling Environments (PMEs) for system-wide optimization. This paper presents a method to correct the results of a high-fidelity model or a ROM such that the elemental mass and energy are conserved exactly. Correction factors for the flow rates of individual species in the product streams are solved for using a minimization algorithm based on the Lagrange multiplier method. Enthalpies of product streams are also modified to enforce the energy balance. The approach is illustrated for two ROMs, one based on a CFD model of an entrained-flow gasifier and the other based on the CFD model of a multiphase CO2 adsorber.
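
    The correction idea can be sketched as a small constrained minimization: nudge the ROM-predicted product flows by per-species factors so that elemental flows exactly match the feed while staying close to the ROM prediction. The species set, flow values, and the use of SciPy's SLSQP solver (in place of the paper's Lagrange-multiplier implementation) are assumptions for illustration only.

    # Sketch: enforce elemental balance on ROM outlet flows by small corrections.

    import numpy as np
    from scipy.optimize import minimize

    species = ["CO", "CO2", "H2", "H2O"]
    # element rows: C, O, H; columns follow the `species` list
    E = np.array([[1, 1, 0, 0],
                  [1, 2, 0, 1],
                  [0, 0, 2, 2]], dtype=float)

    n_rom = np.array([1.00, 0.55, 0.90, 0.60])   # ROM outlet molar flows (assumed)
    feed = np.array([1.50, 2.75, 3.10])          # elemental molar flows in the feed (assumed)

    def objective(x):
        return np.sum((x - 1.0) ** 2)            # keep correction factors near unity

    balance = {"type": "eq", "fun": lambda x: E @ (x * n_rom) - feed}
    res = minimize(objective, np.ones(len(species)), constraints=[balance], method="SLSQP")

    n_corrected = res.x * n_rom
    print("correction factors:", np.round(res.x, 4))
    print("elemental closure :", np.round(E @ n_corrected - feed, 10))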

  14. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  15. Strong Lens Models for 10 Galaxy Clusters from the Sloan Giant Arcs Survey

    NASA Astrophysics Data System (ADS)

    Dunham, Samuel; Sharon, Keren; Bayliss, Matthew; Dahle, Hakon; Florian, Michael; Gladders, Michael; Johnson, Traci; Murray, Katherine; Rigby, Jane R.; Whitaker, Katherine E.; Wuyts, Eva

    2016-01-01

    We present the results from modeling several strong gravitational lenses as part of the Sloan Giant Arcs Survey (SGAS). HST cannot resolve star-formation in galaxies at redshifts >~1 because they are too far away, but by using the magnification by galaxy clusters at these redshifts (1

  16. Mass and Ozone Fluxes from the Lowermost Stratosphere

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark R.; Olsen, Mark A.

    2004-01-01

    Net mass flux from the stratosphere to the troposphere can be computed from the heating rate along the 380 K isentropic surface and the time rate of change of the mass of the lowermost stratosphere (the region between the tropopause and the 380 K isentrope). Given this net mass flux and the cross-tropopause diabatic mass flux, the residual adiabatic mass flux across the tropopause can also be estimated. These fluxes have been computed using meteorological fields from a free-running general circulation model (FVGCM) and two assimilation data sets, FVDAS and UKMO. The data sets tend to agree that the annual average net mass flux for the Northern Hemisphere is about 1x10^10 kg/s. There is less agreement on the Southern Hemisphere flux, which might be half as large. For all three data sets, the adiabatic mass flux is computed to be from the upper troposphere into the lowermost stratosphere. This flux will dilute air entering from higher stratospheric altitudes. The mass fluxes are convolved with ozone mixing ratios from the Goddard 3D CTM (which uses the FVGCM) to estimate the cross-tropopause transport of ozone. A relatively large adiabatic flux of tropospheric ozone from the tropical upper troposphere into the extratropical lowermost stratosphere dilutes the stratospheric air in the lowermost stratosphere. Thus, a significant fraction of any measured ozone STE may not be ozone produced in the higher stratosphere. The results also illustrate that the annual cycle of ozone concentration in the lowermost stratosphere has as much of a role as the transport in the seasonal ozone flux cycle. This implies that a simplified calculation of ozone STE mass from air mass and a mean ozone mixing ratio may have a large uncertainty.

  17. Computational analysis of the Phanerochaete chrysosporium v2.0 genome database and mass spectrometry identification of peptides in ligninolytic cultures reveal complex mixtures of secreted proteins

    Treesearch

    Amber Vanden Wymelenberg; Patrick Minges; Grzegorz Sabat; Diego Martinez; Andrea Aerts; Asaf Salamov; Igor Grigoriev; Harris Shapiro; Nik Putnam; Paula Belinky; Carlos Dosoretz; Jill Gaskell; Phil Kersten; Dan Cullen

    2006-01-01

    The white-rot basidiomycete Phanerochaete chrysosporium employs extracellular enzymes to completely degrade the major polymers of wood: cellulose, hemicellulose, and lignin. Analysis of a total of 10,048 v2.1 gene models predicts 769 secreted proteins, a substantial increase over the 268 models identified in the earlier database (v1.0). Within the v2.1 ‘computational...

  18. A 2d Block Model For Landslide Simulation: An Application To The 1963 Vajont Case

    NASA Astrophysics Data System (ADS)

    Tinti, S.; Zaniboni, F.; Manucci, A.; Bortolucci, E.

    A 2D block model to study the motion of a sliding mass is presented. The slide is partitioned into a matrix of blocks whose bases are quadrilaterals. The blocks move on a specified sliding surface and follow a trajectory that is computed by the model. The forces acting on the blocks are gravity, basal friction, buoyancy in the case of underwater motion, and interaction with neighbouring blocks. At any time step, the position of the blocks on the sliding surface is determined in curvilinear (local) co-ordinates by computing the position of the vertices of the quadrilaterals and the position of the block centre of mass. Mathematically, the topology of the system is invariant during the motion, which means that the number of blocks is constant and that each block always has the same neighbours. Physically, this means that blocks are allowed to change form, but not to penetrate into each other, coalesce, or split. The change of form is compensated by a change of height, under the computational assumption that the block volume is constant during motion: consequently, lateral expansion or contraction yields a corresponding reduction or increase in block height. This model is superior to the analogous 1D model, in which the mass is partitioned into a chain of interacting blocks: 1D models require the a priori specification of the sliding path, that is, of the trajectory of the blocks, which the 2D block model supplies as one of its outputs. In continuation of previous studies of the catastrophic slide of Vajont, which occurred in 1963 in northern Italy and claimed more than 2000 lives, the 2D block model has been applied to the Vajont case. The results are compared to the outcome of the 1D model and, more importantly, to the observational data concerning the deposit position and morphology. The agreement between simulation and data is found to be quite good.

  19. Mathematical modeling heat and mass transfer processes in porous media

    NASA Astrophysics Data System (ADS)

    Akhmed-Zaki, Darkhan

    2013-11-01

    In the late development stages of oil fields, the complex problem of declining oil recovery arises. One solution approach is to inject surfactant together with water, in the form of active impurities, into the productive layer in order to decrease the oil viscosity and the capillary forces between the oil and water phases. In the flow, the surfactant can be in three states: dissolved in water, dissolved in oil, and adsorbed on the walls of the pore channels. The surfactant's invasion into the reservoir is tracked through its diffusion with the reservoir liquid and its mass exchange with the liquid and solid components of the porous structure. In addition, heat exchange between the fluids (injected and residual) and the framework of the porous medium is of practical importance for evaluating the influence of temperature on enhanced oil recovery. The problem of designing an adequate mathematical model describing simultaneous heat and mass transfer processes in an anisotropic, heterogeneous porous medium during surfactant injection at various temperature regimes has not yet been fully researched. This work presents a 2D mathematical model of surfactant injection into an oil reservoir. The heat and mass transfer processes in the porous medium are described by differential and kinetic equations. A modified version of the IMPES method is used to design the computational algorithm. Sequential and parallel computational algorithms are developed using adaptive curvilinear meshes that take the heterogeneous porous structure into account. In this way the moving boundaries of the process (the 'invasion', 'heat', and 'mass' transfer fronts) can be evaluated according to changes in the pressure, temperature, and concentration gradients.

  20. On mass and momentum conservation in the variable-parameter Muskingum method

    NASA Astrophysics Data System (ADS)

    Reggiani, Paolo; Todini, Ezio; Meißner, Dennis

    2016-12-01

    In this paper we investigate mass and momentum conservation in one-dimensional routing models. To this end we formulate the conservation equations for a finite-dimensional reach and compute individual terms using three standard Saint-Venant (SV) solvers: SOBEK, HEC-RAS and MIKE11. We also employ two different variable-parameter Muskingum (VPM) formulations: the classical Muskingum-Cunge (MC) and the revised, mass-conservative Muskingum-Cunge-Todini (MCT) approach, whereby geometrical cross sections are treated analytically in both cases. We initially compare the three SV solvers for a straight mild-sloping prismatic channel with geometric cross sections and a synthetic hydrograph as boundary conditions against the analytical MC and MCT solutions. The comparison is substantiated by the fact that in this flow regime the conditions for the parabolic equation model solved by MC and MCT are met. Through this intercomparison we show that all approaches have comparable mass and momentum conservation properties, except the MC. Then we extend the MCT to use natural cross sections for a real irregular river channel forced by an observed triple-peak event and compare the results with SOBEK. The model intercomparison demonstrates that the VPM in the form of MCT can be a computationally efficient, fully mass and momentum conservative approach and therefore constitutes a valid alternative to Saint-Venant based flood wave routing for a wide variety of rivers and channels in the world when downstream boundary conditions or hydraulic structures are non-influential.
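
    For readers unfamiliar with Muskingum routing, the sketch below shows the classical coefficient form with constant K and X; the variable-parameter schemes discussed above (MC, MCT) recompute K and X at every step from the channel hydraulics. The hydrograph and parameter values are illustrative assumptions.

    # Constant-parameter Muskingum routing of a synthetic flood wave (all values assumed).

    import numpy as np

    def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
        """Route an inflow hydrograph; K is the storage constant (same time units as dt),
        X the weighting factor."""
        denom = 2.0 * K * (1.0 - X) + dt
        c1 = (dt - 2.0 * K * X) / denom
        c2 = (dt + 2.0 * K * X) / denom
        c3 = (2.0 * K * (1.0 - X) - dt) / denom
        out = np.empty_like(inflow)
        out[0] = inflow[0]                    # assume an initial steady state
        for i in range(1, len(inflow)):
            out[i] = c1 * inflow[i] + c2 * inflow[i - 1] + c3 * out[i - 1]
        return out

    t = np.arange(0.0, 48.0, 1.0)                                  # hours
    inflow = 10.0 + 90.0 * np.exp(-0.5 * ((t - 12.0) / 4.0) ** 2)  # synthetic flood wave
    outflow = muskingum_route(inflow)
    print(f"inflow volume  ~ {inflow.sum():.1f}")
    print(f"outflow volume ~ {outflow.sum():.1f}  (approximate long-run mass check)")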

  1. Influence of unsprung weight on vehicle ride quality

    NASA Astrophysics Data System (ADS)

    Hrovat, D.

    1988-08-01

    In the first part of this paper, a simple quarter-car, two-degree-of-freedom (2 DOF) vehicle model is used to investigate potential benefits and adaptive control capabilities of active suspensions. The results of this study indicate that, with an active suspension, it is possible to trade each 1% increase in tire deflection with a circa 1% decrease in r.m.s. sprung mass acceleration. This can be used for adaptive suspension tuning based on varying road/speed conditions. The second part of this paper is concerned with the influence of unsprung mass on optimal vibration isolation for the case of a linear 2 DOF, quarter-car model. In the study, it is assumed that the tire stiffness and geometry remain the same while unsprung mass is changed. The comprehensive computer analysis shows that, for active suspensions, both ride and handling can be improved by reducing the unsprung mass. In particular, when the total vehicle mass is kept constant, every 10% reduction in unsprung mass contributes to a circa 6% reduction in r.m.s. sprung mass acceleration for the same level of wheel-hop. For active suspension vehicles, this gives an added incentive for reducing the unsprung weight through the usage of, for example, aluminum wheels and lightweight composite materials. Although used primarily in the context of automotive applications, the results of this study are generic to similar 2 DOF structures in other areas of vibration isolation, ranging from computer peripherals to off-road vehicles.
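
    The linear quarter-car model referred to above is easy to write down explicitly. The sketch below assembles the 2 DOF mass and stiffness matrices and compares the undamped body and wheel-hop natural frequencies for two unsprung-mass choices at constant total mass; all parameter values are illustrative assumptions.

    # Undamped natural frequencies of a 2 DOF quarter-car model (parameters assumed).

    import numpy as np

    def natural_frequencies_hz(ms, mu, ks=20e3, kt=180e3):
        """Body and wheel-hop frequencies (Hz) for sprung mass ms and unsprung mass mu."""
        M = np.diag([ms, mu])
        K = np.array([[ks, -ks],
                      [-ks, ks + kt]])
        eigvals = np.linalg.eigvals(np.linalg.solve(M, K))   # squared angular frequencies
        return np.sort(np.sqrt(eigvals.real)) / (2.0 * np.pi)

    total = 290.0                                  # quarter-vehicle mass, kg (assumed)
    for mu in (40.0, 36.0):                        # baseline vs ~10% lighter unsprung mass
        ms = total - mu
        body, wheel_hop = natural_frequencies_hz(ms, mu)
        print(f"mu = {mu:4.0f} kg:  body mode {body:4.2f} Hz,  wheel-hop {wheel_hop:5.2f} Hz")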

  2. Noise optimization of a regenerative automotive fuel pump

    NASA Astrophysics Data System (ADS)

    Wang, J. F.; Feng, H. H.; Mou, X. L.; Huang, Y. X.

    2017-03-01

    The regenerative pump used in automotive applications is facing a noise problem. To understand the mechanism in detail, Computational Fluid Dynamics (CFD) and Computational Acoustic Analysis (CAA) were used together to characterize the fluid and acoustic behavior of the fuel pump, using ANSYS-CFX 15.0 and LMS Virtual.Lab Rev12, respectively. The CFD model and the acoustic model were validated by a mass flow rate test and a sound pressure test, respectively. Comparing the computational and experimental results shows that sound pressure levels at the observer position are consistent at high frequencies, especially at the blade passing frequency. After validating the models, several numerical models were analyzed in the study for noise improvement. It is observed that, for a configuration with a greater number of impeller blades, the noise level at the blade passing frequency was significantly improved compared with that of the original model.

  3. Ionization Processes in the Atmosphere of Titan (Research Note). III. Ionization by High-Z Nuclei Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Gronoff, G.; Mertens, C.; Lilensten, J.; Desorgher, L.; Fluckiger, E.; Velinov, P.

    2011-01-01

    Context. The Cassini-Huygens mission has revealed the importance of particle precipitation in the atmosphere of Titan thanks to in-situ measurements. These ionizing particles (electrons, protons, and cosmic rays) have a strong impact on the chemistry and hence must be modeled. Aims. We revisit our computation of ionization in the atmosphere of Titan by cosmic rays. The high-energy, high-mass ions are taken into account to improve the precision of the calculation of the ion production profile. Methods. The Badhwar and O'Neill model of the cosmic ray spectrum was adapted for the Titan model. We used the TransTitan model coupled with the Planetocosmics model to compute the ion production by cosmic rays. We compared the results with the NAIRAS/HZETRN ionization model, used for the first time for a body that differs from the Earth. Results. The cosmic ray ionization is computed for five groups of cosmic rays, depending on their charge and mass: protons, alpha particles, Z = 8 (oxygen), Z = 14 (silicon), and Z = 26 (iron) nuclei. Protons and alpha particles ionize mainly at 65 km altitude, while the higher mass nuclei ionize at higher altitudes. Nevertheless, the ionization at higher altitude is insufficient to obscure the impact of Saturn's magnetosphere protons at a 500 km altitude. The ionization rate at the peak (altitude: 65 km, for all the different conditions) lies between 30 and 40 cm^-3 s^-1. Conclusions. These new computations show for the first time the importance of high-Z cosmic rays on the ionization of the Titan atmosphere. The updated full ionization profile shape does not differ significantly from that found in our previous calculations (Paper I: Gronoff et al. 2009, 506, 955) but undergoes a strong increase in intensity below an altitude of 400 km, especially between 200 and 400 km altitude where alpha and heavier particles (in the cosmic ray spectrum) are responsible for 40% of the ionization. The comparison of several models of ionization and cosmic ray spectra (in intensity and composition) reassures us about the stability of the altitude of the ionization peak (65 km altitude) with respect to the solar activity.

  4. Investigation of Dispersed and Dispersed Annular (rivulet or Thin Film) Flow Phase Separation in Tees.

    NASA Astrophysics Data System (ADS)

    McCreery, Glenn Ernest

    An experimental and analytical investigation of dispersed and dispersed-annular (rivulet or thin film) flow phase separation in tees has been successfully completed. The research was directed at, but is not specific to, determining flow conditions, following a loss of coolant accident, in the large rectangular passageways leading to vacuum buildings in the containment envelope of some CANDU nuclear reactors. The primary objectives of the research were to: (1) obtain experimental data to help formulate and test mechanistic analytical models of phase separation, and (2) develop the analytical models in computer programs which predict phase separation from upstream flow and pressure conditions and downstream and side branch pressure boundary conditions. To meet these objectives an air-water experimental apparatus was constructed, and consists of large air blowers attached to a long rectangular duct leading to a tee in the horizontal plane. A variety of phenomena was investigated including, for comparison with computer predictions, air streamlines and eddy boundary geometry, drop size spectra, macroscopic mass balances, liquid rivulet pathlines, and trajectories of drops of known size and velocity. Four separate computer programs were developed to analyze phase separation. Three of the programs are used sequentially to calculate dispersed mist phase separation in a tee. The fourth is used to calculate rivulet or thin film pathlines. Macroscopic mass balances are calculated from a summation of mass balances for drops with representative sizes (and masses) spaced across the drop size spectrum. The programs are tested against experimental data, and accurately predict gas flow fields, drop trajectories, rivulet pathlines and macroscopic mass balances. In addition to development of the computer programs, analysis was performed to specify the scaling of dispersed mist and rivulet or thin film flow, to investigate pressure losses in tees, and the inter-relationship of loss coefficients, contraction coefficients, and eddy geometry. The important transient effects of liquid storage in eddies were also analyzed.

  5. Ion transfer from an atmospheric pressure ion funnel into a mass spectrometer with different interface options: Simulation-based optimization of ion transmission efficiency.

    PubMed

    Mayer, Thomas; Borsdorf, Helko

    2016-02-15

    We optimized an atmospheric pressure ion funnel (APIF) including different interface options (pinhole, capillary, and nozzle) with regard to maximal ion transmission. Previous computer simulations consider the ion funnel itself and do not include the geometry of the following components, which can considerably influence the ion transmission into the vacuum stage. Initially, a three-dimensional computer-aided design (CAD) model of our setup was created using Autodesk Inventor. This model was imported into the Autodesk Simulation CFD program where the computational fluid dynamics (CFD) were calculated. The flow field was transferred to SIMION 8.1. Investigations of ion trajectories were carried out using the SDS (statistical diffusion simulation) tool of SIMION, which allowed us to evaluate the flow regime, pressure, and temperature values that we obtained. The simulation-based optimization of different interfaces between an atmospheric pressure ion funnel and the first vacuum stage of a mass spectrometer requires the consideration of fluid dynamics. The use of a Venturi nozzle ensures the highest level of transmission efficiency in comparison to capillaries or pinholes. However, the application of radiofrequency (RF) voltage and an appropriate direct current (DC) field leads to process optimization and maximum ion transfer. The nozzle does not hinder the transfer of small ions. Our high-resolution SIMION model (0.01 mm grid unit^-1), under consideration of fluid dynamics, is generally suitable for predicting the ion transmission through an atmospheric-vacuum system for mass spectrometry and enables the optimization of operational parameters. A Venturi nozzle inserted between the ion funnel and the mass spectrometer permits maximal ion transmission. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Pre-supernova models at low metallicities

    NASA Astrophysics Data System (ADS)

    Hirschi, Raphael

    A series of fast rotating models at very low metallicity (Z ≈ 10^-8) was computed in order to explain the surface abundances observed at the surface of CEMP stars, in particular for nitrogen. The main results are the following: - Strong mixing occurs during He-burning and leads to important primary nitrogen production. - Important mass loss takes place in the RSG stage for the most massive models. The 85 M☉ model loses about three quarters of its initial mass, becomes a WO star and could produce a GRB. - The CNO elements of HE1327-2326 could have been produced in massive rotating stars and ejected by their stellar winds.

  7. The effect of volume phase changes, mass transport, sunlight penetration, and densification on the thermal regime of icy regoliths

    NASA Technical Reports Server (NTRS)

    Fanale, Fraser P.; Salvail, James R.; Matson, Dennis L.; Brown, Robert H.

    1990-01-01

    The present quantitative modeling of convective, condensational, and sublimational effects in porous ice crusts subjected to solar radiation accounts for the penetration of insolation into a crust that is translucent in the visible bandpass but opaque in the IR bandpass. Quasi-steady-state temperatures, H2O mass fluxes, and ice mass-density change rates are computed as functions of time of day and ice depth. When the effects of latent heat and mass transport are included in the model, the enhancement of near-surface temperature due to the 'solid-state greenhouse effect' is substantially diminished. When latent heat, mass transport, and densification effects are considered, however, a significant solid-state greenhouse effect is shown to be compatible with both morphological evidence for high crust strengths and icy shell decoupling from the lithosphere.

  8. An Objective Evaluation of Mass Scaling Techniques Utilizing Computational Human Body Finite Element Models.

    PubMed

    Davis, Matthew L; Scott Gayzik, F

    2016-10-01

    Biofidelity response corridors developed from post-mortem human subjects are commonly used in the design and validation of anthropomorphic test devices and computational human body models (HBMs). Typically, corridors are derived from a diverse pool of biomechanical data and later normalized to a target body habitus. The objective of this study was to use morphed computational HBMs to compare the ability of various scaling techniques to scale response data from a reference to a target anthropometry. HBMs are ideally suited for this type of study since they uphold the assumptions of equal density and modulus that are implicit in scaling method development. In total, six scaling procedures were evaluated, four from the literature (equal-stress equal-velocity, ESEV, and three variations of impulse momentum) and two which are introduced in the paper (ESEV using a ratio of effective masses, ESEV-EffMass, and a kinetic energy approach). In total, 24 simulations were performed, representing both pendulum and full body impacts for three representative HBMs. These simulations were quantitatively compared using the International Organization for Standardization (ISO) ISO-TS18571 standard. Based on these results, ESEV-EffMass achieved the highest overall similarity score (indicating that it is most proficient at scaling a reference response to a target). Additionally, ESEV was found to perform poorly for two degree-of-freedom (DOF) systems. However, the results also indicated that no single technique was clearly the most appropriate for all scenarios.
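
    As a concrete illustration of the simplest of the scaling techniques named above, the sketch below applies equal-stress equal-velocity (ESEV) scaling as it is commonly stated in the literature: with a geometric scale factor lambda = (m_target / m_reference)^(1/3), time and deflection scale by lambda, force by lambda^2, and acceleration by 1/lambda, while velocity is unchanged. The mass values and force pulse are illustrative assumptions; the paper's preferred ESEV-EffMass variant replaces whole-body mass with an effective mass.

    # ESEV scaling of a reference force-time history (masses and pulse are assumed).

    import numpy as np

    def esev_scale(time_s, force_n, m_ref, m_target):
        """Scale a reference force-time history to a target body mass."""
        lam = (m_target / m_ref) ** (1.0 / 3.0)
        return time_s * lam, force_n * lam**2       # time scales by lam, force by lam^2

    t_ref = np.linspace(0.0, 0.03, 200)                       # 30 ms reference pulse
    f_ref = 4000.0 * np.sin(np.pi * t_ref / 0.03)             # half-sine force pulse
    t_tgt, f_tgt = esev_scale(t_ref, f_ref, m_ref=76.0, m_target=88.0)
    print(f"lambda = {(88.0 / 76.0) ** (1 / 3):.3f}, peak force scaled from "
          f"{f_ref.max():.0f} N to {f_tgt.max():.0f} N")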

  9. Surrogate assisted multidisciplinary design optimization for an all-electric GEO satellite

    NASA Astrophysics Data System (ADS)

    Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian; Yuan, Bin

    2017-09-01

    State-of-the-art all-electric geostationary earth orbit (GEO) satellites use electric thrusters to execute all propulsive duties, which significantly differ from the traditional all-chemical ones in orbit-raising, station-keeping, radiation damage protection, and power budget, etc. The design optimization task for an all-electric GEO satellite is therefore a complex multidisciplinary design optimization (MDO) problem involving unique design considerations. However, solving the all-electric GEO satellite MDO problem faces big challenges in disciplinary modeling techniques and efficient optimization strategy. To address these challenges, we present a surrogate assisted MDO framework consisting of several modules, i.e., MDO problem definition, multidisciplinary modeling, multidisciplinary analysis (MDA), and a surrogate assisted optimizer. Based on the proposed framework, the all-electric GEO satellite MDO problem is formulated to minimize the total mass of the satellite system under a number of practical constraints. Considerable effort is then spent on multidisciplinary modeling involving the geosynchronous transfer, GEO station-keeping, power, thermal control, attitude control, and structure disciplines. Since the orbit dynamics models and the finite element structural model are computationally expensive, an adaptive response surface surrogate based optimizer is incorporated in the proposed framework to solve the satellite MDO problem with moderate computational cost, where a response surface surrogate is gradually refined to represent the computationally expensive MDA process. After optimization, the total mass of the studied GEO satellite is decreased by 185.3 kg (i.e., 7.3% of the total mass). Finally, the optimal design is further discussed to demonstrate the effectiveness of the proposed framework in coping with all-electric GEO satellite system design optimization problems. The proposed surrogate assisted MDO framework can also provide a valuable reference for other all-electric spacecraft system designs.

  10. Multi-component fluid flow through porous media by interacting lattice gas computer simulation

    NASA Astrophysics Data System (ADS)

    Cueva-Parra, Luis Alberto

    In this work we study structural and transport properties such as the power-law behavior of the trajectories of each constituent and of their center of mass, density profile, mass flux, permeability, velocity profile, phase separation, segregation, and mixing of miscible and immiscible multicomponent fluid flow through rigid and non-consolidated porous media. The considered parameters are the mass ratio of the components, temperature, external pressure, and porosity. Because of their solid theoretical foundation and computational simplicity, the selected approaches are the interacting lattice gas with the Monte Carlo method (Metropolis algorithm) and direct sampling, combined with particular collision rules. The percolation mechanism is used for modeling the initial random porous media. The introduced collision rules allow modeling of non-consolidated porous media, because part of the kinetic energy of the fluid particles is transferred to barrier particles, which are the components of the porous medium. Having gained kinetic energy, the barrier particles can move. A number of interesting results are observed. Some findings include: (i) phase separation in immiscible fluid flow through a medium with no barrier particles (porosity p = 1). (ii) For the flow of miscible fluids through a rigid porous medium with porosity close to the percolation threshold (p_C), the flux density (a measure of permeability) shows a power-law increase ∝ (p_C - p)^μ with μ = 2.0, and the density profile is found to decay with height ∝ exp(-m_{A/B}h), consistent with the barometric height law. (iii) Sedimentation and driving of barrier particles in fluid flow through a non-consolidated porous medium. This study involves developing computer simulation models with efficient serial and parallel codes, extensive data analysis via graphical utilities, and computer visualization techniques.

  11. Measurements of Kepler Planet Masses and Eccentricities from Transit Timing Variations: Analytic and N-body Results

    NASA Astrophysics Data System (ADS)

    Hadden, Sam; Lithwick, Yoram

    2015-12-01

    Several Kepler planets reside in multi-planet systems where gravitational interactions result in transit timing variations (TTVs) that provide exquisitely sensitive probes of their masses and orbits. Measuring these planets' masses and orbits constrains their bulk compositions and can provide clues about their formation. However, inverting TTV measurements in order to infer planet properties can be challenging: it involves fitting a nonlinear model with a large number of parameters to noisy data, often with significant degeneracies between parameters. I present results from two complementary approaches to TTV inversion: Markov chain Monte Carlo simulations that use N-body integrations to compute transit times, and a simplified analytic model for computing the TTVs of planets near mean motion resonances. The analytic model allows for straightforward interpretation of the N-body results and provides an independent estimate of parameter uncertainties that can be compared to MCMC results, which may be sensitive to factors such as priors. We have conducted extensive MCMC simulations along with analytic fits to model the TTVs of dozens of Kepler multi-planet systems. We find that the bulk of these sub-Jovian planets have low densities that necessitate significant gaseous envelopes. We also find that the planets' eccentricities are generally small but often definitively non-zero.

  12. Influence of heat transfer rates on pressurization of liquid/slush hydrogen propellant tanks

    NASA Technical Reports Server (NTRS)

    Sasmal, G. P.; Hochstein, J. I.; Hardy, T. L.

    1993-01-01

    A multi-dimensional computational model of the pressurization process in a liquid/slush hydrogen tank is developed and used to study the influence of heat flux rates at the ullage boundaries on the process. The new model computes these rates and performs an energy balance for the tank wall, whereas previous multi-dimensional models required a priori specification of the boundary heat flux rates. Analyses of both liquid hydrogen and slush hydrogen pressurization were performed to expose differences between the two processes. Graphical displays are presented to establish the dependence of pressurization time, pressurant mass required, and other parameters of interest on ullage boundary heat flux rates and pressurant mass flow rate. Detailed velocity fields and temperature distributions are presented for selected cases to further illuminate the details of the pressurization process. It is demonstrated that ullage boundary heat flux rates do significantly affect the pressurization process and that minimizing heat loss from the ullage and maximizing pressurant flow rate minimizes the mass of pressurant gas required to pressurize the tank. It is further demonstrated that proper dimensionless scaling of pressure and time permits all the pressure histories examined during this study to be displayed as a single curve.

  13. Interior phase transformations and mass-radius relationships of silicon-carbon planets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Hugh F.; Militzer, Burkhard, E-mail: hughfw@gmail.com

    2014-09-20

    Planets such as 55 Cancri e orbiting stars with a high carbon-to-oxygen ratio may consist primarily of silicon and carbon, with successive layers of carbon, silicon carbide, and iron. The behavior of silicon-carbon materials at the extreme pressures prevalent in planetary interiors, however, has not yet been sufficiently understood. In this work, we use simulations based on density functional theory to determine high-pressure phase transitions in the silicon-carbon system, including the prediction of new stable compounds with Si2C and SiC2 stoichiometry at high pressures. We compute equations of state for these silicon-carbon compounds as a function of pressure, and hence derive interior structural models and mass-radius relationships for planets composed of silicon and carbon. Notably, we predict a substantially smaller radius for SiC planets than in previous models, and find that mass-radius relationships for SiC planets are indistinguishable from those of silicate planets. We also compute a new equation of state for iron. We rederive interior models for 55 Cancri e and are able to place more stringent restrictions on its composition.

  14. Analytical effective tensor for flow-through composites

    DOEpatents

    Sviercoski, Rosangela De Fatima [Los Alamos, NM

    2012-06-19

    A machine, method and computer-usable medium for modeling an average flow of a substance through a composite material. Such a modeling includes an analytical calculation of an effective tensor K^a suitable for use with a variety of media. The analytical calculation corresponds to an approximation to the tensor K, and follows by first computing the diagonal values, and then identifying symmetries of the heterogeneity distribution. Additional calculations include determining the center of mass of the heterogeneous cell and its angle according to a defined Cartesian system, and utilizing this angle in a rotation formula to compute the off-diagonal values and determine their sign.

  15. Using GRACE and climate model simulations to predict mass loss of Alaskan glaciers through 2100

    DOE PAGES

    Wahr, John; Burgess, Evan; Swenson, Sean

    2016-05-30

    Glaciers in Alaska are currently losing mass at a rate of about -50 Gt a^-1, one of the largest ice loss rates of any regional collection of mountain glaciers on Earth. Existing projections of Alaska's future sea-level contributions tend to be divergent and are not tied directly to regional observations. Here we develop a simple, regional observation-based projection of Alaska's future sea-level contribution. We compute a time series of recent Alaska glacier mass variability using monthly GRACE gravity fields from August 2002 through December 2014. We also construct a three-parameter model of Alaska glacier mass variability based on monthly ERA-Interim snowfall and temperature fields. When these three model parameters are fitted to the GRACE time series, the model explains 94% of the variance of the GRACE data. Using these parameter values, we then apply the model to simulated fields of monthly temperature and snowfall from the Community Earth System Model, to obtain predictions of mass variations through 2100. We conclude that mass loss rates may increase to between -80 and -110 Gt a^-1 by 2100, with a total sea-level rise contribution of 19 ± 4 mm during the 21st century.
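
    A three-parameter model of this general kind can be fitted with ordinary least squares plus a one-dimensional search over a melt threshold. The sketch below uses a plausible form, dM/dt = a*snowfall - b*max(T - T0, 0), with synthetic forcing standing in for ERA-Interim fields and a synthetic "observed" series standing in for GRACE anomalies; the functional form and all numbers are assumptions for illustration, not the model of the paper.

    # Fit a three-parameter mass-change model to a synthetic monthly series.

    import numpy as np

    rng = np.random.default_rng(2)
    months = 150
    T = 5.0 * np.sin(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 1, months)
    P = np.clip(2.0 + rng.normal(0, 0.5, months), 0, None)        # synthetic snowfall
    dM_obs = 0.8 * P - 1.5 * np.maximum(T - 1.0, 0) + rng.normal(0, 0.3, months)

    best = None
    for T0 in np.linspace(-2, 3, 51):              # grid search over the melt threshold
        A = np.column_stack([P, -np.maximum(T - T0, 0)])
        coef, *_ = np.linalg.lstsq(A, dM_obs, rcond=None)
        sse = np.sum((A @ coef - dM_obs) ** 2)
        if best is None or sse < best[0]:
            best = (sse, T0, coef)

    sse, T0, (a, b) = best
    print(f"fitted: a={a:.2f}, b={b:.2f}, T0={T0:.2f}  (true values 0.8, 1.5, 1.0)")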

  16. Computational Aerodynamic Simulations of a 1215 ft/sec Tip Speed Transonic Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational Aerodynamic simulations of a 1215 ft/sec tip speed transonic fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone extensive experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which for this model did not include a split flow path with core and bypass ducts. As a result, it was only necessary to adjust fan rotational speed in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the flow fields at all operating conditions reveals no excessive boundary layer separations or related secondary-flow problems.

  17. Three-dimensional computational fluid dynamics modelling and experimental validation of the Jülich Mark-F solid oxide fuel cell stack

    NASA Astrophysics Data System (ADS)

    Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.

    2018-01-01

    This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.

  18. Computer Science (CS) Education in Indian Schools: Situation Analysis Using Darmstadt Model

    ERIC Educational Resources Information Center

    Raman, Raghu; Venkatasubramanian, Smrithi; Achuthan, Krishnashree; Nedungadi, Prema

    2015-01-01

    Computer science (CS) and its enabling technologies are at the heart of this information age, yet its adoption as a core subject by senior secondary students in Indian schools is low and has not reached critical mass. Though there have been efforts to create core curriculum standards for subjects like Physics, Chemistry, Biology, and Math, CS…

  19. Computer technique for simulating the combustion of cellulose and other fuels

    Treesearch

    Andrew M. Stein; Brian W. Bauske

    1971-01-01

    A computer method has been developed for simulating the combustion of wood and other cellulosic fuels. The products of combustion are used as input for a convection model that simulates real fires. The method allows the chemical process to proceed to equilibrium and then examines the effects of mass addition and repartitioning on the fluid mechanics of the convection...

  20. Numerical investigation of the vortex-induced vibration of an elastically mounted circular cylinder at high Reynolds number (Re = 10^4) and low mass ratio using the RANS code

    PubMed Central

    2017-01-01

    This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier–Stokes (RANS) equations with computational fluid dynamic (CFD) tools. CFD analysis is performed for a fixed-cylinder case with Reynolds number (Re) = 10^4 and for a cylinder that is free to oscillate in the transverse direction and possesses a low mass-damping ratio and Re = 10^4. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulent models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV. PMID:28982172

  1. VRA Modeling, phase 1

    NASA Technical Reports Server (NTRS)

    Kindt, Louis M.; Mullins, Michael E.; Hand, David W.; Kline, Andrew A.

    1995-01-01

    The destruction of organic contaminants in waste water for closed systems, such as that of Space Station, is crucial due to the need for recycling the waste water. A co-current upflow bubble column using oxygen as the gas phase oxidant and packed with catalyst particles consisting of a noble metal on an alumina substrate is being developed for this process. The objective of this study is to develop a plug-flow model that will predict the performance of this three-phase reactor system in destroying a multicomponent mixture of organic contaminants in water. Mass balances on a series of contaminants and oxygen in both the liquid and gas phases are used to develop this model. These mass balances incorporate the gas-to-liquid and liquid-to-particle mass transfer coefficients, the catalyst effectiveness factor, and the intrinsic reaction rate. To validate this model, a bench scale reactor has been tested at Michigan Technological University at elevated pressures (50-83 psig) and a temperature range of 200 to 290 °F. Feeds consisting of five dilute solutions of ethanol (approx. 10 ppm), chlorobenzene (approx. 20 ppb), formaldehyde (approx. 100 ppb), dimethyl sulfoxide (DMSO approx. 300 ppb), and urea (approx. 20 ppm) in water were tested individually with an oxygen mass flow rate of 0.009 lb/h. The results from these individual tests were used to develop the kinetic parameter inputs necessary for the computer model. The computer simulated results are compared to the experimental data obtained for all 5 components run in a mixture on the differential test column for a range of reactor contact times.
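
    A minimal sketch of the kind of plug-flow balance described above, for one dilute contaminant with gas-to-liquid oxygen transfer and a pseudo-first-order catalytic rate. The rate constants, transfer coefficient, and oxygen demand below are assumed placeholders, not the kinetic parameters fitted in the study.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Liquid-phase plug-flow balances along the column axis z (assumed values)
    k_la  = 0.05     # gas-to-liquid mass transfer coefficient * area [1/s]
    k_rxn = 0.02     # pseudo-first-order catalytic rate [1/s]
    u     = 0.01     # superficial liquid velocity [m/s]
    C_O2_sat = 8e-3  # dissolved O2 at saturation [kg/m^3]

    def axial(z, y):
        C, O2 = y                     # contaminant and dissolved O2 [kg/m^3]
        r = k_rxn * C if O2 > 0 else 0.0
        dCdz  = -r / u
        dO2dz = (k_la * (C_O2_sat - O2) - 2.0 * r) / u   # assumed 2:1 O2 demand
        return [dCdz, dO2dz]

    sol = solve_ivp(axial, (0.0, 1.0), [10e-3, 2e-3])    # 1 m column, 10 ppm feed
    print("outlet contaminant [kg/m^3]:", sol.y[0, -1])
    ```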

  2. VizieR Online Data Catalog: TROY project. I. (Lillo-Box+, 2018)

    NASA Astrophysics Data System (ADS)

    Lillo-Box, J.; Barrado, D.; Figueira, P.; Leleu, A.; Santos, N. C.; Correia, A. C. M.; Robutel, P.; Faria, J. P.

    2017-11-01

    tablea4.dat: Posterior confidence intervals of the parameters explored to fit the radial velocity data according to equation 9 in the paper. tablea6.dat: Maximum mass of possible trojan bodies for the six tested models assuming their presence. We present the 95% confidence intervals of the mass computed from random samplings of the radial velocity semi-amplitude K2, the inclination i, the eccentricity e (when applicable), and the stellar mass obtained from the literature. (2 data files).
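
    For illustration, a hedged sketch of how a body mass follows from a radial-velocity semi-amplitude K2, inclination, eccentricity, and stellar mass, assuming the companion mass is much smaller than the stellar mass. This is not the paper's full posterior sampling, and all example inputs below are assumed.

    ```python
    import numpy as np

    G = 6.674e-11           # m^3 kg^-1 s^-2
    M_sun = 1.989e30        # kg
    M_earth = 5.972e24      # kg

    def companion_mass(K2, P_days, M_star_solar, inc_deg, ecc=0.0):
        """Mass estimate [Earth masses] from an RV semi-amplitude K2 [m/s],
        assuming m << M_star (illustrative only)."""
        P = P_days * 86400.0
        M_star = M_star_solar * M_sun
        m_sini = K2 * np.sqrt(1.0 - ecc**2) * (P * M_star**2 / (2.0 * np.pi * G))**(1.0 / 3.0)
        return m_sini / np.sin(np.radians(inc_deg)) / M_earth

    # Example with assumed values: K2 = 1 m/s, P = 10 d, 1 M_sun star, i = 85 deg
    print("mass [M_earth]:", companion_mass(1.0, 10.0, 1.0, 85.0))
    ```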

  3. SPIDER. V. MEASURING SYSTEMATIC EFFECTS IN EARLY-TYPE GALAXY STELLAR MASSES FROM PHOTOMETRIC SPECTRAL ENERGY DISTRIBUTION FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swindle, R.; Gal, R. R.; La Barbera, F.

    2011-10-15

    We present robust statistical estimates of the accuracy of early-type galaxy stellar masses derived from spectral energy distribution (SED) fitting as functions of various empirical and theoretical assumptions. Using large samples consisting of ≈40,000 galaxies from the Sloan Digital Sky Survey (SDSS; ugriz), of which ≈5000 are also in the UKIRT Infrared Deep Sky Survey (YJHK), with spectroscopic redshifts in the range 0.05 ≤ z ≤ 0.095, we test the reliability of some commonly used stellar population models and extinction laws for computing stellar masses. Spectroscopic ages (t), metallicities (Z), and extinctions (A_V) are also computed from fits to SDSS spectra using various population models. These external constraints are used in additional tests to estimate the systematic errors in the stellar masses derived from SED fitting, where t, Z, and A_V are typically left as free parameters. We find reasonable agreement in mass estimates among stellar population models, with variation of the initial mass function and extinction law yielding systematic biases on the mass of nearly a factor of two, in agreement with other studies. Removing the near-infrared bands changes the statistical bias in mass by only ≈0.06 dex, adding uncertainties of ≈0.1 dex at the 95% CL. In contrast, we find that removing an ultraviolet band is more critical, introducing 2σ uncertainties of ≈0.15 dex. Finally, we find that the stellar masses are less affected by the absence of metallicity and/or dust extinction knowledge. However, there is a definite systematic offset in the mass estimate when the stellar population age is unknown, up to a factor of 2.5 for very old (12 Gyr) stellar populations. We present the stellar masses for our sample, corrected for the measured systematic biases due to photometrically determined ages, finding that age errors produce lower stellar masses by ≈0.15 dex, with errors of ≈0.02 dex at the 95% CL for the median stellar age subsample.

  4. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer aided design program that will permit integrating and interfacing of required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, antenna primary beam performance, and attitude control requirements.

  5. Study of tethered satellite active attitude control

    NASA Technical Reports Server (NTRS)

    Colombo, G.

    1982-01-01

    Existing software was adapted for the study of tethered subsatellite rotational dynamics, an analytic solution for a stable configuration of a tethered subsatellite was developed, the analytic and numerical integrator (computer) solutions for this 'test case' were compared in a two-mass tether model program (DUMBEL), the existing multiple-mass tether model (SKYHOOK) was modified to include subsatellite rotational dynamics, the analytic 'test case' was verified, and the use of the SKYHOOK rotational dynamics capability was demonstrated with a computer run showing the effect of a single off-axis thruster on the behavior of the subsatellite. Subroutines for specific attitude control systems are developed and applied to the study of the behavior of the tethered subsatellite under realistic on-orbit conditions. The effect of all tether 'inputs', including pendular oscillations, air drag, and electrodynamic interactions, on the dynamic behavior of the tether is included.

  6. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from poor efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional calculations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.

  7. Irradiation and Enhanced Magnetic Braking in Cataclysmic Variables

    NASA Astrophysics Data System (ADS)

    McCormick, P. J.; Frank, J.

    1998-12-01

    In previous work we have shown that irradiation driven mass transfer cycles can occur in cataclysmic variables at all orbital periods if an additional angular momentum loss mechanism is assumed. Earlier models simply postulated that the enhanced angular momentum loss was proportional to the mass transfer rate without any specific physical model. In this paper we present a simple modification of magnetic braking which seems to have the right properties to sustain irradiation driven cycles at all orbital periods. We assume that the wind mass loss from the irradiated companion consists of two parts: an intrinsic stellar wind term plus an enhancement that is proportional to the irradiation. The increase in mass flow reduces the specific angular momentum carried away by the flow but nevertheless yields an enhanced rate of magnetic braking. The secular evolution of the binary is then computed numerically with a suitably modified double polytropic code (McCormick & Frank 1998). With the above model and under certain conditions, mass transfer oscillations occur at all orbital periods.

  8. Multiscale computational modeling of a radiantly driven solar thermal collector

    NASA Astrophysics Data System (ADS)

    Ponnuru, Koushik

    The objectives of the master's thesis are to present, discuss and apply sequential multiscale modeling that combines analytical, numerical (finite element-based) and computational fluid dynamic (CFD) analysis to assist in the development of a radiantly driven macroscale solar thermal collector for energy harvesting. The solar thermal collector is a novel green energy system that converts solar energy to heat and utilizes dry air as a working heat transfer fluid (HTF). This energy system has important advantages over competing technologies: it is self-contained (no external energy sources are needed), there are no moving parts, no oil or supplementary fluids are needed, and it is environmentally friendly since it is powered by solar radiation. This work focuses on the development of multi-physics and multiscale models for predicting the performance of the solar thermal collector. Model construction and validation are organized around three distinct and complementary levels. The first level involves an analytical treatment of the thermal transpiration phenomenon and models for predicting the associated mass flow pumping that occurs in an aerogel membrane in the presence of a large thermal gradient. Within the aerogel, a combination of convection, conduction and radiation occurs simultaneously in a domain where the pore size is comparable to the mean free path of the gas molecules. CFD modeling of thermal transpiration is not possible because all available commercial CFD codes solve the Navier-Stokes equations only for continuum flow, which is based on the assumption that the net molecular mass diffusion is zero. However, thermal transpiration occurs in a flow regime where a non-zero net molecular mass diffusion exists. These effects are therefore modeled using Sharipov's [2] analytical expression for gas flow characterized by high Knudsen number. The second level uses a detailed CFD model solving the Navier-Stokes equations for momentum, heat and mass transfer in the various components of the device. We have used state-of-the-art computational fluid dynamics (CFD) software, Flow3D (www.flow3d.com), to model the effects of multiple coupled physical processes including buoyancy-driven flow from local temperature differences within the plenums, fluid-solid momentum and heat transfer, and coupled radiation exchange between the aerogel, top glazing and environment. In addition, the CFD models include both convection and radiation exchange between the top glazing and the environment. Transient and steady-state thermal models have been constructed using COMSOL Multiphysics. The third level consists of a lumped-element system model, which enables rapid parametric analysis and helps to develop an understanding of the system behavior; the mathematical models developed and the multiple CFD simulation studies focus on the simultaneous solution of heat, momentum, mass and gas volume fraction balances and yield accurate state-variable distributions confirmed by experimental measurements.

  9. Quantitative correlations between collision induced dissociation mass spectrometry coupled with electrospray ionization or atmospheric pressure chemical ionization mass spectrometry - Experiment and theory

    NASA Astrophysics Data System (ADS)

    Ivanova, Bojidarka; Spiteller, Michael

    2018-04-01

    This paper treats quantitative correlation model equations between experimental kinetic and thermodynamic parameters of electrospray ionization (ESI) mass spectrometry (MS) or atmospheric pressure chemical ionization (APCI) mass spectrometry coupled with collision induced dissociation mass spectrometry, accounting for the fact that the physical phenomena and mechanisms of ESI- and APCI-ion formation are completely different. Forty-two fragment reactions of three analytes are described under independent ESI and APCI measurements. The new quantitative models allow us to study the reaction kinetics and thermodynamics correlatively using mass spectrometric methods, whose complementary application with quantum chemical methods provides 3D structural information on the analytes. Both static and dynamic quantum chemical computations are carried out. The objects of analysis are [2,3-dimethyl-4-(4-methyl-benzoyl)-2,3-di-p-tolyl-cyclobutyl]-p-tolyl-methanone (1) and the polycyclic aromatic hydrocarbon derivatives of dibenzoperylene (2) and tetrabenzo[a,c,fg,op]naphthacene (3), respectively. Compound (1) is known to be a product of [2π+2π] cycloaddition reactions of a chalcone (1,3-di-p-tolyl-propenone) that can yield cyclic derivatives with different stereoselectivity, so the study provides crucial data on the capability of mass spectrometry to determine the stereoselectivity of the analytes. This work also provides, for the first time, a quantitative treatment of the relations '3D molecular/electronic structure'-'quantum chemical diffusion coefficient'-'mass spectrometric diffusion coefficient', thus extending the capability of mass spectrometry to determine the exact 3D structure of the analytes using independent measurements and computations of the diffusion coefficients. The experimental diffusion parameters are determined with the 'current monitoring method', which evaluates the translational diffusion of charged analytes, while the theoretical modelling of MS ions and the computation of theoretical diffusion coefficients are based on the Arrhenius-type behavior of the charged species under ESI and APCI conditions. Although the study provides sound considerations for the quantitative relations between reaction kinetics and thermodynamics and the 3D structure of the analytes, together with the correlations between 3D molecular/electronic structure, quantum chemical diffusion coefficient, and mass spectrometric diffusion coefficient, which contribute significantly to structural analytical chemistry, the results are also important to other areas such as organic synthesis and catalysis.

  10. Oceanic signals in rapid polar motion: results from a barotropic forward model with explicit consideration of self-attraction and loading effects

    NASA Astrophysics Data System (ADS)

    Schindelegger, Michael; Quinn, Katherine J.; Ponte, Rui M.

    2017-04-01

    Numerical modeling of non-tidal variations in ocean currents and bottom pressure has played a key role in closing the excitation budget of Earth's polar motion for a wide range of periodicities. Non-negligible discrepancies between observations and model accounts of pole position changes prevail, however, on sub-monthly time scales and call for examination of hydrodynamic effects usually omitted in general circulation models. Specifically, complete hydrodynamic cores must incorporate self-attraction and loading (SAL) feedbacks on redistributed water masses, an effect that produces ocean bottom pressure perturbations of typically about 10% of the computed mass variations. Here, we report on a benchmark simulation with a near-global, barotropic forward model forced by wind stress, atmospheric pressure, and a properly calculated SAL term. The latter is obtained by decomposing ocean mass anomalies on a 30-minute grid into spherical harmonics at each time step and applying Love numbers to account for seafloor deformation and changed gravitational attraction. The increase in computational time at each time step is on the order of 50%. Preliminary results indicate that the explicit consideration of SAL in the forward runs increases the fidelity of modeled polar motion excitations, in particular on time scales shorter than 5 days, as evident from cross-spectral comparisons with geodetic excitation. Definite conclusions regarding the relevance of SAL in simulating rapid polar motion are, however, still hampered by the model's incomplete domain representation, which excludes parts of the highly energetic Arctic Ocean.
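
    A rough sketch of the per-degree scaling step described above, under one common convention in which the SAL correction for spherical-harmonic degree n of a water-height load is (3ρ_w/ρ_e)(1 + k'_n - h'_n)/(2n + 1). The load Love numbers below are crude assumed placeholders, not the values used in the study.

    ```python
    import numpy as np

    # Scale each degree of the equivalent-water-height load coefficients by a
    # SAL admittance built from (assumed) load Love numbers k'_n and h'_n.
    rho_w, rho_e = 1025.0, 5517.0
    n = np.arange(0, 61)
    n_safe = np.maximum(n, 1)
    k_load = -0.3 / n_safe                 # placeholder k'_n
    h_load = -1.0 - 0.8 / n_safe           # placeholder h'_n

    sal_factor = 3.0 * rho_w / rho_e * (1.0 + k_load - h_load) / (2.0 * n + 1.0)
    sal_factor[0] = 0.0                    # no degree-0 term in this sketch

    sigma_nm = np.random.default_rng(0).normal(size=(61, 61))   # fake load coefficients
    zeta_sal_nm = sal_factor[:, None] * sigma_nm                 # SAL correction coefficients
    print("admittance at degrees 2, 10, 60:", sal_factor[[2, 10, 60]].round(3))
    ```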

  11. A Method for Incorporating Changing Structural Characteristics Due to Propellant Mass Usage in a Launch Vehicle Ascent Simulation

    NASA Technical Reports Server (NTRS)

    McGhee, D. S.

    2004-01-01

    Launch vehicles consume large quantities of propellant quickly, causing the mass properties and structural dynamics of the vehicle to change dramatically. Currently, structural load assessments account for this change with a large collection of structural models representing various propellant fill levels. This creates a large database of models, complicating the delivery of reduced models and requiring extensive work for model changes. Presented here is a method to account for these mass changes in a more efficient manner. The method allows for the subtraction of propellant mass as the propellant is used in the simulation. This subtraction is done in the modal domain of the vehicle generalized model. The additional computation required is primarily for constructing the used propellant mass matrix from an initial propellant model and further matrix multiplications and subtractions. An additional eigenvalue solution is required to uncouple the new equations of motion; however, this is a much simpler calculation starting from a system that is already substantially uncoupled. The method was successfully tested in a simulation of Saturn V loads. Results from the method are compared to results from separate structural models for several propellant levels, showing excellent agreement. Further development to encompass more complicated propellant models, including slosh dynamics, is possible.
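
    The core idea can be sketched with small synthetic matrices (not the Saturn V model): project the used-propellant mass matrix onto the baseline modes, subtract it from the generalized mass, and re-solve a small eigenproblem to uncouple the updated equations of motion.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    ndof, nmodes = 50, 10

    A = rng.normal(size=(ndof, ndof))
    K = A @ A.T + ndof * np.eye(ndof)          # synthetic stiffness (SPD)
    M0 = np.diag(rng.uniform(1.0, 3.0, ndof))  # baseline mass (full propellant)
    dM = np.zeros((ndof, ndof))
    dM[:5, :5] = 0.4 * M0[:5, :5]              # propellant mass consumed at a few DOFs

    w2, Phi = eigh(K, M0)                      # baseline modes
    Phi = Phi[:, :nmodes]
    Kq = Phi.T @ K @ Phi                       # generalized stiffness
    Mq = Phi.T @ (M0 - dM) @ Phi               # generalized mass after propellant use

    w2_new, Q = eigh(Kq, Mq)                   # small re-eigensolution to uncouple
    print("updated frequencies [Hz]:", np.sqrt(w2_new) / (2 * np.pi))
    ```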

  12. Crustal motion measurements from the POLENET Antarctic Network: comparisons with glacial isostatic adjustment models

    NASA Astrophysics Data System (ADS)

    Wilson, T. J.; Konfal, S. A.; Bevis, M. G.; Spada, G.; Melini, D.; Barletta, V. R.; Kendrick, E. C.; Saddler, D.; Smalley, R., Jr.; Dalziel, I. W. D.; Willis, M. J.

    2016-12-01

    Crustal motions measured by GPS provide a unique proxy record of ice mass change, due to the elastic and viscoelastic response of the earth to removal of ice loads. The ANET/POLENET array of bedrock GPS sites spans much of the Antarctic interior, encompassing regions where glacial isostatic adjustment (GIA) models predict large crustal displacements due to LGM ice loss and including coastal West Antarctica where major modern ice mass loss is documented. To isolate the long-term GIA component of measured crustal motions, we computed and removed elastic displacements due to recent ice mass change. We used the annually resolved ice mass balance data from Martín-Español et al. (2016) derived from a statistical inversion of satellite altimetry, gravimetry, and elastic-corrected GPS data for the period 2003-2013. The Regional Elastic Rebound Calculator (REAR) [Melini et al., 2015] was used to compute elastic vertical and horizontal surface displacements. Uplift due to elastic rebound is substantial in West Antarctica, very minimal in East Antarctica, and variable across the Weddell Embayment. The ANET GPS-derived crustal motion patterns ascribed to non-elastic GIA are spatially complex and differ significantly in magnitude from model predictions. We present a systematic comparison of measured and predicted velocities within different sectors of Antarctica, in order to examine spatial patterns relative to modern ice mass changes, ice history model uncertainties, and lateral variations in earth properties. In the Weddell Embayment region most vertical velocities are lower than uplift predicted by GIA models. Several sites in the southernmost Transantarctic Mountains and the Whitmore Mountains, where small ice mass increase occurs, have vertical uplift significantly exceeding GIA model predictions. There is an intriguing spatial correlation of these fast-moving sites with a low-velocity anomaly in the upper mantle documented by analysis of teleseismic Rayleigh waves by Heeszel et al. (2016). Significant non-elastic GIA velocities occur in the Amundsen Sea Embayment sector, with high uplift flanked by subsiding regions. This pattern can be modeled as a viscoelastic response to ice loss on decadal-centennial time scales in a region with weak upper mantle, consistent with seismic results in the region.

  13. Multi-geodetic characterization of the seasonal signal at the CERGA geodetic reference station, France

    NASA Astrophysics Data System (ADS)

    Mémin, Anthony; Viswanathan, Vishnu; Fienga, Agnes; Santamarìa-Gómez, Alvaro; Boy, Jean-Paul; Cavalié, Olivier; Deleflie, Florent; Exertier, Pierre; Bernard, Jean-Daniel; Hinderer, Jacques

    2017-04-01

    Crustal deformations due to surface-mass loading account for a significant part of the variability in geodetic time series. A perfect understanding of the loading signal observed by geodetic techniques should help in improving terrestrial reference frame (TRF) realizations. Yet, discrepancies between crustal motion estimates from models of surface-mass loading and observations are still too large so that no model is currently recommended by the IERS for reducing the observations. We investigate the discrepancy observed in the seasonal variations of the position at the CERGA station, South of France. We characterize the seasonal motions of the reference geodetic station CERGA from GNSS, SLR, LLR and InSAR. We investigate the consistency between the station motions deduced from these geodetic techniques and compare the observed station motion with that estimated using models of surface-mass change. In that regard, we compute atmospheric loading effects using surface pressure fields from ECMWF, assuming an ocean response according to the classical inverted barometer (IB) assumption, considered to be valid for periods typically exceeding a week. We also used general circulation ocean models (ECCO and GLORYS) forced by wind, heat and fresh water fluxes. The continental water storage is described using GLDAS/Noah and MERRA-land models. Using the surface-mass models, we estimate that the seasonal signal due to loading deformation at the CERGA station is about 8-9, 1-2 and 1-2 mm peak-to-peak in Up, North and East component, respectively. There is a very good correlation between GPS observations and non-tidal loading predicted deformation due to atmosphere, ocean and hydrology which is the main driver of seasonal signal at CERGA. Despite large error bars, LLR observations agree reasonably well with GPS and non-tidal loading predictions in Up component. Local deformation as observed by InSAR is very well correlated with GPS observations corrected for non-tidal loading. Finally, we estimate local mass changes using the absolute gravity measurement campaigns available at the station and the global models of surface-mass change. We compute the induced station motion that we compare with the local deformation observed by InSAR and GPS.

  14. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
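
    A toy sketch of the emulator-based propagation idea, with one parameter and one observable; the expensive density-functional calculation is replaced by an assumed stand-in function, and the parameter posterior is taken to be Gaussian purely for illustration.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_model(theta):          # stand-in for a nuclear-mass calculation
        return 2.0 * theta + 0.3 * np.sin(5.0 * theta)

    # Train a Gaussian-process emulator on a few model evaluations
    theta_train = np.linspace(0.0, 1.0, 8)[:, None]
    y_train = expensive_model(theta_train.ravel())
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(theta_train, y_train)

    # Posterior samples of the parameter (assumed Gaussian for this sketch)
    rng = np.random.default_rng(0)
    theta_post = rng.normal(0.55, 0.05, size=5000)[:, None]

    # Propagate the samples through the emulator instead of the expensive model
    pred_mean, pred_std = gp.predict(theta_post, return_std=True)
    pred = pred_mean + pred_std * rng.standard_normal(pred_mean.shape)
    print("predicted observable: %.3f +/- %.3f" % (pred.mean(), pred.std()))
    ```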

  15. Computational Analyses of Complex Flows with Chemical Reactions

    NASA Astrophysics Data System (ADS)

    Bae, Kang-Sik

    Three problems have been studied numerically: micro-scale heat and mass transfer phenomena for drug transport in a cylindrical matrix system, the simulation of oxygen/drug diffusion in a three-dimensional capillary network, and reduced chemical kinetic modeling of gas turbine combustion of Jet Propellant-10. For the numerical analysis of drug mass transfer in the cylindrical matrix system, the governing equations are derived from the Krogh cylinder model, in which a capillary is surrounded by a cylinder of tissue along the arterial-to-venous distance. The ADI (alternating direction implicit) scheme and the Thomas algorithm are applied to solve the nonlinear partial differential equations (PDEs). This study shows that the important factors affecting the drug penetration depth into the tissue are the mass diffusivity and the consumption of relevant species during the time allowed for diffusion into the brain tissue. Also, a computational fluid dynamics (CFD) model has been developed to simulate blood flow and oxygen/drug diffusion in a three-dimensional capillary network within the physiological range of a typical capillary. A three-dimensional geometry has been constructed to replicate the one studied by Secomb et al. (2000), and the computational framework features a non-Newtonian viscosity model for blood, an oxygen transport model including oxygen-hemoglobin dissociation and wall flux due to tissue absorption, as well as the ability to study the diffusion of drugs and other materials in the capillary streams. Finally, a chemical kinetic mechanism of JP-10, which is being studied as a possible jet propellant for the pulse detonation engine (PDE) and other high-speed flight applications such as hypersonic missiles, has been compiled and validated for a wide range of combustion regimes, covering pressures of 1 atm to 40 atm and temperatures of 1,200 K to 1,700 K. The comprehensive skeletal mechanism consists of 58 species and 315 reactions, including CPD and benzene formation following polycyclic aromatic hydrocarbon (PAH) theory and the soot formation process in the constant-volume combustor, as well as premixed flame characteristics.
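
    Since the abstract names the ADI scheme and the Thomas algorithm, a short sketch of the tridiagonal solve used in each implicit sweep may be useful. The grid, diffusivity, and boundary handling below are assumed placeholders rather than the study's drug-transport parameters.

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system with sub-diagonal a, diagonal b,
        super-diagonal c and right-hand side d (Thomas algorithm).
        a[0] and c[-1] are ignored; input arrays are not modified."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # One implicit sweep of a 1-D diffusion step, as used in each ADI half-step
    N, D, dt, dx = 50, 1e-9, 0.1, 1e-4          # assumed grid and diffusivity
    r = D * dt / dx**2
    a = np.full(N, -r); b = np.full(N, 1 + 2 * r); c = np.full(N, -r)
    C_old = np.zeros(N); C_old[0] = 1.0          # concentration from previous step
    print(thomas(a, b, c, C_old)[:5])
    ```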

  16. Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2006-01-01

    The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainty quantification are presented.

  17. GAS eleven node thermal model (GEM)

    NASA Technical Reports Server (NTRS)

    Butler, Dan

    1988-01-01

    The Eleven Node Thermal Model (GEM) of the Get Away Special (GAS) container was originally developed based on the results of thermal tests of the GAS container. The model was then used in the thermal analysis and design of several NASA/GSFC GAS experiments, including the Flight Verification Payload, the Ultraviolet Experiment, and the Capillary Pumped Loop. The model description details the five cu ft container both with and without an insulated end cap. Mass specific heat values are also given so that transient analyses can be performed. A sample problem for each configuration is included as well so that GEM users can verify their computations. The model can be run on most personal computers with a thermal analyzer solution routine.
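
    A generic lumped-parameter transient sketch in the spirit of a few-node thermal model; the node capacitances, conductances, heat load, and sink values below are assumed placeholders, not the GEM node values derived from the GAS container thermal tests.

    ```python
    import numpy as np

    n = 4
    C = np.array([8e3, 5e3, 5e3, 2e4])        # node capacitances m*cp [J/K]
    G = np.zeros((n, n))                      # conductances between nodes [W/K]
    G[0, 1] = G[1, 0] = 2.0
    G[1, 2] = G[2, 1] = 1.5
    G[2, 3] = G[3, 2] = 3.0
    Q = np.array([20.0, 0.0, 0.0, 0.0])       # experiment heat dissipation [W]
    G_env, T_env = 1.0, 250.0                 # link from node 3 to a fixed sink

    T = np.full(n, 293.0)                     # initial temperatures [K]
    dt, t_end = 10.0, 86400.0
    for _ in range(int(t_end / dt)):          # explicit Euler march in time
        dT = (Q + G @ T - G.sum(axis=1) * T) / C   # net conduction + heat load
        dT[3] += G_env * (T_env - T[3]) / C[3]     # loss to the sink node
        T = T + dT * dt
    print("temperatures after 24 h [K]:", np.round(T, 1))
    ```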

  18. Nearly Supersymmetric Dark Atoms

    DOE PAGES

    Behbahani, Siavosh R.; Jankowiak, Martin; Rube, Tomas; ...

    2011-01-01

    Theories of dark matter that support bound states are an intriguing possibility for the identity of the missing mass of the Universe. This article proposes a class of models of supersymmetric composite dark matter where the interactions with the Standard Model communicate supersymmetry breaking to the dark sector. In these models, supersymmetry breaking can be treated as a perturbation on the spectrum of bound states. Using a general formalism, the spectrum with leading supersymmetry effects is computed without specifying the details of the binding dynamics. The interactions of the composite states with the Standard Model are computed, and several benchmark models are described. General features of nonrelativistic supersymmetric bound states are emphasized.

  19. Simulation of the effects of different pilot helmets on neck loading during air combat.

    PubMed

    Mathys, R; Ferguson, S J

    2012-09-21

    New generation pilot helmets with mounted devices enhance the capabilities of pilots substantially. However, the additional equipment increases the helmet weight and shifts its center of mass forward. Two helmets with different mass properties were modeled to simulate their effects on the pilot's neck. A musculoskeletal computer model was used, with the methods of inverse dynamics and static optimization, to compute the muscle activations and joint reaction forces for a given range of quasi-static postures at various accelerations experienced during air combat. Head postures which induce much higher loads on the cervical spine than encountered in a neutral position could be identified. The increased weight and the forward shift of the center of mass of a new generation helmet lead to higher muscle activations and higher joint reaction loads over a wide range of head and neck movements. The muscle activations required to balance the head and neck in extreme postures increased the compressive force at the T1-C7 level substantially, while in a neutral posture the muscle activations remained low. The lateral neck muscles can reach activations of 100% and cause compressive joint forces up to 1100N during extensive rotations and extensions at high 'vertical' accelerations (Gz). The calculated values have to be interpreted with care as the model has not been validated. Nevertheless, this systematic analysis could separate the effects of head posture, acceleration and helmet mass on neck loading. More reliable data about mass properties and muscle morphometry with a more detailed motion analysis would help to refine the existing model. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Catalytic ignition model in a monolithic reactor with in-depth reaction

    NASA Technical Reports Server (NTRS)

    Tien, Ta-Ching; Tien, James S.

    1990-01-01

    Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.

  1. Speeding up low-mass planetary microlensing simulations and modeling: The caustic region of influence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penny, Matthew T., E-mail: penny@astronomy.ohio-state.edu

    2014-08-01

    Extensive simulations of planetary microlensing are necessary both before and after a survey is conducted: before to design and optimize the survey and after to understand its detection efficiency. The major bottleneck in such computations is the computation of light curves. However, for low-mass planets, most of these computations are wasteful, as most light curves do not contain detectable planetary signatures. In this paper, I develop a parameterization of the binary microlens that is conducive to avoiding light curve computations. I empirically find analytic expressions describing the limits of the parameter space that contain the vast majority of low-mass planet detections. Through a large-scale simulation, I measure the (in)completeness of the parameterization and the speed-up it is possible to achieve. For Earth-mass planets in a wide range of orbits, it is possible to speed up simulations by a factor of ∼30-125 (depending on the survey's annual duty-cycle) at the cost of missing ∼1% of detections (which is actually a smaller loss than for the arbitrary parameter limits typically applied in microlensing simulations). The benefits of the parameterization probably outweigh the costs for planets below 100 M⊕. For planets at the sensitivity limit of AFTA-WFIRST, simulation speed-ups of a factor ∼1000 or more are possible.

  2. Self-Pressurization and Spray Cooling Simulations of the Multipurpose Hydrogen Test Bed (MHTB) Ground-Based Experiment

    NASA Technical Reports Server (NTRS)

    Kartuzova, O.; Kassemi, M.; Agui, J.; Moder, J.

    2014-01-01

    This paper presents a CFD (computational fluid dynamics) model for simulating the self-pressurization of a large scale liquid hydrogen storage tank. In this model, the kinetics-based Schrage equation is used to account for the evaporative and condensing interfacial mass flows. Laminar and turbulent approaches to modeling natural convection in the tank and heat and mass transfer at the interface are compared. The flow, temperature, and interfacial mass fluxes predicted by these two approaches during tank self-pressurization are compared against each other. The ullage pressure and vapor temperature evolutions are also compared against experimental data obtained from the MHTB (Multipurpose Hydrogen Test Bed) self-pressurization experiment. A CFD model for cooling cryogenic storage tanks by spraying cold liquid in the ullage is also presented. The Euler-Lagrange approach is utilized for tracking the spray droplets and for modeling interaction between the droplets and the continuous phase (ullage). The spray model is coupled with the VOF (volume of fluid) model by performing particle tracking in the ullage, removing particles from the ullage when they reach the interface, and then adding their contributions to the liquid. Droplet ullage heat and mass transfer are modeled. The flow, temperature, and interfacial mass flux predicted by the model are presented. The ullage pressure is compared with experimental data obtained from the MHTB spray bar mixing experiment. The results of the models with only droplet/ullage heat transfer and with heat and mass transfer between the droplets and ullage are compared.
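
    One common form of the kinetics-based Schrage interfacial mass flux can be sketched as follows; the accommodation coefficient and the hydrogen saturation-pressure fit are assumed placeholders, and this is not the paper's full CFD implementation.

    ```python
    import numpy as np

    R_UNIV = 8.314          # J mol^-1 K^-1
    M_H2 = 2.016e-3         # molar mass of hydrogen [kg/mol]

    def p_sat(T):
        """Crude placeholder saturation-pressure curve for hydrogen [Pa]."""
        return 101325.0 * np.exp(9.0 * (1.0 - 20.3 / T))

    def schrage_flux(T_liq, T_vap, p_vap, sigma=1.0):
        """Interfacial mass flux [kg m^-2 s^-1], positive for evaporation."""
        pref = 2.0 * sigma / (2.0 - sigma) * np.sqrt(M_H2 / (2.0 * np.pi * R_UNIV))
        return pref * (p_sat(T_liq) / np.sqrt(T_liq) - p_vap / np.sqrt(T_vap))

    print("flux [kg/m^2/s]:", schrage_flux(T_liq=20.5, T_vap=21.0, p_vap=101325.0))
    ```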

  3. Baryon spectrum of SU(4) composite Higgs theory with two distinct fermion representations

    NASA Astrophysics Data System (ADS)

    Ayyar, Venkitesh; DeGrand, Thomas; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.; Shamir, Yigal; Svetitsky, Benjamin

    2018-06-01

    We use lattice simulations to compute the baryon spectrum of SU(4) lattice gauge theory coupled to dynamical fermions in the fundamental and two-index antisymmetric (sextet) representations simultaneously. This model is closely related to a composite Higgs model in which the chimera baryon made up of fermions from both representations plays the role of a composite top-quark partner. The dependence of the baryon masses on each underlying fermion mass is found to be generally consistent with a quark-model description and large-Nc scaling. We combine our numerical results with experimental bounds on the scale of the new strong sector to estimate a lower bound on the mass of the top-quark partner. We discuss some theoretical uncertainties associated with this estimate.

  4. Gravitational microlensing of gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Mao, Shude

    1993-01-01

    A Monte Carlo code is developed to calculate gravitational microlensing in three dimensions when the lensing optical depth is low or moderate (not greater than 0.25). The code calculates positions of microimages and time delays between the microimages. The majority of lensed gamma-ray bursts should show a simple double-burst structure, as predicted by a single point mass lens model. A small fraction should show complicated multiple events due to the collective effects of several point masses (black holes). Cosmological models with a significant fraction of mass density in massive compact objects can be tested by searching for microlensing events in the current BATSE data. Our catalog generated by 10,000 Monte Carlo models is accessible through the computer network. The catalog can be used to take realistic selection effects into account.
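
    The double-burst signature follows from the standard point-mass lens expressions; a sketch of the image flux ratio and time delay as a function of the dimensionless source offset u is given below, with the example lens mass and redshift chosen arbitrarily.

    ```python
    import numpy as np

    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

    def point_mass_lens(u, M_solar, z_lens):
        """Flux ratio of the two images and the time delay [s] between them
        for a point-mass lens at dimensionless source offset u (in Einstein
        radii), using the standard point-lens expressions."""
        s = np.sqrt(u**2 + 4.0)
        flux_ratio = (u**2 + 2.0 + u * s) / (u**2 + 2.0 - u * s)
        dt = (4.0 * G * M_solar * M_sun / c**3) * (1.0 + z_lens) * (
            0.5 * u * s + np.log((s + u) / (s - u)))
        return flux_ratio, dt

    # Example: a 10^6 M_sun compact object at z = 1 lensing a burst at u = 0.5
    ratio, delay = point_mass_lens(0.5, 1e6, 1.0)
    print("flux ratio %.2f, delay %.1f s" % (ratio, delay))
    ```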

  5. Communication-Efficient Arbitration Models for Low-Resolution Data Flow Computing

    DTIC Science & Technology

    1988-12-01

    phase can be formally described as follows: Graph Partitioning Problem (NP-complete; Garey & Johnson): given graph G = (V, E), weights w(v) for each v ∈ V... Technical Report, MIT/LCS/TR-218, Cambridge, Mass. Agerwala, Tilak, February 1982, "Data Flow Systems", Computer, pp. 10-13. Babb, Robert G., July 1984, "Parallel Processing with Large-Grain Data Flow Techniques," IEEE Computer 17, 7, pp. 55-61. Babb, Robert G., II, Lise Storc, and William C. Ragsdale

  6. Evolution, Nucleosynthesis, and Yields of AGB Stars at Different Metallicities. III. Intermediate-mass Models, Revised Low-mass Models, and the ph-FRUITY Interface

    NASA Astrophysics Data System (ADS)

    Cristallo, S.; Straniero, O.; Piersanti, L.; Gobrecht, D.

    2015-08-01

    We present a new set of models for intermediate-mass asymptotic giant branch (AGB) stars (4.0, 5.0, and 6.0 M⊙) at different metallicities (-2.15 ≤ [Fe/H] ≤ +0.15). This set integrates the existing models for low-mass AGB stars (1.3 ≤ M/M⊙ ≤ 3.0) already included in the FRUITY database. We describe the physical and chemical evolution of the computed models from the main sequence up to the end of the AGB phase. Due to less efficient third dredge up episodes, models with large core masses show modest surface enhancements. This effect is due to the fact that the interpulse phases are short and, therefore, thermal pulses (TPs) are weak. Moreover, the high temperature at the base of the convective envelope prevents it from deeply penetrating the underlying radiative layers. Depending on the initial stellar mass, the heavy element nucleosynthesis is dominated by different neutron sources. In particular, the s-process distributions of the more massive models are dominated by the 22Ne(α,n)25Mg reaction, which is efficiently activated during TPs. At low metallicities, our models undergo hot bottom burning and hot third dredge up. We compare our theoretical final core masses to available white dwarf observations. Moreover, we quantify the influence intermediate-mass models have on the carbon star luminosity function. Finally, we present the upgrade of the FRUITY web interface, which now also includes the physical quantities of the TP-AGB phase for all of the models included in the database (ph-FRUITY).

  7. Internal velocity and mass distributions in simulated clusters of galaxies for a variety of cosmogonic models

    NASA Technical Reports Server (NTRS)

    Cen, Renyue

    1994-01-01

    The mass and velocity distributions in the outskirts (0.5-3.0/h Mpc) of simulated clusters of galaxies are examined for a suite of cosmogonic models (two Ω₀ = 1 and two Ω₀ = 0.2 models) utilizing large-scale particle-mesh (PM) simulations. Through a series of model computations, designed to isolate the different effects, we find that both Ω₀ and P_k (λ ≤ 16/h Mpc) are important to the mass distributions in clusters of galaxies. There is a correlation between power, P_k, and the density profiles of massive clusters; more power tends toward a stronger correlation between α and M(r < 1.5/h Mpc), i.e., massive clusters being relatively extended and small-mass clusters being relatively concentrated. A lower Ω₀ universe tends to produce relatively concentrated massive clusters and relatively extended small-mass clusters compared to their counterparts in a higher Ω₀ model with the same power. Models with little (initial) small-scale power, such as the hot dark matter (HDM) model, produce more extended mass distributions than the isothermal distribution for most of the mass clusters. By contrast, the cold dark matter (CDM) models show mass distributions of most of the clusters more concentrated than the isothermal distribution. X-ray and gravitational lensing observations are beginning to provide useful information on the mass distribution in and around clusters; some interesting constraints on Ω₀ and/or the (initial) power of the density fluctuations on scales λ ≤ 16/h Mpc (where linear extrapolation is invalid) can be obtained when larger observational data sets, such as the Sloan Digital Sky Survey, become available.

  8. An inlet analysis for the NASA hypersonic research engine aerothermodynamic integration model

    NASA Technical Reports Server (NTRS)

    Andrews, E. H., Jr.; Russell, J. W.; Mackley, E. A.; Simmonds, A. L.

    1974-01-01

    A theoretical analysis for the inlet of the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM) has been undertaken by use of a method-of-characteristics computer program. The purpose of the analysis was to obtain pretest information on the full-scale HRE inlet in support of the experimental AIM program (completed May 1974). Mass-flow-ratio and additive-drag-coefficient schedules were obtained that well defined the range effected in the AIM tests. Mass-weighted average inlet total-pressure recovery, kinetic energy efficiency, and throat Mach numbers were obtained.

  9. A new monthly gravity field model based on GRACE observations computed by the modified dynamic approach

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Luo, Z.; Li, Q.; Zhong, B.

    2016-12-01

    The monthly gravity field model can be used to compute information about mass variation within the Earth system, i.e., the relationship between mass variations in the oceans, land hydrology, and ice sheets. For more than ten years, GRACE has provided valuable information for recovering monthly gravity field models. In this study, a new time series of GRACE monthly solutions, truncated to degree and order 60, is computed by a modified dynamic approach. Compared with the traditional dynamic approach, the major difference of our modified approach is the way the nuisance parameters are processed. This type of parameter is mainly used to absorb low-frequency errors in KBRR data. One way is to remove the nuisance parameters before estimating the geopotential coefficients, called the Pure Predetermined Strategy (PPS). The other way is to determine the nuisance parameters and geopotential coefficients simultaneously, called the Pure Simultaneous Strategy (PSS). PPS makes it convenient to detect gross errors, but it also leads to obvious signal loss compared with the solutions derived from PSS. After comparing the practical calculation formulas of PPS and PSS, we create the Filter Predetermine Strategy (FPS), which combines the advantages of PPS and PSS efficiently. With FPS, a new monthly gravity field model entitled HUST-Grace2016s is developed. Comparisons of geoid degree powers and mass change signals in the Amazon basin, Greenland, and Antarctica demonstrate that our model is comparable with other published models, e.g., the CSR RL05, JPL RL05 and GFZ RL05 models. Acknowledgements: This work is supported by China Postdoctoral Science Foundation (Grant No. 2016M592337), the National Natural Science Foundation of China (Grant Nos. 41131067, 41504014), and the Open Research Fund Program of the State Key Laboratory of Geodesy and Earth's Dynamics (Grant No. SKLGED2015-1-3-E).

  10. Modeling resident error-making patterns in detection of mammographic masses using computer-extracted image features: preliminary experiments

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora

    2014-03-01

    Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.
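
    A minimal sketch of the proposed trainee model: a logistic regression mapping computer-extracted image features to the probability of interpretation error, evaluated with ROC analysis. The features and labels below are synthetic stand-ins for the reader-study data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_cases, n_features = 200, 6
    X = rng.normal(size=(n_cases, n_features))          # e.g. margin, density, size...
    true_w = np.array([1.2, -0.8, 0.0, 0.5, 0.0, 0.3])  # assumed "true" trainee behavior
    p_error = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.5)))
    y = rng.binomial(1, p_error)                         # 1 = trainee error on this case

    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print("in-sample ROC AUC: %.2f" % auc)               # >0.5 means better than chance

    # Rank unseen cases by predicted difficulty for this trainee
    X_new = rng.normal(size=(5, n_features))
    print("predicted error probability:", model.predict_proba(X_new)[:, 1].round(2))
    ```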

  11. Simulating effects of highway embankments on estuarine circulation

    USGS Publications Warehouse

    Lee, Jonathan K.; Schaffranek, Raymond W.; Baltzer, Robert A.

    1994-01-01

    A two-dimensional depth-averaged, finite-difference, numerical model was used to simulate tidal circulation and mass transport in the Port Royal Sound, South Carolina, estuarine system. The purpose of the study was to demonstrate the utility of the Surface-Water Integrated Flow and Transport model (SWIFT2D) for evaluating changes in circulation patterns and mass transport caused by highway-crossing embankments. A model of a subregion of Port Royal Sound including the highway crossings and having a grid size of 61 m (200 ft) was derived from a 183-m (600-ft) model of the entire Port Royal Sound estuarine system. The 183-m model was used to compute boundary-value data for the 61-m submodel, which was then used to simulate flow conditions with and without the highway embankments in place. The numerical simulations show that, with the highway embankments in place, mass transport between the Broad River and Battery Creek is reduced and mass transport between the Beaufort River and Battery Creek is increased. The net result is that mass transport into and out of upper Battery Creek is reduced.

  12. Electrochemical carbon dioxide concentrator: Math model

    NASA Technical Reports Server (NTRS)

    Marshall, R. D.; Schubert, F. H.; Carlson, J. N.

    1973-01-01

    A steady state computer simulation model of an Electrochemical Depolarized Carbon Dioxide Concentrator (EDC) has been developed. The mathematical model combines EDC heat and mass balance equations with empirical correlations derived from experimental data to describe EDC performance as a function of the operating parameters involved. The model is capable of accurately predicting performance over EDC operating ranges. Model simulation results agree with the experimental data obtained over the prediction range.

  13. Resilient Software Systems

    DTIC Science & Technology

    2015-06-01

    and tools, called model-integrated computing (MIC) [3] relies on the use of domain-specific modeling languages for creating models of the system to be...hence giving reflective capabilities to it. We have followed the MIC method here: we designed a domain-specific modeling language for modeling...are produced one-off and not for the mass market, the scope for price reduction based on the market demands is non-existent. Processes to create

  14. Modal description—A better way of characterizing human vibration behavior

    NASA Astrophysics Data System (ADS)

    Rützel, Sebastian; Hinz, Barbara; Wölfel, Horst Peter

    2006-12-01

    Biodynamic responses to whole body vibrations are usually characterized in terms of transfer functions, such as impedance or apparent mass. Data measurements from subjects are averaged and analyzed with respect to certain attributes (anthropometrics, posture, excitation intensity, etc.). Averaging involves the risk of identifying unnatural vibration characteristics. The use of a modal description as an alternative method is presented and its contribution to biodynamic modelling is discussed. Modal description is not limited to just one biodynamic function: The method holds for all transfer functions. This is shown in terms of the apparent mass and the seat-to-head transfer function. The advantages of modal description are illustrated using apparent mass data of six male individuals of the same mass percentile. From experimental data, modal parameters such as natural frequencies, damping ratios and modal masses are identified which can easily be used to set up a mathematical model. Following the phenomenological approach, this model will provide the global vibration behavior relating to the input data. The modal description could be used for the development of hardware vibration dummies. With respect to software models such as finite element models, the validation process for these models can be supported by the modal approach. Modal parameters of computational models and of the experimental data can establish a basis for comparison.
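
    One common parametric form writes the apparent mass as a sum of modal contributions, each behaving like a base-excited single-degree-of-freedom system. The sketch below uses assumed modal parameters, not values identified from the six-subject data set discussed in the paper.

    ```python
    import numpy as np

    def apparent_mass(f, f_n, zeta, m_modal):
        """Apparent mass [kg] of a base-excited system as a sum of modal
        (SDOF mass-spring-damper) contributions. f in Hz; f_n, zeta, m_modal
        are per-mode arrays of natural frequency, damping ratio, modal mass."""
        w = 2.0 * np.pi * np.asarray(f)[:, None]
        wn = 2.0 * np.pi * np.asarray(f_n)[None, :]
        z = np.asarray(zeta)[None, :]
        m = np.asarray(m_modal)[None, :]
        H = m * (wn**2 + 2j * z * wn * w) / (wn**2 - w**2 + 2j * z * wn * w)
        return H.sum(axis=1)

    # Assumed placeholder modal parameters (two modes)
    f = np.linspace(0.5, 20.0, 200)
    M = apparent_mass(f, f_n=[5.0, 9.5], zeta=[0.35, 0.3], m_modal=[55.0, 15.0])
    i_pk = np.abs(M).argmax()
    print("peak |apparent mass| = %.1f kg at %.1f Hz" % (np.abs(M[i_pk]), f[i_pk]))
    ```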

  15. Tidal disruption of open clusters in their parent molecular clouds

    NASA Technical Reports Server (NTRS)

    Long, Kevin

    1989-01-01

    A simple model of tidal encounters has been applied to the problem of an open cluster in a clumpy molecular cloud. The parameters of the clumps are taken from the Blitz, Stark, and Long (1988) catalog of clumps in the Rosette molecular cloud. Encounters are modeled as impulsive, rectilinear collisions between Plummer spheres, but the tidal approximation is not invoked. Mass and binding energy changes during an encounter are computed by considering the velocity impulses given to individual stars in a random realization of a Plummer sphere. Mean rates of mass and binding energy loss are then computed by integrating over many encounters. Self-similar evolutionary calculations using these rates indicate that the disruption process is most sensitive to the cluster radius and relatively insensitive to cluster mass. The calculations indicate that clusters which are born in a cloud similar to the Rosette with a cluster radius greater than about 2.5 pc will not survive long enough to leave the cloud. The majority of clusters, however, have smaller radii and will survive the passage through their parent cloud.
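
    A hedged sketch of the impulse calculation for a single star: the velocity kick from a Plummer-sphere clump passing on a straight line, using the softened impulse-approximation expression without a tidal (small-offset) expansion. The clump mass, scale, and encounter geometry are assumed illustrative numbers, not the Rosette catalog values.

    ```python
    import numpy as np

    G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

    def impulse(star_xy, clump_mass, clump_scale, path_xy, V):
        """Velocity impulse [km/s] on a star from a Plummer-sphere clump moving
        on a straight line with relative speed V [km/s]. star_xy and path_xy
        are positions [pc] in the plane perpendicular to the clump's motion
        (path_xy is where the clump track crosses that plane)."""
        b = path_xy - star_xy                   # per-star impact-parameter vector
        b2 = np.dot(b, b)
        return 2.0 * G * clump_mass * b / (V * (b2 + clump_scale**2))

    star = np.array([1.0, 0.0])                 # pc, relative to the cluster centre
    dv = impulse(star, clump_mass=3e3, clump_scale=1.5,
                 path_xy=np.array([5.0, 2.0]), V=3.0)
    print("velocity kick [km/s]:", np.round(dv, 3))
    ```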

  16. Mass and energy flow in prominences

    NASA Technical Reports Server (NTRS)

    Poland, Arthur I.

    1990-01-01

    Mass and energy flow in quiescent prominences is considered based on the hypothesis that active region prominences have a different structure and thus different mass and energy flow characteristics. Several important physical parameters have been plotted using the computational model, representing the evolutionary process after the prominence formation. The temperature, velocity, conductive flux, and enthalpy flux are plotted against distance from the highest point in the loop to the coolest part of the prominence. It is shown that the maximum velocity is only about 5 km/s. The model calculations indicate that the transition region of prominences is dominated by complex processes. It is necessary to take into account mass flow at temperatures below 200,000 K, and both mass flow and optical depth effects in hydrogen at temperatures below 30,000 K. Both of these effects lead to a less steep temperature gradient through the prominence corona interface than can be obtained from the conduction alone.

  17. Simulation capability for dynamics of two-body flexible satellites

    NASA Technical Reports Server (NTRS)

    Austin, F.; Zetkov, G.

    1973-01-01

    An analysis and computer program were prepared to realistically simulate the dynamic behavior of a class of satellites consisting of two end bodies separated by a connecting structure. The shape and mass distribution of the flexible end bodies are arbitrary; the connecting structure is flexible but massless and is capable of deployment and retraction. Fluid flowing in a piping system and rigid moving masses, representing a cargo elevator or crew members, have been modeled. Connecting structure characteristics, control systems, and externally applied loads are modeled in easily replaced subroutines. Subroutines currently available include a telescopic beam-type connecting structure as well as attitude, deployment, spin and wobble control. In addition, a unique mass balance control system was developed to sense and balance mass shifts due to the motion of a cargo elevator. The mass of the cargo may vary through a large range. Numerical results are discussed for various types of runs.

  18. Simultaneous Heat and Mass Transfer Model for Convective Drying of Building Material

    NASA Astrophysics Data System (ADS)

    Upadhyay, Ashwani; Chandramohan, V. P.

    2018-04-01

    A mathematical model of simultaneous heat and moisture transfer is developed for convective drying of building material. A rectangular brick is considered as the sample object. A finite-difference method with a semi-implicit scheme is used for solving the transient governing heat and mass transfer equations. A convective boundary condition is used, as the product is exposed to hot air. The heat and mass transfer equations are coupled through the diffusion coefficient, which is assumed to be a function of the product temperature. A set of algebraic equations is generated through space and time discretization. The discretized algebraic equations are solved iteratively by the Gauss-Seidel method. Grid and time independence studies are performed to find the optimum number of nodal points and time steps, respectively. A MATLAB computer code is developed to solve the heat and mass transfer equations simultaneously. Transient heat and mass transfer simulations are performed to find the temperature and moisture distribution inside the brick.
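
    A minimal one-dimensional sketch of the semi-implicit/Gauss-Seidel idea is shown below for the heat equation alone, with constant diffusivity, an insulated centreline, and a convective face. The coupled moisture equation, the 2-D brick geometry, and the temperature-dependent diffusion coefficient of the paper are not reproduced, and all property values are illustrative.

```python
import numpy as np

def gauss_seidel_step(T, T_old, r, T_air, h, k, dx, tol=1e-8, max_iter=5000):
    """Solve one implicit time step of 1-D diffusion by Gauss-Seidel.

    Interior nodes satisfy (1 + 2r) T_i - r (T_{i-1} + T_{i+1}) = T_old_i,
    with r = alpha * dt / dx^2; a convective boundary is applied at the
    right face and a symmetry (insulated) condition at the left face.
    """
    n = len(T)
    for _ in range(max_iter):
        diff = 0.0
        T[0] = (T_old[0] + 2 * r * T[1]) / (1 + 2 * r)          # symmetry node
        for i in range(1, n - 1):
            new = (T_old[i] + r * (T[i - 1] + T[i + 1])) / (1 + 2 * r)
            diff = max(diff, abs(new - T[i]))
            T[i] = new
        # Convective face: k (T[-2] - T[-1]) / dx = h (T[-1] - T_air)
        T[-1] = (k * T[-2] / dx + h * T_air) / (k / dx + h)
        if diff < tol:
            break
    return T

# Illustrative drying of a 5 cm slab initially at 25 C in 80 C air.
alpha, dx, dt = 5e-7, 0.005, 10.0          # m^2/s, m, s
r = alpha * dt / dx**2
T = np.full(11, 25.0)
for step in range(360):                    # one hour of drying
    T = gauss_seidel_step(T.copy(), T, r, T_air=80.0, h=25.0, k=0.5, dx=dx)
print(T)
```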

  19. A rocket-borne mass analyzer for charged aerosol particles in the mesosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knappmiller, Scott; Robertson, Scott; Sternovsky, Zoltan

    2008-10-15

    An electrostatic mass spectrometer for nanometer-sized charged aerosol particles in the mesosphere has been developed and tested. The analyzer is mounted on the forward end of a rocket and has a slit opening for admitting a continuous sample of air that is exhausted through ports at the sides. Within the instrument housing are two sets of four collection plates that are biased with positive and negative voltages for the collection of negative and positive aerosol particles, respectively. Each collection plate spans about an order of magnitude in mass, which corresponds to a factor of 2 in radius. The number density of the charge is calculated from the current collected by the plates. The mean free path for molecular collisions in the mesosphere is comparable to the size of the instrument opening; thus, the analyzer performance is modeled by a Monte Carlo computer code that finds the aerosol particle trajectories within the instrument, including both the electrostatic force and the forces from collisions of the aerosol particles with air molecules. Mass sensitivity curves obtained using the computer models are close to those obtained in the laboratory using an ion source. The first two flights of the instrument returned data showing the charge number densities of both positive and negative aerosol particles in four mass ranges.

  20. A manifold learning approach to data-driven computational materials and processes

    NASA Astrophysics Data System (ADS)

    Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco

    2017-10-01

    Standard simulation in classical mechanics is based on the use of two very different types of equations. The first one, of axiomatic character, is related to balance laws (momentum, mass, energy, …), whereas the second one consists of models that scientists have extracted from collected, natural or synthetic data. In this work we propose a new method, able to directly link data to computers in order to perform numerical simulations. These simulations will employ universal laws while minimizing the need of explicit, often phenomenological, models. They are based on manifold learning methodologies.

  1. Our Sun V: A Bright Young Sun Consistent with Helioseismology and Warm Temperatures on Ancient Earth and Mars

    NASA Technical Reports Server (NTRS)

    Sackmann, I.-Juliana; Boothroyd, Arnold I.

    2001-01-01

    The relatively warm temperatures required on early Earth and Mars have been difficult to account for with warming from greenhouse gases. A slightly more massive young Sun would be brighter than predicted by the standard solar model, simultaneously resolving this problem for both Earth and Mars. We computed high-precision solar models with seven initial masses, from Mi = 1.01 to 1.07 solar mass - the latter being the maximum permitted if the early Earth is not to lose its water via a moist greenhouse effect. The relatively modest early mass loss that is required remains consistent with observational limits on mass loss from young stars and with estimates of the past solar wind obtained from lunar rocks. We considered three types of mass loss rates: (1) a reasonable choice of a simple exponential decline, (2) an extreme step-function case that gives the maximum effect consistent with observations, and (3) the radical case of a linear decline, which is inconsistent with the solar wind mass loss estimates from lunar rocks. Our computations demonstrated that mass loss leaves a fingerprint on the Sun's internal structure large enough to be detectable with helioseismic observations. All of our mass-losing solar models were consistent with the helioseismic observations; in fact, our preferred mass-losing cases were in marginally better agreement with the helioseismology than the standard solar model was, although this difference was smaller than the effects of other uncertainties in the input physics and in the solar composition. Mass loss has only a relatively minor effect on the predicted lithium depletion; the major portion of the solar lithium depletion must still be due to rotational mixing. Thus the modest mass loss cases considered here cannot be ruled out by observed lithium depletions. For the three mass loss types considered, the preferred initial masses were 1.07 solar mass for the exponential case and 1.04 solar mass for the step-function and linear cases; all of these provided high enough solar fluxes at Mars 3.8 Gyr ago to be consistent with the existence of liquid water. For a more massive early Sun, the planets would have had to be closer to the young Sun in order to end up in their present orbits; the orbital radii of the planets would vary inversely with the solar mass. Both of these effects contribute to the fact that the early solar flux at the planets would have been considerably higher than that of the standard solar model at that time. In fact, the 1.07 solar mass exponential case has a flux at birth 5% higher than the present solar flux, while the radical 1.04 solar mass linear case has a nearly constant flux over the first 3 Gyr only about 10% lower than at present. The early solar evolution would be in the opposite direction in the H-R diagram to that of the standard Sun.

  2. Simulation of mixing in the quick quench region of a rich burn-quick quench mix-lean burn combustor

    NASA Technical Reports Server (NTRS)

    Shih, Tom I.-P.; Nguyen, H. Lee; Howe, Gregory W.; Li, Z.

    1991-01-01

    A computer program was developed to study the mixing process in the quick quench region of a rich burn-quick quench mix-lean burn combustor. The computer program developed was based on the density-weighted, ensemble-averaged conservation equations of mass, momentum (full compressible Navier-Stokes), total energy, and species, closed by a k-epsilon turbulence model with wall functions. The combustion process was modeled by a two-step global reaction mechanism, and NO(x) formation was modeled by the Zeldovich mechanism. The formulation employed in the computer program and the essence of the numerical method of solution are described. Some results obtained for nonreacting and reacting flows with different main-flow to dilution-jet momentum flux ratios are also presented.

  3. The Surface Density Distribution in the Solar Nebula

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    2004-01-01

    The commonly used minimum mass power law representation of the pre-solar nebula is reanalyzed using a new cumulative-mass-model. This model predicts a smoother surface density approximation compared with methods based on direct computation of surface density. The density is quantified using two independent analytical formulations. First, a best-fit transcendental function is applied directly to the basic planetary data. Next a solution to the time-dependent disk evolution equation is parametrically adapted to the solar nebula data. The latter model is shown to be a good approximation to the finite-size early Solar Nebula, and by extension to other extra solar protoplanetary disks.

  4. Kinetic particle simulation of discharge and wall erosion of a Hall thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Shinatora; Komurasaki, Kimiya; Arakawa, Yoshihiro

    2013-06-15

    The primary lifetime limiting factor of Hall thrusters is the wall erosion caused by the ion induced sputtering, which is predominated by dielectric wall sheath and pre-sheath. However, so far only fluid or hybrid simulation models were applied to wall erosion and lifetime studies in which this non-quasi-neutral and non-equilibrium area cannot be treated directly. Thus, in this study, a 2D fully kinetic particle-in-cell model was presented for Hall thruster discharge and lifetime simulation. Because the fully kinetic lifetime simulation was yet to be achieved so far due to the high computational cost, the semi-implicit field solver and the technique of mass ratio manipulation were employed to accelerate the computation. However, other artificial manipulations like permittivity or geometry scaling were not used in order to avoid unrecoverable change of physics. Additionally, a new physics recovering model for the mass ratio was presented for better preservation of electron mobility at the weakly magnetically confined plasma region. The validity of the presented model was examined by various parametric studies, and the thrust performance and wall erosion rate of a laboratory model magnetic layer type Hall thruster was modeled for different operation conditions. The simulation results successfully reproduced the measurement results with typically less than 10% discrepancy without tuning any numerical parameters. It is also shown that the computational cost was reduced to the level that the Hall thruster fully kinetic lifetime simulation is feasible.

  5. Operational evaluation of high-throughput community-based mass prophylaxis using Just-in-time training.

    PubMed

    Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei

    2007-01-01

    Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to address this logistical problem. Multnomah County, Oregon, conducted a high-throughput point of dispensing (POD) exercise to test JIT training and computer modeling to validate POD staffing estimates. The POD had 84% non-health-care worker staff and processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including development and amelioration of a large medical evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.

  6. The production and escape of nitrogen atoms on Mars

    NASA Technical Reports Server (NTRS)

    Fox, J. L.

    1993-01-01

    Updated rate coefficients and a revised ionosphere-thermosphere model are used to compute the production rates and densities of odd nitrogen species in the Martian atmosphere. Computed density profiles for N(4S), N(2D), N(2P), and NO are presented. The model NO densities are found to be about a factor of 2-3 less than those measured by the Viking 1 mass spectrometer. Revised values for the escape rates of N atoms from dissociative recombination and ionospheric reactions are also computed. Dissociative recombination is found to be comparable in importance to photodissociation at low solar activity, but it is still the most important escape mechanism for N-14 at high solar activity.

  7. Evaluation of innovative rocket engines for single-stage earth-to-orbit vehicles

    NASA Astrophysics Data System (ADS)

    Manski, Detlef; Martin, James A.

    1988-07-01

    Computer models of rocket engines and single-stage-to-orbit vehicles that were developed by the authors at DFVLR and NASA have been combined. The resulting code consists of engine mass, performance, trajectory and vehicle sizing models. The engine mass model includes equations for each subsystem and describes their dependences on various propulsion parameters. The engine performance model consists of multidimensional sets of theoretical propulsion properties and a complete thermodynamic analysis of the engine cycle. The vehicle analyses include an optimized trajectory analysis, mass estimation, and vehicle sizing. A vertical-takeoff, horizontal-landing, single-stage, winged, manned, fully reusable vehicle with a payload capability of 13.6 Mg (30,000 lb) to low earth orbit was selected. Hydrogen, methane, propane, and dual-fuel engines were studied with staged-combustion, gas-generator, dual-bell, and dual-expander cycles. Mixture ratio, chamber pressure, nozzle exit pressure, liftoff acceleration, and dual-fuel propulsive parameters were optimized.

  8. Evaluation of innovative rocket engines for single-stage earth-to-orbit vehicles

    NASA Technical Reports Server (NTRS)

    Manski, Detlef; Martin, James A.

    1988-01-01

    Computer models of rocket engines and single-stage-to-orbit vehicles that were developed by the authors at DFVLR and NASA have been combined. The resulting code consists of engine mass, performance, trajectory and vehicle sizing models. The engine mass model includes equations for each subsystem and describes their dependences on various propulsion parameters. The engine performance model consists of multidimensional sets of theoretical propulsion properties and a complete thermodynamic analysis of the engine cycle. The vehicle analyses include an optimized trajectory analysis, mass estimation, and vehicle sizing. A vertical-takeoff, horizontal-landing, single-stage, winged, manned, fully reusable vehicle with a payload capability of 13.6 Mg (30,000 lb) to low earth orbit was selected. Hydrogen, methane, propane, and dual-fuel engines were studied with staged-combustion, gas-generator, dual-bell, and dual-expander cycles. Mixture ratio, chamber pressure, nozzle exit pressure, liftoff acceleration, and dual-fuel propulsive parameters were optimized.

  9. Operational procedure for computer program for design point characteristics of a gas generator or a turbojet lift engine for V/STOL applications

    NASA Technical Reports Server (NTRS)

    Krebs, R. P.

    1972-01-01

    The computer program described calculates the design-point characteristics of a gas generator or a turbojet lift engine for V/STOL applications. The program computes the dimensions and mass, as well as the thermodynamic performance of the model engine and its components. The program was written in FORTRAN 4 language. Provision has been made so that the program accepts input values in either SI Units or U.S. Customary Units. Each engine design-point calculation requires less than 0.5 second of 7094 computer time.

  10. Mass density fluctuations in quantum and classical descriptions of liquid water

    NASA Astrophysics Data System (ADS)

    Galib, Mirza; Duignan, Timothy T.; Misteli, Yannick; Baer, Marcel D.; Schenter, Gregory K.; Hutter, Jürg; Mundy, Christopher J.

    2017-06-01

    First principles molecular dynamics simulation protocol is established using revised functional of Perdew-Burke-Ernzerhof (revPBE) in conjunction with Grimme's third generation of dispersion (D3) correction to describe the properties of water at ambient conditions. This study also demonstrates the consistency of the structure of water across both isobaric (NpT) and isothermal (NVT) ensembles. Going beyond the standard structural benchmarks for liquid water, we compute properties that are connected to both local structure and mass density fluctuations that are related to concepts of solvation and hydrophobicity. We directly compare our revPBE results to the Becke-Lee-Yang-Parr (BLYP) plus Grimme dispersion corrections (D2) and both the empirical fixed charged model (SPC/E) and many body interaction potential model (MB-pol) to further our understanding of how the computed properties herein depend on the form of the interaction potential.

  11. Mass density fluctuations in quantum and classical descriptions of liquid water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galib, Mirza; Duignan, Timothy T.; Misteli, Yannick

    First principles molecular dynamics simulation protocol is established using revised functional of Perdew-Burke-Ernzerhof (revPBE) in conjunction with Grimme's third generation of dispersion (D3) correction to describe properties of water at ambient conditions. This study also demonstrates the consistency of the structure of water across both isobaric (NpT) and isothermal (NVT) ensembles. Going beyond the standard structural benchmarks for liquid water, we compute properties that are connected to both local structure and mass density fluctuations that are related to concepts of solvation and hydrophobicity. We directly compare our revPBE results to the Becke-Lee-Yang-Parr (BLYP) plus Grimme dispersion corrections (D2) and both the empirical fixed charged model (SPC/E) and many body interaction potential model (MB-pol) to further our understanding of how the computed properties herein depend on the form of the interaction potential.

  12. Spray and High-Pressure Flow Computations in the National Combustion Code (NCC) Improved

    NASA Technical Reports Server (NTRS)

    Raju, Manthena S.

    2002-01-01

    Sprays occur in a wide variety of industrial and power applications and in materials processing. A liquid spray is a two-phase flow with a gas as the continuous phase and a liquid as the dispersed phase in the form of droplets or ligaments. The interactions between the two phases--which are coupled through exchanges of mass, momentum, and energy--can occur in different ways at disparate time and length scales involving various thermal, mass, and fluid dynamic factors. An understanding of the flow, combustion, and thermal properties of a rapidly vaporizing spray requires careful modeling of the rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates of the spray, among many other factors. With the aim of developing an efficient solution procedure for use in multidimensional combustor modeling, researchers at the NASA Glenn Research Center have advanced the state-of-the-art in spray computations in several important ways.

  13. Numerical and experimental study of dissociation in an air-water single-bubble sonoluminescence system.

    PubMed

    Puente, Gabriela F; Urteaga, Raúl; Bonetto, Fabián J

    2005-10-01

    We performed a comprehensive numerical and experimental analysis of dissociation effects in an air bubble in water acoustically levitated in a spherical resonator. Our numerical approach is based on suitable models for the different effects considered. We compared model predictions with experimental results obtained in our laboratory in the whole phase parameter space, for acoustic pressures from the bubble dissolution limit up to bubble extinction. The effects were taken into account simultaneously to consider the transition from nonsonoluminescence to sonoluminescence bubbles. The model includes (1) inside the bubble, transient and spatially nonuniform heat transfer using a collocation points method, dissociation of O2 and N2, and mass diffusion of vapor in the noncondensable gases; (2) at the bubble interface, nonequilibrium evaporation and condensation of water and a temperature jump due to the accommodation coefficient; (3) in the liquid, transient and spatially nonuniform heat transfer using a collocation points method, and mass diffusion of the gas in the liquid. The model is completed with a Rayleigh-Plesset equation with liquid compressible terms and vapor mass transfer. We computed the boundary for the shape instability based on the temporal evolution of the computed radius. The model is valid for an arbitrary number of dissociable gases dissolved in the liquid. We also obtained absolute measurements for R(t) using two photodetectors and Mie scattering calculations. The robust technique used allows the estimation of experimental results of absolute R0 and P(a). The technique is based on identifying the bubble dissolution limit coincident with the parametric instability in (P(a),R0) parameter space. We take advantage of the fact that this point can be determined experimentally with high precision and replicability. We computed the equilibrium concentration of the different gaseous species and water vapor during collapse as a function of P(a) and R0. The model obtains from first principles the result that in sonoluminescence the bubble is practically 100% argon for air dissolved in water. Therefore, the dissociation reactions in air bubbles must be taken into account for quantitative computations of maximum temperatures. The agreement found between the numerical and experimental data is very good in the whole parameter space explored. We do not fit any parameter in the model. We believe that we capture all the relevant physics with the model.
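
    To illustrate the dynamical core of such models, the sketch below integrates the basic incompressible Rayleigh-Plesset equation with a polytropic gas law using scipy. The liquid-compressibility terms, vapor mass transfer, heat conduction, and dissociation chemistry included by the authors are omitted, and the drive parameters are merely representative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water / air-bubble constants (illustrative values)
rho, mu, sigma = 998.0, 1.0e-3, 0.0725      # kg/m^3, Pa s, N/m
p0, kappa = 101325.0, 1.4                   # ambient pressure, polytropic index
R0, Pa, f = 4.5e-6, 1.3e5, 26.5e3           # rest radius, drive amplitude, drive frequency

def rayleigh_plesset(t, y):
    """Basic incompressible Rayleigh-Plesset equation, y = [R, dR/dt]."""
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_inf = p0 + Pa * np.sin(2 * np.pi * f * t)
    rhs = (p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
    Rddot = (rhs - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 2.0 / f), [R0, 0.0],
                method="LSODA", rtol=1e-9, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 2.0 / f, 2000)
R = sol.sol(t)[0]
print(R.max() / R0, R.min() / R0)           # expansion ratio and collapse depth
```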

  14. Ethmoidectomy combined with superior meatus enlargement increases olfactory airflow

    PubMed Central

    Kondo, Kenji; Nomura, Tsutomu; Yamasoba, Tatsuya

    2017-01-01

    Objectives The relationship between a particular surgical technique in endoscopic sinus surgery (ESS) and airflow changes in the post‐operative olfactory region has not been assessed. The present study aimed to compare olfactory airflow after ESS between conventional ethmoidectomy and ethmoidectomy with superior meatus enlargement, using virtual ESS and computational fluid dynamics (CFD) analysis. Study Design Prospective computational study. Materials and Methods Nasal computed tomography images of four adult subjects were used to generate models of the nasal airway. The original preoperative model was digitally edited as virtual ESS by performing uncinectomy, ethmoidectomy, antrostomy, and frontal sinusotomy. The following two post‐operative models were prepared: conventional ethmoidectomy with normal superior meatus (ESS model) and ethmoidectomy with superior meatus enlargement (ESS‐SM model). The calculated three‐dimensional nasal geometries were confirmed using virtual endoscopy to ensure that they corresponded to the post‐operative anatomy observed in the clinical setting. Steady‐state, laminar, inspiratory airflow was simulated, and the velocity, streamline, and mass flow rate in the olfactory region were compared among the preoperative and two postoperative models. Results The mean velocity in the olfactory region, number of streamlines bound to the olfactory region, and mass flow rate were higher in the ESS‐SM model than in the other models. Conclusion We successfully used an innovative approach involving virtual ESS, virtual endoscopy, and CFD to assess postoperative outcomes after ESS. It is hypothesized that the increased airflow to the olfactory fossa achieved with ESS‐SM may lead to improved olfactory function; however, further studies are required. Level of Evidence NA. PMID:28894833

  15. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
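
    A toy sketch of the underlying sparse-recovery step is shown below: a sparse probability-mass vector is reconstructed from a small number of linear measurements by l1 minimization (basis pursuit) cast as a linear program. This illustrates only the compressed-sensing idea; it is not the authors' generating-function algorithm, and the random measurement matrix here is a generic stand-in.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Recover sparse x from y = A @ x by minimizing ||x||_1 (x = u - v, u, v >= 0)."""
    m, n = A.shape
    c = np.ones(2 * n)                       # minimize sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # enforce A @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n),
                  method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                         # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.random(k)
x_true /= x_true.sum()                       # sparse probability-mass vector
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))        # recovery error
```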

  16. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  17. The Development of a new Numerical Modelling Approach for Naturally Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Pine, R. J.; Coggan, J. S.; Flynn, Z. N.; Elmo, D.

    2006-11-01

    An approach for modelling fractured rock masses has been developed which has two main objectives: to maximise the quality of representation of the geometry of existing rock jointing and to use this within a loading model which takes full account of this style of jointing. Initially the work has been applied to the modelling of mine pillars and data from the Middleton Mine in the UK has been used as a case example. However, the general approach is applicable to all aspects of rock mass behaviour including the stress conditions found in hangingwalls, tunnels, block caving, and slopes. The rock mass fracture representation was based on a combination of explicit mapping of rock faces and the synthesis of this data into a three-dimensional model, based on the use of the FracMan computer model suite. Two-dimensional cross sections from this model were imported into the finite element computer model, ELFEN, for loading simulation. The ELFEN constitutive model for fracture simulation includes the Rotating Crack, and Rankine material models, in which fracturing is controlled by tensile strength and fracture energy parameters. For tension/compression stress states, the model is complemented with a capped Mohr-Coulomb criterion in which the softening response is coupled to the tensile model. Fracturing due to dilation is accommodated by introducing an explicit coupling between the inelastic strain accrued by the Mohr-Coulomb yield surface and the anisotropic degradation of the mutually orthogonal tensile yield surfaces of the rotating crack model. Pillars have been simulated with widths of 2.8, 7 and 14 m and a height of 7 m (the Middleton Mine pillars are typically 14 m wide and 7 m high). The evolution of the pillar failure under progressive loading through fracture extension and creation of new fractures is presented, and pillar capacities and stiffnesses are compared with empirical models. The agreement between the models is promising and the new model provides useful insights into the influence of pre-existing fractures. Further work is needed to consider the effects of three-dimensional loading and other boundary condition problems.

  18. Numerical modeling and analytical modeling of cryogenic carbon capture in a de-sublimating heat exchanger

    NASA Astrophysics Data System (ADS)

    Yu, Zhitao; Miller, Franklin; Pfotenhauer, John M.

    2017-12-01

    Both a numerical and an analytical model of the heat and mass transfer processes in a CO2/N2 mixture gas de-sublimating cross-flow finned duct heat exchanger system are developed to predict the heat transferred from the mixture gas to liquid nitrogen and the de-sublimating rate of CO2 in the mixture gas. The mixture gas outlet temperature, liquid nitrogen outlet temperature, CO2 mole fraction, temperature distribution and de-sublimating rate of CO2 through the whole heat exchanger were computed using both the numerical and the analytical model. The numerical model is built using EES [1] (engineering equation solver). According to the simulation, a cross-flow finned duct heat exchanger can be designed and fabricated to validate the models. The performance of the heat exchanger is evaluated as a function of dimensionless variables, such as the ratio of the mass flow rate of liquid nitrogen to the mass flow rate of inlet flue gas.

  19. On a self-consistent representation of earth models, with an application to the computing of internal flattening

    NASA Astrophysics Data System (ADS)

    Denis, C.; Ibrahim, A.

    Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and values of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm is developed for estimating the geometric flattening of any equidense surface identified by its fractional radius. The program can also be applied in studies of planetary and stellar models.
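
    As a small illustration of the polynomial-density approach, the sketch below evaluates the mass and moment of inertia of a layered sphere by Gauss-Legendre quadrature; the two-layer density polynomials are rough illustrative values, not the coefficients of any published earth model.

```python
import numpy as np

def layer_integrals(coeffs, r_in, r_out, n_quad=8):
    """Mass and moment of inertia of a spherical shell with polynomial density.

    coeffs are polynomial coefficients of rho(r) in kg/m^3 (lowest order first);
    r_in and r_out are in metres.  Uses Gauss-Legendre quadrature.
    """
    x, w = np.polynomial.legendre.leggauss(n_quad)
    r = 0.5 * (r_out - r_in) * x + 0.5 * (r_out + r_in)
    jac = 0.5 * (r_out - r_in)
    rho = np.polynomial.polynomial.polyval(r, coeffs)
    mass = 4.0 * np.pi * jac * np.sum(w * rho * r**2)
    inertia = (8.0 * np.pi / 3.0) * jac * np.sum(w * rho * r**4)
    return mass, inertia

# Two-layer toy model: linear-density core, quadratic-density mantle (illustrative).
layers = [((13000.0, -1.0e-3), 0.0, 3.48e6),
          ((5600.0, 0.0, -6.0e-14), 3.48e6, 6.371e6)]
M = I = 0.0
for coeffs, r_in, r_out in layers:
    m, i = layer_integrals(coeffs, r_in, r_out)
    M, I = M + m, I + i
print(M, I / (M * 6.371e6**2))   # total mass and normalized moment of inertia
```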

  20. Capillary device refilling. [liquid rocket propellant tank tests

    NASA Technical Reports Server (NTRS)

    Blatt, M. H.; Merino, F.; Symons, E. P.

    1980-01-01

    An analytical and experimental study was conducted dealing with refilling start baskets (capillary devices) with settled fluid. A computer program was written to include dynamic pressure, screen wicking, multiple-screen barriers, standpipe screens, variable vehicle mass for computing vehicle acceleration, and calculation of tank outflow rate and vapor pullthrough height. An experimental apparatus was fabricated and tested to provide data for correlation with the analytical model; the test program was conducted in normal gravity using a scale-model capillary device and ethanol as the test fluid. The test data correlated with the analytical model; the model is a versatile and apparently accurate tool for predicting start basket refilling under actual mission conditions.

  1. Models for integrated and differential scattering optical properties of encapsulated light absorbing carbon aggregates.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa

    2013-04-08

    Optical properties of light absorbing carbon (LAC) aggregates encapsulated in a shell of sulfate are computed for realistic model geometries based on field measurements. Computations are performed for wavelengths from the UV-C to the mid-IR. Both climate- and remote sensing-relevant optical properties are considered. The results are compared to commonly used simplified model geometries, none of which gives a realistic representation of the distribution of the LAC mass within the host material and, as a consequence, fail to predict the optical properties accurately. A new core-gray shell model is introduced, which accurately reproduces the size- and wavelength dependence of the integrated and differential optical properties.

  2. Theoretical Near-IR Spectra for Surface Abundance Studies of Massive Stars

    NASA Technical Reports Server (NTRS)

    Sonneborn, George; Bouret, J.

    2011-01-01

    We present initial results of a study of abundance and mass loss properties of O-type stars based on theoretical near-IR spectra computed with state-of-the-art stellar atmosphere models. The James Webb Space Telescope (JWST) will be a powerful tool to obtain high signal-to-noise ratio near-IR (1-5 micron) spectra of massive stars in different environments of local galaxies. Our goal is to analyze model near-IR spectra corresponding to those expected from NIRspec on JWST in order to map the wind properties and surface composition across the parameter range of 0 stars and to determine projected rotational velocities. As a massive star evolves, internal coupling, related mixing, and mass loss impact its intrinsic rotation rate. These three parameters form an intricate loop, where enhanced rotation leads to more mixing which in turn changes the mass loss rate, the latter thus affecting the rotation rate. Since the effects of rotation are expected to be much more pronounced at low metallicity, we pay special attention to models for massive stars in the the Small Magellanic Cloud. This galaxy provides a unique opportunity to probe stellar evolution, and the feedback of massive stars on galactic evol.ution in conditions similar to the epoch of maximal star formation. Plain-Language Abstract: We present initial results of a study of abundance and mass loss properties of massive stars based on theoretical near-infrared (1-5 micron) spectra computed with state-of-the-art stellar atmosphere models. This study is to prepare for observations by the James Webb Space Telescope.

  3. Development of an Alternative Mixed Odor Delivery Device (MODD) for Canine Training

    DTIC Science & Technology

    2017-05-10

    solid phase microextraction (SPME) and analysis by gas chromatography / mass spectrometry (GC/MS). Like the computational modeling, the laboratory...outlet was extracted by solid phase microextraction (SPME) and analyzed by gas chromatography with mass spectrometry (GC/MS). A polydimethylsiloxane...Menning and H. Ostmark, "Detection of liquid and homemade explosives: What do we need to know about their properties?," in Detection of Liquid

  4. Automatic pattern identification of rock moisture based on the Staff-RF model

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Tao, Kai; Jiang, Wei

    2018-04-01

    Studies on the moisture and damage state of rocks generally focus on qualitative description and mechanical information. Such approaches are not applicable to the real-time safety monitoring of rock masses. In this study, a musical-staff computing model is used to quantify the acoustic emission (AE) signals of rocks with different moisture patterns. Then, the random forest (RF) method is adopted to form the Staff-RF model for real-time pattern identification of rock moisture. The entire process requires only information computed from the AE signal and does not require the mechanical conditions of the rocks.
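
    The classification stage can be sketched with a standard random-forest implementation, as below; the feature table here is synthetic and the staff-based quantification of the AE signals is not reproduced, so the feature meanings and class labels are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature table: each row holds summary features of one AE signal
# (e.g. peak amplitude, dominant frequency, energy, ring-down count), and each
# label is a moisture pattern class (0 = dry, 1 = damp, 2 = saturated).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
y = rng.integers(0, 3, size=600)            # replace with real labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
print("feature importances:", clf.feature_importances_)
```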

  5. On Stellar Winds as a Source of Mass: Applying Bondi-Hoyle-Lyttleton Accretion

    NASA Astrophysics Data System (ADS)

    Detweiler, L. G.; Yates, K.; Siem, E.

    2017-12-01

    The interaction between planets orbiting stars and the stellar wind that stars emit is investigated. The main goal of this research is to devise a method of calculating the amount of mass accumulated by an arbitrary planet from the stellar wind of its parent star via accretion processes. To achieve this goal, the Bondi-Hoyle-Lyttleton (BHL) mass accretion rate equation and model are employed. In order to use the BHL equation, various parameters of the stellar wind must be known, including the velocity, density, and speed of sound of the wind. In order to create a method that is applicable to arbitrary planets orbiting arbitrary stars, Eugene Parker's isothermal stellar wind model is used to calculate these stellar wind parameters. In an isothermal wind, the speed of sound is simple to compute; however, the velocity and density equations are transcendental, so the solutions must be approximated numerically. By combining Eugene Parker's isothermal stellar wind model with the BHL accretion equation, a method for computing planetary accretion rates inside a star's stellar wind is realized. This method is then applied to a variety of scenarios. First, this method is used to calculate the amount of mass that our solar system's planets will accrete from the solar wind throughout our Sun's lifetime. Then, some theoretical situations are considered. We consider the amount of mass various brown dwarfs would accrete from the solar wind of our Sun throughout its lifetime if they were orbiting the Sun at Jupiter's distance. For very high mass brown dwarfs, a significant amount of mass is accreted. In the case of the brown dwarf 15 Sagittae B, it actually accretes enough mass to surpass the mass limit for hydrogen fusion. Since 15 Sagittae B is orbiting a star that is very similar to our Sun, this motivated calculations for 15 Sagittae B orbiting our Sun at its true distance from its parent star, 15 Sagittae. It was found that at this distance, it does not accrete enough mass to surpass the mass limit for hydrogen fusion. Finally, we apply this method to brown dwarfs orbiting a 15 solar mass star at Jupiter's distance. It is found that a significantly smaller amount of mass is accreted when compared to the same brown dwarfs orbiting our Sun at the same distance.
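
    The two building blocks of this method can be sketched as follows: the Parker wind speed is obtained by solving the isothermal transcendental equation numerically, and the result is fed into the Bondi-Hoyle-Lyttleton rate with the density fixed by mass continuity. The coronal sound speed and wind mass-loss rate below are illustrative round numbers, not values taken from the abstract.

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg

def parker_speed(r, c_s, M_star):
    """Wind speed from Parker's isothermal solution at radius r (m).

    Solves (v/c)^2 - ln(v/c)^2 = 4 ln(r/r_c) + 4 r_c/r - 3 on the branch
    that is subsonic inside the critical radius and supersonic outside.
    """
    r_c = G * M_star / (2.0 * c_s**2)
    rhs = 4.0 * np.log(r / r_c) + 4.0 * r_c / r - 3.0
    f = lambda w: w**2 - np.log(w**2) - rhs        # w = v / c_s
    if r > r_c:
        return c_s * brentq(f, 1.0, 50.0)
    return c_s * brentq(f, 1e-6, 1.0)

def bhl_rate(m_planet, v_wind, c_s, rho):
    """Bondi-Hoyle-Lyttleton accretion rate (kg/s)."""
    return 4.0 * np.pi * G**2 * m_planet**2 * rho / (v_wind**2 + c_s**2) ** 1.5

# Illustrative solar-like wind sampled at Jupiter's orbit.
c_s = 1.2e5                                  # ~1 MK corona, m/s
r = 7.78e11                                  # 5.2 au in metres
mdot_wind = 2e9                              # kg/s, of order the solar value
v = parker_speed(r, c_s, M_sun)
rho = mdot_wind / (4.0 * np.pi * r**2 * v)   # density from mass continuity
m_jup = 1.898e27
print(v / 1e3, "km/s;", bhl_rate(m_jup, v, c_s, rho), "kg/s")
```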

  6. Estimating the dust production rate of carbon stars in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Nanni, Ambra; Marigo, Paola; Girardi, Léo; Rubele, Stefano; Bressan, Alessandro; Groenewegen, Martin A. T.; Pastorelli, Giada; Aringer, Bernhard

    2018-02-01

    We employ newly computed grids of spectra reprocessed by dust for estimating the total dust production rate (DPR) of carbon stars in the Small Magellanic Cloud (SMC). For the first time, the grids of spectra are computed as a function of the main stellar parameters, i.e. mass-loss rate, luminosity, effective temperature, current stellar mass and element abundances at the photosphere, following a consistent, physically grounded scheme of dust growth coupled with stationary wind outflow. The model accounts for the dust growth of various dust species formed in the circumstellar envelopes of carbon stars, such as carbon dust, silicon carbide and metallic iron. In particular, we employ some selected combinations of optical constants and grain sizes for carbon dust that have been shown to reproduce simultaneously the most relevant colour-colour diagrams in the SMC. By employing our grids of models, we fit the spectral energy distributions of ≈3100 carbon stars in the SMC, consistently deriving some important dust and stellar properties, i.e. luminosities, mass-loss rates, gas-to-dust ratios, expansion velocities and dust chemistry. We discuss these properties and we compare some of them with observations in the Galaxy and Large Magellanic Cloud. We compute the DPR of carbon stars in the SMC, finding that the estimates provided by our method can be significantly different, between a factor of ≈2-5, than the ones available in the literature. Our grids of models, including the spectra and other relevant dust and stellar quantities, are publicly available at http://starkey.astro.unipd.it/web/guest/dustymodels.

  7. Numerical Coupling and Simulation of Point-Mass System with the Turbulent Fluid Flow

    NASA Astrophysics Data System (ADS)

    Gao, Zheng

    A computational framework that combines the Eulerian description of the turbulence field with a Lagrangian point-mass ensemble is proposed in this dissertation. Depending on the Reynolds number, the turbulence field is simulated using Direct Numerical Simulation (DNS) or an eddy-viscosity model. Meanwhile, the particle systems, such as spring-mass systems and cloud droplets, are modeled as ordinary differential equation (ODE) systems, which are stiff and hence pose a challenge to the stability of the entire system. This computational framework is applied to the numerical study of parachute deceleration and cloud microphysics. These two distinct problems can be uniformly modeled with Partial Differential Equations (PDEs) and Ordinary Differential Equations (ODEs), and numerically solved in the same framework. For the parachute simulation, a novel porosity model is proposed to simulate the porous effects of the parachute canopy. This model is easy to implement with the projection method and is able to reproduce Darcy's law observed in the experiment. Moreover, the impacts of using different versions of the k-epsilon turbulence model in the parachute simulation have been investigated; the study concludes that the standard and Re-Normalisation Group (RNG) models may overestimate the turbulence effects when the Reynolds number is small, while the Realizable model performs consistently at both large and small Reynolds numbers. For another application, cloud microphysics, the cloud entrainment-mixing problem is studied in the same numerical framework. Three sets of DNS are carried out with both decaying and forced turbulence. The numerical results suggest a new way to parameterize the cloud mixing degree using dynamical measures. The numerical experiments also verify the negative relationship between the droplet number concentration and the vorticity field. The results imply that gravity has less impact on the forced turbulence than on the decaying turbulence. In summary, the proposed framework can be used to solve physics problems that involve a turbulence field coupled to a point-mass system, and therefore has broad applications.
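
    The Lagrangian half of such a framework can be sketched as a stiff ODE system for a point-mass droplet with Stokes drag advected by a prescribed flow, integrated with an implicit (BDF) method; the flow field, response time, and settling term below are illustrative stand-ins for the resolved turbulence.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fluid_velocity(x):
    """Prescribed 2-D cellular flow standing in for the resolved turbulence."""
    return np.array([np.sin(x[0]) * np.cos(x[1]),
                     -np.cos(x[0]) * np.sin(x[1])])

def droplet_rhs(t, y, tau):
    """Point-mass droplet with Stokes drag and an illustrative settling term:
    dx/dt = v,  dv/dt = (u_fluid(x) - v) / tau + g."""
    x, v = y[:2], y[2:]
    g = np.array([0.0, -0.05])          # illustrative settling, same units as the flow
    dv = (fluid_velocity(x) - v) / tau + g
    return np.concatenate([v, dv])

tau = 1e-4                              # small response time -> stiff system
sol = solve_ivp(droplet_rhs, (0.0, 2.0), [0.5, 0.5, 0.0, 0.0],
                args=(tau,), method="BDF", rtol=1e-8, atol=1e-10)
print("final droplet position:", sol.y[:2, -1])
```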

  8. A composite smeared finite element for mass transport in capillary systems and biological tissue.

    PubMed

    Kojic, M; Milosevic, M; Simic, V; Koay, E J; Fleming, J B; Nizzero, S; Kojic, N; Ziemys, A; Ferrari, M

    2017-09-01

    One of the key processes in living organisms is mass transport occurring from blood vessels to tissues for supplying tissues with oxygen, nutrients, drugs, immune cells, and - in the reverse direction - transport of waste products of cell metabolism to blood vessels. The mass exchange from blood vessels to tissue and vice versa occurs through blood vessel walls. This vital process has been investigated experimentally over centuries, and also in the last decades by the use of computational methods. Due to geometrical and functional complexity and heterogeneity of capillary systems, it is however not feasible to model in silico individual capillaries (including transport through the walls and coupling to tissue) within whole organ models. Hence, there is a need for simplified and robust computational models that address mass transport in capillary-tissue systems. We here introduce a smeared modeling concept for gradient-driven mass transport and formulate a new composite smeared finite element (CSFE). The transport from capillary system is first smeared to continuous mass sources within tissue, under the assumption of uniform concentration within capillaries. Here, the fundamental relation between capillary surface area and volumetric fraction is derived as the basis for modeling transport through capillary walls. Further, we formulate the CSFE which relies on the transformation of the one-dimensional (1D) constitutive relations (for transport within capillaries) into the continuum form expressed by Darcy's and diffusion tensors. The introduced CSFE is composed of two volumetric parts - capillary and tissue domains, and has four nodal degrees of freedom (DOF): pressure and concentration for each of the two domains. The domains are coupled by connectivity elements at each node. The fictitious connectivity elements take into account the surface area of capillary walls which belongs to each node, as well as the wall material properties (permeability and partitioning). The overall FE model contains geometrical and material characteristics of the entire capillary-tissue system, with physiologically measurable parameters assigned to each FE node within the model. The smeared concept is implemented into our implicit-iterative FE scheme and into FE package PAK. The first three examples illustrate accuracy of the CSFE element, while the liver and pancreas models demonstrate robustness of the introduced methodology and its applicability to real physiological conditions.

  9. Textures of Yukawa coupling matrices in the 2HDM type III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carcamo, A. E.; Martinez, R.; Rodriguez, J.-Alexis

    2008-07-02

    The quark mass matrix ansätze proposed by Fritzsch, Du-Xing and Fukuyama-Nishiura in the framework of the general two-Higgs-doublet model are studied. The corresponding Cabibbo-Kobayashi-Maskawa matrix elements are computed in all cases and compared with their experimental values. The complex phases of the ansätze are taken into account and the CP-violating phase δ is computed. Finally, some phenomenology is discussed.

  10. 3D simulations of early blood vessel formation

    NASA Astrophysics Data System (ADS)

    Cavalli, F.; Gamba, A.; Naldi, G.; Semplice, M.; Valdembri, D.; Serini, G.

    2007-08-01

    Blood vessel networks form by spontaneous aggregation of individual cells migrating toward vascularization sites (vasculogenesis). A successful theoretical model of two-dimensional experimental vasculogenesis has been recently proposed, showing the relevance of percolation concepts and of cell cross-talk (chemotactic autocrine loop) to the understanding of this self-aggregation process. Here we study the natural 3D extension of the computational model proposed earlier, which is relevant for the investigation of the genuinely three-dimensional process of vasculogenesis in vertebrate embryos. The computational model is based on a multidimensional Burgers equation coupled with a reaction-diffusion equation for a chemotactic factor and a mass conservation law. The numerical approximation of the computational model is obtained with high-order relaxed schemes. Space and time discretization are performed using TVD and IMEX schemes, respectively. Due to the computational costs of realistic simulations, we have implemented the numerical algorithm on a cluster for parallel computation. Starting from initial conditions mimicking the experimentally observed ones, numerical simulations produce network-like structures qualitatively similar to those observed in the early stages of in vivo vasculogenesis. We develop the computation of critical percolative indices as a robust measure of the network geometry, as a first step towards the comparison of computational and experimental data.

  11. Mass Storage Systems.

    ERIC Educational Resources Information Center

    Ranade, Sanjay; Schraeder, Jeff

    1991-01-01

    Presents an overview of the mass storage market and discusses mass storage systems as part of computer networks. Systems for personal computers, workstations, minicomputers, and mainframe computers are described; file servers are explained; system integration issues are raised; and future possibilities are suggested. (LRW)

  12. Finite volume model for two-dimensional shallow environmental flow

    USGS Publications Warehouse

    Simoes, F.J.M.

    2011-01-01

    This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
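
    A minimal one-dimensional sketch of the finite-volume idea is shown below, using a Rusanov (local Lax-Friedrichs) flux and a two-stage SSP Runge-Kutta step on a dam-break problem; the paper's unstructured 2-D grids, high-resolution Riemann solver switching, and wet/dry treatment are not reproduced.

```python
import numpy as np

g = 9.81

def flux(q):
    """Physical flux of the 1-D shallow water equations, q = [h, hu]."""
    h, hu = q
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h**2])

def rusanov(qL, qR):
    """Local Lax-Friedrichs numerical flux between two cell states."""
    sL = abs(qL[1] / qL[0]) + np.sqrt(g * qL[0])
    sR = abs(qR[1] / qR[0]) + np.sqrt(g * qR[0])
    s = max(sL, sR)
    return 0.5 * (flux(qL) + flux(qR)) - 0.5 * s * (qR - qL)

def rhs(q, dx):
    """Finite-volume residual with reflective (wall) boundaries."""
    qe = np.hstack([q[:, :1], q, q[:, -1:]])
    qe[1, 0] *= -1.0
    qe[1, -1] *= -1.0
    F = np.array([rusanov(qe[:, i], qe[:, i + 1])
                  for i in range(qe.shape[1] - 1)]).T
    return -(F[:, 1:] - F[:, :-1]) / dx

# Dam-break initial condition on a 10 m channel.
nx, dx, dt = 200, 0.05, 0.005
x = (np.arange(nx) + 0.5) * dx
q = np.vstack([np.where(x < 5.0, 2.0, 1.0), np.zeros(nx)])
for _ in range(200):                          # SSP-RK2 time stepping to t = 1 s
    q1 = q + dt * rhs(q, dx)
    q = 0.5 * q + 0.5 * (q1 + dt * rhs(q1, dx))
print(q[0].min(), q[0].max())                 # depth range after the dam break
```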

  13. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  14. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  15. Slowly rotating homogeneous masses revisited

    NASA Astrophysics Data System (ADS)

    Reina, Borja

    2016-02-01

    Hartle's model for slowly rotating stars has been extensively used to compute equilibrium configurations of slowly rotating stars to second order in perturbation theory in general relativity, given a barotropic equation of state. A recent study based on the modern theory of perturbed matchings concludes that the functions in the (first and second order) perturbation tensors can always be taken as continuous at the surface of the star, except for the second-order function m0. This function presents a jump at the surface of the star proportional to the discontinuity of the energy density there. This concerns only a particular outcome of the model: the change in mass δM. In this paper, the amended change in mass is calculated for the case of constant density stars.

  16. Nuclear ground-state masses and deformations: FRDM(2012)

    DOE PAGES

    Moller, P.; Sierk, A. J.; Ichikawa, T.; ...

    2016-03-25

    Here, we tabulate the atomic mass excesses and binding energies, ground-state shell-plus-pairing corrections, ground-state microscopic corrections, and nuclear ground-state deformations of 9318 nuclei ranging from 16O to A=339. The calculations are based on the finite-range droplet macroscopic and the folded-Yukawa single-particle microscopic nuclear-structure models, which are completely specified. Relative to our FRDM(1992) mass table in Möller et al. (1995), the results are obtained in the same model, but with considerably improved treatment of deformation and fewer of the approximations that were necessary earlier, due to limitations in computer power. The more accurate execution of the model and the more extensive and more accurate experimental mass data base now available allow us to determine one additional macroscopic-model parameter, the density-symmetry coefficient L, which was not varied in the previous calculation, but set to zero. Because we now realize that the FRDM is inaccurate for some highly deformed shapes occurring in fission (some effects are derived in terms of perturbations around a sphere), we adjust its macroscopic parameters to ground-state masses only.

  17. The mysterious age invariance of the planetary nebula luminosity function bright cut-off

    NASA Astrophysics Data System (ADS)

    Gesicki, K.; Zijlstra, A. A.; Miller Bertolami, M. M.

    2018-05-01

    Planetary nebulae mark the end of the active life of 90% of all stars. They trace the transition from a red giant to a degenerate white dwarf. Stellar models [1,2] predicted that only stars above approximately twice the solar mass could form a bright nebula. But the ubiquitous presence of bright planetary nebulae in old stellar populations, such as elliptical galaxies, contradicts this: such high-mass stars are not present in old systems. The planetary nebula luminosity function, and especially its bright cut-off, is almost invariant between young spiral galaxies, with high-mass stars, and old elliptical galaxies, with only low-mass stars. Here, we show that new evolutionary tracks of low-mass stars are capable of explaining this decades-old mystery in a simple manner. The agreement between the observed luminosity function and computed stellar evolution validates the latest theoretical modelling. With these models, the planetary nebula luminosity function provides a powerful diagnostic to derive star formation histories of intermediate-age stars. The new models predict that the Sun at the end of its life will also form a planetary nebula, but it will be faint.

  18. Numerical analysis of Eucalyptus grandis × E. urophylla heat-treatment: A dynamically detecting method of mass loss during the process

    NASA Astrophysics Data System (ADS)

    Zhao, Zijian; Ma, Qing; Mu, Jun; Yi, Songlin; He, Zhengbin

    Eucalyptus particles, lamellas and boards were used to develop a simply implemented method, neglecting internal heat and mass transfer, for monitoring mass loss during heat treatment. The mass loss over a given period is, in theory, the definite integral of the loss rate with respect to time over that period; on this basis a monitoring model for the mass-loss rate was developed from the particle data and validated with the lamellas and boards. In the model, the loss rate is correlated with the temperature and the heating rate, and is composed of three functions covering different temperature-evolution periods. The sample mass loss was computed in MATLAB for the lamellas and boards, and the model was validated and adjusted from the difference between the computed results and the measured loss values. The error ranges of the adjusted models were -16.30% to 18.35% for wood lamellas and -9.86% to 6.80% for wood boards. The method makes it possible to obtain the instantaneous mass loss by continuously monitoring the wood temperature evolution, providing a reference for detecting the progress of Eucalyptus heat treatment and controlling the final material characteristics.
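
    A minimal numerical sketch of the integration step described above: given a logged temperature history, a loss-rate function of temperature and heating rate is integrated over time to obtain the cumulative mass loss. The piecewise rate function, its coefficients, and the heating schedule are invented for illustration and are not the fitted model from the paper.

    ```python
    # Hedged sketch: cumulative mass loss as the time integral of a loss-rate
    # model r(T, dT/dt). The piecewise form and coefficients are made up for
    # illustration; the paper fits its own three-function model.
    import numpy as np

    def loss_rate(T, dTdt):
        """Illustrative mass-loss rate (% per minute) as a function of
        temperature (deg C) and heating rate (deg C per minute)."""
        if T < 120.0:
            return 1e-4 * dTdt
        elif T < 160.0:
            return 5e-4 * dTdt + 1e-3
        return 2e-3 * dTdt + 5e-3

    # Logged temperature history: time in minutes, temperature in deg C.
    t = np.linspace(0.0, 300.0, 601)
    T = 20.0 + 0.6 * t                      # a simple linear heating schedule
    dTdt = np.gradient(T, t)

    rate = np.array([loss_rate(Ti, dTi) for Ti, dTi in zip(T, dTdt)])
    mass_loss = np.trapz(rate, t)           # definite integral of rate over time
    print(f"predicted cumulative mass loss: {mass_loss:.2f} %")
    ```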

  19. r.randomwalk v1.0, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-09-01

    We introduce r.randomwalk, a flexible and multi-functional open source tool for backward- and forward-analyses of mass movement propagation. r.randomwalk builds on GRASS GIS, the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are: (i) multiple break criteria can be combined to compute an impact indicator score, (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter settings, resulting in an impact indicator index in the range 0-1, (iii) built-in functions for validation and visualization of the results are provided, (iv) observed landslides can be back-analyzed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk (i) for a single event, the Acheron Rock Avalanche in New Zealand, (ii) for landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) for lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.

  20. r.randomwalk v1, a multi-functional conceptual tool for mass movement routing

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Krenn, J.; Chu, H.-J.

    2015-12-01

    We introduce r.randomwalk, a flexible and multi-functional open-source tool for backward and forward analyses of mass movement propagation. r.randomwalk builds on GRASS GIS (Geographic Resources Analysis Support System - Geographic Information System), the R software for statistical computing and the programming languages Python and C. Using constrained random walks, mass points are routed from defined release pixels of one to many mass movements through a digital elevation model until a defined break criterion is reached. Compared to existing tools, the major innovative features of r.randomwalk are (i) multiple break criteria can be combined to compute an impact indicator score; (ii) the uncertainties of break criteria can be included by performing multiple parallel computations with randomized parameter sets, resulting in an impact indicator index in the range 0-1; (iii) built-in functions for validation and visualization of the results are provided; (iv) observed landslides can be back analysed to derive the density distribution of the observed angles of reach. This distribution can be employed to compute impact probabilities for each pixel. Further, impact indicator scores and probabilities can be combined with release indicator scores or probabilities, and with exposure indicator scores. We demonstrate the key functionalities of r.randomwalk for (i) a single event, the Acheron rock avalanche in New Zealand; (ii) landslides in a 61.5 km2 study area in the Kao Ping Watershed, Taiwan; and (iii) lake outburst floods in a 2106 km2 area in the Gunt Valley, Tajikistan.
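
    The sketch below illustrates the core routing idea of the two abstracts above under strong simplifications: mass points are released from a pixel of a toy digital elevation model, walk randomly to lower neighbours, and stop when the average angle of reach from the release point drops below a threshold; repeated walks accumulate an impact count per pixel. It is not the r.randomwalk code itself, which builds on GRASS GIS, R, Python and C; the DEM, cell size and threshold angle are invented.

    ```python
    # Hedged sketch of a constrained random walk with an angle-of-reach break
    # criterion, loosely following the idea described for r.randomwalk.
    import numpy as np

    rng = np.random.default_rng(42)
    cell = 10.0                                    # cell size in metres
    ny, nx = 60, 60
    y, x = np.mgrid[0:ny, 0:nx]
    dem = 500.0 - 3.0 * x + 2.0 * np.sin(y / 5.0)  # toy elevation model (m)

    def walk(start, angle_of_reach_deg=11.0, max_steps=500):
        """One constrained random walk; returns the list of visited cells."""
        tan_reach = np.tan(np.radians(angle_of_reach_deg))
        i, j = start
        z0 = dem[i, j]
        path = [(i, j)]
        for _ in range(max_steps):
            # candidate neighbours that are lower than the current cell
            nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)
                    and 0 <= i + di < ny and 0 <= j + dj < nx
                    and dem[i + di, j + dj] < dem[i, j]]
            if not nbrs:
                break
            i, j = nbrs[rng.integers(len(nbrs))]
            path.append((i, j))
            # break criterion: average slope from the release point falls
            # below the angle of reach
            dist = cell * np.hypot(i - start[0], j - start[1])
            if dist > 0 and (z0 - dem[i, j]) / dist < tan_reach:
                break
        return path

    impact = np.zeros_like(dem)
    for _ in range(200):                           # many walks -> indicator score
        for cell_ij in walk((30, 5)):
            impact[cell_ij] += 1
    impact_index = impact / impact.max()           # normalised to the range 0-1
    print("cells reached:", int((impact_index > 0).sum()))
    ```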

  1. The black tide model of QSOs

    NASA Technical Reports Server (NTRS)

    Young, P. J.; Shields, G. A.; Wheeler, J. C.

    1977-01-01

    The paper develops certain aspects of a model wherein a QSO is a massive black hole located in a dense galactic nucleus, with its growth and luminosity fueled by tidal disruption of passing stars. Cross sections for tidal disruption are calculated, taking into account the thermal energy of stars, relativistic effects, and partial disruption removing only the outer layers of a star. Accretion rates are computed for a realistic distribution of stellar masses and evolutionary phases, the effect of the black hole on the cluster distribution is examined, and the red-giant disruption rate is evaluated for hole masses of at least 300 million solar masses, the cutoff for disruption of main-sequence stars. The results show that this black-tide model can explain QSO luminosities of at least 1 trillion suns if the black hole remains nearly maximally Kerr as it grows above 100 million solar masses and if 'loss-cone' depletion of the number of stars in disruptive orbits is unimportant.

  2. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high-performance, highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long-running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.

  3. Communication-Efficient Arbitration Models for Low-Resolution Data Flow Computing

    DTIC Science & Technology

    1988-12-01

    Given a graph G = (V, E), weights w(v) for each v ∈ V and L(e) for each e ∈ E, and positive integers B and J, find a partition of V into disjoint...MIT/LCS/TR-218, Cambridge, Mass. Agerwala, Tilak, February 1982, "Data Flow Systems", Computer, pp. 10-13. Babb, Robert G., July 1984, "Parallel Processing with Large-Grain Data Flow Techniques," IEEE Computer 17, 7, pp. 55-61. Babb, Robert G., II, Lise Storc, and William C. Ragsdale, 1986, "A Large

  4. Coupling of rainfall-induced landslide triggering model with predictions of debris flow runout distances

    NASA Astrophysics Data System (ADS)

    Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani

    2014-05-01

    Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and runout paths of debris flows depend on the volume, composition and initiation zone of the released material, and these are prerequisites for accurate debris flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering (CHLT)' model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris flow paths were computed for landslides predicted with the CHLT model over a range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the predictability and impact assessment of such abruptly released mass.

  5. Physical-mathematical model of condensation process of the sub-micron dust capture in sprayer scrubber

    NASA Astrophysics Data System (ADS)

    Shilyaev, M. I.; Khromova, E. M.; Grigoriev, A. V.; Tumashova, A. V.

    2011-09-01

    A physical-mathematical model of the heat and mass exchange process and condensation capture of sub-micron dust particles on the droplets of dispersed liquid in a sprayer scrubber is proposed and analysed. A satisfactory agreement of computed results and experimental data on soot capturing from the cracking gases is obtained.

  6. An Eight-Parameter Function for Simulating Model Rocket Engine Thrust Curves

    ERIC Educational Resources Information Center

    Dooling, Thomas A.

    2007-01-01

    The toy model rocket is used extensively as an example of a realistic physical system. Teachers from grade school to the university level use them. Many teachers and students write computer programs to investigate rocket physics since the problem involves nonlinear functions related to air resistance and mass loss. This paper describes a nonlinear…

  7. Peer Review of “LDT Weight Reduction Study with Crash Model, Feasibility and Detailed Cost Analyses – Chevrolet Silverado 1500 Pickup”

    EPA Science Inventory

    The contractor will conduct an independent peer review of FEV’s light-duty truck (LDT) mass safety study, “Light-Duty Vehicle Weight Reduction Study with Crash Model, Feasibility and Detailed Cost Analysis – Silverado 1500”, and its corresponding computer-aided engineering (CAE) ...

  8. Holograms of a dynamical top quark

    NASA Astrophysics Data System (ADS)

    Clemens, Will; Evans, Nick; Scott, Marc

    2017-09-01

    We present holographic descriptions of dynamical electroweak symmetry breaking models that incorporate the top mass generation mechanism. The models allow computation of the spectrum in the presence of large anomalous dimensions due to walking and strong Nambu-Jona-Lasinio interactions. Technicolor and QCD dynamics are described by the bottom-up Dynamic AdS/QCD model for arbitrary gauge groups and numbers of quark flavors. An assumption about the running of the anomalous dimension of the quark bilinear operator is input, and the model then predicts the spectrum and decay constants for the mesons. We add Nambu-Jona-Lasinio interactions responsible for flavor physics from extended technicolor, top-color, etc., using Witten's multitrace prescription. We show that the key behaviors of a top condensation model can be reproduced. We study generation of the top mass in (walking) one-doublet and one-family technicolor models and with strong extended technicolor interactions. The models clearly reveal the tensions between the large top mass and precision data for δρ. The tunings needed to generate a model compatible with precision constraints are simply demonstrated.

  9. Box-modeling of bone and tooth phosphate oxygen isotope compositions as a function of environmental and physiological parameters.

    PubMed

    Langlois, C; Simon, L; Lécuyer, Ch

    2003-12-01

    A time-dependent box model is developed to calculate oxygen isotope compositions of bone phosphate as a function of environmental and physiological parameters. Input and output oxygen fluxes related to body water and bone reservoirs are scaled to the body mass. The oxygen fluxes are evaluated by stoichiometric scaling to the calcium accretion and resorption rates, assuming a pure hydroxylapatite composition for the bone and tooth mineral. The model shows how the diet composition, body mass, ambient relative humidity and temperature may control the oxygen isotope composition of bone phosphate. The model also computes how bones and teeth record short-term variations in relative humidity, air temperature and δ18O of drinking water, depending on body mass. The documented diversity of oxygen isotope fractionation equations for vertebrates is accounted for by our model when for each specimen the physiological and diet parameters are adjusted in the living range of environmental conditions.
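
    As a rough illustration of the box-model idea, the sketch below integrates a single body-water reservoir whose input and output oxygen fluxes scale with body mass, and tracks the δ18O of body water toward steady state. The flux scalings, fractionation term and parameter values are placeholders and not those of the published model.

    ```python
    # Hedged sketch of a time-dependent box model for body-water delta-18O.
    # Fluxes scale with body mass via an allometric exponent; all numbers are
    # illustrative, not the calibrated values of the published model.
    def simulate(body_mass_kg=50.0, d18O_water=-8.0, rel_humidity=0.5,
                 days=200.0, dt=0.1):
        flux_in = 0.05 * body_mass_kg ** 0.75        # water intake (L/day), allometric
        flux_out = flux_in                            # steady water balance
        reservoir = 0.6 * body_mass_kg                # body water pool (L)
        # crude fractionation: evaporated/exhaled water is depleted, more so in dry air
        eps_evap = -8.0 * (1.0 - rel_humidity)
        d18O_body = d18O_water                        # initial condition
        for _ in range(int(days / dt)):
            out_comp = d18O_body + eps_evap
            d_dt = (flux_in * d18O_water - flux_out * out_comp) / reservoir
            d18O_body += d_dt * dt
        return d18O_body

    print(f"steady-state body-water d18O ~ {simulate():.2f} permil")
    ```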

  10. Shape Models of Asteroids as a Missing Input for Bulk Density Determinations

    NASA Astrophysics Data System (ADS)

    Hanuš, Josef

    2015-07-01

    To determine a meaningful bulk density of an asteroid, both accurate volume and mass estimates are necessary. The volume can be computed by scaling the size of the 3D shape model to fit the disk-resolved images or stellar occultation profiles, which are available in the literature or through collaborations. This work provides a list of asteroids, for which (i) there are already mass estimates with reported uncertainties better than 20% or their mass will be most likely determined in the future from Gaia astrometric observations, and (ii) their 3D shape models are currently unknown. Additional optical lightcurves are necessary to determine the convex shape models of these asteroids. The main aim of this article is to motivate the observers to obtain lightcurves of these asteroids, and thus contribute to their shape model determinations. Moreover, a web page https://asteroid-obs.oca.eu, which maintains an up-to-date list of these objects to assure efficiency and to avoid any overlapping efforts, was created.

  11. Star and Planet Formation through Cosmic Time

    NASA Astrophysics Data System (ADS)

    Lee, Aaron Thomas

    The computational advances of the past several decades have allowed theoretical astrophysics to proceed at a dramatic pace. Numerical simulations can now simulate the formation of individual molecules all the way up to the evolution of the entire universe. Observational astrophysics is producing data at a prodigious rate, and sophisticated analysis techniques of large data sets continue to be developed. It is now possible for terabytes of data to be effectively turned into stunning astrophysical results. This is especially true for the field of star and planet formation. Theorists are now simulating the formation of individual planets and stars, and observing facilities are finally capturing snapshots of these processes within the Milky Way galaxy and other galaxies. While a coherent theory remains incomplete, great strides have been made toward this goal. This dissertation discusses several projects that develop models of star and planet formation. This work spans large spatial and temporal scales: from the AU-scale of protoplanetary disks all the way up to the parsec-scale of star-forming clouds, and taking place in both contemporary environments like the Milky Way galaxy and primordial environments at redshifts of z ≈ 20. In particular, I show that planet formation need not proceed in incremental stages, where planets grow from millimeter-sized dust grains all the way up to planets, but instead can proceed directly from small dust grains to large kilometer-sized boulders. The requirements for this model to operate effectively are supported by observations. Additionally, I draw suspicion toward one model for how high-mass stars (stars with masses exceeding 8 Msun) form, which postulates that high-mass stars are built up from the gradual accretion of mass from the cloud onto low-mass stars. I show that magnetic fields in star-forming clouds thwart this transfer of mass, and that it is instead likely that high-mass stars are created from the gravitational collapse of large clouds. This work also provides a sub-grid model for computational codes that employ sink particles accreting from magnetized gas. Finally, I analyze the role that radiation plays in determining the final masses of the first stars to ever form in the universe. These stars formed in starkly different environments than stars form in today, and the role of the direct radiation from these stars turns out to be a crucial component of primordial star formation theory. These projects use a variety of computational tools, including spectral hydrodynamics codes, magnetohydrodynamics grid codes that employ adaptive mesh refinement techniques, and long-characteristic ray tracing methods. I develop and describe a long-characteristic ray tracing method for modeling hydrogen-ionizing radiation from stars. Additionally, I have developed Monte Carlo routines that convert hydrodynamic data used in smoothed particle hydrodynamics codes for use in grid-based codes. Both of these advances will find use beyond simulations of star and planet formation and benefit the astronomical community at large.

  12. Early science from the Pan-STARRS1 Optical Galaxy Survey (POGS): Maps of stellar mass and star formation rate surface density obtained from distributed-computing pixel-SED fitting

    NASA Astrophysics Data System (ADS)

    Thilker, David A.; Vinsen, K.; Galaxy Properties Key Project, PS1

    2014-01-01

    To measure resolved galactic physical properties unbiased by the mask of recent star formation and dust features, we are conducting a citizen-scientist enabled nearby galaxy survey based on the unprecedented optical (g,r,i,z,y) imaging from Pan-STARRS1 (PS1). The PS1 Optical Galaxy Survey (POGS) covers 3π steradians (75% of the sky), about twice the footprint of SDSS. Whenever possible we also incorporate ancillary multi-wavelength image data from the ultraviolet (GALEX) and infrared (WISE, Spitzer) spectral regimes. For each cataloged nearby galaxy with a reliable redshift estimate of z < 0.05 - 0.1 (dependent on donated CPU power), publicly-distributed computing is being harnessed to enable pixel-by-pixel spectral energy distribution (SED) fitting, which in turn provides maps of key physical parameters such as the local stellar mass surface density, crude star formation history, and dust attenuation. With pixel SED fitting output we will then constrain parametric models of galaxy structure in a more meaningful way than ordinarily achieved. In particular, we will fit multi-component (e.g. bulge, bar, disk) galaxy models directly to the distribution of stellar mass rather than surface brightness in a single band, which is often locally biased. We will also compute non-parametric measures of morphology such as concentration and asymmetry using the POGS stellar mass and SFR surface density images. We anticipate studying how galactic substructures evolve by comparing our results with simulations and against more distant imaging surveys, some of which will also be processed in the POGS pipeline. The reliance of our survey on citizen-scientist volunteers provides a world-wide opportunity for education. We developed an interactive interface which highlights the science being produced by each volunteer’s own CPU cycles. The POGS project has already proven popular amongst the public, attracting about 5000 volunteers with nearly 12,000 participating computers, and is growing rapidly.

  13. Effect of different cosmologies on the galaxy stellar mass function

    NASA Astrophysics Data System (ADS)

    Lopes, Amanda R.; Gruppioni, C.; Ribeiro, M. B.; Pozzetti, L.; February, S.; Ilbert, O.; Pozzi, F.

    2017-11-01

    The goal of this paper is to understand how the underlying cosmological models may affect the analysis of the stellar masses in galaxies. We computed the galaxy stellar mass function (GSMF) assuming the observationally constrained Lemaître-Tolman-Bondi (LTB) `giant-void' models and compared them with the results from the standard cosmological model. Based on a sample of 220 000 Ks-band selected galaxies from the UltraVISTA data, we computed the GSMF up to z ≈ 4 assuming different cosmologies, since, from a cosmological perspective, the two quantities that affect the stellar mass estimation are the luminosity distance and time. The results show that the stellar mass decreased on average by ~1.1-27.1 per cent depending on the redshift value. For the GSMF, we fitted a double-Schechter function to the data and verified that a change is only seen in two parameters, M^{*} and φ^{*}_{1}, but always with less than a 3σ significance. We also carried out an additional analysis for the blue and red populations in order to verify a possible change in the galaxy evolution scenario. The results showed that the GSMF derived with the red population sample is more affected by the change of cosmology than the blue one. We also found that the LTB models overestimated the number density of galaxies with M < 10^{11} M_{⊙}, and underestimated it for M > 10^{11} M_{⊙}, as compared to the standard model over the whole studied redshift range. This feature is noted in the complete, red plus blue, sample. When we compared the general behaviour of the GSMF derived from the alternative cosmological models with the one based on the standard cosmology, we found that the variation was not large enough to change the shape of the function. Hence, the GSMF was found to be robust under this change of cosmology. This means that all physical interpretations of the GSMF based on the standard cosmological model remain valid in the LTB cosmology.
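
    For reference, the sketch below evaluates the double-Schechter form commonly fitted to galaxy stellar mass functions, which is the functional form referred to in the abstract; the parameter values are arbitrary examples, not the fitted values of the paper.

    ```python
    # Hedged sketch: a double-Schechter galaxy stellar mass function,
    # phi(M) dM = exp(-M/Mstar) * [phi1*(M/Mstar)**a1 + phi2*(M/Mstar)**a2] dM/Mstar.
    # Parameter values below are arbitrary illustrations.
    import numpy as np

    def double_schechter(m, m_star, phi1, alpha1, phi2, alpha2):
        """phi(M): number density per unit mass interval (illustrative units)."""
        x = m / m_star
        return np.exp(-x) * (phi1 * x**alpha1 + phi2 * x**alpha2) / m_star

    masses = np.logspace(8, 12, 50)                     # stellar masses in Msun
    gsmf = double_schechter(masses, m_star=10**10.8,
                            phi1=1.5e-3, alpha1=-0.5,
                            phi2=0.5e-3, alpha2=-1.5)
    # convert to number density per dex of stellar mass, a common plotting convention
    gsmf_per_dex = gsmf * masses * np.log(10)
    print(gsmf_per_dex[:3])
    ```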

  14. Small Body GN and C Research Report: G-SAMPLE - An In-Flight Dynamical Method for Identifying Sample Mass [External Release Version

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Bayard, David S.

    2006-01-01

    G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
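
    A toy version of the estimation idea: during a known thruster firing, the measured acceleration depends on the total spacecraft mass, so the collected sample mass can be recovered by least squares from thrust and acceleration data. The noise level, spacecraft mass and thrust profile below are invented for illustration; the actual G-SAMPLE estimator is a maximum-likelihood formulation over a full spacecraft dynamics model with force-sensor measurements.

    ```python
    # Hedged sketch: estimate collected sample mass from thrust and measured
    # acceleration, F = (m_sc + m_sample) * a. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    m_sc = 450.0                 # known dry spacecraft mass (kg)
    m_sample_true = 1.0          # unknown collected sample mass (kg)

    thrust = rng.uniform(8.0, 12.0, size=500)                    # commanded thrust (N)
    accel = thrust / (m_sc + m_sample_true)
    accel_meas = accel + rng.normal(0.0, 2e-4, size=accel.size)  # sensor noise

    # Least-squares estimate of total mass from F = m_total * a, then subtract m_sc.
    m_total_hat = np.sum(thrust * accel_meas) / np.sum(accel_meas**2)
    m_sample_hat = m_total_hat - m_sc
    print(f"estimated sample mass: {m_sample_hat * 1000:.0f} g")
    ```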

  15. Binary black hole coalescence in the large-mass-ratio limit: The hyperboloidal layer method and waveforms at null infinity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernuzzi, Sebastiano; Nagar, Alessandro; Zenginoglu, Anil

    2011-10-15

    We compute and analyze the gravitational waveform emitted to future null infinity by a system of two black holes in the large-mass-ratio limit. We consider the transition from the quasiadiabatic inspiral to plunge, merger, and ringdown. The relative dynamics is driven by a leading-order-in-the-mass-ratio, 5PN-resummed, effective-one-body (EOB) analytic radiation reaction. To compute the waveforms, we solve the Regge-Wheeler-Zerilli equations in the time domain on a spacelike foliation, which coincides with the standard Schwarzschild foliation in the region including the motion of the small black hole, and is globally hyperboloidal, allowing us to include future null infinity in the computational domain by compactification. This method is called the hyperboloidal layer method, and is discussed here for the first time in a study of the gravitational radiation emitted by black hole binaries. We consider binaries characterized by five mass ratios, ν = 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}, that are primary targets of space-based or third-generation gravitational wave detectors. We show significant phase differences between finite-radius and null-infinity waveforms. We test, in our context, the reliability of the extrapolation procedure routinely applied to numerical relativity waveforms. We present an updated calculation of the final and maximum gravitational recoil imparted to the merger remnant by the gravitational wave emission, v_{kick}^{end}/(cν^{2}) = 0.04474 ± 0.00007 and v_{kick}^{max}/(cν^{2}) = 0.05248 ± 0.00008. As a self-consistency test of the method, we show an excellent fractional agreement (even during the plunge) between the 5PN EOB-resummed mechanical angular momentum loss and the gravitational wave angular momentum flux computed at null infinity. New results concerning the radiation emitted from unstable circular orbits are also presented. The high-accuracy waveforms computed here could be considered for the construction of template banks or for calibrating analytic models such as the effective-one-body model.

  16. Eruptive event generator based on the Gibson-Low magnetic configuration

    NASA Astrophysics Data System (ADS)

    Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.

    2017-08-01

    Coronal mass ejections (CMEs), a form of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become increasingly accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on the Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.

  17. Dark matter admixed strange quark stars in the Starobinsky model

    NASA Astrophysics Data System (ADS)

    Lopes, Ilídio; Panotopoulos, Grigoris

    2018-01-01

    We compute the mass-to-radius profiles for dark matter admixed strange quark stars in the Starobinsky model of modified gravity. For quark matter, we assume the MIT bag model, while self-interacting dark matter inside the star is modeled as a Bose-Einstein condensate with a polytropic equation of state. We numerically integrate the structure equations in the Einstein frame, adopting the two-fluid formalism, and we treat the curvature correction term nonperturbatively. The effects on the properties of the stars of the amount of dark matter as well as the higher curvature term are investigated. We find that strange quark stars (in agreement with current observational constraints) with the highest masses are equally affected by dark matter and modified gravity.

  18. On two special values of temperature factor in hypersonic flow stagnation point

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2018-03-01

    The properties of a mathematical model for the control of heat and mass transfer in the laminar boundary layer on the permeable cylindrical and spherical surfaces of a hypersonic aircraft are investigated. Systems of nonlinear algebraic equations are obtained for two special values of the temperature factor at the hypersonic-flow stagnation point. The bijectivity of the mappings between the local heat and mass transfer parameters and the controls is established. Results of computational experiments are presented: the domains of allowed "heat-friction" values are obtained.

  19. Scalar correlator at O(α_s^4), Higgs boson decay into bottom quarks, and bounds on the light-quark masses.

    PubMed

    Baikov, P A; Chetyrkin, K G; Kühn, J H

    2006-01-13

    We compute, for the first time, the absorptive part of the massless correlator of two quark scalar currents in five loops. As physical applications, we consider the O(α_s^4) corrections to the decay rate of the standard model Higgs boson into quarks, as well as the constraints on the strange quark mass following from QCD sum rules.

  20. A Large Stellar Evolution Database for Population Synthesis Studies. I. Scaled Solar Models and Isochrones

    NASA Astrophysics Data System (ADS)

    Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Castelli, Fiorella

    2004-09-01

    We present a large and updated stellar evolution database for low-, intermediate-, and high-mass stars in a wide metallicity range, suitable for studying Galactic and extragalactic simple and composite stellar populations using population synthesis techniques. The stellar mass range is between ~0.5 and 10 Msolar with a fine mass spacing. The metallicity [Fe/H] comprises 10 values ranging from -2.27 to 0.40, with a scaled solar metal distribution. The initial He mass fraction ranges from Y=0.245, for the more metal-poor composition, up to 0.303 for the more metal-rich one, with ΔY/ΔZ~1.4. For each adopted chemical composition, the evolutionary models have been computed without (canonical models) and with overshooting from the Schwarzschild boundary of the convective cores during the central H-burning phase. Semiconvection is included in the treatment of core convection during the He-burning phase. The whole set of evolutionary models can be used to compute isochrones in a wide age range, from ~30 Myr to ~15 Gyr. Both evolutionary models and isochrones are available in several observational planes, employing an updated set of bolometric corrections and color-Teff relations computed for this project. The number of points along the models and the resulting isochrones is selected in such a way that interpolation for intermediate metallicities not contained in the grid is straightforward; a simple quadratic interpolation produces results of sufficient accuracy for population synthesis applications. We compare our isochrones with results from a series of widely used stellar evolution databases and perform some empirical tests for the reliability of our models. Since this work is devoted to scaled solar chemical compositions, we focus our attention on the Galactic disk stellar populations, employing multicolor photometry of unevolved field main-sequence stars with precise Hipparcos parallaxes, well-studied open clusters, and one eclipsing binary system with precise measurements of masses, radii, and [Fe/H] of both components. We find that the predicted metallicity dependence of the location of the lower, unevolved main sequence in the color magnitude diagram (CMD) appears in satisfactory agreement with empirical data. When comparing our models with CMDs of selected, well-studied, open clusters, once again we were able to properly match the whole observed evolutionary sequences by assuming cluster distance and reddening estimates in satisfactory agreement with empirical evaluations of these quantities. In general, models including overshooting during the H-burning phase provide a better match to the observations, at least for ages below ~4 Gyr. At [Fe/H] around solar and higher ages (i.e., smaller convective cores) before the onset of radiative cores, the selected efficiency of core overshooting may be too high in our model, as well as in various other models in the literature. Since we also provide canonical models, the reader is strongly encouraged to always compare the results from both sets in this critical age range.

  1. Chemistry Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1984

    1984-01-01

    Presents an experiment which links mass spectrometry to gas chromatography. Also presents a simulation of iron extraction using a ZX81 computer and discussions of Fehling versus Benedict's solutions, transition metal ammine complexes, electrochemical and other chemical series, and a simple model of dynamic equilibria. (JN)

  2. Estimating ground-water inflow to lakes in central Florida using the isotope mass-balance approach

    USGS Publications Warehouse

    Sacks, Laura A.

    2002-01-01

    The isotope mass-balance approach was used to estimate ground-water inflow to 81 lakes in the central highlands and coastal lowlands of central Florida. The study area is characterized by a subtropical climate and numerous lakes in a mantled karst terrain. Ground-water inflow was computed using both steady-state and transient formulations of the isotope mass-balance equation. More detailed data were collected from two study lakes, including climatic, hydrologic, and isotopic (hydrogen and oxygen isotope ratio) data. For one of these lakes (Lake Starr), ground-water inflow was independently computed from a water-budget study. Climatic and isotopic data collected from the two lakes were similar even though they were in different physiographic settings about 60 miles apart. Isotopic data from all of the study lakes plotted on an evaporation trend line, which had a very similar slope to the theoretical slope computed for Lake Starr. These similarities suggest that data collected from the detailed study lakes can be extrapolated to the rest of the study area. Ground-water inflow computed using the isotope mass-balance approach ranged from 0 to more than 260 inches per year (or 0 to more than 80 percent of total inflows). Steady-state and transient estimates of ground-water inflow were very similar. Computed ground-water inflow was most sensitive to uncertainty in variables used to calculate the isotopic composition of lake evaporate (isotopic compositions of lake water and atmospheric moisture and climatic variables). Transient results were particularly sensitive to changes in the isotopic composition of lake water. Uncertainty in ground-water inflow results is considerably less for lakes with higher ground-water inflow than for lakes with lower ground-water inflow. Because of these uncertainties, the isotope mass-balance approach is better used to distinguish whether ground-water inflow quantities fall within certain ranges of values, rather than for precise quantification. The lakes fit into three categories based on their range of ground-water inflow: low (less than 25 percent of total inflows), medium (25-50 percent of inflows), and high (greater than 50 percent of inflows). The majority of lakes in the coastal lowlands had low ground-water inflow, whereas the majority of lakes in the central highlands had medium to high ground-water inflow. Multiple linear regression models were used to predict ground-water inflow to lakes. These models help identify basin characteristics that are important in controlling ground-water inflow to Florida lakes. Significant explanatory variables include: ratio of basin area to lake surface area, depth to the Upper Floridan aquifer, maximum lake depth, and fraction of wetlands in the basin. Models were improved when lake water-quality data (nitrate, sodium, and iron concentrations) were included, illustrating the link between ground-water geochemistry and lake chemistry. Regression models that considered lakes within specific geographic areas were generally poorer than models for the entire study area. Regression results illustrate how more simplified models based on basin and lake characteristics can be used to estimate ground-water inflow. Although the uncertainty in the amount of ground-water inflow to individual lakes is high, the isotope mass-balance approach was useful in comparing the range of ground-water inflow for numerous Florida lakes. 
Results were also helpful in understanding differences in the geographic distribution of ground-water inflow between the coastal lowlands and central highlands. In order to use the isotope mass-balance approach to estimate inflow for multiple lakes, it is essential that all the lakes are sampled during the same time period and that detailed isotopic, hydrologic, and climatic data are collected over this same period of time. Isotopic data for Florida lakes can change over time, both seasonally and interannually, primarily because of differ
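
    The steady-state bookkeeping behind the approach can be sketched as below: the water and isotope mass balances for a lake are combined and solved for the ground-water inflow term. The specific fluxes and δ values are invented for illustration, and the real study also had to estimate the isotopic composition of lake evaporate from climatic data.

    ```python
    # Hedged sketch: steady-state isotope mass balance for a seepage lake,
    #   water:   P + G = E + O
    #   isotope: P*dP + G*dG = E*dE + O*dL   (outflow O leaves at lake composition dL)
    # solved for ground-water inflow G. All fluxes (inches/yr) and delta values
    # (permil, d18O) are invented for illustration.
    def groundwater_inflow(P, E, dP, dG, dL, dE):
        """Ground-water inflow from the combined water and isotope balances."""
        return (E * (dE - dL) + P * (dL - dP)) / (dG - dL)

    P, E = 50.0, 55.0            # precipitation and evaporation on the lake surface
    dP, dG, dL, dE = -4.0, -4.0, 2.0, -14.0
    G = groundwater_inflow(P, E, dP, dG, dL, dE)
    O = P + G - E                # outflow (surface plus ground water) closes the budget
    print(f"ground-water inflow ~ {G:.0f} in/yr "
          f"({100 * G / (P + G):.0f}% of total inflow), outflow {O:.0f} in/yr")
    ```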

  3. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.

    1983-01-01

    The physical modeling embodied in the computational fluid dynamics codes is discussed. The objectives were to identify shortcomings in the models and to provide a program plan to improve the quantitative accuracy. The physical models studied were for: turbulent mass and momentum transport, heat release, liquid fuel spray, and gaseous radiation. The approach adopted was to test the models against appropriate benchmark-quality test cases from experiments in the literature for the constituent flows that together make up the combustor real flow.

  4. Gradient-free MCMC methods for dynamic causal modelling

    DOE PAGES

    Sengupta, Biswa; Friston, Karl J.; Penny, Will D.

    2015-03-14

    Here, we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density -- albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler).

  5. Computation of inlet reference plane flow-field for a subscale free-jet forebody/inlet model and comparison to experimental data

    NASA Astrophysics Data System (ADS)

    McClure, M. D.; Sirbaugh, J. R.

    1991-02-01

    The computational fluid dynamics (CFD) computer code PARC3D was used to predict the inlet reference plane (IRP) flow field for a side-mounted inlet and forebody simulator in a free jet for five different flow conditions. The calculations were performed for free-jet conditions, mass flow rates, and inlet configurations that matched the free-jet test conditions. In addition, viscous terms were included in the main flow so that the viscous free-jet shear layers emanating from the free-jet nozzle exit were modeled. A measure of the predicted accuracy was determined as a function of free-stream Mach number, angle-of-attack, and sideslip angle.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, C.G.

    Starting with the initial understanding that pulsation in variable stars is caused by the heat engine of Hydrogen and Helium ionization in their atmospheres (A.S. Eddington in Cox 1980) it was soon realized that non-linear effects were responsible for the detailed features on their light and velocity curves. With the advent of the computer we were able to solve the coupled set of hydrodynamics and radiation diffusion equations to model these non-linear features. This paper describes some recent model results for long period (LP) Cepheids in an attempt to get another handle on Cepheid masses. Section II discusses these results and Section III considers the implications of these model results on the problem of the Cepheid mass discrepancy.

  7. Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code

    NASA Technical Reports Server (NTRS)

    Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.

    2003-01-01

    Numerical modeling of the Pulsed Inductive Thruster with the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation to the experimental data for a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and mass-injection scheme were investigated and shown to produce only trivial changes in the overall performance. An idealized model for these energy levels and propellants deduces that the energy expended in the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.

  8. Standard model anatomy of WIMP dark matter direct detection. I. Weak-scale matching

    NASA Astrophysics Data System (ADS)

    Hill, Richard J.; Solon, Mikhail P.

    2015-02-01

    We present formalism necessary to determine weak-scale matching coefficients in the computation of scattering cross sections for putative dark matter candidates interacting with the Standard Model. We pay particular attention to the heavy-particle limit. A consistent renormalization scheme in the presence of nontrivial residual masses is implemented. Two-loop diagrams appearing in the matching to gluon operators are evaluated. Details are given for the computation of matching coefficients in the universal limit of WIMP-nucleon scattering for pure states of arbitrary quantum numbers, and for singlet-doublet and doublet-triplet mixed states.

  9. Local deformation for soft tissue simulation

    PubMed Central

    Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2016-01-01

    This paper presents a new methodology to localize the deformation range to improve the computational efficiency for soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method is used based on elastic theory to estimate the stress in soft tissues according to a depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining the modeling realism. PMID:27286482

  10. Computational modeling for prediction of the shear stress of three-dimensional isotropic and aligned fiber networks.

    PubMed

    Park, Seungman

    2017-09-01

    Interstitial flow (IF) is a creeping flow through the interstitial space of the extracellular matrix (ECM). IF plays a key role in diverse biological functions, such as tissue homeostasis, cell function and behavior. Currently, most studies that have characterized IF have focused on the permeability of ECM or shear stress distribution on the cells, but less is known about the prediction of shear stress on the individual fibers or fiber networks despite its significance in the alignment of matrix fibers and cells observed in fibrotic or wound tissues. In this study, I developed a computational model to predict shear stress for different structured fibrous networks. To generate isotropic models, a random growth algorithm and a second-order orientation tensor were employed. Then, a three-dimensional (3D) solid model was created using computer-aided design (CAD) software for the aligned models (i.e., parallel, perpendicular and cubic models). Subsequently, a tetrahedral unstructured mesh was generated and flow solutions were calculated by solving equations for mass and momentum conservation for all models. Through the flow solutions, I estimated permeability using Darcy's law. Average shear stress (ASS) on the fibers was calculated by averaging the wall shear stress of the fibers. By using nonlinear surface fitting of permeability, viscosity, velocity, porosity and ASS, I devised new computational models. Overall, the developed models showed that higher porosity induced higher permeability, as previous empirical and theoretical models have shown. The present computational models matched the permeability of previous models well, which justifies our computational approach. ASS tended to increase linearly with respect to inlet velocity and dynamic viscosity, whereas permeability remained almost unchanged. Finally, the developed model nicely predicted the ASS values that had been directly estimated from computational fluid dynamics (CFD). The present computational models will provide new tools for predicting accurate functional properties and designing fibrous porous materials, thereby significantly advancing tissue engineering. Copyright © 2017 Elsevier B.V. All rights reserved.
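
    The permeability step mentioned above follows directly from Darcy's law; the sketch below shows the arithmetic with made-up values standing in for a CFD flow solution (pressure drop, flow rate and domain geometry), not the paper's data.

    ```python
    # Hedged sketch: permeability from Darcy's law, k = Q * mu * L / (A * dP),
    # using invented values standing in for a CFD flow solution.
    mu = 1.0e-3        # dynamic viscosity (Pa*s), roughly water
    L = 100e-6         # domain length in the flow direction (m)
    A = (100e-6) ** 2  # cross-sectional area (m^2)
    dP = 10.0          # pressure drop across the domain (Pa)
    Q = 2.0e-13        # volumetric flow rate through the domain (m^3/s)

    k = Q * mu * L / (A * dP)
    print(f"permeability k ~ {k:.3e} m^2")
    ```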

  11. The Path of the Blind Watchmaker: A Model of Evolution

    DTIC Science & Technology

    2011-04-06

    We claim that computational biology has now reached the point that astronomy reached when it began to look backward in time to the Big Bang. Our goal is to look backward in...treatment. ...the evolutionary process itself, in fact, created it. When astronomy reached a critical mass of theory, technology, and observational data, astronomers

  12. Comparative Study of Shrinkage and Non-Shrinkage Model of Food Drying

    NASA Astrophysics Data System (ADS)

    Shahari, N.; Jamil, N.; Rasmani, KA.

    2016-08-01

    A single-phase heat and mass model is commonly used to represent the moisture and temperature distribution during the drying of food. Several effects of the drying process, such as physical and structural changes, have been considered in order to increase understanding of the movement of water and temperature. However, the comparison between the heat and mass equations with and without structural change (in terms of shrinkage), which can affect the accuracy of the prediction model, has been little investigated. In this paper, two mathematical models describing the heat and mass transfer in food, with and without the assumption of structural change, were analysed. The equations were solved using the finite difference method. A converted coordinate system was introduced in the numerical computations for the shrinkage model. The results show that the shrinkage model predicts a higher temperature at a specific time than the non-shrinkage model. Furthermore, the predicted moisture content decreased faster at a specific time when the shrinkage effect was included in the model.
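
    A bare-bones sketch of the finite-difference approach mentioned above: explicit time-stepping of a 1D moisture diffusion equation in a drying slab, with an optional uniform shrinkage of the slab thickness as moisture is lost. The diffusivity, geometry and shrinkage law are illustrative assumptions, not the paper's coupled heat and mass model or its converted coordinate system.

    ```python
    # Hedged sketch: explicit finite differences for 1D moisture diffusion in a
    # drying slab, with an optional uniform-shrinkage rule. All parameters are
    # illustrative placeholders.
    import numpy as np

    def dry(shrink=False, n=51, L0=0.01, D=1e-9, M0=3.0, Me=0.1,
            t_end=3600.0, beta=0.002):
        L = L0
        M = np.full(n, M0)                  # moisture content (dry basis)
        t = 0.0
        while t < t_end:
            dx = L / (n - 1)
            dt = 0.4 * dx**2 / D            # stability limit for explicit scheme
            M[0] = M[-1] = Me               # surfaces in equilibrium with drying air
            M[1:-1] += D * dt / dx**2 * (M[2:] - 2 * M[1:-1] + M[:-2])
            if shrink:                      # slab thickness shrinks with water loss
                L = L0 * (1.0 - beta * (M0 - M.mean()))
            t += dt
        return M.mean(), L

    for flag in (False, True):
        Mbar, L = dry(shrink=flag)
        print(f"shrinkage={flag}: mean moisture {Mbar:.2f}, thickness {L * 1000:.2f} mm")
    ```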

  13. First assembly times and equilibration in stochastic coagulation-fragmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Orsogna, Maria R.; Department of Mathematics, CSUN, Los Angeles, California 91330-8313; Lei, Qi

    2015-07-07

    We develop a fully stochastic theory for coagulation and fragmentation (CF) in a finite system with a maximum cluster size constraint. The process is modeled using a high-dimensional master equation for the probabilities of cluster configurations. For certain realizations of total mass and maximum cluster sizes, we find exact analytical results for the expected equilibrium cluster distributions. If coagulation is fast relative to fragmentation and if the total system mass is indivisible by the mass of the largest allowed cluster, we find a mean cluster-size distribution that is strikingly broader than that predicted by the corresponding mass-action equations. Combinations of total mass and maximum cluster size under which equilibration is accelerated, eluding late-stage coarsening, are also delineated. Finally, we compute the mean time it takes particles to first assemble into a maximum-sized cluster. Through careful state-space enumeration, the scaling of mean assembly times is derived for all combinations of total mass and maximum cluster size. We find that CF accelerates assembly relative to monomer kinetics only in special cases. All of our results hold in the infinite system limit and can only be derived from a high-dimensional discrete stochastic model, highlighting how classical mass-action models of self-assembly can fail.
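
    A compact stochastic sketch of the process described above: a Gillespie-type simulation of coagulation and fragmentation of clusters with a maximum allowed size, recording the first time a maximum-sized cluster appears. The rates, total mass and size cutoff are illustrative choices, not values from the paper, and the full analysis there works with the master equation rather than simulation.

    ```python
    # Hedged sketch: Gillespie simulation of stochastic coagulation-fragmentation
    # with a maximum cluster size, recording the first assembly time of a
    # maximum-sized cluster. Rates and sizes are illustrative.
    import random

    def first_assembly_time(total_mass=30, max_size=6, k_coag=1.0, k_frag=0.1, seed=0):
        rng = random.Random(seed)
        clusters = [1] * total_mass          # start from monomers
        t = 0.0
        while max(clusters) < max_size:
            # enumerate allowed events and their propensities
            events = []
            n = len(clusters)
            for i in range(n):               # coagulation of pair (i, j)
                for j in range(i + 1, n):
                    if clusters[i] + clusters[j] <= max_size:
                        events.append(("coag", i, j, k_coag))
            for i in range(n):               # fragmentation: split off one monomer
                if clusters[i] > 1:
                    events.append(("frag", i, None, k_frag))
            total_rate = sum(e[3] for e in events)
            if total_rate == 0.0:
                return None                  # frozen configuration (not expected here)
            t += rng.expovariate(total_rate)
            # choose an event with probability proportional to its rate
            pick = rng.uniform(0.0, total_rate)
            acc = 0.0
            for kind, i, j, rate in events:
                acc += rate
                if pick <= acc:
                    break
            if kind == "coag":
                merged = clusters[i] + clusters[j]
                clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
                clusters.append(merged)
            else:
                clusters[i] -= 1
                clusters.append(1)
        return t

    times = [t for t in (first_assembly_time(seed=s) for s in range(20)) if t is not None]
    print(f"mean first assembly time ~ {sum(times) / len(times):.2f} (arbitrary units)")
    ```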

  14. The late behavior of supernova 1987A. I - The light curve. II - Gamma-ray transparency of the ejecta

    NASA Technical Reports Server (NTRS)

    Arnett, W. David; Fu, Albert

    1989-01-01

    Observations of the late (t = 20-1500 days) bolometric light curve and the gamma-lines and X-rays from supernova 1987A are compared to theoretical models. It is found that 0.073 ± 0.015 solar masses of freshly synthesized Ni-56 must be present to fit the bolometric light curve. The results place limits on the luminosity and presumed period of the newly formed pulsar/neutron star. In the second half of the paper, the problem of computing the luminosities in gamma-ray lines and in X-rays from supernova 1987A is addressed. High-energy observations suggest the development of large-scale clumping and bubbling of radioactive material in the ejecta. A model is proposed with a hydrogen envelope mass of about 7 solar masses, homologous scale expansion velocities of about 3000 km/s, and an approximately uniform mass distribution.
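
    The radioactive-decay energy input implied above can be sketched with the standard two-stage 56Ni → 56Co → 56Fe decay chain; the code below evaluates the instantaneous decay power for a given nickel mass, assuming full trapping of the decay energy. The per-solar-mass normalisations and e-folding times are standard approximate literature values quoted from memory, so treat them as indicative only.

    ```python
    # Hedged sketch: instantaneous radioactive decay power from the
    # Ni-56 -> Co-56 -> Fe-56 chain, assuming complete trapping of decay energy.
    # Normalisations (~6.45e43 and ~1.45e43 erg/s per Msun of Ni-56) and
    # e-folding times (8.8 d and 111.3 d) are approximate standard values.
    import numpy as np

    def decay_power(t_days, m_ni_msun=0.073):
        """Decay power in erg/s at time t_days after explosion."""
        tau_ni, tau_co = 8.8, 111.3
        return m_ni_msun * (6.45e43 * np.exp(-t_days / tau_ni)
                            + 1.45e43 * np.exp(-t_days / tau_co))

    for t in (100.0, 400.0, 800.0):
        print(f"t = {t:4.0f} d: L ~ {decay_power(t):.2e} erg/s")
    ```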

  15. Neutrino-heated stars and broad-line emission from active galactic nuclei

    NASA Technical Reports Server (NTRS)

    Macdonald, James; Stanev, Todor; Biermann, Peter L.

    1991-01-01

    Nonthermal radiation from active galactic nuclei indicates the presence of highly relativistic particles. The interaction of these high-energy particles with matter and photons gives rise to a flux of high-energy neutrinos. In this paper, the influence of the expected high neutrino fluxes on the structure and evolution of single, main-sequence stars is investigated. Sequences of models of neutrino-heated stars in thermal equilibrium are presented for masses 0.25, 0.5, 0.8, and 1.0 solar mass. In addition, a set of evolutionary sequences for mass 0.5 solar mass have been computed for different assumed values for the incident neutrino energy flux. It is found that winds driven by the heating due to high-energy particles and hard electromagnetic radiation of the outer layers of neutrino-bloated stars may satisfy the requirements of the model of Kazanas (1989) for the broad-line emission clouds in active galactic nuclei.

  16. Novel parametric reduced order model for aeroengine blade dynamics

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis

    2015-10-01

    The work introduces a novel reduced order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are linear combinations of the relative errors associated with the resonance frequencies, the individual modal assurance criteria (MAC), and either the overall static or the modal masses. When static masses are considered, the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies and a 1% error on the overall static mass. When using modal masses in the cost function, the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high-fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high-fidelity finite element models when the material parameters are used in the sensitivity analysis.

  17. Star formation with disc accretion and rotation. I. Stars between 2 and 22 M⊙ at solar metallicity

    NASA Astrophysics Data System (ADS)

    Haemmerlé, L.; Eggenberger, P.; Meynet, G.; Maeder, A.; Charbonnel, C.

    2013-09-01

    Context. The way angular momentum is built up in stars during their formation process may have an impact on their further evolution. Aims: In the framework of the cold disc accretion scenario, we study for the first time how angular momentum builds up inside the star during its formation and what the consequences are for its evolution on the main sequence (MS). Methods: Computation begins from a hydrostatic core on the Hayashi line of 0.7 M⊙ at solar metallicity (Z = 0.014) rotating as a solid body. Accretion rates depending on the luminosity of the accreting object are considered, which vary between 1.5 × 10^-5 and 1.7 × 10^-3 M⊙ yr^-1. The accreted matter is assumed to have an angular velocity equal to that of the outer layer of the accreting star. Models are computed for a mass range on the zero-age main sequence (ZAMS) between 2 and 22 M⊙. Results: We study how the internal and surface velocities vary as a function of time during the accretion phase and the evolution towards the ZAMS. Stellar models, whose evolution has been followed along the pre-MS phase, are found to exhibit a shallow gradient of angular velocity on the ZAMS. Typically, the 6 M⊙ model has a core that rotates 50% faster than the surface on the ZAMS. The degree of differential rotation on the ZAMS decreases when the mass increases (for a fixed value of vZAMS/vcrit). The MS evolution of our models with a pre-MS accreting phase shows no significant differences with respect to that of corresponding models computed from the ZAMS with an initial solid-body rotation. Interestingly, there exists a maximum surface velocity that can be reached through the present scenario of formation for masses on the ZAMS larger than 8 M⊙. Typically, only stars with surface velocities on the ZAMS lower than about 45% of the critical velocity can be formed for 14 M⊙ models. Reaching higher velocities would require starting from cores that rotate above the critical limit. We find that this upper velocity limit is smaller for higher masses. In contrast, there is no restriction below 8 M⊙, and the whole domain of velocities up to the critical point can be reached.

  18. On the radius of habitable planets

    NASA Astrophysics Data System (ADS)

    Alibert, Y.

    2014-01-01

    Context. The conditions that a planet must fulfill to be habitable are not precisely known. However, it is comparatively easier to define conditions under which a planet is very likely not habitable. Finding such conditions is important as it can help select, in an ensemble of potentially observable planets, which ones should be observed in greater detail for characterization studies. Aims: Assuming, as on the Earth, that the presence of a C-cycle is a necessary condition for long-term habitability, we derive, as a function of the planetary mass, a radius above which a planet is likely not habitable. We compute the maximum radius a planet can have to fulfill two constraints: surface conditions compatible with the existence of liquid water, and no ice layer at the bottom of a putative global ocean. We demonstrate that, above a given radius, these two constraints cannot be met. Methods: We compute internal structure models of planets, using a five-layer model (core, inner mantle, outer mantle, ocean, and atmosphere), for different masses and compositions of the planets (in particular, the Fe/Si ratio of the planet). Results: Our results show that for planets in the super-Earth mass range (1-12 M⊕), the maximum radius that a planet with a composition similar to that of the Earth can have varies between 1.7 and 2.2 R⊕. This radius is reduced when considering planets with higher Fe/Si ratios and taking radiation into account when computing the gas envelope structure. Conclusions: These results can be used to infer, from radius and mass determinations using high-precision transit observations like those that will soon be performed by the CHaracterizing ExOPlanet Satellite (CHEOPS), which planets are very likely not habitable, and therefore which ones should be considered as best targets for further habitability studies.

  19. Differences in simulated fire spread over Askervein Hill using two advanced wind models and a traditional uniform wind field

    Treesearch

    Jason Forthofer; Bret Butler

    2007-01-01

    A computational fluid dynamics (CFD) model and a mass-consistent model were used to simulate the wind fields driving simulated fire spread over a simple, low hill. The results suggest that the CFD wind field could significantly change simulated fire spread compared to traditional uniform winds. The CFD fire spread case may match reality better because the winds used in the fire...

  20. On the Lulejian-I Combat Model

    DTIC Science & Technology

    1976-08-01

    possible initial massing of the attacking side’s resources, the model tries to represent in a game-theoretic context the adversary nature of the ... sequential game, as outlined in [A]. In principle, it is necessary to run the combat simulation once for each possible set of sequentially chosen ... sequential game, in which the evaluative portion of the model (i.e., the combat assessment) serves to compute intermediate and terminal payoffs for the

  1. Computation of leading edge film cooling from a CONSOLE geometry (CONverging Slot hOLE)

    NASA Astrophysics Data System (ADS)

    Guelailia, A.; Khorsi, A.; Hamidou, M. K.

    2016-01-01

    The aim of this study is to investigate the effect of mass flow rate on film cooling effectiveness and heat transfer over a gas turbine rotor blade with three staggered rows of shower-head holes which are inclined at 30° to the spanwise direction, and are normal to the streamwise direction on the blade. To improve film cooling effectiveness, the standard cylindrical holes, located on the leading edge region, are replaced with converging slot holes (console). ANSYS CFX has been used for this computational simulation. The turbulence is approximated by a k-ɛ model. Detailed film effectiveness distributions are presented for different mass flow rates. The numerical results are compared with experimental data.

  2. Related Progenitor Models for Long-duration Gamma-Ray Bursts and Type Ic Superluminous Supernovae

    NASA Astrophysics Data System (ADS)

    Aguilera-Dena, David R.; Langer, Norbert; Moriya, Takashi J.; Schootemeijer, Abel

    2018-05-01

    We model the late evolution and mass loss history of rapidly rotating Wolf–Rayet stars in the mass range (5 M⊙-100 M⊙). We find that quasi-chemically homogeneously evolving single stars computed with enhanced mixing retain very little or no helium and are compatible with Type Ic supernovae. The more efficient removal of core angular momentum and the expected smaller compact object mass in our lower-mass models lead to core spins in the range suggested for magnetar-driven superluminous supernovae. Our higher-mass models retain larger specific core angular momenta, expected for long-duration gamma-ray bursts in the collapsar scenario. Due to the absence of a significant He envelope, the rapidly increasing neutrino emission after core helium exhaustion leads to an accelerated contraction of the whole star, inducing a strong spin-up and centrifugally driven mass loss at rates of up to 10^-2 M⊙ yr^-1 in the last years to decades before core collapse. Because the angular momentum transport in our lower-mass models enhances the envelope spin-up, they show the largest relative amounts of centrifugally enforced mass loss, i.e., up to 25% of the expected ejecta mass. Our most massive models evolve into the pulsational pair-instability regime. We would thus expect signatures of interaction with a C/O-rich circumstellar medium for Type Ic superluminous supernovae with ejecta masses below ∼10 M⊙ as well as for the most massive engine-driven explosions with ejecta masses above ∼30 M⊙. Signs of such interaction should be observable at early epochs of the supernova explosion; they may be related to bumps observed in the light curves of superluminous supernovae, or to the massive circumstellar CO-shell proposed for Type Ic superluminous supernova Gaia16apd.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherkaduvasala, V.; Murphy, D.W.; Ban, H.

    Popcorn ash particles are fragments of sintered coal fly ash masses that resemble popcorn in low apparent density. They can travel with the flow in the furnace and settle on key places such as catalyst surfaces. Computational fluid dynamics (CFD) models are often used in the design process to prevent the carryover and settling of these particles on catalysts. Particle size, density, and drag coefficient are the most important aerodynamic parameters needed in CFD modeling of particle flow. The objective of this study was to experimentally determine particle size, shape, apparent density, and drag characteristics for popcorn ash particles from a coal-fired power plant. Particle size and shape were characterized by digital photography in three orthogonal directions and by computer image analysis. Particle apparent density was determined by volume and mass measurements. Particle terminal velocities in three directions were measured in water and each particle was also weighed in air and in water. The experimental data were analyzed and models were developed for equivalent sphere and equivalent ellipsoid with apparent density and drag coefficient distributions. The method developed in this study can be used to characterize the aerodynamic properties of popcorn-like particles.
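
    A minimal sketch of how the apparent density and equivalent-sphere drag coefficient could be derived from the weighings and terminal-velocity measurements described above; the fluid properties and the force balance for a settling sphere are standard, but the exact reduction used by the authors is not reproduced here:

```python
import numpy as np

G = 9.81           # m/s^2
RHO_WATER = 998.0  # kg/m^3, assumed test-fluid density

def apparent_density(mass_air_kg, mass_water_kg, rho_f=RHO_WATER):
    """Apparent density from weighing in air and submerged in water (Archimedes)."""
    volume = (mass_air_kg - mass_water_kg) / rho_f   # displaced volume, m^3
    return mass_air_kg / volume

def drag_coefficient(d_eq, rho_p, v_t, rho_f=RHO_WATER, g=G):
    """Drag coefficient of an equivalent sphere settling at terminal velocity v_t.

    Balances weight minus buoyancy against drag:
    (pi/6) d^3 (rho_p - rho_f) g = 0.5 rho_f v_t^2 Cd (pi/4) d^2
    """
    return 4.0 * g * d_eq * (rho_p - rho_f) / (3.0 * rho_f * v_t ** 2)
```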

  4. Damping Enhancement of Composite Panels by Inclusion of Shunted Piezoelectric Patches: A Wave-Based Modelling Approach.

    PubMed

    Chronopoulos, Dimitrios; Collet, Manuel; Ichchou, Mohamed

    2015-02-17

    The waves propagating within complex smart structures are hereby computed by employing a wave and finite element method. The structures can be of arbitrary layering and of complex geometric characteristics as long as they exhibit two-dimensional periodicity. The piezoelectric coupling phenomena are considered within the finite element formulation. The mass, stiffness and piezoelectric stiffness matrices of the modelled segment can be extracted using a conventional finite element code. The post-processing of these matrices involves the formulation of an eigenproblem whose solutions provide the phase velocities for each wave propagating within the structure and for any chosen direction of propagation. The model is then modified in order to account for a shunted piezoelectric patch connected to the composite structure. The impact of the energy dissipation induced by the shunted circuit on the total damping loss factor of the composite panel is then computed. The influence of the additional mass and stiffness provided by the attached piezoelectric devices on the wave propagation characteristics of the structure is also investigated.
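
    The eigenproblem step can be illustrated with a short sketch, assuming the Bloch-reduced stiffness and mass matrices for one wavenumber and propagation direction have already been assembled from the FE segment matrices (the reduction itself, and the shunt-circuit modification, are not shown):

```python
import numpy as np
from scipy.linalg import eigh

def phase_velocities(K_red, M_red, k_wavenumber):
    """Phase velocities from the reduced eigenproblem K(k) phi = omega^2 M phi.

    K_red, M_red: Hermitian reduced stiffness/mass matrices for one wavenumber
    and propagation direction (assumed already assembled from the FE segment).
    Returns one phase velocity per propagating branch.
    """
    w2, _ = eigh(K_red, M_red)                    # generalized Hermitian eigenproblem
    omega = np.sqrt(np.clip(w2.real, 0.0, None))  # rad/s, negative round-off clipped
    return omega / k_wavenumber
```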

  5. A reduced-dimensional model for near-wall transport in cardiovascular flows

    PubMed Central

    Hansen, Kirk B.

    2015-01-01

    Near-wall mass transport plays an important role in many cardiovascular processes, including the initiation of atherosclerosis, endothelial cell vasoregulation, and thrombogenesis. These problems are characterized by large Péclet and Schmidt numbers as well as a wide range of spatial and temporal scales, all of which impose computational difficulties. In this work, we develop an analytical relationship between the flow field and near-wall mass transport for high-Schmidt-number flows. This allows for the development of a wall-shear-stress-driven transport equation that lies on a codimension-one vessel-wall surface, significantly reducing computational cost in solving the transport problem. Separate versions of this equation are developed for the reaction-rate-limited and transport-limited cases, and numerical results in an idealized abdominal aortic aneurysm are compared to those obtained by solving the full transport equations over the entire domain. The reaction-rate-limited model matches the expected results well. The transport-limited model is accurate in the developed flow regions, but overpredicts wall flux at entry regions and reattachment points in the flow. PMID:26298313

  6. Damping Enhancement of Composite Panels by Inclusion of Shunted Piezoelectric Patches: A Wave-Based Modelling Approach

    PubMed Central

    Chronopoulos, Dimitrios; Collet, Manuel; Ichchou, Mohamed; Shah, Tahir

    2015-01-01

    The waves propagating within complex smart structures are hereby computed by employing a wave and finite element method. The structures can be of arbitrary layering and of complex geometric characteristics as long as they exhibit two-dimensional periodicity. The piezoelectric coupling phenomena are considered within the finite element formulation. The mass, stiffness and piezoelectric stiffness matrices of the modelled segment can be extracted using a conventional finite element code. The post-processing of these matrices involves the formulation of an eigenproblem whose solutions provide the phase velocities for each wave propagating within the structure and for any chosen direction of propagation. The model is then modified in order to account for a shunted piezoelectric patch connected to the composite structure. The impact of the energy dissipation induced by the shunted circuit on the total damping loss factor of the composite panel is then computed. The influence of the additional mass and stiffness provided by the attached piezoelectric devices on the wave propagation characteristics of the structure is also investigated. PMID:28787972

  7. A Mass-balance nitrate model for predicting the effects of land use on ground-water quality in municipal wellhead-protection areas

    USGS Publications Warehouse

    Frimpter, M.H.; Donohue, J.J.; Rapacz, M.V.; Beye, H.G.

    1990-01-01

    A mass-balance accounting model can be used to guide the management of septic systems and fertilizers to control the degradation of groundwater quality in zones of an aquifer that contributes water to public supply wells. The nitrate nitrogen concentration of the mixture in the well can be predicted for steady-state conditions by calculating the concentration that results from the total weight of nitrogen and total volume of water entering the zone of contribution to the well. These calculations will allow water-quality managers to predict the nitrate concentrations that would be produced by different types and levels of development, and to plan development accordingly. Computations for different development schemes provide a technical basis for planners and managers to compare water quality effects and to select alternatives that limit nitrate concentration in wells. Appendix A contains tables of nitrate loads and water volumes from common sources for use with the accounting model. Appendix B describes the preparation of a spreadsheet for the nitrate loading calculations with a software package generally available for desktop computers. (USGS)
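
    A minimal sketch of the steady-state mixing calculation described above; the source loads and recharge volumes below are purely hypothetical and merely stand in for tabulated values such as those in Appendix A:

```python
def predicted_nitrate_mg_per_L(loads_kg_per_yr, volumes_m3_per_yr):
    """Steady-state nitrate-N concentration in the well (mg/L).

    loads_kg_per_yr: nitrogen loads from each source entering the zone of contribution.
    volumes_m3_per_yr: recharge water volumes from each source.
    C = total N mass / total water volume, converted from kg/m^3 to mg/L (x1000).
    """
    total_load = sum(loads_kg_per_yr)       # kg N per year
    total_volume = sum(volumes_m3_per_yr)   # m^3 water per year
    return 1000.0 * total_load / total_volume

# Illustrative only: hypothetical loads from septic systems, fertilizer, background recharge
print(predicted_nitrate_mg_per_L([250.0, 60.0, 15.0], [40000.0, 12000.0, 30000.0]))
```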

  8. Computational Aerodynamic Simulations of a 1484 ft/sec Tip Speed Quiet High-Speed Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational Aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance. At that location a significant flow separation is present. The region of local flow recirculation extends through a mixing plane, however, which for the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.

  9. Earth Science Computational Architecture for Multi-disciplinary Investigations

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Blom, R.; Gurrola, E.; Katz, D.; Lyzenga, G.; Norton, C.

    2005-12-01

    Understanding the processes underlying Earth's deformation and mass transport requires a non-traditional, integrated, interdisciplinary, approach dependent on multiple space and ground based data sets, modeling, and computational tools. Currently, details of geophysical data acquisition, analysis, and modeling largely limit research to discipline domain experts. Interdisciplinary research requires a new computational architecture that is optimized to perform complex data processing of multiple solid Earth science data types in a user-friendly environment. A web-based computational framework is being developed and integrated with applications for automatic interferometric radar processing, and models for high-resolution deformation & gravity, forward models of viscoelastic mass loading over short wavelengths & complex time histories, forward-inverse codes for characterizing surface loading-response over time scales of days to tens of thousands of years, and inversion of combined space magnetic & gravity fields to constrain deep crustal and mantle properties. This framework combines an adaptation of the QuakeSim distributed services methodology with the Pyre framework for multiphysics development. The system uses a three-tier architecture, with a middle tier server that manages user projects, available resources, and security. This ensures scalability to very large networks of collaborators. Users log into a web page and have a personal project area, persistently maintained between connections, for each application. Upon selection of an application and host from a list of available entities, inputs may be uploaded or constructed from web forms and available data archives, including gravity, GPS and imaging radar data. The user is notified of job completion and directed to results posted via URLs. Interdisciplinary work is supported through easy availability of all applications via common browsers, application tutorials and reference guides, and worked examples with visual response. At the platform level, multi-physics application development and workflow are available in the enriched environment of the Pyre framework. Advantages for combining separate expert domains include: multiple application components efficiently interact through Python shared libraries, investigators may nimbly swap models and try new parameter values, and a rich array of common tools are inherent in the Pyre system. The first four specific investigations to use this framework are: Gulf Coast subsidence: understanding of partitioning between compaction, subsidence and growth faulting; Gravity & deformation of a layered spherical earth model due to large earthquakes; Rift setting of Lake Vostok, Antarctica; and global ice mass changes.

  10. Multiphase, multi-electrode Joule heat computations for glass melter and in situ vitrification simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowery, P.S.; Lessor, D.L.

    Waste glass melter and in situ vitrification (ISV) processes represent the combination of electrical, thermal, and fluid flow phenomena to produce a stable waste-form product. Computational modeling of the thermal and fluid flow aspects of these processes provides a useful tool for assessing the potential performance of proposed system designs. These computations can be performed at a fraction of the cost of experiment. Consequently, computational modeling of vitrification systems can also provide an economical means for assessing the suitability of a proposed process application. The computational model described in this paper employs finite difference representations of the basic continuum conservation laws governing the thermal, fluid flow, and electrical aspects of the vitrification process -- i.e., conservation of mass, momentum, energy, and electrical charge. The resulting code is a member of the TEMPEST family of codes developed at the Pacific Northwest Laboratory (operated by Battelle for the US Department of Energy). This paper provides an overview of the numerical approach employed in TEMPEST. In addition, results from several TEMPEST simulations of sample waste glass melter and ISV processes are provided to illustrate the insights to be gained from computational modeling of these processes. 3 refs., 13 figs.

  11. A Membrane Gas Separation Experiment for the Undergraduate Laboratory.

    ERIC Educational Resources Information Center

    Davis, Richard A.; Sandall, Orville C.

    1991-01-01

    Described is a membrane experiment that provides students with experience in fundamental engineering skills such as mass balances, modeling, and using the computer as a research tool. Included are the experimental design, theory, method of solution, sample calculations, and conclusions. (KR)

  12. Relativity and the TRS-80.

    ERIC Educational Resources Information Center

    Levin, Sidney

    1984-01-01

    Presents the listing (TRS-80) for a computer program which derives the relativistic equation (employing as a model the concept of a moving clock which emits photons at regular intervals) and calculates transformations of time, mass, and length with increasing velocities (Einstein-Lorentz transformations). (JN)
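
    A present-day equivalent of such a program is easily sketched in Python; the relations used are the standard Einstein-Lorentz transformations, not a transcription of the original BASIC listing:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """Gamma factor for a speed v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def transformed(v, proper_time, rest_mass, rest_length):
    """Time dilation, relativistic mass increase and length contraction."""
    g = lorentz_factor(v)
    return proper_time * g, rest_mass * g, rest_length / g

# Example: at 80% of c the factor is 5/3
print(transformed(0.8 * C, proper_time=1.0, rest_mass=1.0, rest_length=1.0))
```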

  13. Spatial-Operator Algebra For Flexible-Link Manipulators

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Rodriguez, Guillermo

    1994-01-01

    Method of computing dynamics of multiple-flexible-link robotic manipulators based on spatial-operator algebra, which originally applied to rigid-link manipulators. Aspects of spatial-operator-algebra approach described in several previous articles in NASA Tech Briefs-most recently "Robot Control Based on Spatial-Operator Algebra" (NPO-17918). In extension of spatial-operator algebra to manipulators with flexible links, each link represented by finite-element model: mass of flexible link apportioned among smaller, lumped-mass rigid bodies, coupling of motions expressed in terms of vibrational modes. This leads to operator expression for modal-mass matrix of link.

  14. Fifteenth NASTRAN (R) Users' Colloquium

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Numerous applications of the NASA Structural Analysis (NASTRAN) computer program, a general purpose finite element code, are discussed. Additional features that can be added to NASTRAN, interactive plotting of NASTRAN data on microcomputers, mass modeling for bars, the design of wind tunnel models, the analysis of ship structures subjected to underwater explosions, and buckling analysis of radio antennas are among the topics discussed.

  15. A new approach to the convective parameterization of the regional atmospheric model BRAMS

    NASA Astrophysics Data System (ADS)

    Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.

    2013-05-01

    A simulation of the summer characteristics of January 2010 was performed using the atmospheric model Brazilian developments on the Regional Atmospheric Modeling System (BRAMS). The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. As a result, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute the best combination of the hypotheses (closures) of the convective scheme. It is an inverse problem of parameter estimation, and the problem is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was computed as the quadratic difference between five simulated precipitation fields and observation. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted, generating a new set of mass fluxes. The results indicated better skill of the model with the new methodology compared with the old ensemble-mean calculation.
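
    A rough sketch of the inverse problem described above: the quadratic misfit between a weighted combination of the closure members and the observed field is evaluated, and the weights are sought with a stochastic optimizer (a plain random search stands in here for the firefly algorithm; all shapes and values are illustrative):

```python
import numpy as np

def ensemble_cost(weights, member_fields, observed):
    """Quadratic misfit between a weighted combination of closure members and observations.

    member_fields: array (n_members, ny, nx) of precipitation fields, one per closure.
    observed: array (ny, nx), e.g. a TRMM-estimated precipitation field.
    """
    combined = np.tensordot(weights, member_fields, axes=1)
    return float(np.sum((combined - observed) ** 2))

def random_search(member_fields, observed, n_trials=1000, seed=0):
    """Toy stand-in for the firefly algorithm: best normalized weights found at random."""
    rng = np.random.default_rng(seed)
    best_w, best_c = None, np.inf
    for _ in range(n_trials):
        w = rng.random(member_fields.shape[0])
        w /= w.sum()                      # non-negative weights summing to one
        c = ensemble_cost(w, member_fields, observed)
        if c < best_c:
            best_w, best_c = w, c
    return best_w, best_c
```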

  16. Automated aortic calcification detection in low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose, non-contrast, non-ECG-gated chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are, respectively, 98.46% and 98.28% correlated with the reference mass and volume scores.
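
    The scoring step can be sketched as follows, assuming the aorta mask and a calibration factor are already available; the per-lesion weighting mimics conventional Agatston scoring and is not necessarily identical to the authors' implementation, and scoring conventions vary between tools:

```python
import numpy as np
from scipy import ndimage

def aortic_calcium_scores(hu, aorta_mask, voxel_mm, threshold_hu=160.0, calib=1.0):
    """Volume, mass and Agatston-style scores inside a precomputed aorta mask.

    hu: 3-D array of Hounsfield units (slices, rows, cols); aorta_mask: boolean array
    of the same shape; voxel_mm: (dz, dy, dx) voxel size in mm; calib: scanner-specific
    mass calibration factor. The 160 HU threshold mirrors the elevated value used for
    low-dose scans above.
    """
    voxel_vol = float(np.prod(voxel_mm))            # mm^3
    pixel_area = float(voxel_mm[1] * voxel_mm[2])   # mm^2
    calc = (hu >= threshold_hu) & aorta_mask

    volume_score = calc.sum() * voxel_vol           # mm^3 of calcified tissue
    mass_score = calib * hu[calc].mean() * volume_score if calc.any() else 0.0

    def density_weight(peak_hu):                    # Agatston-style density weights
        return 1 if peak_hu < 200 else 2 if peak_hu < 300 else 3 if peak_hu < 400 else 4

    agatston = 0.0
    for z in range(hu.shape[0]):                    # score slice by slice
        labels, n = ndimage.label(calc[z])
        for lesion in range(1, n + 1):
            region = labels == lesion
            agatston += region.sum() * pixel_area * density_weight(hu[z][region].max())
    return volume_score, mass_score, agatston
```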

  17. Mathematical Simulation of the Process of Aerobic Treatment of Wastewater under Conditions of Diffusion and Mass Transfer Perturbations

    NASA Astrophysics Data System (ADS)

    Bomba, A. Ya.; Safonik, A. P.

    2018-05-01

    A mathematical model of the process of aerobic treatment of wastewater has been refined. It takes into account the interaction of bacteria, as well as of organic and biologically nonoxidizing substances under conditions of diffusion and mass transfer perturbations. An algorithm of the solution of the corresponding nonlinear perturbed problem of convection-diffusion-mass transfer type has been constructed, with a computer experiment carried out based on it. The influence of the concentration of oxygen and of activated sludge on the quality of treatment is shown. Within the framework of the model suggested, a possibility of automated control of the process of deposition of impurities in a biological filter depending on the initial parameters of the water medium is suggested.
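
    A toy one-dimensional stand-in for the convection-diffusion-mass-transfer problem described above (explicit scheme, illustrative parameters only, with no claim to match the refined multi-species model of the paper):

```python
import numpy as np

def advect_diffuse_react(c0, u, D, k, dx, dt, n_steps):
    """Explicit 1-D march of dc/dt + u dc/dx = D d2c/dx2 - k c.

    Upwind advection (u > 0), central diffusion, first-order consumption with rate k.
    Stability requires dt <= min(dx/u, dx^2/(2 D)).
    """
    c = c0.copy()
    for _ in range(n_steps):
        adv = -u * (c - np.roll(c, 1)) / dx
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
        c = c + dt * (adv + dif - k * c)
        c[0] = c0[0]       # fixed inlet concentration
        c[-1] = c[-2]      # zero-gradient outlet
    return c

# Illustrative parameters only (not calibrated to the paper)
x = np.linspace(0.0, 1.0, 101)
profile = advect_diffuse_react(np.where(x < 0.05, 1.0, 0.0), u=0.01, D=1e-4,
                               k=0.05, dx=x[1] - x[0], dt=0.02, n_steps=500)
```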

  18. Mathematical Simulation of the Process of Aerobic Treatment of Wastewater under Conditions of Diffusion and Mass Transfer Perturbations

    NASA Astrophysics Data System (ADS)

    Bomba, A. Ya.; Safonik, A. P.

    2018-03-01

    A mathematical model of the process of aerobic treatment of wastewater has been refined. It takes into account the interaction of bacteria, as well as of organic and biologically nonoxidizing substances under conditions of diffusion and mass transfer perturbations. An algorithm of the solution of the corresponding nonlinear perturbed problem of convection-diffusion-mass transfer type has been constructed, with a computer experiment carried out based on it. The influence of the concentration of oxygen and of activated sludge on the quality of treatment is shown. Within the framework of the model suggested, a possibility of automated control of the process of deposition of impurities in a biological filter depending on the initial parameters of the water medium is suggested.

  19. Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2017-04-01

    A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anzai, Chihaya; Hasselhuhn, Alexander; Höschele, Maik

    We compute the contribution to the total cross section for the inclusive production of a Standard Model Higgs boson induced by two quarks with different flavour in the initial state. Our calculation is exact in the Higgs boson mass and the partonic center-of-mass energy. Here, we describe the reduction to master integrals, the construction of a canonical basis, and the solution of the corresponding differential equations. Our analytic result contains both Harmonic Polylogarithms and iterated integrals with additional letters in the alphabet.

  1. Dilaton-assisted dark matter.

    PubMed

    Bai, Yang; Carena, Marcela; Lykken, Joseph

    2009-12-31

    A dilaton could be the dominant messenger between standard model fields and dark matter. The measured dark matter relic abundance relates the dark matter mass and spin to the conformal breaking scale. The dark matter-nucleon spin-independent cross section is predicted in terms of the dilaton mass. We compute the current constraints on the dilaton from LEP and Tevatron experiments, and the gamma-ray signal from dark matter annihilation to dilatons that could be observed by Fermi Large Area Telescope.

  2. Renormalization and radiative corrections to masses in a general Yukawa model

    NASA Astrophysics Data System (ADS)

    Fox, M.; Grimus, W.; Löschner, M.

    2018-01-01

    We consider a model with arbitrary numbers of Majorana fermion fields and real scalar fields φ_a, general Yukawa couplings and a ℤ_4 symmetry that forbids linear and trilinear terms in the scalar potential. Moreover, fermions become massive only after spontaneous symmetry breaking of the ℤ_4 symmetry by vacuum expectation values (VEVs) of the φ_a. Introducing the shifted fields h_a whose VEVs vanish, MS-bar renormalization of the parameters of the unbroken theory suffices to make the theory finite. However, in this way, beyond tree level it is necessary to perform finite shifts of the tree-level VEVs, induced by the finite parts of the tadpole diagrams, in order to ensure vanishing one-point functions of the h_a. Moreover, adapting the renormalization scheme to a situation with many scalars and VEVs, we consider the physical fermion and scalar masses as derived quantities, i.e. as functions of the coupling constants and VEVs. Consequently, the masses have to be computed order by order in a perturbative expansion. In this scheme, we compute the self-energies of fermions and bosons and show how to obtain the respective one-loop contributions to the tree-level masses. Furthermore, we discuss the modification of our results in the case of Dirac fermions and investigate, by way of an example, the effects of a flavor symmetry group.

  3. Mass segregation phenomena using the Hamiltonian Mean Field model

    NASA Astrophysics Data System (ADS)

    Steiner, J. R.; Zolacir, T. O.

    2018-02-01

    The mass segregation problem is thought to be entangled with the dynamical evolution of young stellar clusters (Olczak, 2011 [1]); this is a common view in the astrophysical community. In this work, the Hamiltonian Mean Field (HMF) model with different masses is studied. A mass segregation phenomenon (MSP) arises from this study as a dynamical feature. The MSP in the HMF model is a consequence of Landau damping (LD) and appears in systems whose interactions belong to the long-range regime. HMF is a toy model known to reproduce the main characteristics of astrophysical systems, owing to the mean-field character of the potential, and with different masses it also exhibits MSP, as do stellar and galaxy clusters. It is in this sense that computational simulations focusing on what happens to the mass distribution in phase space are performed for this system. What happens during the violent relaxation period and during the quasi-stationary states (QSS) of this dynamics is analyzed. The results obtained support the fact that MSP is already observed during the violent relaxation time and is maintained during the QSS. Some structures in the mass distribution function are observed. As a result of this study, the mass distribution is determined by the system dynamics and is independent of the dimensionality of the system. MSP occurs in a one-dimensional system as a result of the long-range forces that act in the system. In this approach MSP emerges as a dynamical feature. We also show that for the HMF model with different masses, the dynamical time scale is N.
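
    A minimal sketch of the HMF dynamics with particle-dependent masses, using the standard mean-field (magnetization) form of the force; population sizes, mass values and time step below are illustrative only:

```python
import numpy as np

def hmf_step(theta, p, masses, dt):
    """One leapfrog step of the HMF model with particle-dependent masses.

    H = sum_i p_i^2/(2 m_i) + (1/2N) sum_{i,j} [1 - cos(theta_i - theta_j)].
    The mean-field force on particle i is -(M_x sin theta_i - M_y cos theta_i),
    with (M_x, M_y) the magnetization vector.
    """
    def force(th):
        mx, my = np.cos(th).mean(), np.sin(th).mean()
        return -(mx * np.sin(th) - my * np.cos(th))

    p_half = p + 0.5 * dt * force(theta)
    theta_new = np.mod(theta + dt * p_half / masses, 2 * np.pi)
    p_new = p_half + 0.5 * dt * force(theta_new)
    return theta_new, p_new

# Two mass species, e.g. a light and a heavy population (values are illustrative)
rng = np.random.default_rng(1)
N = 2000
masses = np.where(rng.random(N) < 0.5, 1.0, 5.0)
theta = rng.uniform(0, 2 * np.pi, N)
p = masses * rng.normal(0.0, 0.5, N)      # momenta, p_i = m_i * dtheta_i/dt
for _ in range(5000):
    theta, p = hmf_step(theta, p, masses, dt=0.05)
```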

  4. A High-Resolution Model of Water Mass Transformation and Transport in the Weddell Sea

    NASA Astrophysics Data System (ADS)

    Hazel, J.; Stewart, A.

    2016-12-01

    The ocean circulation around the Antarctic margins has a pronounced impact on the global ocean and climate system. One of these impacts includes closing the global meridional overturning circulation (MOC) via formation of dense Antarctic Bottom Water (AABW), which ventilates a large fraction of the subsurface ocean. AABW is also partially composed of modified Circumpolar Deep Water (CDW), a warm, mid-depth water mass whose transport towards the continent has the potential to induce rapid retreat of marine-terminating glaciers. Previous studies suggest that these water mass exchanges may be strongly influenced by high-frequency processes such as downslope gravity currents, tidal flows, and mesoscale/submesoscale eddy transport. However, evaluating the relative contributions of these processes to near-Antarctic water mass transports is hindered by the region's relatively small scales of motion and the logistical difficulties in taking measurements beneath sea ice. In this study we develop a regional model of the Weddell Sea, the largest established source of AABW. The model is forced by an annually repeating atmospheric state constructed from the Antarctic Mesoscale Prediction System data and by annually repeating lateral boundary conditions constructed from the Southern Ocean State Estimate. The model incorporates the full Filchner-Ronne cavity and simulates the thermodynamics and dynamics of sea ice. To analyze the role of high-frequency processes in the transport and transformation of water masses, we compute the model's overturning circulation, water mass transformations, and ice sheet basal melt at model horizontal grid resolutions ranging from 1/2 degree to 1/24 degree. We temporally decompose the high-resolution (1/24 degree) model circulation into components due to mean, eddy and tidal flows and discuss the geographical dependence of these processes and their impact on water mass transformation and transport.

  5. Computational analysis of semi-span model test techniques

    NASA Technical Reports Server (NTRS)

    Milholen, William E., II; Chokani, Ndaona

    1996-01-01

    A computational investigation was conducted to support the development of a semi-span model test capability in the NASA LaRC's National Transonic Facility. This capability is required for the testing of high-lift systems at flight Reynolds numbers. A three-dimensional Navier-Stokes solver was used to compute the low-speed flow over both a full-span configuration and a semi-span configuration. The computational results were found to be in good agreement with the experimental data. The computational results indicate that the stand-off height has a strong influence on the flow over a semi-span model. The semi-span model adequately replicates the aerodynamic characteristics of the full-span configuration when a small stand-off height, approximately twice the tunnel empty sidewall boundary layer displacement thickness, is used. Several active sidewall boundary layer control techniques were examined including: upstream blowing, local jet blowing, and sidewall suction. Both upstream tangential blowing, and sidewall suction were found to minimize the separation of the sidewall boundary layer ahead of the semi-span model. The required mass flow rates are found to be practicable for testing in the NTF. For the configuration examined, the active sidewall boundary layer control techniques were found to be necessary only near the maximum lift conditions.

  6. Modeling of turbulent chemical reaction

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.

    1995-01-01

    Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.

  7. A New Mass Criterium for Electron Capture Supernovae

    NASA Astrophysics Data System (ADS)

    Poelarends, Arend

    2016-06-01

    Electron capture supernovae (ECSN) are thought to populate the mass range between massive white dwarf progenitors and core collapse supernovae. It is generally believed that the initial stellar mass range for ECSN from single stars is about 0.5-1.0 M⊙ wide and centered around a value of 8.5 or 9 M⊙, depending on the specifics of the physics of convection and mass loss one applies. Since mass loss in a binary system is able to delay or cancel the second dredge-up, it is also believed that the initial mass range for ECSN in binary systems is wider than in single stars, but an initial mass range has not been defined yet. The last phase of stars in this particular mass range, however, is challenging to compute, either due to recurring Helium shell flashes, or due to convectively bound flames in the degenerate interior of the star. It would be helpful, nevertheless, to know before we enter these computationally intensive phases whether a star will explode as an ECSN or not. The mass of the helium core after helium core burning is one such criterion (Nomoto, 1984), which predicts that ECSN will occur if the helium core mass is between 2.0 M⊙ and 2.5 M⊙. However, since helium cores can be subject to erosion due to mass loss, even during helium core burning, this criterion will not yield accurate predictions for stars in binary systems. We present a dense grid of stellar evolution models that allow us to put constraints on the final fate of their cores, based on a combination of Carbon/Oxygen core mass, the mass of the surrounding Helium layer and the C/O abundance. We find that CO cores with masses between 1.365 and 1.420 M⊙ at the end of Carbon burning will result in ECSN, with some minor adjustments of these ranges due to the mass of the Helium layer and the C/O ratio. While detailed models of stars within the ECSN mass range remain necessary to understand the details of pre-ECSN evolution, our research refines the Helium core criterion and provides a useful way to determine the final fate of stars in this complicated mass range early on.

  8. Dynamic Method for Identifying Collected Sample Mass

    NASA Technical Reports Server (NTRS)

    Carson, John

    2008-01-01

    G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
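
    The underlying idea can be sketched with a simple least-squares stand-in (not the actual G-Sample maximum-likelihood estimator): with the dry spacecraft mass known, the collected sample mass follows from the slope of the force-acceleration relation along one axis:

```python
import numpy as np

def estimate_sample_mass(forces_N, accels_m_s2, spacecraft_mass_kg):
    """Least-squares estimate of collected sample mass from force and acceleration data.

    With F_k = (m_sc + m_sample) * a_k, the total mass is the slope of the best-fit
    line through the origin; the known spacecraft mass is then subtracted. This is an
    illustrative stand-in for the maximum-likelihood estimator described above.
    """
    forces = np.asarray(forces_N, dtype=float)
    accels = np.asarray(accels_m_s2, dtype=float)
    total_mass = np.dot(accels, forces) / np.dot(accels, accels)
    return total_mass - spacecraft_mass_kg
```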

  9. An approach to the mathematical modelling of a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Averner, M. M.

    1981-01-01

    An approach to the design of a computer based model of a closed ecological life-support system suitable for use in extraterrestrial habitats is presented. The model is based on elemental mass balance and contains representations of the metabolic activities of biological components. The model can be used as a tool in evaluating preliminary designs for closed regenerative life support systems and as a method for predicting the behavior of such systems.

  10. Augmented kludge waveforms for detecting extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Chua, Alvin J. K.; Moore, Christopher J.; Gair, Jonathan R.

    2017-08-01

    The extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes are an important class of source for the future space-based gravitational-wave detector LISA. Detecting signals from EMRIs will require waveform models that are both accurate and computationally efficient. In this paper, we present the latest implementation of an augmented analytic kludge (AAK) model, publicly available at https://github.com/alvincjk/EMRI_Kludge_Suite as part of an EMRI waveform software suite. This version of the AAK model has improved accuracy compared to its predecessors, with two-month waveform overlaps against a more accurate fiducial model exceeding 0.97 for a generic range of sources; it also generates waveforms 5-15 times faster than the fiducial model. The AAK model is well suited for scoping out data analysis issues in the upcoming round of mock LISA data challenges. A simple analytic argument shows that it might even be viable for detecting EMRIs with LISA through a semicoherent template bank method, while the use of the original analytic kludge in the same approach will result in around 90% fewer detections.

  11. Three-Dimensional Model of Heat and Mass Transfer in Fractured Rocks to Estimate Environmental Conditions Along Heated Drifts

    NASA Astrophysics Data System (ADS)

    Fedors, R. W.; Painter, S. L.

    2004-12-01

    Temperature gradients along the thermally-perturbed drifts of the potential high-level waste repository at Yucca Mountain, Nevada, will drive natural convection and associated heat and mass transfer along drifts. A three-dimensional, dual-permeability, thermohydrological model of heat and mass transfer was used to estimate the magnitude of temperature gradients along a drift. Temperature conditions along heated drifts are needed to support estimates of repository-edge cooling and as input to computational fluid dynamics modeling of in-drift axial convection and the cold-trap process. Assumptions associated with abstracted heat transfer models and two-dimensional thermohydrological models weakly coupled to mountain-scale thermal models can readily be tested using the three-dimensional thermohydrological model. Although computationally expensive, the fully coupled three-dimensional thermohydrological model is able to incorporate lateral heat transfer, including host rock processes of conduction, convection in gas phase, advection in liquid phase, and latent-heat transfer. Results from the three-dimensional thermohydrological model showed that weakly coupling three-dimensional thermal and two-dimensional thermohydrological models lead to underestimates of temperatures and underestimates of temperature gradients over large portions of the drift. The representative host rock thermal conductivity needed for abstracted heat transfer models are overestimated using the weakly coupled models. If axial flow patterns over large portions of drifts are not impeded by the strong cross-sectional flow patterns imparted by the heat rising directly off the waste package, condensation from the cold-trap process will not be limited to the extreme ends of each drift. Based on the three-dimensional thermohydrological model, axial temperature gradients occur sooner over a larger portion of the drift, though high gradients nearest the edge of the potential repository are dampened. This abstract is an independent product of CNWRA and does not necessarily reflect the view or regulatory position of the Nuclear Regulatory Commission.

  12. Annotation: a computational solution for streamlining metabolomics analysis

    PubMed Central

    Domingo-Almenara, Xavier; Montenegro-Burke, J. Rafael; Benton, H. Paul; Siuzdak, Gary

    2017-01-01

    Metabolite identification is still considered an imposing bottleneck in liquid chromatography mass spectrometry (LC/MS) untargeted metabolomics. The identification workflow usually begins with detecting relevant LC/MS peaks via peak-picking algorithms and retrieving putative identities based on accurate mass searching. However, accurate mass search alone provides poor evidence for metabolite identification. For this reason, computational annotation is used to reveal the underlying metabolites' monoisotopic masses, improving putative identification in addition to confirmation with tandem mass spectrometry. This review examines LC/MS data from a computational and analytical perspective, focusing on the occurrence of neutral losses and in-source fragments, to understand the challenges in computational annotation methodologies. Herein, we examine the state-of-the-art strategies for computational annotation including: (i) peak grouping or full scan (MS1) pseudo-spectra extraction, i.e., clustering all mass spectral signals stemming from each metabolite; (ii) annotation using ion adduction and mass distance among ion peaks; (iii) incorporation of biological knowledge such as biotransformations or pathways; (iv) tandem MS data; and (v) metabolite retention time calibration, usually achieved by prediction from molecular descriptors. Advantages and pitfalls of each of these strategies are discussed, as well as expected future trends in computational annotation. PMID:29039932
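
    Strategy (ii) above can be illustrated with a short sketch that maps one observed m/z to the neutral masses implied by common singly charged adducts; the offsets are standard tabulated values, and the tolerance handling is deliberately simplified (a real annotation tool would also use coelution, isotope patterns and MS/MS):

```python
# Common positive-mode adduct offsets in Da (widely tabulated; treat as approximate)
ADDUCT_OFFSETS = {
    "[M+H]+": 1.007276,
    "[M+Na]+": 22.989218,
    "[M+K]+": 38.963158,
    "[M+NH4]+": 18.033823,
}

def candidate_neutral_masses(mz, tol_ppm=10.0):
    """Possible neutral monoisotopic masses explaining one MS1 peak, per adduct.

    Singly charged adducts only; returns {adduct: (neutral mass, +/- tolerance in Da)}.
    """
    out = {}
    for name, offset in ADDUCT_OFFSETS.items():
        neutral = mz - offset
        out[name] = (neutral, neutral * tol_ppm * 1e-6)
    return out

# Example: which neutral masses could give an observed ion at m/z 203.0526?
print(candidate_neutral_masses(203.0526))
```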

  13. Type Ia supernovae: Pulsating delayed detonation models, IR light curves, and the formation of molecules

    NASA Technical Reports Server (NTRS)

    Hoflich, Peter; Khokhlov, A.; Wheeler, C.

    1995-01-01

    We computed optical and infrared light curves of the pulsating class of delayed detonation models for Type Ia supernovae (SNe Ia). It is demonstrated that observations of the IR light curves can be used to identify subluminous SNe Ia by testing whether secondary maxima occur in the IR. Our pulsating delayed detonation models are in agreement with current observations both for subluminous and normally bright SNe Ia, namely SN1991bg, SN1992bo, and SN1992bc. Observations of molecular bands provide a test to distinguish whether strongly subluminous supernovae are a consequence of the pulsating mechanism occurring in a high-mass white dwarf (WD) or, alternatively, are formed by the helium detonation in a low-mass WD as was suggested by Woosley. In the latter case, no carbon is left after the explosion of low-mass WDs, whereas a lot of C/O-rich material is present in pulsating delayed detonation models.

  14. Clustering, cosmology and a new era of black hole demographics- II. The conditional luminosity functions of Type 2 and Type 1 active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Ballantyne, D. R.

    2017-01-01

    The orientation-based unification model of active galactic nuclei (AGNs) posits that the principal difference between obscured (Type 2) and unobscured (Type 1) AGNs is the line of sight into the central engine. If this model is correct then there should be no difference in many of the properties of AGN host galaxies (e.g. the mass of the surrounding dark matter haloes). However, recent clustering analyses of Type 1 and Type 2 AGNs have provided some evidence for a difference in the halo mass, in conflict with the orientation-based unified model. In this work, a method to compute the conditional luminosity function (CLF) of Type 2 and Type 1 AGNs is presented. The CLF allows many fundamental halo properties to be computed as a function of AGN luminosity, which we apply to the question of the host halo masses of Type 1 and 2 AGNs. By making use of the total AGN CLF, the Type 1 X-ray luminosity function, and the luminosity-dependent Type 2 AGN fraction, the CLFs of Type 1 and 2 AGNs are calculated at z ≈ 0 and 0.9. At both redshifts, there is no statistically significant difference in the mean halo mass of Type 2 and 1 AGNs at any luminosity. There is marginal evidence that Type 1 AGNs may have larger halo masses than Type 2s, which would be consistent with an evolutionary picture where quasars are initially obscured and then subsequently reveal themselves as Type 1s. As the Type 1 lifetime is longer, the host halo will increase somewhat in mass during the Type 1 phase. The CLF technique will be a powerful way to study the properties of many AGN subsets (e.g. radio-loud, Compton-thick) as future wide-area X-ray and optical surveys substantially increase our ability to place AGNs in their cosmological context.

  15. Geometrical and gravimetrical observations of the Aral Sea and its tributaries along with hydrological models

    NASA Astrophysics Data System (ADS)

    Singh, A.; Seitz, F.; Schwatke, C.; Güntner, A.

    2012-04-01

    Satellite altimetry is capable of measuring surface water level changes of large water bodies. This is especially interesting for regions where in-situ gauges are sparse or not available. Temporal variations of the coastline and horizontal extent of a water body can be derived from optical remote sensing data. A joint analysis of both data types together with a digital elevation model allows for the estimation of water volume changes. Related variations of water mass map into the observations of the satellite gravity field mission GRACE. In this presentation, we demonstrate the application of heterogeneous remote sensing methods for studying changes of water volume and mass of the Aral Sea and compare the results with respect to their consistency. Our analysis covers the period 2002-2011. In particular we deal with data from multi-mission radar and laser satellite altimetry that are analyzed in combination with coastlines from Landsat images. The resultant vertical and horizontal variations of the lake surface are geometrically intersected with the bathymetry of the Aral Sea in order to compute volumetric changes. These are transformed into variations of water mass that are subsequently compared with storage changes derived from GRACE satellite gravimetry. Hence we obtain a comprehensive picture of the hydrological changes in the region. Observations from all datasets correspond quite well with each other with respect to their temporal development. However, geometrically determined volume changes and mass changes observed by GRACE agree less well during years of heavy water inflow into the Aral Sea from its southern tributary 'Amu Darya', since the GRACE signals are contaminated by the large mass of water stored in the river delta and Prearalie region. On the other hand, GRACE observations of the river basins of the Syr Darya and Amu Darya correspond very well with hydrological models and mass changes computed from the balance of precipitation, evaporation and runoff determined from the atmospheric-terrestrial water balance.
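
    A minimal sketch of the geometric step described above: lake-level changes from altimetry are converted to volume and mass changes by integrating a level-area relation derived from the bathymetry and Landsat coastlines (the function names and the trapezoidal integration are assumptions, not the authors' processing chain):

```python
import numpy as np

RHO_WATER = 1000.0  # kg/m^3

def volume_and_mass_change(levels_m, area_of_level_m2):
    """Cumulative water volume and mass change from altimetric levels and lake hypsometry.

    levels_m: time series of lake surface heights (m, e.g. from altimetry).
    area_of_level_m2: callable returning lake surface area (m^2) at a given level,
    i.e. a level-area relation derived from the bathymetry and optical coastlines.
    The volume change between epochs is the integral of A(h) dh (trapezoidal rule).
    """
    dV, dM = [0.0], [0.0]
    for h0, h1 in zip(levels_m[:-1], levels_m[1:]):
        hs = np.linspace(h0, h1, 50)
        areas = np.array([area_of_level_m2(h) for h in hs])
        dv = float(np.sum(0.5 * (areas[1:] + areas[:-1]) * np.diff(hs)))
        dV.append(dV[-1] + dv)
        dM.append(dV[-1] * RHO_WATER)
    return np.array(dV), np.array(dM)   # m^3 and kg, relative to the first epoch
```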

  16. Bi-directional vibration control of offshore wind turbines using a 3D pendulum tuned mass damper

    NASA Astrophysics Data System (ADS)

    Sun, C.; Jahangiri, V.

    2018-05-01

    Offshore wind turbines suffer from excessive bi-directional vibrations due to wind-wave misalignment and vortex-induced vibrations. However, most existing research focuses on unidirectional vibration attenuation, which is inadequate for real applications. The present paper proposes a three-dimensional pendulum tuned mass damper (3d-PTMD) to mitigate the tower and nacelle dynamic response in the fore-aft and side-side directions. An analytical model of the wind turbine coupled with the 3d-PTMD is established wherein the interaction between the blades, the tower and the 3d-PTMD is modeled. Aerodynamic loading is computed using the Blade Element Momentum method where Prandtl's tip loss factor and the Glauert correction are considered. The JONSWAP spectrum is adopted to generate wave data. Wave loading is computed using Morison's equation in collaboration with the strip theory. Via a numerical search approach, the design formula of the 3d-PTMD is obtained and examined on a National Renewable Energy Lab (NREL) monopile 5 MW baseline wind turbine model under misaligned wind, wave and seismic loading. Dual linear tuned mass dampers (TMDs) deployed in the fore-aft and side-side directions are utilized for comparison. It is found that the 3d-PTMD with a mass ratio of 2% can improve the mitigation of the root mean square and peak response by around 10% when compared with the dual linear TMDs in controlling the bi-directional vibration of the offshore wind turbines under misaligned wind, wave and seismic loading.
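
    For orientation, the classical Den Hartog tuning of a single TMD with the paper's 2% mass ratio can be sketched as follows; the 3d-PTMD design in the paper comes from a numerical search, so this is only a textbook baseline with assumed inputs:

```python
import math

def den_hartog_tmd(primary_mass_kg, primary_freq_hz, mass_ratio=0.02):
    """Classical Den Hartog tuning of a single tuned mass damper.

    mass_ratio = m_tmd / m_primary. For an undamped primary structure under harmonic
    forcing, the optimal frequency ratio is 1/(1+mu) and the optimal damping ratio is
    sqrt(3 mu / (8 (1+mu)^3)). Returns the TMD mass, spring stiffness and dashpot rate.
    """
    mu = mass_ratio
    m_tmd = mu * primary_mass_kg
    f_tmd_hz = primary_freq_hz / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    omega_tmd = 2.0 * math.pi * f_tmd_hz
    k_tmd = m_tmd * omega_tmd ** 2
    c_tmd = 2.0 * zeta_opt * m_tmd * omega_tmd
    return m_tmd, k_tmd, c_tmd
```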

  17. VizieR Online Data Catalog: Grids of stellar models V. (Meynet+ 1994)

    NASA Astrophysics Data System (ADS)

    Meynet, G.; Maeder, A.; Schaller, G.; Schaerer, D.; Charbonnel, C.

    1993-09-01

    Most outputs of massive star evolution critically depend on the mass loss rates. In order to broaden the comparison basis and to illustrate the effects of different mass loss rates, we have computed new sets of models, with initial masses between 12 and 120 M⊙, and metallicities, Z, between 0.001 and 0.040, with a mass loss rate increased by a factor of two during the phases when the stellar winds are believed to be essentially driven by the radiation pressure. A moderate core-overshooting and the new radiative opacities from Iglesias et al. (1992ApJ...397..717I) and Kurucz (1991) were taken into account. These models complete the homogeneous and extended theoretical database formed by the previous grids of this series, computed by Schaller et al. (1992, Cat. J/A+AS/96/269) for Z=0.020 and Z=0.001, by Schaerer et al. (1992, Cat. J/A+AS/98/523; 1993, Cat. J/A+AS/102/339) for Z=0.008 and Z=0.040 and by Charbonnel et al. (1993, Cat. J/A+AS/101/415) for Z=0.004. This paper closes this series. Of particular interest is the predicted behaviour of metal rich stars such as may be found in the inner regions of our Galaxy. New evolutionary connexions are found, in particular we show that the most massive and metal rich stars may spend a relatively long time as He and N enriched stars and may even end their evolution as white dwarfs. (33 data files).

  18. A theoretical study of mixing downstream of transverse injection into a supersonic boundary layer

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Zelazny, S. W.

    1972-01-01

    A theoretical and analytical study was made of mixing downstream of transverse hydrogen injection, from single and multiple orifices, into a Mach 4 air boundary layer over a flat plate. Numerical solutions to the governing three-dimensional, elliptic boundary layer equations were obtained using a general purpose computer program founded upon a finite element solution algorithm. A prototype three-dimensional turbulent transport model was developed using mixing length theory in the wall region and the mass defect concept in the outer region. Excellent agreement between the computed flow field and experimental data for a jet/freestream dynamic pressure ratio of unity was obtained in the centerplane region of the single-jet configuration. Poorer agreement off centerplane suggests an inadequacy of the extrapolated two-dimensional turbulence model. Considerable improvement in off-centerplane computational agreement occurred for a multi-jet configuration, using the same turbulent transport model.
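
    In the wall region, mixing-length closures of this type take the eddy viscosity as the square of a length scale times the local mean strain rate. The minimal sketch below illustrates only that wall-region closure, not the mass-defect outer model or the full three-dimensional formulation; the velocity profile is a made-up example and the von Karman constant is the usual 0.41.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def eddy_viscosity_mixing_length(y, u):
    """Wall-region mixing-length closure: nu_t = (kappa*y)^2 * |du/dy|.
    y: wall-normal coordinates [m], u: mean streamwise velocity [m/s]."""
    dudy = np.gradient(u, y)
    l_mix = KAPPA * y
    return (l_mix ** 2) * np.abs(dudy)

# Illustrative log-law-like near-wall profile, just to exercise the closure.
y = np.linspace(1e-4, 5e-3, 50)
u = 30.0 + 5.0 * np.log(y / y[0])
print(eddy_viscosity_mixing_length(y, u)[:5])
```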

  19. Many Masses on One Stroke:. Economic Computation of Quark Propagators

    NASA Astrophysics Data System (ADS)

    Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus

    The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ-D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become — in the critical regime of small quark masses — competitive to BiCGStab and significantly better than the standard MR method, with optimal relaxation factor, and CG as applied to the normal equations.

  20. Simulation of Power Collection Dynamics for Simply Supported Power Rail

    DOT National Transportation Integrated Search

    1972-11-01

    The mathematical model of a sprung mass moving along a simply supported beam is used to analyze the dynamics of a power-collection system. A computer simulation of one-dimensional motion is used to demonstrate the phenomenon of collector-power rail i...

  1. Numerical Modeling of Saturated Boiling in a Heated Tube

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; LeClair, Andre; Hartwig, Jason

    2017-01-01

    This paper describes a mathematical formulation and numerical solution of boiling in a heated tube. The mathematical formulation involves a discretization of the tube into a flow network consisting of fluid nodes and branches and a thermal network consisting of solid nodes and conductors. In the fluid network, the mass, momentum and energy conservation equations are solved and in the thermal network, the energy conservation equation of solids is solved. A pressure-based, finite-volume formulation has been used to solve the equations in the fluid network. The system of equations is solved by a hybrid numerical scheme which solves the mass and momentum conservation equations by a simultaneous Newton-Raphson method and the energy conservation equation by a successive substitution method. The fluid network and thermal network are coupled through heat transfer between the solid and fluid nodes which is computed by Chen's correlation of saturated boiling heat transfer. The computer model is developed using the Generalized Fluid System Simulation Program and the numerical predictions are compared with test data.
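
    The hybrid strategy, Newton-Raphson on the flow equations and successive substitution on the energy equation, can be illustrated on a much smaller problem than the paper's network. The sketch below is a generic toy, not the Generalized Fluid System Simulation Program: a single branch with a lumped friction pressure-drop relation and a wall heat input. All property values and the heat transfer coefficient are made-up placeholders, and Chen's boiling correlation is not reproduced.

```python
# Toy branch: fixed inlet/outlet pressures, unknown mass flow mdot and outlet temperature.
p_in, p_out = 2.0e5, 1.0e5      # Pa, hypothetical boundary pressures
K = 5.0e6                        # Pa/(kg/s)^2, lumped friction coefficient (assumed)
cp = 4200.0                      # J/(kg K), liquid specific heat (assumed)
T_in, T_wall = 300.0, 400.0      # K
hA = 500.0                       # W/K, placeholder for h*A from a heat transfer correlation

def momentum_residual(mdot):
    # Momentum balance for the branch: driving pressure drop balances friction.
    return (p_in - p_out) - K * mdot * abs(mdot)

def solve_flow(mdot0=0.05, tol=1e-10):
    # Newton-Raphson iteration on the flow residual.
    mdot = mdot0
    for _ in range(50):
        r = momentum_residual(mdot)
        drdm = -2.0 * K * abs(mdot)
        step = r / drdm
        mdot -= step
        if abs(step) < tol:
            break
    return mdot

def update_temperature(mdot, T_out_guess):
    # Successive substitution on the energy balance:
    # mdot*cp*(T_out - T_in) = hA*(T_wall - 0.5*(T_in + T_out)).
    T_bulk = 0.5 * (T_in + T_out_guess)
    q = hA * (T_wall - T_bulk)
    return T_in + q / (mdot * cp)

mdot = solve_flow()
T_out = T_in
for _ in range(30):                       # outer successive-substitution loop
    T_out = update_temperature(mdot, T_out)
print(f"mdot = {mdot:.4f} kg/s, T_out = {T_out:.2f} K")
```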

  2. The use of multigrid techniques in the solution of the Elrod algorithm for a dynamically loaded journal bearing. M.S. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.

    1988-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed, utilizing a multigrid iterative technique. The code is compared with a presently existing direct solution in terms of computational time and accuracy. The model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via liquid striations. The mixed nature of the equations (elliptic in the full film zone and nonelliptic in the cavitated zone), coupled with the dynamic aspects of the problem, creates interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
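
    The multigrid idea itself, smoothing on a fine grid, transferring the residual to a coarser grid, solving there, and interpolating the correction back, is independent of the Elrod cavitation model. The sketch below is a standard V-cycle for a one-dimensional Poisson problem, offered only as a generic illustration of the technique; it is not the thesis code and none of its parameters come from the bearing problem.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    # Weighted-Jacobi relaxation for -u'' = f with homogeneous Dirichlet BCs.
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = u_new
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction onto every other point.
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec, n_fine):
    # Linear-interpolation prolongation back to the fine grid.
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if u.size - 1 <= 2:
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])  # direct solve of the single unknown
        return u
    u = smooth(u, f, h)
    ec = v_cycle(np.zeros(restrict(residual(u, f, h)).size), restrict(residual(u, f, h)), 2 * h)
    u = u + prolong(ec, u.size)
    return smooth(u, f, h)

# Solve -u'' = pi^2 sin(pi x); exact solution u = sin(pi x).
N = 128
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(N + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / N)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```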

  3. Simulating the Gradually Deteriorating Performance of an RTG

    NASA Technical Reports Server (NTRS)

    Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry

    2008-01-01

    Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
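
    The last step in that chain, from thermocouple open-circuit voltage and internal resistance to power delivered at a specified load voltage, reduces to an elementary equivalent-circuit relation. A minimal sketch follows; the numbers are illustrative placeholders, not Degra parameters.

```python
def rtg_output_power(v_oc, r_internal, v_load):
    """Electrical output of a thermoelectric generator driven into a fixed
    load voltage: the current is set by the difference between open-circuit
    voltage and load voltage across the internal resistance (simple
    equivalent-circuit view, illustrative only)."""
    current = (v_oc - v_load) / r_internal   # A
    return v_load * current                  # W delivered to the load

# Hypothetical values: 60 V open-circuit, 2.2 ohm internal resistance, 28 V bus.
print(rtg_output_power(v_oc=60.0, r_internal=2.2, v_load=28.0))
```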

  4. Fully-Coupled Dynamical Jitter Modeling of Momentum Exchange Devices

    NASA Astrophysics Data System (ADS)

    Alcorn, John

    A primary source of spacecraft jitter is due to mass imbalances within momentum exchange devices (MEDs) used for fine pointing, such as reaction wheels (RWs) and variable-speed control moment gyroscopes (VSCMGs). Although these effects are often characterized through experimentation in order to validate pointing stability requirements, it is of interest to include jitter in a computer simulation of the spacecraft in the early stages of spacecraft development. An estimate of jitter amplitude may be found by modeling MED imbalance torques as external disturbance forces and torques on the spacecraft. In this case, MED mass imbalances are lumped into static and dynamic imbalance parameters, allowing jitter force and torque to be simply proportional to wheel speed squared. A physically realistic dynamic model may be obtained by defining mass imbalances in terms of a wheel center of mass location and inertia tensor. The fully-coupled dynamic model allows for momentum and energy validation of the system. This is often critical when modeling additional complex dynamical behavior such as flexible dynamics and fuel slosh. Furthermore, it is necessary to use the fully-coupled model in instances where the relative mass properties of the spacecraft with respect to the RWs cause the simplified jitter model to be inaccurate. This thesis presents a generalized approach to MED imbalance modeling of a rigid spacecraft hub with N RWs or VSCMGs. A discussion is included to convert from manufacturer specifications of RW imbalances to the parameters introduced within each model. Implementations of the fully-coupled RW and VSCMG models derived within this thesis are released open-source as part of the Basilisk astrodynamics software.
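
    In the simplified (uncoupled) jitter model mentioned above, the disturbance amplitude scales with wheel speed squared through lumped static and dynamic imbalance parameters. A minimal sketch of that simplified model is given below; the imbalance values are typical manufacturer-style numbers chosen for illustration, and the fully-coupled formulation developed in the thesis is not reproduced here.

```python
import numpy as np

def rw_jitter(omega_rpm, Us, Ud, t):
    """Simplified reaction-wheel jitter model: force and torque rotate at the
    wheel spin rate and scale with speed squared.
    Us: static imbalance [kg*m], Ud: dynamic imbalance [kg*m^2]."""
    omega = omega_rpm * 2.0 * np.pi / 60.0           # spin rate [rad/s]
    phase = omega * t
    f_amp = Us * omega ** 2                          # radial force amplitude [N]
    t_amp = Ud * omega ** 2                          # wobble torque amplitude [N*m]
    force = f_amp * np.array([np.cos(phase), np.sin(phase), 0.0])
    torque = t_amp * np.array([np.cos(phase), np.sin(phase), 0.0])
    return force, torque

# Illustrative numbers: 0.5 g*cm static and 10 g*cm^2 dynamic imbalance at 3000 rpm.
f, tq = rw_jitter(omega_rpm=3000.0, Us=0.5e-5, Ud=10e-7, t=0.01)
print(f, tq)
```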

  5. The influence of topographic feedback on a coupled mass balance and ice-flow model for Vestfonna ice-cap, Svalbard

    NASA Astrophysics Data System (ADS)

    Schäfer, Martina; Möller, Marco; Zwinger, Thomas; Moore, John

    2016-04-01

    Using a coupled simulation set-up between a mass balance model, forced by statistical climate data and downscaled to ice-cap resolution, and an ice-dynamic model, we study coupling effects for the Vestfonna ice cap, Nordaustlandet, Svalbard, by analysing the impacts of different imposed coupling intervals on mass-balance and sea-level rise (SLR) projections. Based on a method to estimate errors introduced by different coupling schemes, we find that neglecting the topographic feedback in the coupling leads to underestimations of 10-20% in SLR projections on century time-scales in our model compared to full coupling (i.e., exchange of properties using the smallest occurring time-step). Using the same method, it is also shown that parametrising the mass-balance adjustment for changes in topography using lapse rates is a computationally cost-effective and reasonably accurate alternative when applied to an ice cap like Vestfonna. We test the forcing imposed by different emission pathways (RCP 2.4, 4.5, 6.0 and 8.5). For most of them, over the time-period explored (2000-2100), the fast-flowing outlet glaciers have a decreasing impact on SLR because they decelerate and their mass flux is reduced as they thin and retreat from the coast, detaching from the ocean and thereby losing their major mass drainage mechanism, i.e., calving.

  6. Dynamic behavior and deformation analysis of the fish cage system using mass-spring model

    NASA Astrophysics Data System (ADS)

    Lee, Chun Woo; Lee, Jihoon; Park, Subong

    2015-06-01

    Fish cage systems are influenced by various oceanic conditions, and the movements and deformation of the system caused by external forces can affect the safety of the system itself, as well as the species of fish being cultivated. The structural durability of such systems against environmental factors has been a major concern for marine aquaculture. In this research, a mathematical model and a simulation method were presented for analyzing the performance of a large-scale fish cage system influenced by current and waves. The cage system consisted of netting, mooring ropes, floats, sinkers and a floating collar. All elements were modeled using the mass-spring approach: the structures were divided into finite elements, mass points were placed at the mid-point of each element, and the mass points were connected by massless springs. External and internal forces were applied to each mass point, and the total force was calculated at every integration step. The computation method was applied to the dynamic simulation of actual fish cage systems rigged with synthetic fiber and copper wire, simultaneously influenced by current and waves. We also sought a relevant ratio between the buoyancy and sinking force of the fish cages. The simulation results provide improved understanding of the behavior of the structure and valuable information concerning the optimum ratio of buoyancy to sinking force as a function of current speed.
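
    The lumped mass-spring idea, dividing the structure into elements, placing a point mass at each element, connecting the masses with massless springs, and summing external and internal forces at every integration step, can be sketched compactly. The toy below integrates a single vertical chain of mass points in a steady current; the drag, stiffness and mass values are placeholder assumptions, not the parameters of the cited cage model.

```python
import numpy as np

# Toy mass-spring chain: 10 mass points hanging from a fixed float in a steady current.
n = 10
m = 0.05                        # kg per mass point (assumed)
k = 500.0                       # N/m spring stiffness between neighbours (assumed)
L0 = 0.2                        # m, unstretched spring length
Cd_A = 0.01                     # m^2, lumped drag coefficient times area (assumed)
rho = 1025.0                    # kg/m^3, sea water density
g = np.array([0.0, -9.81])
current = np.array([0.5, 0.0])  # m/s ambient current

pos = np.array([[0.0, -L0 * i] for i in range(n + 1)])  # node 0 is clamped to the float
vel = np.zeros_like(pos)
dt = 1e-3

for step in range(20000):
    forces = np.zeros_like(pos)
    # Internal spring forces between neighbouring mass points.
    for i in range(n):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - L0) * d / length
        forces[i] += f
        forces[i + 1] -= f
    # External forces: gravity and quadratic drag relative to the current.
    rel = current - vel
    drag = 0.5 * rho * Cd_A * np.linalg.norm(rel, axis=1, keepdims=True) * rel
    forces += m * g + drag
    forces[0] = 0.0                          # top node clamped to the float
    vel += dt * forces / m
    vel[0] = 0.0
    pos += dt * vel

print("steady-state position of the bottom node:", pos[-1])
```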

  7. Simplified phenomenology for colored dark sectors

    NASA Astrophysics Data System (ADS)

    El Hedri, Sonia; Kaminska, Anna; de Vries, Maikel; Zurita, Jose

    2017-04-01

    We perform a general study of the relic density and LHC constraints on simplified models where the dark matter coannihilates with a strongly interacting particle X. In these models, the dark matter depletion is driven by the self-annihilation of X to pairs of quarks and gluons through the strong interaction. The phenomenology of these scenarios therefore only depends on the dark matter mass and the mass splitting between dark matter and X as well as the quantum numbers of X. In this paper, we consider simplified models where X can be either a scalar, a fermion or a vector, as well as a color triplet, sextet or octet. We compute the dark matter relic density constraints taking into account Sommerfeld corrections and bound state formation. Furthermore, we examine the restrictions from thermal equilibrium, the lifetime of X and the current and future LHC bounds on X pair production. All constraints are comprehensively presented in the mass splitting versus dark matter mass plane. While the relic density constraints can lead to upper bounds on the dark matter mass ranging from 2 TeV to more than 10 TeV across our models, the prospective LHC bounds range from 800 to 1500 GeV. A full coverage of the strongly coannihilating dark matter parameter space would therefore require hadron colliders with significantly higher center-of-mass energies.

  8. Computational model of collagen turnover in carotid arteries during hypertension.

    PubMed

    Sáez, P; Peña, E; Tarbell, J M; Martínez, M A

    2015-02-01

    It is well known that biological tissues adapt their properties in response to different mechanical and chemical stimuli. The goal of this work is to study collagen turnover in the arterial tissue of hypertensive patients through a coupled computational mechano-chemical model. Although collagen turnover has been widely studied experimentally, computational models that take a mechano-chemical approach remain scarce. The present approach can be extended easily to study other aspects of bone remodeling or collagen degradation in heart diseases. The model can be divided into three different stages. First, we study the smooth muscle cell synthesis of different biological substances due to over-stretching during hypertension. Next, we study the mass transport of these substances along the arterial wall. The last step is to compute the turnover of collagen based on the amount of these substances in the arterial wall, which interact with each other to modify the turnover rate of collagen. We simulate this process in a finite element model of a real human carotid artery. The final results show the well-known stiffening of the arterial wall due to the increase in collagen content. Copyright © 2015 John Wiley & Sons, Ltd.

  9. A model for heat and mass input control in GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smartt, H.B.; Einerson, C.J.

    1993-05-01

    This work describes the derivation of a control model for electrode melting and heat and mass transfer from the electrode to the work piece in gas metal arc welding (GMAW). Specifically, a model is developed which allows electrode speed and welding speed to be calculated for given values of voltage and torch-to-base metal distance, as a function of the desired heat and mass input to the weldment. Heat input is given on a per unit weld length basis, and mass input is given in terms of the transverse cross-sectional area added to the weld bead (termed reinforcement). The relationship to prior work is discussed. The model was demonstrated using a computer-controlled welding machine and a proportional-integral (PI) controller receiving input from a digital filter. The difference between model-calculated welding current and measured current is used as controller feedback. The model is calibrated for use with carbon steel welding wire and base plate with Ar-CO2 shielding gas. Although the system is intended for application during spray transfer of molten metal from the electrode to the weld pool, satisfactory performance is also achieved during globular and streaming transfer. Data are presented showing steady-state and transient performance, as well as resistance to external disturbances.
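
    Heat input per unit weld length and deposited cross-sectional area (reinforcement) both follow from simple energy and mass balances once arc power, wire-feed speed and travel speed are known. The sketch below shows only those balances; the arc efficiency, wire diameter and operating point are illustrative assumptions, and the paper's electrode-melting and current relations are not reproduced.

```python
import math

def weld_inputs(voltage, current, travel_speed, wire_feed_speed,
                wire_diameter=1.2e-3, arc_efficiency=0.8):
    """Heat input per unit weld length [J/m] and deposited cross-sectional
    area (reinforcement) [m^2] for GMAW, from simple energy and mass
    balances. Efficiency and wire size are illustrative assumptions."""
    heat_per_length = arc_efficiency * voltage * current / travel_speed
    wire_area = math.pi * wire_diameter ** 2 / 4.0
    deposited_area = wire_area * wire_feed_speed / travel_speed
    return heat_per_length, deposited_area

# Hypothetical operating point: 28 V, 250 A, 8 mm/s travel, 120 mm/s wire feed.
H, A = weld_inputs(28.0, 250.0, travel_speed=8e-3, wire_feed_speed=120e-3)
print(f"heat input: {H / 1000:.1f} kJ/m, reinforcement area: {A * 1e6:.2f} mm^2")
```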

  10. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    NASA Astrophysics Data System (ADS)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States; based on Internet data centers, it provides a standard and open approach to sharing services over the network. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall well short of the actual needs for teaching resources. Therefore, cloud computing, which uses Internet technology to provide shared resources, has become an important means of sharing digital education resources in current higher education. Based on a cloud computing environment, this paper analyzes the existing problems in the sharing of digital educational resources in the independent colleges of Jiangxi Province. According to the sharing characteristics of cloud computing, namely mass storage, efficient operation and low cost, the authors explore and study the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.

  11. Properties of LEGUS Clusters Obtained with Different Massive-Star Evolutionary Tracks

    NASA Astrophysics Data System (ADS)

    Wofford, A.; Charlot, S.; Eldridge, J. J.

    We compute spectral libraries for populations of coeval stars using state-of-the-art massive-star evolutionary tracks that account for different astrophysics, including rotation and close binarity. Our synthetic spectra account for stellar and nebular contributions. We use our models to obtain E(B-V), age, and mass for six clusters in the spiral galaxy NGC 1566, which have ages of < 50 Myr and masses of > 5 x 10^4 M⊙ according to standard models. NGC 1566 was observed from the NUV to the I band as part of the imaging Treasury HST program LEGUS: Legacy Extragalactic UV Survey. We aim to establish i) whether the models provide reasonable fits to the data, ii) how well the models and photometry are able to constrain the cluster properties, and iii) how different the properties obtained with different models are.

  12. A STUDY OF PREDICTED BONE MARROW DISTRIBUTION ON CALCULATED MARROW DOSE FROM EXTERNAL RADIATION EXPOSURES USING TWO SETS OF IMAGE DATA FOR THE SAME INDIVIDUAL

    PubMed Central

    Caracappa, Peter F.; Chao, T. C. Ephraim; Xu, X. George

    2010-01-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogenous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computation dose models of the human body. PMID:19430219

  13. A study of predicted bone marrow distribution on calculated marrow dose from external radiation exposures using two sets of image data for the same individual.

    PubMed

    Caracappa, Peter F; Chao, T C Ephraim; Xu, X George

    2009-06-01

    Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogenous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computation dose models of the human body.
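
    The second method described above amounts to multiplying each bone segment's total marrow mass by an ICRP-style cellularity factor to obtain its red bone marrow mass. A minimal sketch follows; the segment masses and cellularity values are made-up illustrations, not the values used in the study.

```python
# Hypothetical total marrow masses per bone segment [g] and illustrative
# ICRP-style cellularity factors (fraction of marrow that is red/active).
total_marrow_g = {"cranium": 150.0, "ribs": 200.0, "femur_upper": 120.0}
cellularity = {"cranium": 0.38, "ribs": 0.70, "femur_upper": 0.25}

# Red marrow mass per segment = total marrow mass * cellularity factor.
red_marrow_g = {seg: total_marrow_g[seg] * cellularity[seg] for seg in total_marrow_g}
print(red_marrow_g, "total red marrow:", sum(red_marrow_g.values()), "g")
```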

  14. Gradient-free MCMC methods for dynamic causal modelling.

    PubMed

    Sengupta, Biswa; Friston, Karl J; Penny, Will D

    2015-05-15

    In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-based samplers are more efficient compared with random walk Metropolis sampler or slice-sampling; yet adaptive MCMC sampling is more promising in terms of compute time. Slice-sampling yields the highest number of independent samples from the target density - albeit at almost 1000% increase in computational time, in comparison to the most efficient algorithm (i.e., the adaptive MCMC sampler). Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
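
    Of the four samplers compared, random-walk Metropolis is the simplest to state. The sketch below samples from a generic log-density; it is not the DCM neural mass model inversion, and the proposal scale and toy target are arbitrary assumptions.

```python
import numpy as np

def random_walk_metropolis(log_density, x0, n_samples, step=0.5, rng=None):
    """Gradient-free random-walk Metropolis sampler with a Gaussian proposal."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp = log_density(x)
    chain = np.empty((n_samples, x.size))
    accepted = 0
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(x.size)
        logp_prop = log_density(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject step
            x, logp = proposal, logp_prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n_samples

# Toy target: a correlated 2-D Gaussian.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_target = lambda x: -0.5 * x @ cov_inv @ x
samples, acc_rate = random_walk_metropolis(log_target, [0.0, 0.0], 20000)
print("acceptance rate:", acc_rate)
```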

  15. The evolution of supermassive Population III stars

    NASA Astrophysics Data System (ADS)

    Haemmerlé, Lionel; Woods, T. E.; Klessen, Ralf S.; Heger, Alexander; Whalen, Daniel J.

    2018-02-01

    Supermassive primordial stars forming in atomically cooled haloes at z ~ 15-20 are currently thought to be the progenitors of the earliest quasars in the Universe. In this picture, the star evolves under accretion rates of 0.1-1 M⊙ yr^-1 until the general relativistic instability triggers its collapse to a black hole at masses of ~10^5 M⊙. However, the ability of the accretion flow to sustain such high rates depends crucially on the photospheric properties of the accreting star, because its ionizing radiation could reduce or even halt accretion. Here we present new models of supermassive Population III protostars accreting at rates of 0.001-10 M⊙ yr^-1, computed with the GENEVA stellar evolution code including general relativistic corrections to the internal structure. We compute for the first time evolutionary tracks in the mass range M > 10^5 M⊙. We use the polytropic stability criterion to estimate the mass at which the collapse occurs, which has been shown to give a lower limit of the actual mass at collapse in recent hydrodynamic simulations. We find that at accretion rates higher than 0.01 M⊙ yr^-1, the stars evolve as red, cool supergiants with surface temperatures below 10^4 K towards masses > 10^5 M⊙. Moreover, even with the lower rates 0.001 M⊙ yr^-1 < Mdot < 0.01 M⊙ yr^-1, the surface temperature is substantially reduced from 10^5 to 10^4 K for M ≳ 600 M⊙. Compared to previous studies, our results extend the range of masses and accretion rates at which the ionizing feedback remains weak, reinforcing the case for direct collapse as the origin of the first quasars. We provide numerical tables for the surface properties of our models.

  16. Re-analysis of Alaskan benchmark glacier mass-balance data using the index method

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.

    2010-01-01

    At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
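
    The degree-day model mentioned above relates ablation to the sum of positive daily mean temperatures through an empirical factor. A minimal sketch follows; the degree-day factor is a generic textbook-scale value, not one of the re-derived Gulkana or Wolverine factors.

```python
import numpy as np

def degree_day_ablation(daily_mean_temp_c, ddf_mm_per_degday=4.0):
    """Ablation [mm w.e.] from a classical degree-day model:
    ablation = DDF * sum of positive daily mean temperatures.
    The degree-day factor (DDF) here is an illustrative value."""
    positive_degree_days = np.sum(np.maximum(daily_mean_temp_c, 0.0))
    return ddf_mm_per_degday * positive_degree_days

# Hypothetical week of summer temperatures at an index site (deg C).
temps = np.array([2.5, 4.0, 6.1, 5.5, -0.5, 1.2, 3.8])
print("ablation over the week:", degree_day_ablation(temps), "mm w.e.")
```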

  17. Numerical Modeling of Exploitation Relics and Faults Influence on Rock Mass Deformations

    NASA Astrophysics Data System (ADS)

    Wesołowski, Marek

    2016-12-01

    This article presents numerical modeling results on the influence of fault planes and exploitation relics on the size and distribution of rock mass and ground surface deformations. Numerical calculations were performed using the finite difference program FLAC. To assess the changes taking place in a rock mass, an anisotropic elasto-plastic ubiquitous joint model was used, into which the Coulomb-Mohr strength (plasticity) condition was implemented. The article takes as an example the actual exploitation of the longwall 225 area in the seam 502wg of the "Pokój" coal mine. Computer simulations have shown that it is possible to determine the influence of fault planes and exploitation relics on the size and distribution of rock mass and its surface deformation. The main factor causing additional deformations of the area surface is the abandoned workings in the seam 502wd. These abandoned workings are the activation factor that caused additional subsidence and also, due to their significant dip, they form a layer on which the rock mass slides down in the direction of the extracted space. These factors are not taken into account by the geometrical and integral theories.

  18. Effects of heavy sea quarks at low energies.

    PubMed

    Bruno, Mattia; Finkenrath, Jacob; Knechtli, Francesco; Leder, Björn; Sommer, Rainer

    2015-03-13

    We present a factorization formula for the dependence of light hadron masses and low energy hadronic scales on the mass M of a heavy quark: apart from an overall mass-independent factor Q, ratios such as r_{0}(M)/r_{0}(0) are computable in perturbation theory at large M. The perturbative part is stable with respect to the loop order. Our nonperturbative Monte Carlo results, obtained in a model calculation where a doublet of heavy quarks is decoupled, match the perturbative prediction quantitatively. Upon taking ratios of different hadronic scales at the same mass, the perturbative function drops out and the ratios are given by the decoupled theory up to M^{-2} corrections. We verify, in the continuum limit, that the sea quark effects of quarks with masses around the charm mass are very small in such ratios.

  19. Composition of early planetary atmospheres - II. Coupled Dust and chemical evolution in protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Cridland, A. J.; Pudritz, Ralph E.; Birnstiel, Tilman; Cleeves, L. Ilsedore; Bergin, Edwin A.

    2017-08-01

    We present the next step in a series of papers devoted to connecting the composition of the atmospheres of forming planets with the chemistry of their natal evolving protoplanetary discs. The model presented here computes the coupled chemical and dust evolution of the disc and the formation of three planets per disc model. Our three canonical planet traps produce a Jupiter near 1 AU, a Hot Jupiter and a Super-Earth. We study the dependence of the final orbital radius, mass, and atmospheric chemistry of planets forming in disc models with initial disc masses that vary by 0.02 M⊙ above and below our fiducial model (M_disc,0 = 0.1 M⊙). We compute C/O and C/N for the atmospheres formed in our three models and find that C/O_planet ≈ C/O_disc, which does not vary strongly between different planets formed in our model. The nitrogen content of atmospheres can vary in planets that grow in different disc models. These differences are related to the formation history of the planet, the time and location that the planet accretes its atmosphere, and are encoded in the bulk abundance of NH3. These results suggest that future observations of atmospheric NH3 and an estimation of the planetary C/O and C/N can inform the formation history of particular planetary systems.

  20. Model Validation for Propulsion - On the TFNS and LES Subgrid Models for a Bluff Body Stabilized Flame

    NASA Technical Reports Server (NTRS)

    Wey, Thomas

    2017-01-01

    With advances in computational power and the availability of distributed computers, the use of even the most complex turbulent chemistry interaction models in combustors, and the coupled analysis of combustors and turbines, is now possible and increasingly affordable for realistic geometries. Recent, more stringent emission standards have prompted the development of more fuel-efficient and low-emission combustion systems for aircraft gas turbine applications. NOx emissions are known to increase dramatically with increasing flame temperature. It is well known that the major difficulty in modeling the turbulence-chemistry interaction lies in the high non-linearity of the reaction rate expressed in terms of the temperature and species mass fractions. The transport filtered density function (FDF) model and the linear eddy model (LEM), which both use local instantaneous values of the temperature and mass fractions, have been shown to often provide more accurate results for turbulent combustion. In the present work, the time-filtered Navier-Stokes (TFNS) approach, capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber, and two different subgrid models, LEM-like and EUPDF-like, capable of emulating the major processes occurring in the turbulence-chemistry interaction, are used to perform reacting flow simulations of a selected test case. The selected test case, from the Volvo Validation Rig, was documented by Sjunnesson.

  1. Computational Modeling of 3D Tumor Growth and Angiogenesis for Chemotherapy Evaluation

    PubMed Central

    Tang, Lei; van de Ven, Anne L.; Guo, Dongmin; Andasari, Vivi; Cristini, Vittorio; Li, King C.; Zhou, Xiaobo

    2014-01-01

    Solid tumors develop abnormally at spatial and temporal scales, giving rise to biophysical barriers that impact anti-tumor chemotherapy. This may increase the expenditure and time for conventional drug pharmacokinetic and pharmacodynamic studies. In order to facilitate drug discovery, we propose a mathematical model that couples three-dimensional tumor growth and angiogenesis to simulate tumor progression for chemotherapy evaluation. This application-oriented model incorporates complex dynamical processes including cell- and vascular-mediated interstitial pressure, mass transport, angiogenesis, cell proliferation, and vessel maturation to model tumor progression through multiple stages including tumor initiation, avascular growth, and transition from avascular to vascular growth. Compared to pure mechanistic models, the proposed empirical methods are not only easy to conduct but can provide realistic predictions and calculations. A series of computational simulations were conducted to demonstrate the advantages of the proposed comprehensive model. The computational simulation results suggest that solid tumor geometry is related to the interstitial pressure, such that tumors with high interstitial pressure are more likely to develop dendritic structures than those with low interstitial pressure. PMID:24404145

  2. Numerical formulation for the prediction of solid/liquid change of a binary alloy

    NASA Technical Reports Server (NTRS)

    Schneider, G. E.; Tiwari, S. N.

    1990-01-01

    A computational model is presented for the prediction of solid/liquid phase change energy transport including the influence of free convection fluid flow in the liquid phase region. The computational model considers the velocity components of all non-liquid phase change material control volumes to be zero but fully solves the coupled mass-momentum problem within the liquid region. The thermal energy model includes the entire domain and uses an enthalpy-like model and a recently developed method for handling the phase change interface nonlinearity. Convergence studies are performed and comparisons made with experimental data for two different problem specifications. The convergence studies indicate that grid independence was achieved, and the comparison with experimental data indicates excellent quantitative prediction of the melt fraction evolution. Qualitative data are also provided in the form of velocity vector diagrams and isotherm plots for selected times in the evolution of both problems. The computational costs incurred are quite low by comparison with previous efforts on solving these problems.

  3. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with droplet mass loading of 0.2. The gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.

  4. A computer model for the recombination zone of a microwave-plasma electrothermal rocket

    NASA Technical Reports Server (NTRS)

    Filpus, John W.; Hawley, Martin C.

    1987-01-01

    As part of a study of the microwave-plasma electrothermal rocket, a computer model of the flow regime below the plasma has been developed. A second-order model, including axial dispersion of energy and material and boundary conditions at infinite length, was developed to partially reproduce the absence of mass-flow rate dependence that was seen in experimental temperature profiles. To solve the equations of the model, a search technique was developed to find the initial derivatives. On integrating with a trial set of initial derivatives, the values and their derivatives were checked to judge whether the values were likely to attain values outside the practical regime, and hence, the boundary conditions at infinity were likely to be violated. Results are presented and directions for further development are suggested.

  5. Dust grains from the heart of supernovae

    NASA Astrophysics Data System (ADS)

    Bocchio, M.; Marassi, S.; Schneider, R.; Bianchi, S.; Limongi, M.; Chieffi, A.

    2016-03-01

    Dust grains are classically thought to form in the winds of asymptotic giant branch (AGB) stars. However, there is increasing evidence today for dust formation in supernovae (SNe). To establish the relative importance of these two classes of stellar dust sources, it is important to know the fraction of freshly formed dust in SN ejecta that is able to survive the passage of the reverse shock and be injected into the interstellar medium. With this aim, we have developed a new code, GRASH_Rev, that follows the dynamics of dust grains in the shocked SN ejecta and computes the time evolution of the mass, composition, and size distribution of the grains. We considered four well-studied SNe in the Milky Way and Large Magellanic Cloud: SN 1987A, Cas A, the Crab Nebula, and N49. These sources have been observed with both Spitzer and Herschel, and the multiwavelength data allow a better assessment of the mass of warm and cold dust associated with the ejecta. For each SN, we first identified the best explosion model, using the mass and metallicity of the progenitor star, the mass of 56Ni, the explosion energy, and the circumstellar medium density inferred from the data. We then ran a recently developed dust formation model to compute the properties of freshly formed dust. Starting from these input models, GRASH_Rev self-consistently follows the dynamics of the grains, considering the effects of the forward and reverse shocks, and predicts the time evolution of the dust mass, composition, and size distribution in the shocked and unshocked regions of the ejecta. All the simulated models agree well with observations. Our study suggests that SN 1987A is too young for the reverse shock to have affected the dust mass. Hence the observed dust mass of 0.7-0.9 M⊙ in this source can safely be considered indicative of the mass of freshly formed dust in SN ejecta. Conversely, in the other three SNe, the reverse shock has already destroyed between 10% and 40% of the initial dust mass. However, the largest dust mass destruction is predicted to occur between 10^3 and 10^5 yr after the explosions. Since the oldest SN in the sample has an estimated age of 4800 yr, current observations can only provide an upper limit to the mass of SN dust that will enrich the interstellar medium, the so-called effective dust yield. We find that only 1-8% of the currently observed mass will survive, resulting in an average SN effective dust yield of (1.55 ± 1.48) × 10^-2 M⊙. This agrees well with the values adopted in chemical evolution models that consider the effect of the SN reverse shock. We discuss the astrophysical implications of our results for dust enrichment in local galaxies and at high redshift.

  6. Laboratory Experiments and Modeling of Pooled NAPL Dissolution in Porous Media

    NASA Astrophysics Data System (ADS)

    Copty, N. K.; Sarikurt, D. A.; Gokdemir, C.

    2017-12-01

    The dissolution of non-aqueous phase liquids (NAPLs) entrapped in porous media is commonly modeled at the continuum scale as the product of a chemical potential and an interphase mass transfer coefficient, the latter expressed in terms of Sherwood correlations that are related to flow and porous media properties. Because of the lack of precise estimates of the interface area separating the NAPL and aqueous phases, numerous studies have lumped the interfacial area into the interphase mass transfer coefficient. In this paper, controlled dissolution experiments from a pooled NAPL were conducted. The immobile NAPL mass was placed at the bottom of a flow cell filled with porous media, with water flowing on top. Effluent aqueous phase concentrations were measured for a wide range of aqueous phase velocities and for two types of porous media. To interpret the experimental results, a two-dimensional pore network model of the NAPL dissolution was developed. The well-defined geometry of the NAPL-water interface and the observed effluent concentrations were used to compute best-fit mass transfer coefficients and non-lumped Sherwood correlations. Comparing the concentrations predicted with the pore network model to simple, previously used one-dimensional analytic solutions indicates that the analytic model, which ignores transverse dispersion, can lead to over-estimation of the mass transfer coefficient. The predicted Sherwood correlations are also compared to previously published data, and implications for NAPL remediation strategies are discussed.
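
    In this framework, the dissolution flux across the NAPL-water interface is the product of a mass transfer coefficient and the departure from the solubility limit, with the coefficient usually reported through a dimensionless Sherwood number. A minimal sketch of that bookkeeping is shown below; the numerical values (solubility, diffusivity, grain size, Sherwood number) are generic placeholders, not the correlations fitted in the study.

```python
def dissolution_flux(sherwood, diffusivity, length_scale, c_solubility, c_bulk):
    """Interphase dissolution flux [kg/(m^2 s)] from a Sherwood-number
    parameterisation: k = Sh * D / L and flux = k * (C_s - C)."""
    k = sherwood * diffusivity / length_scale        # mass transfer coefficient [m/s]
    return k * (c_solubility - c_bulk)

# Illustrative TCE-like numbers: Sh = 5, D = 1e-9 m^2/s, grain size 1 mm,
# solubility 1.1 kg/m^3, bulk aqueous concentration 0.2 kg/m^3.
print(dissolution_flux(5.0, 1.0e-9, 1.0e-3, 1.1, 0.2))
```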

  7. SLHAplus: A library for implementing extensions of the standard model

    NASA Astrophysics Data System (ADS)

    Bélanger, G.; Christensen, Neil D.; Pukhov, A.; Semenov, A.

    2011-03-01

    We provide a library to facilitate the implementation of new models in codes such as matrix element and event generators or codes for computing dark matter observables. The library contains an SLHA reader routine as well as diagonalisation routines. This library is available in CalcHEP and micrOMEGAs. The implementation of models based on this library is supported by LanHEP and FeynRules. Program summaryProgram title: SLHAplus_1.3 Catalogue identifier: AEHX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6283 No. of bytes in distributed program, including test data, etc.: 52 119 Distribution format: tar.gz Programming language: C Computer: IBM PC, MAC Operating system: UNIX (Linux, Darwin, Cygwin) RAM: 2000 MB Classification: 11.1 Nature of problem: Implementation of extensions of the standard model in matrix element and event generators and codes for dark matter observables. Solution method: For generic extensions of the standard model we provide routines for reading files that adopt the standard format of the SUSY Les Houches Accord (SLHA) file. The procedure has been generalized to take into account an arbitrary number of blocks so that the reader can be used in generic models including non-supersymmetric ones. The library also contains routines to diagonalize real and complex mass matrices with either unitary or bi-unitary transformations as well as routines for evaluating the running strong coupling constant, running quark masses and effective quark masses. Running time: 0.001 sec

  8. The dynamics of superclusters - Initial determination of the mass density of the universe at large scales

    NASA Technical Reports Server (NTRS)

    Ford, H. C.; Ciardullo, R.; Harms, R. J.; Bartko, F.

    1981-01-01

    The radial velocities of cluster members of two rich, large superclusters have been measured in order to probe the supercluster mass densities, and simple evolutionary models have been computed to place limits upon the mass density within each supercluster. These superclusters represent true physical associations, about 100 Mpc in size, seen presently at an early stage of evolution. One supercluster is weakly bound, the other probably barely bound, but possibly marginally unbound. Gravity has noticeably slowed the Hubble expansion of both superclusters. Galaxy surface-density counts and the density enhancement of Abell clusters within each supercluster were used to derive the ratio of mass densities of the superclusters to the mean field mass density. The results strongly exclude a closed universe.

  9. Computer-aided controllability assessment of generic manned Space Station concepts

    NASA Technical Reports Server (NTRS)

    Ferebee, M. J.; Deryder, L. J.; Heck, M. L.

    1984-01-01

    NASA's Concept Development Group assessment methodology for the on-orbit rigid body controllability characteristics of each generic configuration proposed for the manned space station is presented; the preliminary results obtained represent the first step in the analysis of these eight configurations. Analytical computer models of each configuration were developed by means of the Interactive Design Evaluation of Advanced Spacecraft CAD system, which created three-dimensional geometry models of each configuration to establish dimensional requirements for module connectivity, payload accommodation, and Space Shuttle berthing; mass, center-of-gravity, inertia, and aerodynamic drag areas were then derived. Attention was also given to the preferred flight attitude of each station concept.

  10. Numerical Computation of Flame Spread over a Thin Solid in Forced Concurrent Flow with Gas-phase Radiation

    NASA Technical Reports Server (NTRS)

    Jiang, Ching-Biau; T'ien, James S.

    1994-01-01

    Excerpts from a paper describing the numerical examination of concurrent-flow flame spread over a thin solid in purely forced flow with gas-phase radiation are presented. The computational model solves the two-dimensional, elliptic, steady, and laminar conservation equations for mass, momentum, energy, and chemical species. Gas-phase combustion is modeled via a one-step, second-order, finite-rate Arrhenius reaction. Gas-phase radiation, assuming a gray non-scattering medium, is solved by an S-N discrete ordinates method. A simplified solid-phase treatment assumes a zeroth-order pyrolysis relation and includes radiative interaction between the surface and the gas phase.

  11. Impact of thermal energy storage properties on solar dynamic space power conversion system mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.; Coles-Hamilton, Carolyn E.; Lacy, Dovie E.

    1987-01-01

    A 16 parameter solar concentrator/heat receiver mass model is used in conjunction with Stirling and Brayton Power Conversion System (PCS) performance and mass computer codes to determine the effect of thermal energy storage (TES) material property changes on overall PCS mass as a function of steady state electrical power output. Included in the PCS mass model are component masses as a function of thermal power for: concentrator, heat receiver, heat exchangers (source unless integral with heat receiver, heat sink, regenerator), heat engine units with optional parallel redundancy, power conditioning and control (PC and C), PC and C radiator, main radiator, and structure. Critical TES properties are: melting temperature, heat of fusion, density of the liquid phase, and the ratio of solid-to-liquid density. Preliminary results indicate that even though overall system efficiency increases with TES melting temperature up to 1400 K for concentrator surface accuracies of 1 mrad or better, reductions in the overall system mass beyond that achievable with lithium fluoride (LiF) can be accomplished only if the heat of fusion is at least 800 kJ/kg and the liquid density is comparable to that of LiF (1880 kg/cu m).

  12. Impact of thermal energy storage properties on solar dynamic space power conversion system mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.; Coles-Hamilton, Carolyn E.; Lacy, Dovie E.

    1987-01-01

    A 16 parameter solar concentrator/heat receiver mass model is used in conjunction with Stirling and Brayton Power Conversion System (PCS) performance and mass computer codes to determine the effect of thermal energy storage (TES) material property changes on overall PCS mass as a function of steady state electrical power output. Included in the PCS mass model are component masses as a function of thermal power for: concentrator, heat receiver, heat exchangers (source unless integral with heat receiver, heat sink, regenerator), heat engine units with optional parallel redundancy, power conditioning and control (PC and C), PC and C radiator, main radiator, and structure. Critical TES properties are: melting temperature, heat of fusion, density of the liquid phase, and the ratio of solid-to-liquid density. Preliminary results indicate that even though overall system efficiency increases with TES melting temperature up to 1400 K for concentrator surface accuracies of 1 mrad or better, reductions in the overall system mass beyond that achievable with lithium fluoride (LiF) can be accomplished only if the heat of fusion is at least 800 kJ/kg and the liquid density is comparable to that of LiF (1800 kg/cu m).

  13. Exact N^3LO results for qq' → H + X

    DOE PAGES

    Anzai, Chihaya; Hasselhuhn, Alexander; Höschele, Maik; ...

    2015-07-27

    We compute the contribution to the total cross section for the inclusive production of a Standard Model Higgs boson induced by two quarks with different flavour in the initial state. Our calculation is exact in the Higgs boson mass and the partonic center-of-mass energy. Here, we describe the reduction to master integrals, the construction of a canonical basis, and the solution of the corresponding differential equations. Our analytic result contains both Harmonic Polylogarithms and iterated integrals with additional letters in the alphabet.

  14. The turbulent mean-flow, Reynolds-stress, and heat flux equations in mass-averaged dependent variables

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Rose, W. C.

    1973-01-01

    The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.

  15. Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source

    NASA Astrophysics Data System (ADS)

    Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.

    2017-11-01

    Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which will either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data centric approach to fitting the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows both higher accuracy in attenuation modelling, and demonstrate its superior quantitative imaging, with numerical chest and metal implant data, and validate it with real cone-beam CT measurements.
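
    The data-centric idea, fitting a piecewise-linear map from attenuation directly to the quantity of interest, can be illustrated with a simple calibration step. The sketch below uses made-up calibration pairs and shows only the piecewise-linear mapping, not the statistical reconstruction algorithm of the paper.

```python
import numpy as np

# Hypothetical calibration pairs: effective attenuation coefficient [1/cm]
# versus electron density relative to water, e.g. from phantom inserts.
mu_knots = np.array([0.00, 0.10, 0.19, 0.21, 0.48])
rho_e_knots = np.array([0.00, 0.50, 1.00, 1.07, 1.70])

def attenuation_to_electron_density(mu):
    """Piecewise-linear map from attenuation to relative electron density."""
    return np.interp(mu, mu_knots, rho_e_knots)

print(attenuation_to_electron_density(np.array([0.05, 0.20, 0.30])))
```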

  16. Terrestrial water storage variations and surface vertical deformation derived from GPS and GRACE observations in Nepal and Himalayas

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Shen, W.; Hwang, C.

    2015-12-01

    Because the Earth is elastic, its surface deforms vertically in response to hydrological mass changes on or near the surface. Continuous GPS (CGPS) records capture these surface vertical deformations, which carry significant information for estimating variations in terrestrial water storage. We compute the loading deformations at GPS stations based on synthetic models of seasonal water load distribution and then invert the synthetic GPS data for the surface mass distribution. We use GRACE gravity observations and hydrology models to evaluate seasonal water storage variability in Nepal and the Himalayas. The coherence among the GPS inversion results, GRACE and the hydrology models indicates that GPS can provide quantitative estimates of terrestrial water storage variations by inverting surface deformation observations. The annual peak-to-peak surface mass changes derived from the GPS and GRACE results reveal seasonal load oscillations of water, snow and ice. Meanwhile, the present uplift of Nepal and the Himalayas indicates hydrological mass loss. This study is supported by National 973 Project China (grant Nos. 2013CB733302 and 2013CB733305), NSFC (grant Nos. 41174011, 41429401, 41210006, 41128003, 41021061).

  17. A two-fluid model for avalanche and debris flows.

    PubMed

    Pitman, E Bruce; Le, Long

    2005-07-15

    Geophysical mass flows--debris flows, avalanches, landslides--can contain O(10^6-10^10) m^3 or more of material, often a mixture of soil and rocks with a significant quantity of interstitial fluid. These flows can be tens of meters in depth and hundreds of meters in length. The range of scales and the rheology of this mixture present significant modelling and computational challenges. This paper describes a depth-averaged 'thin layer' model of geophysical mass flows containing a mixture of solid material and fluid. The model is derived from a 'two-phase' or 'two-fluid' system of equations commonly used in engineering research. Phenomenological modelling and depth averaging combine to yield a tractable set of equations, a hyperbolic system that describes the motion of the two constituent phases. If the fluid inertia is small, a reduced model system that is easier to solve may be derived.

  18. A three-dimensional, time-dependent model of Mobile Bay

    NASA Technical Reports Server (NTRS)

    Pitts, F. H.; Farmer, R. C.

    1976-01-01

    A three-dimensional, time-variant mathematical model for momentum and mass transport in estuaries was developed and its solution implemented on a digital computer. The mathematical model is based on state and conservation equations applied to turbulent flow of a two-component, incompressible fluid having a free surface. Thus, buoyancy effects caused by density differences between the fresh and salt water, inertia from the river and tidal currents, and differences in hydrostatic head are taken into account. The conservation equations, which are partial differential equations, are solved numerically by an explicit, one-step finite difference scheme and the solutions displayed numerically and graphically. To test the validity of the model, a specific estuary for which scaled model and experimental field data are available, Mobile Bay, was simulated. Comparisons of velocity, salinity and water level data show that the model is valid and a viable means of simulating the hydrodynamics and mass transport in non-idealized estuaries.
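
    For illustration, a one-dimensional, explicit one-step finite-difference step for salinity transport is sketched below; it is a drastic simplification of the 3D momentum and mass transport model described above, with all grid sizes, velocities and diffusivities chosen purely as assumptions.

```python
import numpy as np

def explicit_salinity_step(s, u, dx, dt, kappa):
    """One explicit step (upwind advection, central diffusion) of the 1D
    salinity transport equation ds/dt + u ds/dx = kappa d2s/dx2."""
    s_new = s.copy()
    # Upwind advection (u > 0 assumed for simplicity) plus FTCS diffusion.
    s_new[1:-1] = (s[1:-1]
                   - u * dt / dx * (s[1:-1] - s[:-2])
                   + kappa * dt / dx**2 * (s[2:] - 2.0 * s[1:-1] + s[:-2]))
    return s_new

# Illustrative setup: fresh water upstream, salt water at the mouth.
nx, dx, dt = 100, 200.0, 10.0          # grid points, m, s
u, kappa = 0.3, 5.0                     # m/s, m^2/s
s = np.linspace(0.0, 35.0, nx)          # salinity in psu
for _ in range(1000):                   # advance 10,000 s
    s = explicit_salinity_step(s, u, dx, dt, kappa)
print(s[::20])
```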

  19. A high resolution model of linear trend in mass variations from DMT-2: Added value of accounting for coloured noise in GRACE data

    NASA Astrophysics Data System (ADS)

    Farahani, Hassan H.; Ditmar, Pavel; Inácio, Pedro; Didova, Olga; Gunter, Brian; Klees, Roland; Guo, Xiang; Guo, Jing; Sun, Yu; Liu, Xianglin; Zhao, Qile; Riva, Riccardo

    2017-01-01

    We present a high resolution model of the linear trend in the Earth's mass variations based on DMT-2 (Delft Mass Transport model, release 2). DMT-2 was produced primarily from K-Band Ranging (KBR) data of the Gravity Recovery And Climate Experiment (GRACE). It comprises a time series of monthly solutions complete to spherical harmonic degree 120. A novel feature in its production was the accurate computation and incorporation of stochastic properties of coloured noise when processing KBR data. The unconstrained DMT-2 monthly solutions are used to estimate the linear trend together with a bias, as well as annual and semi-annual sinusoidal terms. The linear term is further processed with an anisotropic Wiener filter, which uses full noise and signal covariance matrices. Given that noise in an unconstrained model of the trend is reduced substantially as compared to monthly solutions, the Wiener filter associated with the trend is much less aggressive than a Wiener filter applied to monthly solutions. Consequently, the trend estimate shows an enhanced spatial resolution. It allows signals in relatively small water bodies, such as the Aral Sea and Lake Ladoga, to be detected. Over the ice sheets, it allows for a clear identification of signals associated with some outlet glaciers or their groups. We compare the obtained trend estimate with the ones from the CSR-RL05 model using (i) the same approach based on monthly noise covariance matrices and (ii) a commonly-used approach based on the DDK-filtered monthly solutions. We use satellite altimetry data as independent control data. The comparison demonstrates a high spatial resolution of the DMT-2 linear trend. We link this to the usage of high-accuracy monthly noise covariance matrices, which is due to an accurate computation and incorporation of coloured noise when processing KBR data. A preliminary comparison of the linear trend based on DMT-2 with that computed from GSFC_global_mascons_v01 reveals, among other things, a high concentration of the signal along the coast for both models in areas like the ice sheets, the Gulf of Alaska, and Iceland.

  20. Numerical solutions of the Navier-Stokes equations for transonic afterbody flows

    NASA Technical Reports Server (NTRS)

    Swanson, R. C., Jr.

    1980-01-01

    The time dependent Navier-Stokes equations in mass averaged variables are solved for transonic flow over axisymmetric boattail plume simulator configurations. Numerical solution of these equations is accomplished with the unsplit explicit finite difference algorithm of MacCormack. A grid subcycling procedure and computer code vectorization are used to improve computational efficiency. The two layer algebraic turbulence models of Cebeci-Smith and Baldwin-Lomax are employed for investigating turbulence closure. Two relaxation models based on these baseline models are also considered. Results in the form of surface pressure distributions for three different circular arc boattails at two free stream Mach numbers are compared with experimental data. The pressures in the recirculating flow region for all separated cases are poorly predicted with the baseline turbulence models. Significant improvements in the predictions are usually obtained by using the relaxation models.
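
    The MacCormack predictor-corrector scheme mentioned above can be illustrated on a toy problem; the sketch below applies it to 1D linear advection on a periodic grid, which is only a stand-in for the full mass-averaged Navier-Stokes solver, and all parameter values are assumptions.

```python
import numpy as np

def maccormack_advection(u, c, dx, dt, nsteps):
    """MacCormack predictor-corrector steps for du/dt + c du/dx = 0
    on a periodic 1D grid (a toy stand-in for the full solver)."""
    for _ in range(nsteps):
        # Predictor: forward difference.
        u_pred = u - c * dt / dx * (np.roll(u, -1) - u)
        # Corrector: backward difference on the predicted field, then average.
        u = 0.5 * (u + u_pred - c * dt / dx * (u_pred - np.roll(u_pred, 1)))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)        # Gaussian pulse
u_final = maccormack_advection(u0, c=1.0, dx=x[1] - x[0], dt=0.002, nsteps=250)
print(u_final.max())
```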

  1. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    NASA Astrophysics Data System (ADS)

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take days, weeks, or even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or additional data modifications, compound computation times even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, and non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation point, without having to filter data into a synthesized grid.
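
    A minimal sketch of the vertical-line-element idea: the vertical attraction of a vertical line mass follows from integrating Newton's law along the element, g_z = G*lambda*(1/sqrt(r^2 + z_top^2) - 1/sqrt(r^2 + z_bot^2)). The density, element size and observation geometry below are illustrative assumptions, not the SIGMA implementation.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def gz_vertical_line(dx, dy, z_top, z_bot, line_density):
    """Vertical attraction (m/s^2) at the origin due to a vertical line element
    at horizontal offset (dx, dy), extending from depth z_top to z_bot
    (z positive downward), with linear mass density line_density (kg/m)."""
    r2 = dx * dx + dy * dy
    return G * line_density * (1.0 / np.sqrt(r2 + z_top**2)
                               - 1.0 / np.sqrt(r2 + z_bot**2))

# Illustrative example: a 100 m tall column of rock (2670 kg/m^3) approximated
# by a line element carrying the mass of a 10 m x 10 m prism, observed 50 m away.
lam = 2670.0 * 10.0 * 10.0            # kg per metre of column height
print(gz_vertical_line(50.0, 0.0, 1.0, 101.0, lam))
```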

  2. Development of an Advanced Computational Model for OMCVD of Indium Nitride

    NASA Technical Reports Server (NTRS)

    Cardelino, Carlos A.; Moore, Craig E.; Cardelino, Beatriz H.; Zhou, Ning; Lowry, Sam; Krishnan, Anantha; Frazier, Donald O.; Bachmann, Klaus J.

    1999-01-01

    An advanced computational model is being developed to predict the formation of indium nitride (InN) film from the reaction of trimethylindium (In(CH3)3) with ammonia (NH3). The components are introduced into the reactor in the gas phase within a background of molecular nitrogen (N2). Organometallic chemical vapor deposition occurs on a heated sapphire surface. The model simulates heat and mass transport with gas and surface chemistry under steady state and pulsed conditions. The development and validation of an accurate model for the interactions between the diffusion of gas phase species and surface kinetics is essential to enable the regulation of the process in order to produce a low defect material. The validation of the model will be performed in concert with a NASA-North Carolina State University project.

  3. BETR Global - A geographically explicit global-scale multimedia contaminant fate model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macleod, M.; Waldow, H. von; Tay, P.

    2011-04-01

    We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants using a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15° × 15° grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5).
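
    The steady-state mass-balance calculation referred to above amounts to solving a linear system over linked compartments. The sketch below uses a tiny, made-up three-compartment rate-constant matrix purely for illustration; it is not BETR Global's parameterisation.

```python
import numpy as np

# Illustrative 3-compartment system (air, water, soil). K[i, j] holds the
# first-order transfer rate (1/h) from compartment j to compartment i; the
# diagonal holds each compartment's total loss rate (transfers out + degradation).
K = np.array([
    [-0.60,  0.02,  0.01],
    [ 0.10, -0.15,  0.00],
    [ 0.20,  0.03, -0.05],
])
emissions = np.array([10.0, 0.0, 0.0])   # kg/h emitted into air

# Steady state of dm/dt = K m + e  =>  m = -K^(-1) e
masses = np.linalg.solve(-K, emissions)
print(dict(zip(["air", "water", "soil"], masses)))
```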

  4. Modal mass estimation from ambient vibrations measurement: A method for civil buildings

    NASA Astrophysics Data System (ADS)

    Acunzo, G.; Fiorini, N.; Mori, F.; Spina, D.

    2018-01-01

    A new method for estimating the modal mass ratios of buildings from unscaled mode shapes identified from ambient vibrations is presented. The method is based on the Multi Rigid Polygons (MRP) model, in which each floor of the building is ideally divided into several non-deformable polygons that move independently of each other. The whole mass of the building is concentrated at the centroids of the polygons and the experimental mode shapes are expressed in terms of rigid translations and rotations. In this way, the mass matrix of the building can be easily computed on the basis of simple information about the geometry and the materials of the structure. The modal mass ratios can then be obtained through the classical equations of structural dynamics. Ambient vibration measurements must be performed according to the chosen MRP model, using at least two biaxial accelerometers per polygon. After a brief illustration of the theoretical background of the method, numerical validations are presented, analysing the sensitivity of the method to different possible sources of error. Quality indexes are defined for evaluating the approximation of the modal mass ratios obtained from a given MRP model. The capability of the proposed model to be applied to real buildings is illustrated through two experimental applications. In the first one, a geometrically irregular reinforced concrete building is considered, using a calibrated Finite Element Model to validate the results of the method. The second application refers to a historical monumental masonry building, with a more complex geometry and less information available. In both cases, MRP models with a different number of rigid polygons per floor are compared.
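
    A sketch of the kind of calculation involved: given a lumped mass matrix and unscaled mode shapes, the standard effective-modal-mass expression gives the modal mass ratios. The building data below are a made-up three-storey example, not the MRP formulation of the paper.

```python
import numpy as np

def modal_mass_ratios(phi, masses):
    """Effective modal mass ratios for translational modes.

    phi    : (n_dof, n_modes) unscaled mode shapes in one horizontal direction
    masses : (n_dof,) lumped masses concentrated at the polygon centroids
    """
    M = np.diag(masses)
    r = np.ones(len(masses))                     # rigid-body influence vector
    ratios = []
    for k in range(phi.shape[1]):
        p = phi[:, k]
        m_eff = (p @ M @ r) ** 2 / (p @ M @ p)   # effective modal mass
        ratios.append(m_eff / masses.sum())
    return np.array(ratios)

# Illustrative 3-storey shear building with identical floor masses.
masses = np.array([2.0e5, 2.0e5, 2.0e5])         # kg
phi = np.array([[0.33, -0.75,  1.00],
                [0.67, -0.33, -1.32],
                [1.00,  1.00,  0.50]])            # unscaled mode shapes
print(modal_mass_ratios(phi, masses))
```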

  5. Experimental and CFD-PBM approach coupled with a simplified dynamic analysis of mass transfer in phenol biodegradation in a three phase system of an aerated two-phase partitioning bioreactor for environmental applications

    NASA Astrophysics Data System (ADS)

    Moradkhani, Hamed; Anarjan Kouchehbagh, Navideh; Izadkhah, Mir-Shahabeddin

    2017-03-01

    A three-dimensional transient model of a two-phase partitioning bioreactor, combining system hydrodynamics, two simultaneous mass transfer processes and microorganism growth, is developed using the computational fluid dynamics code FLUENT 6.2. The simulation is based on the standard k-ɛ Reynolds-averaged Navier-Stokes model. A population balance model is implemented in order to describe gas bubble coalescence, breakage and species transport in the reaction medium and to predict the oxygen volumetric mass transfer coefficient (kLa). Model results are verified against experimental data and show good agreement when 13 bubble size classes are taken into account. Flow behavior under different operational conditions is studied. At almost all impeller speeds and aeration intensities there were acceptable distributions of species caused by proper mixing. The dissolved oxygen percentage in the aqueous phase correlates directly with impeller speed, and any increase in aeration leads to faster saturation in shorter periods of time.

  6. Refining Models of L1527-IRS

    NASA Astrophysics Data System (ADS)

    Baker Metzler-Winslow, Elizabeth; Terebey, Susan

    2018-06-01

    This project examines the Class 0/Class I protostar L1527-IRS (hereafter referred to as L1527) in the interest of creating a more accurate computational model. In a Class 0/Class I protostar like L1527, the envelope is massive, the protostar is growing in mass, and the disk is a small fraction of the protostar mass. Recent work based on ALMA data indicates that L1527, located in the constellation Taurus (about 140 parsecs from Earth), has a mass of about 0.44 solar masses. Existing models were able to fit the spectral energy distribution of L1527 by assuming a puffed-up inner disk. However, the inclusion of the puffed-up disk results in a portion of the disk coinciding with the outflow cavities, a physically unsatisfying arrangement. This project tests models that decrease the size of the disk and increase the density of the outflow cavities (hypothesizing that some dust from the walls of the outflow cavities is swept up into the cavity itself) against existing observational data, and finds that these models fit the data relatively well.

  7. Radiative corrections to masses and couplings in universal extra dimensions

    NASA Astrophysics Data System (ADS)

    Freitas, Ayres; Kong, Kyoungchul; Wiegand, Daniel

    2018-03-01

    Models with an orbifolded universal extra dimension receive important loop-induced corrections to the masses and couplings of Kaluza-Klein (KK) particles. The dominant contributions stem from so-called boundary terms which violate KK number. Previously, only the parts of these boundary terms proportional to ln(Λ R) have been computed, where R is the radius of the extra dimension and Λ is the cut-off scale. However, for typical values of Λ R ∼ 10-50, the logarithms are not particularly large and non-logarithmic contributions may be numerically important. In this paper, these remaining finite terms are computed and their phenomenological impact is discussed. It is shown that the finite terms have a significant impact on the KK mass spectrum. Furthermore, one finds new KK-number violating interactions that do not depend on ln(Λ R) but are nevertheless non-zero. These lead to new production and decay channels for level-2 KK particles at colliders.

  8. Dynamic Discharge Arc Driver. [computerized simulation

    NASA Technical Reports Server (NTRS)

    Dannenberg, R. E.; Slapnicar, P. I.

    1975-01-01

    A computer program using nonlinear RLC circuit analysis was developed to accurately model the electrical discharge performance of the Ames 1-MJ energy storage and arc-driver system. Solutions of circuit parameters are compared with experimental circuit data and related to shock speed measurements. Computer analysis led to the concept of a Dynamic Discharge Arc Driver (DDAD) capable of increasing the range of operation of shock-driven facilities. Utilization of mass addition of the driver gas offers a unique means of improving driver performance. Mass addition acts to increase the arc resistance, which results in better electrical circuit damping with more efficient Joule heating, producing stronger shock waves. Preliminary tests resulted in an increase in shock Mach number from 34 to 39 in air at an initial pressure of 2.5 torr.

  9. Are metastases from metastases clinically relevant? Computer modelling of cancer spread in a case of hepatocellular carcinoma.

    PubMed

    Bethge, Anja; Schumacher, Udo; Wree, Andreas; Wedemann, Gero

    2012-01-01

    Metastasis formation remains an enigmatic process and one of the main questions recently asked is whether metastases are able to generate further metastases. Different models have been proposed to answer this question; however, their clinical significance remains unclear. Therefore a computer model was developed that permits quantitative comparison of the different models with clinical data and that additionally predicts the outcome of treatment interventions. The computer model is based on a discrete event simulation approach. On the basis of a case from an untreated patient with hepatocellular carcinoma and its multiple metastases in the liver, it was evaluated whether metastases are able to metastasise and, in particular, whether late disseminated tumour cells are still capable of forming metastases. Additionally, the resection of the primary tumour was simulated. The simulation results were compared with clinical data. The simulation results reveal that the number of metastases varies significantly between scenarios where metastases metastasise and scenarios where they do not. In contrast, the total tumour mass is nearly unaffected by the two different modes of metastasis formation. Furthermore, the results provide evidence that metastasis formation is an early event and that late disseminated tumour cells are still capable of forming metastases. The simulations also allow estimating how much the resection of the primary tumour delays the patient's death. The simulation results indicate that for this particular case of a hepatocellular carcinoma late metastases, i.e., metastases from metastases, are irrelevant in terms of total tumour mass. Hence metastases seeded from metastases are clinically irrelevant in our model system. Only the first metastases seeded from the primary tumour contribute significantly to the tumour burden and thus cause the patient's death.
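
    The comparison of seeding scenarios can be illustrated with a very simple growth-and-seeding simulation. The sketch below uses Gompertz tumour growth and a per-cell daily seeding probability; it is a simplified stand-in for the authors' discrete event engine, and the growth and seeding parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gompertz(n0, t, growth=0.006, n_max=7.3e10):
    """Gompertz tumour size (cells) after t days, starting from n0 cells."""
    return n_max * np.exp(np.log(n0 / n_max) * np.exp(-growth * t))

def simulate(days, seed_rate=1e-11, metastases_seed=True):
    """Count metastases when only the primary seeds (False) or when
    metastases themselves may also seed further metastases (True)."""
    tumours = [0.0]                       # birth times (days); index 0 = primary
    for day in range(days):
        seeders = tumours if metastases_seed else tumours[:1]
        for birth in list(seeders):
            cells = gompertz(1.0, day - birth)
            if rng.random() < seed_rate * cells:   # daily seeding probability
                tumours.append(float(day))
        if len(tumours) > 5000:           # keep the illustration tractable
            break
    return len(tumours) - 1

print("cascade on :", simulate(1200, metastases_seed=True))
print("cascade off:", simulate(1200, metastases_seed=False))
```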

  10. SEMI-EMPIRICAL MODELING OF THE PHOTOSPHERE, CHROMOSPHERE, TRANSITION REGION, AND CORONA OF THE M-DWARF HOST STAR GJ 832

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontenla, J. M.; Linsky, Jeffrey L.; Witbrod, Jesse

    Stellar radiation from X-rays to the visible provides the energy that controls the photochemistry and mass loss from exoplanet atmospheres. The important extreme ultraviolet (EUV) region (10–91.2 nm) is inaccessible and should be computed from a reliable stellar model. It is essential to understand the formation regions and physical processes responsible for the various stellar emission features to predict how the spectral energy distribution varies with age and activity levels. We compute a state-of-the-art semi-empirical atmospheric model and the emergent high-resolution synthetic spectrum of the moderately active M2 V star GJ 832 as the first of a series of models for stars with different activity levels. We construct a one-dimensional simple model for the physical structure of the star's chromosphere, chromosphere-corona transition region, and corona using non-LTE radiative transfer techniques and many molecular lines. The synthesized spectrum for this model fits the continuum and lines across the UV-to-optical spectrum. Particular emphasis is given to the emission lines at wavelengths shorter than 300 nm observed with the Hubble Space Telescope, which have important effects on the photochemistry of the exoplanet atmospheres. The FUV line ratios indicate that the transition region of GJ 832 is more biased to hotter material than that of the quiet Sun. The excellent agreement of our computed EUV luminosity with that obtained by two other techniques indicates that our model predicts reliable EUV emission from GJ 832. We find that the unobserved EUV flux of GJ 832, which heats the outer atmospheres of exoplanets and drives their mass loss, is comparable to that of the active Sun.

  11. Using the CIFIST grid of CO5BOLD 3D model atmospheres to study the effects of stellar granulation on photometric colours. I. Grids of 3D corrections in the UBVRI, 2MASS, HIPPARCOS, Gaia, and SDSS systems

    NASA Astrophysics Data System (ADS)

    Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kučinskas, A.; Prakapavičius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.

    2018-03-01

    Context. The atmospheres of cool stars are temporally and spatially inhomogeneous due to the effects of convection. The influence of this inhomogeneity, referred to as granulation, on colours has never been investigated over a large range of effective temperatures and gravities. Aim. We aim to study, in a quantitative way, the impact of granulation on colours. Methods: We use the CIFIST (Cosmological Impact of the FIrst Stars) grid of CO5BOLD (COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions, L = 2, 3) hydrodynamical models to compute emerging fluxes. These in turn are used to compute theoretical colours in the UBV RI, 2MASS, HIPPARCOS, Gaia and SDSS systems. Every CO5BOLD model has a corresponding one dimensional (1D) plane-parallel LHD (Lagrangian HydroDynamics) model computed for the same atmospheric parameters, which we used to define a "3D correction" that can be applied to colours computed from fluxes computed from any 1D model atmosphere code. As an example, we illustrate these corrections applied to colours computed from ATLAS models. Results: The 3D corrections on colours are generally small, of the order of a few hundredths of a magnitude, yet they are far from negligible. We find that ignoring granulation effects can lead to underestimation of Teff by up to 200 K and overestimation of gravity by up to 0.5 dex, when using colours as diagnostics. We have identified a major shortcoming in how scattering is treated in the current version of the CIFIST grid, which could lead to offsets of the order 0.01 mag, especially for colours involving blue and UV bands. We have investigated the Gaia and HIPPARCOS photometric systems and found that the (G - Hp), (BP - RP) diagram is immune to the effects of granulation. In addition, we point to the potential of the RVS photometry as a metallicity diagnostic. Conclusions: Our investigation shows that the effects of granulation should not be neglected if one wants to use colours as diagnostics of the stellar parameters of F, G, K stars. A limitation is that scattering is treated as true absorption in our current computations, thus our 3D corrections are likely an upper limit to the true effect. We are already computing the next generation of the CIFIST grid, using an approximate treatment of scattering. The appendix tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A68

  12. A modified homogeneous relaxation model for CO2 two-phase flow in vapour ejector

    NASA Astrophysics Data System (ADS)

    Haida, M.; Palacz, M.; Smolka, J.; Nowak, A. J.; Hafner, A.; Banasiak, K.

    2016-09-01

    In this study, the homogeneous relaxation model (HRM) for CO2 flow in a two-phase ejector was modified in order to increase the accuracy of the numerical simulations. The two-phase flow model was implemented in the effective computational tool ejectorPL for fully automated and systematic computations of various ejector shapes and operating conditions. The modification of the HRM was performed by changing the relaxation time and the constants included in the relaxation time equation, based on experimental results under operating conditions typical of supermarket refrigeration systems. The modified HRM was compared to the homogeneous equilibrium model (HEM) results on the basis of the motive nozzle and suction nozzle mass flow rates.

  13. Mesoscale Modeling of LX-17 Under Isentropic Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, H K; Willey, T M; Friedman, G

    Mesoscale simulations of LX-17 incorporating different equilibrium mixture models were used to investigate the unreacted equation-of-state (UEOS) of TATB. Candidate TATB UEOS were calculated using the equilibrium mixture models and benchmarked with mesoscale simulations of isentropic compression experiments (ICE). X-ray computed tomography (XRCT) data provided the basis for initializing the simulations with realistic microstructural details. Three equilibrium mixture models were used in this study. The single constituent with conservation equations (SCCE) model was based on a mass-fraction weighted specific volume and the conservation of mass, momentum, and energy. The single constituent equation-of-state (SCEOS) model was based on a mass-fraction weighted specific volume and the equation-of-state of the constituents. The kinetic energy averaging (KEA) model was based on a mass-fraction weighted particle velocity mixture rule and the conservation equations. The SCEOS model yielded the stiffest TATB EOS (0.121μ + 0.4958μ^2 + 2.0473μ^3) and, when incorporated in mesoscale simulations of the ICE, demonstrated the best agreement with VISAR velocity data for both specimen thicknesses. The SCCE model yielded a relatively more compliant EOS (0.1999μ - 0.6967μ^2 + 4.9546μ^3) and the KEA model yielded the most compliant EOS (0.1999μ - 0.6967μ^2 + 4.9546μ^3) of all the equilibrium mixture models. Mesoscale simulations with the lower density TATB adiabatic EOS data demonstrated the least agreement with VISAR velocity data.

  14. Modeling Gas-Particle Partitioning of SOA: Effects of Aerosol Physical State and RH

    NASA Astrophysics Data System (ADS)

    Zuend, A.; Seinfeld, J.

    2011-12-01

    Aged tropospheric aerosol particles contain mixtures of inorganic salts, acids, water, and a large variety of organic compounds. In liquid aerosol particles, non-ideal mixing of all species determines whether the condensed phase undergoes liquid-liquid phase separation or is stable in a single mixed phase, and whether it contains solid salts in equilibrium with their saturated solution. The extended thermodynamic model AIOMFAC is able to predict such phase states by representing the variety of organic components using functional groups within a group-contribution concept. The number and composition of different condensed phases impact the diversity of reaction media for multiphase chemistry and the gas-particle partitioning of semivolatile species. Recent studies show that under certain conditions biogenic and other organic-rich particles can be present in a highly viscous, semisolid or amorphous solid physical state, with consequences for reaction kinetics and mass transfer limitations. We present results of new gas-particle partitioning computations for aerosol chamber data using a model based on AIOMFAC activity coefficients and state-of-the-art vapor pressure estimation methods. Different environmental conditions in terms of temperature, relative humidity (RH), salt content, amount of precursor VOCs, and physical state of the particles are considered. We show how modifications of absorptive and adsorptive gas-particle mass transfer affect the total aerosol mass in the calculations and how the results of these modeling approaches compare to data from aerosol chamber experiments, such as alpha-pinene oxidation SOA. For a condensed phase in a mixed liquid state containing ammonium sulfate, the model predicts liquid-liquid phase separation up to high RH in the case of, on average, moderately hydrophilic organic compounds, such as first-generation oxidation products of alpha-pinene. The computations also reveal that treating liquid phases as ideal mixtures substantially overestimates the SOA mass, especially at high relative humidity.
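
    A sketch of the generic absorptive gas-particle partitioning iteration that underlies such SOA mass calculations, written in the effective-saturation-concentration form. The AIOMFAC activity coefficients and vapor-pressure estimates of the study are replaced here by prescribed C* values, and all concentrations are illustrative assumptions.

```python
import numpy as np

def partition(c_total, c_star, seed_mass=0.0, tol=1e-10):
    """Iteratively solve absorptive gas-particle partitioning.

    c_total : total (gas + particle) concentration of each product, ug/m^3
    c_star  : effective saturation concentration C* of each product, ug/m^3
              (in a full model, C* would carry activity coefficients)
    Returns the particle-phase concentrations.
    """
    c_oa = seed_mass + 0.5 * c_total.sum()      # initial guess of absorbing mass
    for _ in range(1000):
        xi = 1.0 / (1.0 + c_star / c_oa)        # partitioned fraction per species
        c_oa_new = seed_mass + (c_total * xi).sum()
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return c_total * xi

# Illustrative alpha-pinene SOA surrogate products (volatility-basis-set style).
c_total = np.array([2.0, 5.0, 8.0, 12.0])       # ug/m^3
c_star = np.array([0.1, 1.0, 10.0, 100.0])      # ug/m^3
print(partition(c_total, c_star, seed_mass=1.0))
```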

  15. Computational Fluid Dynamic Modeling of Zinc Slag Fuming Process in Top-Submerged Lance Smelting Furnace

    NASA Astrophysics Data System (ADS)

    Huda, Nazmul; Naser, Jamal; Brooks, Geoffrey; Reuter, Markus A.; Matusewicz, Robert W.

    2012-02-01

    Slag fuming is a reductive treatment process for molten zinciferous slags for extracting zinc in the form of metal vapor by injecting or adding a reductant source such as pulverized coal or lump coal and natural gas. A computational fluid dynamic (CFD) model was developed to study the zinc slag fuming process from imperial smelting furnace (ISF) slag in a top-submerged lance furnace and to investigate the details of fluid flow, reaction kinetics, and heat transfer in the furnace. The model integrates combustion phenomena and chemical reactions with the heat, mass, and momentum interfacial interactions between the phases present in the system. A commercial CFD package, AVL Fire 2009.2 (AVL, Graz, Austria), coupled with a number of user-defined subroutines in the FORTRAN programming language, was used to develop the model. The model is based on a three-dimensional (3-D) Eulerian multiphase flow approach, and it predicts the velocity and temperature fields of the molten slag bath, the generated turbulence, and the vortex and plume shape at the lance tip. The model also predicts the mass fractions of slag and gaseous components inside the furnace. The model predicted that the percentage of ZnO in the slag bath decreases linearly with time, broadly consistent with the experimental data. The zinc fuming rate from the slag bath predicted by the model was validated through a macrostep validation process against the experimental study of Waladan et al. The model results predicted that the rate of ZnO reduction is controlled by the mass transfer of ZnO from the bulk slag to the slag-gas interface and by the rate of the gas-carbon reaction over the simulation time studied. Although the model is based on zinc slag fuming, the basic approach could be extended or applied to the CFD analysis of analogous systems.

  16. P-MartCancer: A New Online Platform to Access CPTAC Datasets and Enable New Analyses | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    The November 1, 2017 issue of Cancer Research is dedicated to a collection of computational resource papers in genomics, proteomics, animal models, imaging, and clinical subjects for non-bioinformaticists looking to incorporate computing tools into their work. Scientists at Pacific Northwest National Laboratory have developed P-MartCancer, an open, web-based interactive software tool that enables statistical analyses of peptide or protein data generated from mass-spectrometry (MS)-based global proteomics experiments.

  17. Spectroscopy of triply charmed baryons from lattice QCD

    DOE PAGES

    Padmanath, M.; Edwards, Robert G.; Mathur, Nilmani; ...

    2014-10-14

    The spectrum of excitations of triply-charmed baryons is computed using lattice QCD including dynamical light quark fields. The spectrum obtained has baryonic states with well-defined total spin up to 7/2, and the low-lying states closely resemble the expectation from models with an SU(6) x O(3) symmetry. Energy splittings between the extracted states, including those due to spin-orbit coupling in the heavy quark limit, are computed and compared against data at other quark masses.

  18. Studying the Chemistry of Cationized Triacylglycerols Using Electrospray Ionization Mass Spectrometry and Density Functional Theory Computations

    NASA Astrophysics Data System (ADS)

    Grossert, J. Stuart; Herrera, Lisandra Cubero; Ramaley, Louis; Melanson, Jeremy E.

    2014-08-01

    Analysis of triacylglycerols (TAGs), found as complex mixtures in living organisms, is typically accomplished using liquid chromatography, often coupled to mass spectrometry. TAGs, weak bases not protonated using electrospray ionization, are usually ionized by adduct formation with a cation, including those present in the solvent (e.g., Na+). There are relatively few reports on the binding of TAGs with cations or on the mechanisms by which cationized TAGs fragment. This work examines binding efficiencies, determined by mass spectrometry and computations, for the complexation of TAGs to a range of cations (Na+, Li+, K+, Ag+, NH4 +). While most cations bind to oxygen, Ag+ binding to unsaturation in the acid side chains is significant. The importance of dimer formation, [2TAG + M]+ was demonstrated using several different types of mass spectrometers. From breakdown curves, it became apparent that two or three acid side chains must be attached to glycerol for strong cationization. Possible mechanisms for fragmentation of lithiated TAGs were modeled by computations on tripropionylglycerol. Viable pathways were found for losses of neutral acids and lithium salts of acids from different positions on the glycerol moiety. Novel lactone structures were proposed for the loss of a neutral acid from one position of the glycerol moiety. These were studied further using triple-stage mass spectrometry (MS3). These lactones can account for all the major product ions in the MS3 spectra in both this work and the literature, which should allow for new insights into the challenging analytical methods needed for naturally occurring TAGs.

  19. Material point method modeling in oil and gas reservoirs

    DOEpatents

    Vanderheyden, William Brian; Zhang, Duan

    2016-06-28

    A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations including state equations such as momentum, and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume, and the velocity at each of a plurality of particles representing the frangible material in the modeled volume. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume scale simulations.

  20. The Standard Model and Higgs physics

    NASA Astrophysics Data System (ADS)

    Torassa, Ezio

    2018-05-01

    The Standard Model is a consistent and computable theory that successfully describes the elementary particle interactions. The strong, electromagnetic and weak interactions have been included in the theory by exploiting the relation between group symmetries and group generators, in order to smartly introduce the force carriers. The group properties lead to constraints between boson masses and couplings. All the measurements performed at the LEP, Tevatron, LHC and other accelerators proved the consistency of the Standard Model. A key element of the theory is the Higgs field, which, together with the spontaneous symmetry breaking, gives mass to the vector bosons and to the fermions. Unlike the case of the vector bosons, the theory does not provide a prediction for the Higgs boson mass. The LEP experiments, while providing very precise measurements of the Standard Model theory, searched for evidence of the Higgs boson until the year 2000. The discovery of the top quark in 1994 by the Tevatron experiments and of the Higgs boson in 2012 by the LHC experiments were considered the completion of the list of fundamental particles of the Standard Model theory. Nevertheless, neutrino oscillations, dark matter and the baryon asymmetry of the Universe are evidence that we need a new, extended model. In the Standard Model there are also some unattractive theoretical aspects, like the divergent loop corrections to the Higgs boson mass and the very small Yukawa couplings needed to describe the neutrino masses. For all these reasons, the hunt for discrepancies between the Standard Model and data is still going on, with the aim of finally describing the new extended theory.

  1. The bulge-halo conspiracy in massive elliptical galaxies: implications for the stellar initial mass function and halo response to baryonic processes

    NASA Astrophysics Data System (ADS)

    Dutton, Aaron A.; Treu, Tommaso

    2014-03-01

    Recent studies have shown that massive elliptical galaxies have total mass density profiles within an effective radius that can be approximated as ρ_tot ∝ r^{-γ'}, with mean slope <γ'> = 2.08 ± 0.03 and scatter σ_{γ'} = 0.16 ± 0.02. The small scatter of the slope (known as the bulge-halo conspiracy) is not generic in Λ cold dark matter (ΛCDM) based models and therefore contains information about the galaxy formation process. We compute the distribution of γ' for ΛCDM-based models that reproduce the observed correlations between stellar mass, velocity dispersion, and effective radius of early-type galaxies in the Sloan Digital Sky Survey. The models have a range of stellar initial mass functions (IMFs) and dark halo responses to galaxy formation. The observed distribution of γ' is well reproduced by a model with cosmologically motivated but uncontracted dark matter haloes, and a Salpeter-type IMF. Other models are on average ruled out by the data, even though they may happen in individual cases. Models with adiabatic halo contraction (and lighter IMFs) predict too small values of γ'. Models with halo expansion, or mass-follows-light, predict too high values of γ'. Our study shows that the non-homologous structure of massive early-type galaxies can be precisely reproduced by ΛCDM models if the IMF is not universal and if mechanisms, such as feedback from active galactic nuclei, or dynamical friction, effectively on average counterbalance the contraction of the halo expected as a result of baryonic cooling.
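
    A sketch of how a total density slope γ' can be extracted from a model galaxy: fit a power law to the total (stellar + dark matter) density over some radial range. The Hernquist-plus-NFW profiles, parameter values, and the 0.1–1 effective-radius fitting range below are illustrative assumptions, not the specific models used in the paper.

```python
import numpy as np

def hernquist_density(r, m_star, a):
    """Hernquist stellar density profile (Msun / kpc^3)."""
    return m_star * a / (2.0 * np.pi * r * (r + a) ** 3)

def nfw_density(r, rho_s, r_s):
    """NFW dark matter density profile (Msun / kpc^3)."""
    return rho_s / ((r / r_s) * (1.0 + r / r_s) ** 2)

def total_slope(r_eff, m_star=1e11, a=5.0, rho_s=1e7, r_s=20.0):
    """gamma' as the mean logarithmic slope of the total density
    profile between 0.1 and 1 effective radii."""
    r = np.logspace(np.log10(0.1 * r_eff), np.log10(r_eff), 50)   # kpc
    rho = hernquist_density(r, m_star, a) + nfw_density(r, rho_s, r_s)
    slope, _ = np.polyfit(np.log10(r), np.log10(rho), 1)
    return -slope     # gamma' defined so that rho ~ r^(-gamma')

print(total_slope(r_eff=7.0))   # solar masses and kpc assumed throughout
```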

  2. Development and validation of a mass casualty conceptual model.

    PubMed

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.

  3. xTract: software for characterizing conformational changes of protein complexes by quantitative cross-linking mass spectrometry.

    PubMed

    Walzthoeni, Thomas; Joachimiak, Lukasz A; Rosenberger, George; Röst, Hannes L; Malmström, Lars; Leitner, Alexander; Frydman, Judith; Aebersold, Ruedi

    2015-12-01

    Chemical cross-linking in combination with mass spectrometry generates distance restraints of amino acid pairs in close proximity on the surface of native proteins and protein complexes. In this study we used quantitative mass spectrometry and chemical cross-linking to quantify differences in cross-linked peptides obtained from complexes in spatially discrete states. We describe a generic computational pipeline for quantitative cross-linking mass spectrometry consisting of modules for quantitative data extraction and statistical assessment of the obtained results. We used the method to detect conformational changes in two model systems: firefly luciferase and the bovine TRiC complex. Our method discovers and explains the structural heterogeneity of protein complexes using only sparse structural information.

  4. Temporal Subtraction of Digital Breast Tomosynthesis Images for Improved Mass Detection

    DTIC Science & Technology

    2008-10-01

    Fragmentary report excerpts reference: a computer-generated model for the coronary arterial tree based on multislice CT and morphometric data (Fishman and Tsui); mathematical breast models based on geometric primitives; synthetic x-ray mammograms created by Bakic et al. from a 3D simulated breast tissue model; and a breast phantom built from a combination of voxel matrices and geometric primitives that includes the breast surface and the duct system.

  5. The Application of Systems Analysis and Mathematical Models to the Study of Erythropoiesis During Space Flight

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    Included in the report are: (1) review of the erythropoietic mechanisms; (2) an evaluation of existing models for the control of erythropoiesis; (3) a computer simulation of the model's response to hypoxia; (4) an hypothesis to explain observed decreases in red blood cell mass during weightlessness; (5) suggestions for further research; and (6) an assessment of the role that systems analysis can play in the Skylab hematological program.

  6. Evaluation of the Community Multiscale Air Quality (CMAQ) ...

    EPA Pesticide Factsheets

    This work evaluates particle size-composition distributions simulated by the Community Multiscale Air Quality (CMAQ) model using Micro-Orifice Uniform Deposit Impactor (MOUDI) measurements at 18 sites across North America. Size-resolved measurements of particulate SO4 were compared with the model, which ranged from underestimating to overestimating both the peak diameter and the peak particle concentration across the sites. Computing PM2.5 from the modeled size distribution parameters rather than by summing the masses in the Aitken and accumulation modes ...
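
    The PM2.5 computation mentioned above can be illustrated by integrating each lognormal mode's mass distribution up to the 2.5 μm cut rather than summing whole-mode masses. The modal parameters below are illustrative, and the simple error-function cut is only a generic sketch, not CMAQ's exact sharp-cut formulation.

```python
from math import erf, log, sqrt

def mass_fraction_below(d_cut, dg_mass, sigma_g):
    """Fraction of a lognormal mode's mass below diameter d_cut
    (same units as the mass geometric mean diameter dg_mass)."""
    z = log(d_cut / dg_mass) / (sqrt(2.0) * log(sigma_g))
    return 0.5 * (1.0 + erf(z))

# Illustrative modal parameters: (mode mass ug/m^3, mass GMD um, sigma_g).
modes = {"aitken": (1.2, 0.06, 1.7),
         "accumulation": (9.5, 0.5, 2.0),
         "coarse": (6.0, 6.0, 2.2)}

pm25 = sum(m * mass_fraction_below(2.5, dg, sg) for m, dg, sg in modes.values())
pm25_naive = modes["aitken"][0] + modes["accumulation"][0]   # summing whole modes
print(round(pm25, 2), round(pm25_naive, 2))
```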

  7. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ˜50 ms . We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  8. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    An operational algorithm has been developed for computing the ellipsoidal terrain correction based on a closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection surface of a reference ellipsoid. As a first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area projection of a cylinder tangent to a point on the ellipsoid surface is studied closely and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and the gravitational intensity vector of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area map projection and, based on the transformed area elements, Cartesian mass elements with the same height as the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the corresponding results based on the application of the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potential of the ellipsoidal mass elements and that of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10^-8 m^2/s^2 for a mass element with a cross-section size of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element with the same height, this difference is less than 1.5 × 10^-4 m^2/s^2. These results indicate that a new method for computing the terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates and with the accuracy of the ellipsoidal terrain correction, has been achieved. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and at the same time the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computations.
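
    As a point of reference for the comparison described above, the Newton potential of a Cartesian (rectangular) mass element can also be evaluated by straightforward quadrature, which one could check against the closed-form solution (not reproduced here). The element size, density, and observation point below are illustrative assumptions.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def prism_potential_quadrature(p, x_lim, y_lim, z_lim, density, n=40):
    """Newtonian potential V = G * rho * int(dV / r) of a rectangular mass
    element, evaluated at point p by midpoint quadrature."""
    xs = np.linspace(*x_lim, n + 1); xs = 0.5 * (xs[:-1] + xs[1:])
    ys = np.linspace(*y_lim, n + 1); ys = 0.5 * (ys[:-1] + ys[1:])
    zs = np.linspace(*z_lim, n + 1); zs = 0.5 * (zs[:-1] + zs[1:])
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    r = np.sqrt((X - p[0]) ** 2 + (Y - p[1]) ** 2 + (Z - p[2]) ** 2)
    dv = ((x_lim[1] - x_lim[0]) * (y_lim[1] - y_lim[0])
          * (z_lim[1] - z_lim[0])) / n ** 3
    return G * density * np.sum(dv / r)

# A 1 km x 1 km x 1000 m element of density 2670 kg/m^3, observed ~5 km away.
V = prism_potential_quadrature((5000.0, 500.0, 2000.0),
                               (0.0, 1000.0), (0.0, 1000.0), (0.0, 1000.0), 2670.0)
print(V)   # potential in m^2/s^2
```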

  9. ESTIMATION OF THE RATE OF VOC EMISSIONS FROM SOLVENT-BASED INDOOR COATING MATERIALS BASED ON PRODUCT FORMULATION

    EPA Science Inventory

    Two computational methods are proposed for estimation of the emission rate of volatile organic compounds (VOCs) from solvent-based indoor coating materials based on the knowledge of product formulation. The first method utilizes two previously developed mass transfer models with ...

  10. Computational Study of Droplet Trains Impacting a Smooth Solid Surface

    NASA Astrophysics Data System (ADS)

    Markt, David, Jr.; Pathak, Ashish; Raessi, Mehdi; Lee, Seong-Young; Zhao, Emma

    2017-11-01

    The study of droplet impingement is vital to understanding the fluid dynamics of fuel injection in modern internal combustion engines. One widely accepted model was proposed by Yarin and Weiss (JFM, 1995), developed from experiments of single trains of ethanol droplets impacting a substrate. The model predicts the onset of splashing and the mass ejected upon splashing. In this study, using an in-house 3D multiphase flow solver, the experiments of Yarin and Weiss were computationally simulated. The experimentally observed splashing threshold was captured by the simulations, thus validating the solver's ability to accurately simulate the splashing dynamics. Then, we performed simulations of cases with multiple droplet trains, which have high relevance to dense fuel sprays, where droplets impact within the spreading diameters of their neighboring droplets, leading to changes in splashing dynamics due to interactions of spreading films. For both single and multi-train simulations the amount of splashed mass was calculated as a function of time, allowing a quantitative comparison between the two cases. Furthermore, using a passive scalar the amount of splashed mass per impinging droplet was also calculated. This work is supported by the Department of Energy, Office of Energy Efficiency and Renewable Energy (EERE) and the Department of Defense, Tank and Automotive Research, Development, and Engineering Center (TARDEC), under Award Number DE-EE0007292.

  11. Heat and mass transfer during the cryopreservation of a bioartificial liver device: a computational model.

    PubMed

    Balasubramanian, Saravana K; Coger, Robin N

    2005-01-01

    Bioartificial liver devices (BALs) have proven to be an effective bridge to transplantation for cases of acute liver failure. Enabling the long-term storage of these devices using a method such as cryopreservation will ensure their easy off-the-shelf availability. To date, cryopreservation of liver cells has been attempted for both single cells and sandwich cultures. This study presents the potential of using computational modeling to help develop a cryopreservation protocol for storing the three-dimensional BAL HepatAssist. The focus is upon determining the thermal and concentration profiles as the BAL is cooled from 37 °C to -100 °C, which is completed in two steps: a cryoprotectant loading step and a phase change step. The results indicate that, for the loading step, mass transfer controls the duration of the protocol, whereas for the phase change step, when mass transfer is assumed negligible, the latent heat released during freezing is the controlling factor. The cryoprotocol that is ultimately proposed considers time, cooling rate, and the temperature gradients that the cellular space is exposed to during cooling. To our knowledge, this study is the first reported effort toward designing an effective protocol for the cryopreservation of a three-dimensional BAL device.

  12. Current Status on the use of Parallel Computing in Turbulent Reacting Flow Computations Involving Sprays, Monte Carlo PDF and Unstructured Grids. Chapter 4

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The state of the art in multidimensional combustor modeling, as evidenced by the level of sophistication employed in terms of modeling and numerical accuracy considerations, is also dictated by the available computer memory and turnaround times afforded by present-day computers. With the aim of advancing the current multi-dimensional computational tools used in the design of advanced technology combustors, a solution procedure is developed that combines the novelty of coupled CFD/spray/scalar Monte Carlo PDF (Probability Density Function) computations on unstructured grids with the ability to run on parallel architectures. In this approach, the mean gas-phase velocity and turbulence fields are determined from a standard turbulence model, the joint composition of species and enthalpy from the solution of a modeled PDF transport equation, and a Lagrangian-based dilute spray model is used for the liquid-phase representation. Gas-turbine combustor flows are often characterized by a complex interaction between various physical processes associated with the interaction between the liquid and gas phases, droplet vaporization, turbulent mixing, heat release associated with chemical kinetics, and radiative heat transfer associated with highly absorbing and radiating species, among others. The rate-controlling processes often interact with each other at various disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and liquid-phase evaporation in many practical combustion devices.

  13. Modeling of Multiphase Flow through Thin Porous Layers: Application to a Polymer Electrolyte Fuel Cell (PEFC)

    NASA Astrophysics Data System (ADS)

    Qin, C.; Hassanizadeh, S.

    2013-12-01

    Multiphase flow and species transport through thin porous layers are encountered in a number of industrial applications, such as fuel cells, filters, and hygiene products. Based on macroscale models like Darcy's law, the modeling of flow and transport through such thin layers has to date mostly been performed in 3D discretized domains with many computational cells. But there are a number of problems with this approach. First, a proper representative elementary volume (REV) is not defined. Second, one needs to discretize a thin porous medium into computational cells whose size may be comparable to the pore sizes, which suggests that the traditional models are not applicable to such thin domains. Third, the interfacial conditions between neighboring layers are usually not well defined. Last, 3D modeling of a number of interacting thin porous layers often requires heavy computational effort. So, to eliminate the drawbacks mentioned above, we propose a new approach that models multilayers of thin porous media as 2D interacting continua (see Fig. 1). Macroscale 2D governing equations are formulated in terms of thickness-averaged material properties. Also, the exchange of thermodynamic properties between neighboring layers is described by thickness-averaged quantities. In comparison to previous macroscale models, our model has the distinctive advantages that: (1) it is a rigorous, thermodynamics-based model; (2) it is formulated in terms of thickness-averaged material properties which are easily measurable; and (3) it reduces 3D modeling to 2D, leading to a very significant reduction of computational effort. As an application, we employ the new approach in the study of liquid water flooding in the cathode of a polymer electrolyte fuel cell (PEFC). To highlight the advantages of the present model, we compare the results of water distribution with those obtained from traditional 3D Darcy-based modeling. Finally, it is worth noting that, for specific case studies, a number of material properties in the model need to be determined experimentally, such as mass and heat exchange coefficients between neighboring layers. Fig. 1: Schematic representation of three thin porous layers, which may exchange mass, momentum, and energy. Also, a typical averaging domain (REV) is shown. Note that the layer thickness, and thus the REV height, can be spatially variable. Also, in reality, the layers are tightly stacked and there is no gap between them.

  14. Trajectory optimization for an asymmetric launch vehicle. M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Sullivan, Jeanne Marie

    1990-01-01

    A numerical optimization technique is used to fully automate the trajectory design process for an asymmetric configuration of the proposed Advanced Launch System (ALS). The objective of the ALS trajectory design process is the maximization of the vehicle mass when it reaches the desired orbit. The trajectories used were based on a simple shape that could be described by a small set of parameters. The use of a simple trajectory model can significantly reduce the computation time required for trajectory optimization. A predictive simulation was developed to determine the on-orbit mass given an initial vehicle state, wind information, and a set of trajectory parameters. This simulation utilizes an idealized control system to speed computation by increasing the integration time step. The conjugate gradient method is used for the numerical optimization of on-orbit mass. The method requires only the evaluation of the on-orbit mass function using the predictive simulation, and the gradient of the on-orbit mass function with respect to the trajectory parameters. The gradient is approximated with finite differencing. Prelaunch trajectory designs were carried out using the optimization procedure. The predictive simulation is used in flight to redesign the trajectory to account for trajectory deviations produced by off-nominal conditions, e.g., stronger than expected head winds.
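
    The optimization loop described above can be sketched as follows: maximize an on-orbit mass function over a few trajectory parameters with a finite-difference gradient and a conjugate gradient optimizer. The objective function here is a smooth placeholder assumption standing in for the predictive simulation, not the ALS model itself.

```python
import numpy as np
from scipy.optimize import minimize

def on_orbit_mass(params):
    """Placeholder for the predictive simulation: returns on-orbit mass (kg)
    for a set of trajectory shape parameters. A smooth toy function is used
    here purely to exercise the optimizer."""
    target = np.array([0.3, 1.2, -0.4])
    return 20000.0 - 5000.0 * np.sum((params - target) ** 2)

def fd_gradient(f, x, h=1e-4):
    """Forward-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    f0 = f(x)
    for i in range(len(x)):
        xp = x.copy(); xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

# Maximize on-orbit mass by minimizing its negative with conjugate gradients.
objective = lambda p: -on_orbit_mass(p)
result = minimize(objective, x0=np.zeros(3), method="CG",
                  jac=lambda p: -fd_gradient(on_orbit_mass, p))
print(result.x, -result.fun)
```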

  15. Optimizing tuning masses for helicopter rotor blade vibration reduction including computed airloads and comparison with test data

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Walsh, Joanne L.; Wilbur, Matthew L.

    1992-01-01

    The development and validation of an optimization procedure to systematically place tuning masses along a rotor blade span to minimize vibratory loads are described. The masses and their corresponding locations are the design variables that are manipulated to reduce the harmonics of hub shear for a four-bladed rotor system without adding a large mass penalty. The procedure incorporates a comprehensive helicopter analysis to calculate the airloads. Predicting changes in airloads due to changes in design variables is an important feature of this research. The procedure was applied to a one-sixth, Mach-scaled rotor blade model to place three masses and then again to place six masses. In both cases the added mass was able to achieve significant reductions in the hub shear. In addition, the procedure was applied to place a single mass of fixed value on a blade model to reduce the hub shear for three flight conditions. The analytical results were compared to experimental data from a wind tunnel test performed in the Langley Transonic Dynamics Tunnel. The correlation of the mass location was good and the trend of the mass location with respect to flight speed was predicted fairly well. However, it was noted that the analysis was not entirely successful at predicting the absolute magnitudes of the fixed system loads.

  16. User's guide to the Residual Gas Analyzer (RGA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Artman, S.A.

    1988-08-04

    The Residual Gas Analyzer (RGA), a Model 100C UTI quadrupole mass spectrometer, measures the concentrations of selected masses in the Fusion Energy Division's (FED) Advanced Toroidal Facility (ATF). The RGA software is a VAX FORTRAN computer program which controls the experimental apparatus, records the raw data, performs data reduction, and plots the data. The RGA program allows data to be collected from an RGA on ATF or from either of two RGAs in the laboratory. In the laboratory, the RGA diagnostic plays an important role in outgassing studies on various candidate materials for fusion experiments. One such material, graphite, is being used more often in fusion experiments due to its ability to withstand high power loads. One of the functions of the RGA diagnostic is to aid in the determination of the best grade of graphite to be used in these experiments and to study the procedures used to condition it. A procedure of particular interest involves baking the graphite sample in order to remove impurities that may be present in it. These impurities can be studied while in the ATF plasma or while being baked and outgassed in the laboratory. The Residual Gas Analyzer is a quadrupole mass spectrometer capable of scanning masses ranging in size from 1 atomic mass unit (amu) to 300 amu while under computer control. The procedure for collecting data for a particular mass is outlined.

  17. Evaluation of a New Ensemble Learning Framework for Mass Classification in Mammograms.

    PubMed

    Rahmani Seryasat, Omid; Haddadnia, Javad

    2018-06-01

    Mammography is the most common screening method for the diagnosis of breast cancer. In this study, a computer-aided system for diagnosing benign and malignant masses in mammogram images was implemented. In the computer-aided diagnosis system, we first reduce the noise in the mammograms using an effective noise removal technique. After the noise removal, the mass in the region of interest must be segmented, and this segmentation is done using a deformable model. After the mass segmentation, a number of features are extracted from it. These features include features of the mass shape and border, tissue properties, and the fractal dimension. After extracting a large number of features, a proper subset must be chosen from among them. In this study, we make use of a new method based on a genetic algorithm for selecting a proper set of features. After determining the proper features, a classifier is trained. To classify the samples, a new architecture for combining the classifiers is proposed. In this architecture, easy and difficult samples are identified and trained using different classifiers. Finally, the proposed mass diagnosis system was also tested on the mini-Mammographic Image Analysis Society (mini-MIAS) and Digital Database for Screening Mammography (DDSM) databases. The obtained results indicate that the proposed system can compete with state-of-the-art methods in terms of accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
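    A minimal sketch of the final stages of such a pipeline (feature selection followed by classification, evaluated by cross-validation) is given below. The paper uses a genetic algorithm for feature selection and a custom easy/difficult-sample ensemble; here SelectKBest and a random forest are used purely as simple placeholders, and synthetic data stands in for the extracted mass features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for features extracted from segmented masses
# (shape, border, texture, and fractal-dimension features would go here).
X, y = make_classification(n_samples=300, n_features=60, n_informative=12,
                           random_state=0)

# Placeholder pipeline: univariate feature selection + random forest.
# (The paper instead uses a genetic algorithm and an easy/difficult-sample
# ensemble; those are not reproduced here.)
clf = make_pipeline(SelectKBest(f_classif, k=15),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```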

  18. Rapid prototyping and stereolithography in dentistry

    PubMed Central

    Nayar, Sanjna; Bhuminathan, S.; Bhat, Wasim Manzoor

    2015-01-01

    The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large-scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called preseries models. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna. PMID:26015715

  19. Rapid prototyping and stereolithography in dentistry.

    PubMed

    Nayar, Sanjna; Bhuminathan, S; Bhat, Wasim Manzoor

    2015-04-01

    The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large-scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called preseries models. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna.

  20. Simple models for the simulation of submarine melt for a Greenland glacial system model

    NASA Astrophysics Data System (ADS)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

    Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and position of the grounding line of these outlet glaciers. As the ocean warms, it is expected that submarine melt will increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity to use models with extremely high resolution, of the order of a few hundred meters. This requirement applies not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundred meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on the use of a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to that of 3-D general circulation models. To match the results of the 3-D models in a quantitative manner, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data. Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.
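    A commonly used simplification in this family of plume models scales the submarine melt rate with the cube root of subglacial discharge and roughly linearly with ocean thermal forcing. The sketch below uses that generic line-plume scaling with made-up coefficients; it is not the calibrated model of the paper.

```python
import numpy as np

def line_plume_melt(q_sg, thermal_forcing, kappa=1.0):
    """Generic line-plume scaling: melt rate grows with the cube root of
    subglacial discharge per unit grounding-line width (q_sg) and roughly
    linearly with thermal forcing. kappa is an order-one tuning factor of
    the kind the study calibrates; the value here is illustrative."""
    return kappa * q_sg ** (1.0 / 3.0) * thermal_forcing

q_sg = np.array([1e-3, 1e-2, 1e-1])   # subglacial discharge per unit width (m^2/s)
tf = 4.0                              # ocean thermal forcing (deg C)
print("relative melt rates:", line_plume_melt(q_sg, tf))
```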

  1. A hydrodynamic treatment of the tilted cold dark matter cosmological scenario

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Ostriker, Jeremiah P.

    1993-01-01

    A standard hydrodynamic code coupled with a particle-mesh code is used to compute the evolution of a tilted cold dark matter (TCDM) model containing both baryonic matter and dark matter. Six baryonic species are followed, with allowance for both collisional and radiative ionization in every cell. The mean final Zel'dovich-Sunyaev y parameter is estimated to be (5.4 +/- 2.7) x 10 exp -7, below currently attainable observations, with an rms fluctuation of about (6.0 +/- 3.0) x 10 exp -7 on arcmin scales. The rate of galaxy formation peaks at a relatively late epoch (z is about 0.5). In the case of the mass function, the smallest objects are stabilized against collapse by thermal energy: the mass-weighted mass spectrum peaks in the vicinity of 10 exp 9.1 solar masses, with a reasonable fit to the Schechter luminosity function if the baryon mass to blue light ratio is about 4. It is shown that a bias factor of 2 required for the model to be consistent with COBE DMR signals is probably a natural outcome in the present multiple component simulations.

  2. Mathematical modeling of spinning elastic bodies for modal analysis.

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Barbera, F. J.; Baddeley, V.

    1973-01-01

    The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.

  3. Winds from T Tauri stars. I - Spherically symmetric models

    NASA Technical Reports Server (NTRS)

    Hartmann, Lee; Avrett, Eugene H.; Loeser, Rudolf; Calvet, Nuria

    1990-01-01

    Line fluxes and profiles are computed for a sequence of spherically symmetric T Tauri wind models. The calculations indicate that the H-alpha emission of T Tauri stars arises in an extended and probably turbulent circumstellar envelope at temperatures above about 8000 K. The models predict that Mg II resonance line emission should be strongly correlated with H-alpha fluxes; observed Mg II/H-alpha ratios are inconsistent with the models unless extinction corrections have been underestimated. The models predict that most of the Ca II resonance line and IR triplet emission arises in dense layers close to the star rather than in the wind. H-alpha emission levels suggest mass loss rates of about 10 to the -8th solar mass/yr for most T Tauri stars, in reasonable agreement with independent analysis of forbidden emission lines. These results should be useful for interpreting observed line profiles in terms of wind densities, temperatures, and velocity fields.

  4. Lunar Pole Illumination and Communications Maps Computed from GSSR Elevation Data

    NASA Technical Reports Server (NTRS)

    Bryant, Scott

    2009-01-01

    A Digital Elevation Model of the lunar south pole was produced using Goldstone Solar System RADAR (GSSR) data obtained in 2006. This model has 40-meter horizontal resolution and about 5-meter relative vertical accuracy. This Digital Elevation Model was used to compute average solar illumination and Earth visibility within 100 kilometers of the lunar south pole. The elevation data were converted into local terrain horizon masks, then converted into lunar-centric latitude and longitude coordinates. The horizon masks were compared to latitude-longitude regions bounding the maximum Sun and Earth motions relative to the moon. Estimates of Earth visibility were computed by integrating the area of the region bounding the Earth's motion that was below the horizon mask. Solar illumination and other metrics were computed similarly. Proposed lunar south pole base sites were examined in detail, with the best site showing yearly solar power availability of 92 percent and Direct-To-Earth (DTE) communication availability of about 50 percent. A similar analysis of the lunar south pole used an older GSSR Digital Elevation Model with 600-meter horizontal resolution. The paper also explores using a heliostat to reduce the photovoltaic power system mass and complexity.
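    The availability figures quoted above come from comparing a terrain-derived horizon mask against the sky track of the Sun or Earth. The sketch below shows the basic bookkeeping for such a comparison with synthetic inputs; the horizon profile and target track are made up.

```python
import numpy as np

# Terrain-derived horizon mask: horizon elevation (deg) per azimuth bin.
# Synthetic here; in practice it is extracted from the DEM at a site.
az_bins = np.linspace(0.0, 360.0, 361)
horizon_elev = 2.0 + 1.5 * np.sin(np.radians(3.0 * az_bins))

# Synthetic sky track of the target (Sun or Earth): azimuth/elevation samples.
t = np.linspace(0.0, 1.0, 10000)
target_az = (360.0 * 13.4 * t) % 360.0
target_el = 1.5 * np.sin(2.0 * np.pi * t)        # degrees above horizontal

# The target is visible whenever its elevation exceeds the horizon elevation
# interpolated at its azimuth; availability is the visible fraction of time.
mask_at_target = np.interp(target_az, az_bins, horizon_elev)
visible = target_el > mask_at_target
print("visibility fraction: %.1f%%" % (100.0 * visible.mean()))
```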

  5. Decay of charmonium states into a scalar and a pseudoscalar glueball

    NASA Astrophysics Data System (ADS)

    Eshraim, Walaa I.

    2016-11-01

    In the framework of a chiral symmetric model, we expand a U(4)R × U(4)L symmetric linear sigma model with (axial-)vector mesons by including a dilaton field, a scalar glueball, and the pseudoscalar glueball. We compute the decay width of the scalar charmonium state χc0(1P) into a predominantly scalar glueball f0(1710). We calculate the decay width of the pseudoscalar charmonium state ηc(1S) into a predominantly scalar glueball f0(1710) as well as into a pseudoscalar glueball with a mass of 2.6 GeV (as predicted by Lattice-QCD simulations) and with a mass of 2.37 GeV (corresponding to the mass of the resonance X(2370)). This study is interesting for the upcoming PANDA experiment at the FAIR facility and the BESIII experiment. Moreover, we obtain the mixing angle between a pseudoscalar glueball, with a mass of 2.6 GeV, and the charmonium state ηc.

  6. The topological susceptibility from grand canonical simulations in the interacting instanton liquid model: Chiral phase transition and axion mass

    NASA Astrophysics Data System (ADS)

    Wantz, Olivier; Shellard, E. P. S.

    2010-04-01

    This is the last in a series of papers on the topological susceptibility in the interacting instanton liquid model (IILM). We will derive improved finite temperature interactions to study the thermodynamic limit of grand canonical Monte Carlo simulations in the quenched and unquenched case with light, physical quark masses. In particular, we will be interested in chiral symmetry breaking. The paper culminates by giving, for the first time, a well-motivated temperature-dependent axion mass. Notably, this work finally provides a computation of the axion mass in the low temperature regime, m_a^2 f_a^2 = 1.46 × 10^-3 Λ^4 (1 + 0.50 T/Λ) / (1 + (3.53 T/Λ)^…). It connects smoothly to the high temperature dilute gas approximation; the latter is improved by including quark threshold effects. To compare with earlier studies, we also provide the usual power law m_a^2 = α Λ^4 / (f_a^2 (T/Λ)^n), where Λ = 400 MeV, n = 6.68 and α = 1.68 × 10^-7.

  7. Computational modelling of the mechanics of trabecular bone and marrow using fluid structure interaction techniques.

    PubMed

    Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E

    2013-04-01

    Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs; however, a particular challenge is that marrow displays the characteristics of a fluid while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational fluid dynamics only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. The results show that the shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.

  8. Crosslinking Constraints and Computational Models as Complementary Tools in Modeling the Extracellular Domain of the Glycine Receptor

    PubMed Central

    Liu, Zhenyu; Szarecka, Agnieszka; Yonkunas, Michael; Speranskiy, Kirill; Kurnikova, Maria; Cascio, Michael

    2014-01-01

    The glycine receptor (GlyR), a member of the pentameric ligand-gated ion channel superfamily, is the major inhibitory neurotransmitter-gated receptor in the spinal cord and brainstem. In these receptors, the extracellular domain binds agonists, antagonists and various other modulatory ligands that act allosterically to modulate receptor function. The structures of homologous receptors and binding proteins provide templates for modeling of the ligand-binding domain of GlyR, but limitations in sequence homology and structure resolution impact on modeling studies. The determination of distance constraints via chemical crosslinking studies coupled with mass spectrometry can provide additional structural information to aid in model refinement, however it is critical to be able to distinguish between intra- and inter-subunit constraints. In this report we model the structure of GlyBP, a structural and functional homolog of the extracellular domain of human homomeric α1 GlyR. We then show that intra- and intersubunit Lys-Lys crosslinks in trypsinized samples of purified monomeric and oligomeric protein bands from SDS-polyacrylamide gels may be identified and differentiated by MALDI-TOF MS studies of limited resolution. Thus, broadly available MS platforms are capable of providing distance constraints that may be utilized in characterizing large complexes that may be less amenable to NMR and crystallographic studies. Systematic studies of state-dependent chemical crosslinking and mass spectrometric identification of crosslinked sites has the potential to complement computational modeling efforts by providing constraints that can validate and refine allosteric models. PMID:25025226

  9. Pulsating low-mass white dwarfs in the frame of new evolutionary sequences. V. Asteroseismology of ELMV white dwarf stars

    NASA Astrophysics Data System (ADS)

    Calcaferro, Leila M.; Córsico, Alejandro H.; Althaus, Leandro G.

    2017-11-01

    Context. Many pulsating low-mass white dwarf stars have been detected in the past years in the field of our Galaxy. Some of them exhibit multiperiodic brightness variations, so it is possible to probe their interiors through asteroseismology. Aims: We present a detailed asteroseismological study of all the known low-mass variable white dwarf stars based on a complete set of fully evolutionary models that are representative of low-mass He-core white dwarf stars. Methods: We employed adiabatic radial and nonradial pulsation periods for low-mass white dwarf models with stellar masses ranging from 0.1554 to 0.4352 M⊙ that were derived by simulating the nonconservative evolution of a binary system consisting of an initially 1 M⊙ zero-age main-sequence (ZAMS) star and a 1.4 M⊙ neutron star companion. We estimated the mean period spacing for the stars under study (where this was possible), and then we constrained the stellar mass by comparing the observed period spacing with the average of the computed period spacings for our grid of models. We also employed the individual observed periods of every known pulsating low-mass white dwarf star to search for a representative seismological model. Results: We found that even though the stars under analysis exhibit few periods and the period fits show a multiplicity of solutions, it is possible to find seismological models whose mass and effective temperature are in agreement with the values given by spectroscopy in most cases. Unfortunately, we were not able to constrain the stellar masses by employing the observed period spacing because, in general, only a few periods are exhibited by these stars. In the two cases where we were able to extract the period spacing from the set of observed periods, this method led to stellar mass values that were substantially higher than expected for this type of star. Conclusions: The results presented in this work show the need for further photometric searches on the one hand, and that some improvements of the theoretical models are required on the other hand, in order to place the asteroseismological results on firmer ground.
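    Where enough consecutive periods are observed, the mean period spacing used to constrain the stellar mass is simply the average difference between consecutive periods, which is then compared against the model grid. A minimal sketch with made-up periods and a made-up grid:

```python
import numpy as np

# Hypothetical observed g-mode periods (seconds), sorted.
periods = np.array([1345.0, 1437.0, 1524.0, 1618.0, 1712.0])
mean_spacing_obs = np.mean(np.diff(periods))

# Hypothetical grid: average computed period spacing per model stellar mass.
model_mass = np.array([0.16, 0.20, 0.25, 0.32, 0.43])        # M_sun
model_spacing = np.array([103.0, 97.0, 92.0, 88.0, 84.0])     # seconds

# Pick the model whose mean spacing is closest to the observed one.
best = np.argmin(np.abs(model_spacing - mean_spacing_obs))
print("observed mean spacing: %.1f s" % mean_spacing_obs)
print("closest model mass: %.2f M_sun" % model_mass[best])
```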

  10. The effect of model fidelity on prediction of char burnout for single-particle coal combustion

    DOE PAGES

    McConnell, Josh; Sutherland, James C.

    2016-07-09

    Practical simulation of industrial-scale coal combustion relies on the ability to accurately capture the dynamics of coal subprocesses while also ensuring the computational cost remains reasonable. The majority of the residence time occurs post-devolatilization, so it is of great importance that a balance between the computational efficiency and accuracy of char combustion models is carefully considered. In this work, we consider the importance of model fidelity during char combustion by comparing combinations of simple and complex gas and particle-phase chemistry models. Detailed kinetics based on the GRI 3.0 mechanism and infinitely-fast chemistry are considered in the gas phase. The Char Conversion Kinetics model and nth-Order Langmuir–Hinshelwood model are considered for char consumption. For devolatilization, the Chemical Percolation and Devolatilization and Kobayashi-Sarofim models are employed. The relative importance of gasification versus oxidation reactions in air and oxyfuel environments is also examined for various coal types. Results are compared to previously published experimental data collected under laminar, single-particle conditions. Calculated particle temperature histories are strongly dependent on the choice of gas phase and char chemistry models, but only weakly dependent on the chosen devolatilization model. Particle mass calculations were found to be very sensitive to the choice of devolatilization model, but only somewhat sensitive to the choice of gas chemistry and char chemistry models. High-fidelity models for devolatilization generally resulted in particle temperature and mass calculations that were closer to experimentally observed values.
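    For orientation only, the sketch below integrates a generic global nth-order Arrhenius char oxidation rate; it is not the Char Conversion Kinetics or Langmuir-Hinshelwood formulation used in the paper, and every parameter value is made up.

```python
import numpy as np

# Generic global nth-order char oxidation: dm_c/dt = -A_s * k(T_p) * p_O2**n,
# with an Arrhenius rate constant. All values below are illustrative only.
A0, E_a, n = 5.0e-3, 7.0e4, 0.5      # pre-exponential, activation energy (J/mol), order
R = 8.314                            # J/(mol K)
p_O2 = 0.21e5                        # O2 partial pressure (Pa)
d_p = 1.0e-4                         # particle diameter (m)

def char_rate(T_p):
    """Char mass consumption rate (kg/s, illustrative units)."""
    surf = np.pi * d_p ** 2                      # external particle surface
    k = A0 * np.exp(-E_a / (R * T_p))
    return -surf * k * p_O2 ** n

# Explicit Euler burnout history at a fixed particle temperature.
m_c, T_p, dt = 1.0e-9, 1600.0, 1.0e-3
for step in range(5000):
    m_c = max(m_c + dt * char_rate(T_p), 0.0)
print("remaining char mass after %.1f s: %.3e kg" % (5000 * dt, m_c))
```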

  11. The effect of model fidelity on prediction of char burnout for single-particle coal combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnell, Josh; Sutherland, James C.

    Practical simulation of industrial-scale coal combustion relies on the ability to accurately capture the dynamics of coal subprocesses while also ensuring the computational cost remains reasonable. The majority of the residence time occurs post-devolatilization, so it is of great importance that a balance between the computational efficiency and accuracy of char combustion models is carefully considered. In this work, we consider the importance of model fidelity during char combustion by comparing combinations of simple and complex gas and particle-phase chemistry models. Detailed kinetics based on the GRI 3.0 mechanism and infinitely-fast chemistry are considered in the gas phase. The Char Conversion Kinetics model and nth-Order Langmuir–Hinshelwood model are considered for char consumption. For devolatilization, the Chemical Percolation and Devolatilization and Kobayashi-Sarofim models are employed. The relative importance of gasification versus oxidation reactions in air and oxyfuel environments is also examined for various coal types. Results are compared to previously published experimental data collected under laminar, single-particle conditions. Calculated particle temperature histories are strongly dependent on the choice of gas phase and char chemistry models, but only weakly dependent on the chosen devolatilization model. Particle mass calculations were found to be very sensitive to the choice of devolatilization model, but only somewhat sensitive to the choice of gas chemistry and char chemistry models. High-fidelity models for devolatilization generally resulted in particle temperature and mass calculations that were closer to experimentally observed values.

  12. Comparison between two scalar field models using rotation curves of spiral galaxies

    NASA Astrophysics Data System (ADS)

    Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh

    2018-04-01

    Scalar fields have been used as candidates for dark matter in the universe, ranging from axions with masses ∼ 10^-5 eV to ultra-light scalar fields with masses ∼ 10^-22 eV. Axions behave as cold dark matter, whereas for ultra-light scalar fields the galaxies are Bose-Einstein condensate drops; the ultra-light case is also called the scalar field dark matter model. In this work we study rotation curves for low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation and a scalar field solution of the Klein-Gordon equation. We also used the zero disk approximation galaxy model, where photometric data are not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (values of χ²_red much greater than 1, on average) were for the Thomas-Fermi models, i.e., the scalar field dark matter model fits the rotation curves of the analysed galaxies better than the Thomas-Fermi approximation model. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest, the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We found that the value of the central mass within 300 pc is in agreement with previously reported results, namely that this mass is ≈ 10^7 M⊙, independent of the dark matter model. On the contrary, the value of the characteristic central surface density does depend on the dark matter model.
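    The fitting procedure described above (two free parameters: a length scale and a central density) can be sketched with a generic cored halo profile. The pseudo-isothermal circular-velocity law below is only a placeholder; the paper fits scalar-field and Thomas-Fermi profiles instead, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6   # kpc (km/s)^2 / M_sun

def v_circ(r_kpc, rho0, r_c):
    """Circular velocity of a generic cored two-parameter halo
    (pseudo-isothermal profile, used here purely as a placeholder)."""
    return np.sqrt(4.0 * np.pi * G * rho0 * r_c**2 *
                   (1.0 - (r_c / r_kpc) * np.arctan(r_kpc / r_c)))

# Synthetic "observed" rotation curve standing in for LSB galaxy data.
r_obs = np.linspace(0.5, 8.0, 16)
v_obs = v_circ(r_obs, 5.0e7, 1.2) + np.random.default_rng(1).normal(0, 2, r_obs.size)

popt, pcov = curve_fit(v_circ, r_obs, v_obs, p0=[1.0e7, 1.0])
rho0_fit, rc_fit = popt
chi2_red = np.sum((v_obs - v_circ(r_obs, *popt))**2 / 2.0**2) / (r_obs.size - 2)
print("central density: %.2e M_sun/kpc^3, core radius: %.2f kpc, chi2_red: %.2f"
      % (rho0_fit, rc_fit, chi2_red))
```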

  13. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models; the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. Additionally, the craniad position of the CM in all of our models reinforces the notion that T. rex did not stand or move with extremely columnar, elephantine limbs. It required some flexion in the limbs to stand still, but how much flexion depends directly on where its CM is assumed to lie. Finally, we used our model to test an unsolved problem in dinosaur biomechanics: how fast a huge biped like T. rex could turn. Depending on the assumptions, our whole body model integrated with a musculoskeletal model estimates that turning 45 degrees on one leg could be achieved slowly, in about 1-2 s.
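    The quantities recomputed as the shape deforms (segment mass, center of mass, and moments of inertia) follow from standard density-weighted integrals over the body. The sketch below does that bookkeeping on a simple voxelized density grid rather than on B-spline solids, with a purely illustrative ellipsoidal segment and an embedded low-density region.

```python
import numpy as np

# Mass properties of a voxelized body segment: each voxel has a density
# (kg/m^3); low-density voxels can represent lungs or air sacs. The
# ellipsoidal shape and all values are purely illustrative.
n, dx = 40, 0.05                            # grid size, voxel edge (m)
x, y, z = np.meshgrid(*(np.arange(n) * dx,) * 3, indexing="ij")
body = ((x - 1.0)**2 / 1.0 + (y - 1.0)**2 / 0.25 + (z - 1.0)**2 / 0.25) <= 1.0
rho = np.where(body, 1000.0, 0.0)           # water-like density inside
rho[(x > 0.8) & (x < 1.2) & body] *= 0.3    # low-density "lung" region

dV = dx**3
mass = np.sum(rho) * dV
cm = np.array([np.sum(rho * c) for c in (x, y, z)]) * dV / mass

# Inertia tensor components about the centre of mass.
xr, yr, zr = x - cm[0], y - cm[1], z - cm[2]
Ixx = np.sum(rho * (yr**2 + zr**2)) * dV
Iyy = np.sum(rho * (xr**2 + zr**2)) * dV
Izz = np.sum(rho * (xr**2 + yr**2)) * dV
print("mass [kg]:", round(mass, 1))
print("centre of mass [m]:", np.round(cm, 3))
print("Ixx, Iyy, Izz [kg m^2]:", round(Ixx, 2), round(Iyy, 2), round(Izz, 2))
```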

  14. Impact of plant shoot architecture on leaf cooling: a coupled heat and mass transfer model

    PubMed Central

    Bridge, L. J.; Franklin, K. A.; Homer, M. E.

    2013-01-01

    Plants display a range of striking architectural adaptations when grown at elevated temperatures. In the model plant Arabidopsis thaliana, these include elongation of petioles, and increased petiole and leaf angles from the soil surface. The potential physiological significance of these architectural changes remains speculative. We address this issue computationally by formulating a mathematical model and performing numerical simulations, testing the hypothesis that elongated and elevated plant configurations may reflect a leaf-cooling strategy. This sets in place a new basic model of plant water use and interaction with the surrounding air, which couples heat and mass transfer within a plant to water vapour diffusion in the air, using a transpiration term that depends on saturation, temperature and vapour concentration. A two-dimensional, multi-petiole shoot geometry is considered, with added leaf-blade shape detail. Our simulations show that increased petiole length and angle generally result in enhanced transpiration rates and reduced leaf temperatures in well-watered conditions. Furthermore, our computations also reveal plant configurations for which elongation may result in decreased transpiration rate owing to decreased leaf liquid saturation. We offer further qualitative and quantitative insights into the role of architectural parameters as key determinants of leaf-cooling capacity. PMID:23720538
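    A toy, one-node version of such a coupled balance solves for the leaf temperature at which absorbed radiation equals sensible plus transpirational (latent) heat loss. The conductances, radiation load, and humidity below are illustrative values, not parameters of the paper's model.

```python
import numpy as np
from scipy.optimize import brentq

# Toy steady-state leaf energy balance: absorbed radiation is balanced by
# sensible heat loss and transpirational (latent) cooling.
R_abs = 400.0     # absorbed radiation, W m^-2
g_H   = 0.10      # boundary-layer conductance for heat, mol m^-2 s^-1
g_v   = 0.15      # conductance for water vapour, mol m^-2 s^-1
c_p   = 29.3      # molar heat capacity of air, J mol^-1 K^-1
lam   = 44000.0   # molar latent heat of vaporisation, J mol^-1
P_atm = 101.3     # kPa
T_air = 30.0      # deg C
RH    = 0.4       # relative humidity of the surrounding air

def e_sat(T):
    """Saturation vapour pressure (kPa), Tetens formula."""
    return 0.6108 * np.exp(17.27 * T / (T + 237.3))

def energy_balance(T_leaf):
    sensible = c_p * g_H * (T_leaf - T_air)
    latent = lam * g_v * (e_sat(T_leaf) - RH * e_sat(T_air)) / P_atm
    return R_abs - sensible - latent

T_leaf = brentq(energy_balance, T_air - 20.0, T_air + 20.0)
print("steady-state leaf temperature: %.1f deg C" % T_leaf)
```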

  15. Unraveling the benzocaine-receptor interaction at molecular level using mass-resolved spectroscopy.

    PubMed

    Aguado, Edurne; León, Iker; Millán, Judith; Cocinero, Emilio J; Jaeqx, Sander; Rijs, Anouk M; Lesarri, Alberto; Fernández, José A

    2013-10-31

    The benzocaine-toluene cluster has been used as a model system to mimic the interaction between the local anesthetic benzocaine and the phenylalanine residue in Na(+) channels. The cluster was generated in a supersonic expansion of benzocaine and toluene in helium. Using a combination of mass-resolved laser-based experimental techniques and computational methods, the complex was fully characterized, finding four conformational isomers in which the molecules are bound through N-H···π and π···π weak hydrogen bonds. The structures of the detected isomers closely resemble those predicted for benzocaine in the inner pore of the ion channels, giving experimental support to previously reported molecular chemistry models.

  16. The computation of standard solar models

    NASA Technical Reports Server (NTRS)

    Ulrich, Roger K.; Cox, Arthur N.

    1991-01-01

    Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.
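    The mass and momentum conservation equations mentioned above can be illustrated with a toy hydrostatic integration that replaces the full equation of state and opacity physics with a simple polytrope. This is a cartoon of one piece of the calculation, not a standard solar model; all constants are illustrative.

```python
import numpy as np

# Toy hydrostatic stellar structure: integrate dm/dr = 4*pi*r^2*rho and
# dP/dr = -G*m*rho/r^2 outward with a polytropic EOS P = K*rho**gamma.
# A real standard solar model couples these equations to energy transport,
# nuclear burning and a detailed EOS/opacity; the values here are made up.
G = 6.674e-11
gamma, K = 5.0 / 3.0, 2.5e9          # polytropic EOS constants (illustrative)
rho_c = 1.0e5                        # central density, kg m^-3

dr = 1.0e5                           # radial step, m
r = dr
m = 4.0 / 3.0 * np.pi * dr**3 * rho_c
P = K * rho_c**gamma
rho = rho_c
while P > 1.0e-6 * K * rho_c**gamma and rho > 0.0:
    P += -G * m * rho / r**2 * dr
    m += 4.0 * np.pi * r**2 * rho * dr
    r += dr
    rho = (max(P, 0.0) / K) ** (1.0 / gamma)

print("surface radius: %.3e m, total mass: %.3e kg" % (r, m))
```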

  17. Universal relations for differentially rotating relativistic stars at the threshold to collapse

    NASA Astrophysics Data System (ADS)

    Bozzola, Gabriele; Stergioulas, Nikolaos; Bauswein, Andreas

    2018-03-01

    A binary neutron star merger produces a rapidly and differentially rotating compact remnant whose lifespan heavily affects the electromagnetic and gravitational emissions. Its stability depends on both the equation of state (EOS) and the rotation law and it is usually investigated through numerical simulations. Nevertheless, by means of a sufficient criterion for secular instability, equilibrium sequences can be used as a computationally inexpensive way to estimate the onset of dynamical instability, which, in general, is close to the secular one. This method works well for uniform rotation and relies on the location of turning points: stellar models that are stationary points in a sequence of equilibrium solutions with constant rest mass or angular momentum. Here, we investigate differentially rotating models (using a large number of EOSs and different rotation laws) and find that several universal relations between properly scaled gravitational mass, rest mass and angular momentum of the turning-point models that are valid for uniform rotation are insensitive to the degree of differential rotation, to high accuracy.
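    The turning-point criterion amounts to locating the stationary point of the gravitational mass along a sequence of equilibria computed at fixed rest mass or angular momentum. A minimal sketch with a synthetic sequence:

```python
import numpy as np

# Synthetic constant-angular-momentum sequence: gravitational mass M as a
# function of central density. The turning point is where dM/d(rho_c) = 0,
# signalling the onset of secular instability.
rho_c = np.linspace(0.5e15, 3.0e15, 200)               # g cm^-3, synthetic
M = 2.0 - 0.35 * (rho_c / 1.0e15 - 1.8) ** 2           # M_sun, synthetic

dM = np.gradient(M, rho_c)
turn = np.argmin(np.abs(dM))                            # stationary point
print("turning point: rho_c = %.2e g/cm^3, M = %.3f M_sun"
      % (rho_c[turn], M[turn]))
```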

  18. Ising-based model of opinion formation in a complex network of interpersonal interactions

    NASA Astrophysics Data System (ADS)

    Grabowski, A.; Kosiński, R. A.

    2006-03-01

    In our work the process of opinion formation in the human population, treated as a scale-free network, is modeled and investigated numerically. The individuals (nodes of the network) are characterized by their authorities, which influence the interpersonal interactions in the population. Hierarchical, two-level structures of interpersonal interactions and spatial localization of individuals are taken into account. The effect of the mass media, modeled as an external stimulation acting on the social network, on the process of opinion formation is investigated. It was found that critical phenomena occur in the time evolution of the opinions of individuals. The first is observed at the critical temperature of the system TC and is connected with the situation in the community, which may be described by such quantifiers as the economic status of people, unemployment or crime wave. Another critical phenomenon is connected with the influence of mass media on the population. As our computations show, under certain circumstances the mass media can provoke a critical rebuilding of opinions in the population.
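    A bare-bones version of this model class is Metropolis-style Ising dynamics on a Barabási-Albert network, with an external field standing in for the mass media and node authorities entering as simple coupling weights. The sketch below illustrates the general idea, not the authors' two-level hierarchical model; the authority proxy, temperature, and field strength are made up.

```python
import numpy as np
import networkx as nx

# Bare-bones Ising-like opinion dynamics on a scale-free network.
rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(2000, 3, seed=0)
spins = rng.choice([-1, 1], size=G.number_of_nodes())
authority = np.array([1.0 + np.log1p(G.degree[i]) for i in G.nodes])  # crude proxy
T = 2.0          # social "temperature"
h = 0.2          # external field ~ influence of the mass media

for sweep in range(50):
    for i in rng.permutation(G.number_of_nodes()):
        local = sum(authority[j] * spins[j] for j in G.neighbors(i))
        dE = 2.0 * spins[i] * (local + h)          # energy cost of flipping
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i] = -spins[i]

print("mean opinion after 50 sweeps:", spins.mean())
```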

  19. An assessment of a new settling velocity parameterisation for cohesive sediment transport modeling

    NASA Astrophysics Data System (ADS)

    Baugh, John V.; Manning, Andrew J.

    2007-07-01

    An important element within the Defra-funded Estuary Process Research project "EstProc" was the implementation of the new or refined algorithms, produced under EstProc, into cohesive sediment numerical models. The implementation stage was important as any extension in the understanding of estuarine processes from EstProc was required to be suitable for dissemination into the wider research community, with a level of robustness for general applications demonstrated. This report describes work undertaken to implement the new Manning Floc Settling Velocity Model, developed during EstProc. All Manning component algorithms could be combined to provide estimates of mass settling flux. The algorithms are initially assessed in a number of 1-D scenarios, where the Manning model output is compared against both real observations and the output from alternative settling parameterisations. The Manning model is then implemented into a fully 3-D computational model (TELEMAC3D) of estuarine hydraulics and sediment transport of the Lower Thames estuary. The 3-D model results with the Manning algorithm included were compared to runs with a constant settling velocity of 0.5 mm s^-1, to runs with a settling velocity based on a simple linear multiplier of concentration, and with the above-mentioned observations of suspended concentration. The 1-D case studies found that the Manning empirical settling model could reproduce 93% of the total mass settling flux observed over a spring tidal cycle. The floc model fit was even better within the turbidity maximum (TM) zone. A constant 0.5 mm s^-1 estimated only 15% of the TM mass flux, whereas the fixed 5 mm s^-1 settling rate over-predicted the TM mass flux by 47%. Neither settling velocity as a simple linear function of concentration nor van Leussen's method fared much better, estimating less than half the observed flux during the various tidal and sub-tidal cycle periods. When the Manning-settling model was applied to a layer with suspended concentrations approaching 6 g l^-1, it calculated 96% of the observed mass flux. The main conclusions of the implementation exercise were that it was feasible to implement a complex relationship between settling velocity and concentration in a 3-D computational model of estuarine hydraulics, without producing any significant increase in model run times or reducing model stability. The use of the Manning algorithm greatly improved the reproduction of the observed distribution of suspended concentration, both in the vertical and horizontal directions, compared to the other simulations. During the 1-D assessments, the Manning-settling model demonstrated flexibility in adapting to a wide range of estuarine environmental conditions (i.e. shear stress and concentration), specifically for applied modelling purposes.
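    The quantity being compared throughout is the mass settling flux, i.e. the product of settling velocity and suspended concentration. The sketch below contrasts a constant settling velocity with a generic concentration-dependent power law; the power law is an illustrative stand-in, not the Manning floc algorithm, whose coefficients are not given in the abstract.

```python
import numpy as np

# Mass settling flux = settling velocity * suspended concentration.
# (1 g/l of suspended sediment is 1 kg/m^3, so w_s [m/s] * C gives kg m^-2 s^-1.)
C = np.array([0.05, 0.2, 0.5, 1.0, 3.0, 6.0])   # suspended concentration, g/l

w_s_const = 0.5e-3                  # constant settling velocity, m/s
w_s_floc = 1.0e-3 * C ** 0.8        # illustrative concentration-dependent law, m/s

flux_const = w_s_const * C
flux_floc = w_s_floc * C
for c, f1, f2 in zip(C, flux_const, flux_floc):
    print("C = %4.2f g/l   constant-ws flux = %.5f   floc-law flux = %.5f kg m^-2 s^-1"
          % (c, f1, f2))
```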

  20. Computer code for single-point thermodynamic analysis of hydrogen/oxygen expander-cycle rocket engines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.; Jones, Scott M.

    1991-01-01

    This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a users guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
