Numerical Modeling of Transport of Biomass Burning Emissions on South America
NASA Technical Reports Server (NTRS)
Ribeiro de Freitas, Saulo
2001-01-01
Our research efforts have addressed theoretical and numerical modeling of the source emissions and transport processes of trace gases and aerosols emitted by biomass burning in central Brazil and the Amazon basin. For this effort we coupled an Eulerian transport model with the mesoscale atmospheric model RAMS (Regional Atmospheric Modeling System).
High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters
2016-06-01
Systematic studies of the physical characteristics of Field Reversed Configuration (FRC) plasmas for advanced space propulsion. This effort consists of numerical model development, physical model development, and systematic studies of the non-linear plasma behavior of FRCs for propulsion applications. Two of the most advanced designs are based on the theta-pinch and rotating magnetic field (RMF) formation mechanisms.
A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data
NASA Technical Reports Server (NTRS)
Smith, Laura J.
2004-01-01
Tests are conducted on a quad-redundant, fault-tolerant flight control computer to establish the upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and a statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and a systematic statistical analysis is then performed on the data. These combined efforts culminate in an extrapolation of values that, in turn, support previous evaluations of the data.
Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Reid, Terry V.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. Microporous bulk insulation is used in the ground support test hardware to minimize the loss of thermal energy from the electric heat source to the environment. The insulation package is characterized before operation to predict how much heat will be absorbed by the convertor and how much will be lost to the environment during operation. In an effort to validate these predictions, numerous tasks have been performed, which provided a more accurate value for net heat input into the ASCs. This test and modeling effort included: (a) making thermophysical property measurements of test setup materials to provide inputs to the numerical models, (b) acquiring additional test data that was collected during convertor tests to provide numerical models with temperature profiles of the test setup via thermocouple and infrared measurements, (c) using multidimensional numerical models (computational fluid dynamics code) to predict net heat input of an operating convertor, and (d) using validation test hardware to provide direct comparison of numerical results and validate the multidimensional numerical models used to predict convertor net heat input. 
This effort produced high fidelity ASC net heat input predictions, which were successfully validated using specially designed test hardware enabling measurement of heat transferred through a simulated Stirling cycle. The overall effort and results are discussed.
Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis
NASA Technical Reports Server (NTRS)
Comiskey, J. G.
1979-01-01
This work made preliminary efforts to generate nonlinear numerical models of a two-spooled turbofan jet engine, and subject these models to a known method of generating global, nonlinear, time optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.
Composite material bend-twist coupling for wind turbine blade applications
NASA Astrophysics Data System (ADS)
Walsh, Justin M.
Current efforts in wind turbine blade design seek to employ bend-twist coupling of composite materials for passive power control by twisting blades to feather. Past efforts in this area of study have proved problematic, especially in the formulation of the bend-twist coupling coefficient alpha. Kevlar/epoxy, carbon/epoxy, and glass/epoxy specimens were manufactured to study bend-twist coupling, from which numerical and analytical models could be verified. Finite element analysis was implemented to evaluate fiber orientation and material property effects on coupling magnitude. An analytical/empirical model was then derived to describe the numerical results and serve as a replacement for the commonly used coupling coefficient alpha. Through the results from the numerical and analytical models, a foundation is provided for the aeroelastic design of wind turbine blades utilizing biased composite materials.
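As background for the coupling coefficient discussed above, the sketch below implements the classical coupled beam relation from the bend-twist literature; the symbols (EI bending stiffness, GJ torsional stiffness, g coupling stiffness) and the convention alpha = g / sqrt(EI*GJ) are the common textbook forms, not necessarily the replacement model derived in this work.

```python
import math

def coupling_coefficient(EI, GJ, g):
    """Bend-twist coupling coefficient alpha = g / sqrt(EI * GJ);
    |alpha| approaching 1 indicates strong coupling (common convention)."""
    return g / math.sqrt(EI * GJ)

def twist_from_bending(M, EI, GJ, g):
    """For the coupled beam constitutive relation
        [M]   [EI  g ] [kappa]
        [T] = [g   GJ] [phi' ]
    with zero applied torque (T = 0), return the bending curvature
    kappa and the twist rate phi' induced by the bending moment M."""
    alpha = coupling_coefficient(EI, GJ, g)
    kappa = M / (EI * (1.0 - alpha**2))  # curvature amplified by coupling
    phi_p = -g * kappa / GJ              # twist induced by pure bending
    return kappa, phi_p
```

With g = 0 the beam bends without twisting; as |alpha| grows, a given bending moment produces both more curvature and a proportional twist rate, which is the passive feathering mechanism the abstract describes.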
NASA Astrophysics Data System (ADS)
Anderson, Charles E., Jr.; O'Donoghue, Padraic E.; Lankford, James; Walker, James D.
1992-06-01
Complementary to a study of the compressive strength of ceramic as a function of strain rate and confinement, numerical simulations of the split-Hopkinson pressure bar (SHPB) experiments have been performed using the two-dimensional wave propagation computer program HEMP. The numerical effort had two main thrusts. Firstly, the interpretation of the experimental data relies on several assumptions. The numerical simulations were used to investigate the validity of these assumptions. The second part of the effort focused on computing the idealized constitutive response of a ceramic within the SHPB experiment. These numerical results were then compared against experimental data. Idealized models examined included a perfectly elastic material, an elastic-perfectly plastic material, and an elastic material with failure. Post-failure material was modeled as having either no strength, or a strength proportional to the mean stress. The effects of confinement were also studied. Conclusions concerning the dynamic behavior of a ceramic up to and after failure are drawn from the numerical study.
2003-01-01
...the overall effort. Mr. Wei Shih of Allcomp, Inc., City of Industry, CA, provided the mechanical and thermal property data for the carbon-phenolic... AFRL-PR-WP-TR-2003-2033: CARBON-PHENOLIC CAGES FOR HIGH-SPEED BEARINGS, Part III – Development of Numerical Models for Heat Generation and...
HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION
Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...
An adapted yield criterion for the evolution of subsequent yield surfaces
NASA Astrophysics Data System (ADS)
Küsters, N.; Brosius, A.
2017-09-01
In numerical analysis of sheet metal forming processes, the anisotropic material behaviour is often modelled with isotropic work hardening and an average Lankford coefficient. In contrast, experimental observations show an evolution of the Lankford coefficients, which can be associated with a yield surface change due to kinematic and distortional hardening. Commonly, extensive efforts are carried out to describe these phenomena. In this paper an isotropic material model based on the Yld2000-2d criterion is adapted with an evolving yield exponent in order to change the yield surface shape. The yield exponent is linked to the accumulative plastic strain. This change has the effect of a rotating yield surface normal. As the normal is directly related to the Lankford coefficient, the change can be used to model the evolution of the Lankford coefficient during yielding. The paper will focus on the numerical implementation of the adapted material model for the FE-code LS-Dyna, mpi-version R7.1.2-d. A recently introduced identification scheme [1] is used to obtain the parameters for the evolving yield surface and will be briefly described for the proposed model. The suitability for numerical analysis will be discussed for deep drawing processes in general. Efforts for material characterization and modelling will be compared to other common yield surface descriptions. Besides experimental efforts and achieved accuracy, the potential of flexibility in material models and the risk of ambiguity during identification are of major interest in this paper.
2013-09-30
The numerical efforts undertaken here implement established aspects of Boussinesq-type modeling developed by the PI and other researchers. These aspects... the Boussinesq-type framework, and then implement in a numerical model. Once this comprehensive model is developed and tested against established... phenomena that might be observed at New River. WORK COMPLETED: In FY13 we have continued the development of a Boussinesq-type formulation that...
Analytical and experimental study of control effort associated with model reference adaptive control
NASA Technical Reports Server (NTRS)
Messer, R. S.; Haftka, R. T.; Cudney, H. H.
1992-01-01
Numerical simulation results obtained for the performance of model reference adaptive control (MRAC) are experimentally verified, with a view to accounting for differences between the plant and the reference model once control is applied. MRAC is applied both experimentally and analytically to a single-degree-of-freedom system, as well as analytically to a MIMO system having controlled differences between the reference model and the plant. The control effort is found to be sensitive to differences between the plant and the reference model.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.
1991-01-01
This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, the focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.
Performance testing of a vertical Bridgman furnace using experiments and numerical modeling
NASA Astrophysics Data System (ADS)
Rosch, W. R.; Fripp, A. L.; Debnam, W. J.; Pendergrass, T. K.
1997-04-01
This paper details a portion of the work performed in preparation for the growth of lead tin telluride crystals during a Space Shuttle flight. A coordinated effort of experimental measurements and numerical modeling was completed to determine the optimum growth parameters and the performance of the furnace. This work was done using NASA's Advanced Automated Directional Solidification Furnace, but the procedures used should be equally valid for other vertical Bridgman furnaces.
Dredging Equipment Modifications for Detection and Removal of Ordnance
2006-12-01
...and numerically modeled to describe an underwater munitions detonation within an enclosed hydraulic circuit similar to that found in a dredge... supported by a numerical modeling effort describing the potential blast effects that can be associated with munitions passing into and through a modern... The screen was subsequently removed and bars were welded on the cutterhead (as previously described in Umm Qsar) to construct a "screen" with 7-cm (2.75...
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
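As an illustration of the screening-level analytical end of this spectrum, the sketch below implements a classic steady-state centerline solution for 1D advection-dispersion with first-order biodegradation, of the kind used in screening tools such as BIOSCREEN; the parameter names and values are illustrative assumptions, not taken from the chapter itself.

```python
import math

def centerline_concentration(c0, x, v, alpha_x, lam):
    """Steady-state centerline concentration for 1D advection-dispersion
    with first-order decay (a classic screening-level solution):

        C(x) = C0 * exp( (x / (2*alpha_x)) * (1 - sqrt(1 + 4*lam*alpha_x / v)) )

    c0: source concentration, x: distance downgradient [m],
    v: seepage velocity [m/d], alpha_x: longitudinal dispersivity [m],
    lam: first-order biodegradation rate [1/d]."""
    return c0 * math.exp((x / (2.0 * alpha_x)) *
                         (1.0 - math.sqrt(1.0 + 4.0 * lam * alpha_x / v)))
```

With lam = 0 the solution reduces to C = C0 everywhere on the centerline; any positive decay rate produces exponential attenuation with distance, which is the rapid screening behavior the chapter contrasts with full numerical transport codes.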
NASA National Combustion Code Simulations
NASA Technical Reports Server (NTRS)
Iannetti, Anthony; Davoudzadeh, Farhad
2001-01-01
A systematic effort is in progress to further validate the National Combustion Code (NCC) that has been developed at NASA Glenn Research Center (GRC) for comprehensive modeling and simulation of aerospace combustion systems. The validation efforts include numerical simulation of the gas-phase combustor experiments conducted at the Center for Turbulence Research (CTR), Stanford University, followed by comparison and evaluation of the computed results against the experimental data. Presently, at GRC, a numerical model of the experimental gaseous combustor is being built to simulate the experiment. The constructed numerical geometry includes the flow development sections for the air annulus and fuel pipe, the 24-channel air and fuel swirlers, hub, combustor, and tail pipe. Furthermore, a three-dimensional, multi-block grid (1.6 million grid points, 3 levels of multigrid) has been generated. Computational simulation of the gaseous combustor flow field operating on methane fuel has started. The computational domain includes the whole flow regime, starting from the fuel pipe and the air annulus, through the 12 air and 12 fuel channels, into the combustion region and through the tail pipe.
The cost of model reference adaptive control - Analysis, experiments, and optimization
NASA Technical Reports Server (NTRS)
Messer, R. S.; Haftka, R. T.; Cudney, H. H.
1993-01-01
In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally, with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single-degree-of-freedom system and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because one of the first attempts to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. Optimization of weighting matrices is shown to help reduce the increase in required control effort. However, it was found that the optimization eventually resulted in a design that required an extremely high sampling rate for successful realization.
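A minimal scalar sketch of MRAC with the classical MIT-rule adaptation law can illustrate how the adapted gain and tracking error evolve when the plant gain differs from the reference model; the first-order plant, gains, and rates below are illustrative assumptions, not the single-degree-of-freedom or MIMO systems studied in the paper.

```python
def simulate_mrac(k=2.0, k0=1.0, gamma=0.5, dt=0.001, steps=60000):
    """Scalar MRAC with the MIT rule (illustrative first-order system):
        plant:      dy/dt  = -y  + k * u,   with control u = theta * r
        reference:  dym/dt = -ym + k0 * r
        MIT rule:   dtheta/dt = -gamma * e * ym,   e = y - ym
    Integrated with forward Euler for a constant reference r = 1.
    Returns the final tracking error and the adapted gain theta."""
    y = ym = theta = 0.0
    r = 1.0
    e = 0.0
    for _ in range(steps):
        u = theta * r
        e = y - ym
        y += dt * (-y + k * u)
        ym += dt * (-ym + k0 * r)
        theta += dt * (-gamma * e * ym)
    return e, theta
```

For this toy system the adapted gain converges toward k0/k, the value that makes the mismatched plant track the reference model; the transient control effort before convergence is exactly the quantity whose sensitivity to plant/model differences the paper measures.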
A nested numerical tidal model of the southern New England bight
NASA Technical Reports Server (NTRS)
Gordon, R. B.; Spaulding, M. L.
1979-01-01
Efforts were focused on the development and application of a three-dimensional numerical model for predicting pollutant and sediment transport in estuarine and coastal environments. To successfully apply the pollutant and sediment transport model to Rhode Island coastal waters, it was determined that the flow field in this region had to be better described through the use of existing numerical circulation models. A nested, barotropic numerical tidal model was applied to the southern New England Bight (Long Island, Block Island, Rhode Island Sounds, Buzzards Bay, and the shelf south of Block Island). Forward time and centered spatial differences were employed with the bottom friction term evaluated at both time levels. Using existing tide records on the New England shelf, adequate information was available to specify the tide height boundary condition further out on the shelf. Preliminary results are within the accuracy of the National Ocean Survey tide table data.
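The differencing scheme described above (forward time and centered space, with the bottom friction term evaluated at both time levels) can be sketched for a 1D linearized tidal channel; the grid sizes, depth, friction coefficient, and the crude open-boundary treatment below are illustrative assumptions, not the nested Bight model itself.

```python
import math

def tidal_channel(nx=50, nt=2000, dx=1000.0, dt=5.0,
                  h=10.0, g=9.81, cf=1.0e-3, amp=1.0, period=44714.0):
    """1D linearized shallow-water sketch: forward time and centered
    space differences, with the linear bottom friction term cf*u
    averaged over both time levels (semi-implicit). An M2-like tide of
    the given amplitude and period forces the left open boundary.
    Returns the final surface elevation along the channel."""
    eta = [0.0] * nx              # elevation points
    u = [0.0] * (nx + 1)          # staggered velocity points
    for n in range(nt):
        t = n * dt
        eta[0] = amp * math.sin(2.0 * math.pi * t / period)  # tidal forcing
        for i in range(1, nx):    # momentum: semi-implicit friction
            dedx = (eta[i] - eta[i - 1]) / dx
            u[i] = (u[i] * (1.0 - 0.5 * dt * cf) - dt * g * dedx) \
                   / (1.0 + 0.5 * dt * cf)
        for i in range(1, nx - 1):  # continuity (centered divergence)
            eta[i] -= dt * h * (u[i + 1] - u[i]) / dx
        eta[nx - 1] = eta[nx - 2]   # crude open-boundary condition
    return eta
```

Averaging the friction term over both time levels keeps the damping unconditionally stable without iterating, which is presumably the motivation for the treatment the abstract describes.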
Numerical Investigation of Hot Gas Ingestion by STOVL Aircraft
NASA Technical Reports Server (NTRS)
Vanka, S. P.
1998-01-01
This report compiles the various research activities conducted under the auspices of the NASA Grant NAG3-1026, "Numerical Investigation of Hot Gas Ingestion by STOVL Aircraft," during the period of April 1989 to April 1994. The effort involved the development of multigrid-based algorithms and computer programs for the calculation of the flow and temperature fields generated by Short Take-off and Vertical Landing (STOVL) aircraft while hovering in ground proximity. Of particular importance has been the interaction of the exhaust jets with the head wind, which gives rise to the hot gas ingestion process. The objective of new STOVL designs is to reduce the temperature of the gases ingested into the engine. The present work describes a solution algorithm for the multi-dimensional elliptic partial-differential equations governing fluid flow and heat transfer in general curvilinear coordinates. The solution algorithm is based on the multigrid technique, which obtains rapid convergence of the iterative numerical procedure for the discrete equations. Initial efforts were concerned with the solution of the Cartesian form of the equations. This algorithm was applied to a simulated STOVL configuration in rectangular coordinates. In the next phase of the work, a computer code for general curvilinear coordinates was constructed. This was applied to model STOVL geometries on curvilinear grids. The code was also validated on model problems. In all these efforts, the standard k-epsilon turbulence model was used.
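A two-grid cycle for the 1D Poisson equation illustrates the multigrid idea underlying the solver described above: smoothing damps the high-frequency error on the fine grid, while a coarse-grid correction removes the smooth error that iteration alone handles slowly. The weighted-Jacobi smoother and toy problem are illustrative assumptions, not the curvilinear flow code.

```python
def two_grid_poisson(f, n, cycles=30, w=2.0 / 3.0):
    """Two-grid sketch of multigrid for -u'' = f on [0, 1] with
    u(0) = u(1) = 0 and n intervals: weighted-Jacobi pre-smoothing,
    full-weighting restriction of the residual, an approximate coarse
    solve by extra smoothing sweeps, and linear-interpolation
    correction. A real solver would recurse over many levels."""
    h = 1.0 / n
    u = [0.0] * (n + 1)

    def jacobi(v, rhs, hh, sweeps):
        m = len(v) - 1
        for _ in range(sweeps):
            old = list(v)
            for i in range(1, m):
                v[i] = (1.0 - w) * old[i] + \
                       w * 0.5 * (old[i - 1] + old[i + 1] + hh * hh * rhs[i])
        return v

    nc = n // 2
    for _ in range(cycles):
        jacobi(u, f, h, 3)                           # pre-smooth
        r = [0.0] * (n + 1)                          # residual r = f - A u
        for i in range(1, n):
            r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
        rc = [0.0] * (nc + 1)                        # full-weighting restrict
        for i in range(1, nc):
            rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
        ec = jacobi([0.0] * (nc + 1), rc, 2.0 * h, 50)  # coarse "solve"
        for i in range(1, nc):                       # prolong and correct
            u[2 * i] += ec[i]
        for i in range(nc):
            u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
    return u
```

For f = 2 the exact solution is u = x(1 - x), which the second-order discretization reproduces at the nodes, so the cycle can be checked against it directly.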
CURRENT METHODS AND RESEARCH STRATEGIES FOR MODELING ATMOSPHERIC MERCURY
The atmospheric pathway of the global mercury cycle is known to be the primary source of mercury contamination to most threatened aquatic ecosystems. Current efforts toward numerical modeling of atmospheric mercury are hindered by an incomplete understanding of emissions, atmosp...
EXPERIMENTAL EVALUATION OF TWO SHARP FRONT MODELS FOR VADOSE ZONE NON-AQUEOUS PHASE LIQUID TRANSPORT
Recent research efforts on the transport of immiscible organic wastes in subsurface systems have focused on the development of numerical models of various levels of sophistication and on the site characterization data needed to obtain... However, in real field applications, the model p...
Three-Dimensional Numerical Modeling of Magnetohydrodynamic Augmented Propulsion Experiment
NASA Technical Reports Server (NTRS)
Turner, M. W.; Hawk, C. W.; Litchford, R. J.
2009-01-01
Over the past several years, NASA Marshall Space Flight Center has engaged in the design and development of an experimental research facility to investigate the use of diagonalized crossed-field magnetohydrodynamic (MHD) accelerators as a possible thrust augmentation device for thermal propulsion systems. In support of this effort, a three-dimensional numerical MHD model has been developed for the purpose of analyzing and optimizing accelerator performance and to aid in understanding critical underlying physical processes and nonideal effects. This Technical Memorandum fully summarizes model development efforts and presents the results of pretest performance optimization analyses. These results indicate that the MHD accelerator should utilize a 45deg diagonalization angle with the applied current evenly distributed over the first five inlet electrode pairs. When powered at 100 A, this configuration is expected to yield a 50% global efficiency with an 80% increase in axial velocity and a 50% increase in centerline total pressure.
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounted for these effects simultaneously. This model was named the Multi- Axial, Temperature, and Time Dependent or MATT failure criterion. Due to the intricate nature of the failure criterion, some parameters were required to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion to allow for calculations of failure conditions without complex equations or numerical techniques.
The politics of participation in watershed modeling.
Korfmacher, K S
2001-02-01
While researchers and decision-makers increasingly recognize the importance of public participation in environmental decision-making, there is less agreement about how to involve the public. One of the most controversial issues is how to involve citizens in producing scientific information. Although this question is relevant to many areas of environmental policy, it has come to the fore in watershed management. Increasingly, the public is becoming involved in the sophisticated computer modeling efforts that have been developed to inform watershed management decisions. These models typically have been treated as technical inputs to the policy process. However, model-building itself involves numerous assumptions, judgments, and decisions that are relevant to the public. This paper examines the politics of public involvement in watershed modeling efforts and proposes five guidelines for good practice for such efforts. Using these guidelines, I analyze four cases in which different approaches to public involvement in the modeling process have been attempted and make recommendations for future efforts to involve communities in watershed modeling. Copyright 2001 Springer-Verlag
Evaluating shallow-flow rock structures as scour countermeasures at bridges.
DOT National Transportation Integrated Search
2009-12-01
A study to determine whether or not shallow-flow rock structures could reliably be used at bridge abutments in place of riprap. Research was conducted in a two-phase effort beginning with numerical modeling and ending with field verification of model...
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
Nonlinear Constitutive Relations for High Temperature Application, 1984
NASA Technical Reports Server (NTRS)
1985-01-01
Nonlinear constitutive relations for high temperature applications were discussed. The state of the art in nonlinear constitutive modeling of high temperature materials was reviewed and the need for future research and development efforts in this area was identified. Considerable research efforts are urgently needed in the development of nonlinear constitutive relations for high temperature applications prompted by recent advances in high temperature materials technology and new demands on material and component performance. Topics discussed include: constitutive modeling, numerical methods, material testing, and structural applications.
Full-scale Dynamic Testing of Soft-Story Retrofitted and Un-Retrofitted Woodframe Buildings
John W. van de Lindt; George T. Abell; Pouria Bahmani; Mikhail Gershfeld; Xiaoyun Shao; Weichiang Pang; Michael D. Symans; Ershad Ziaei; Steven E. Pryor; Douglas Rammer; Jingjing Tian
2013-01-01
The existence of thousands of soft-story woodframe buildings in California has been recognized as a disaster preparedness problem with concerted mitigation efforts underway in many cities throughout the state. The vast majority of those efforts are based on numerical modeling, often with half-century old data in which assumptions have to be made based on best...
Computational fluid dynamics combustion analysis evaluation
NASA Technical Reports Server (NTRS)
Kim, Y. M.; Shang, H. M.; Chen, C. P.; Ziebarth, J. P.
1992-01-01
This study involves the development of numerical modelling in spray combustion. These modelling efforts are mainly motivated by the need to improve the computational efficiency of the stochastic particle tracking method as well as to incorporate the physical submodels of turbulence, combustion, vaporization, and dense spray effects. The present mathematical formulation and numerical methodologies can be cast in any time-marching pressure correction methodology (PCM), such as the FDNS code and the MAST code. A sequence of validation cases involving steady burning sprays and transient evaporating sprays is included.
Numerical simulation of film-cooled ablative rocket nozzles
NASA Technical Reports Server (NTRS)
Landrum, D. B.; Beard, R. M.
1996-01-01
The objective of this research effort was to evaluate the impact of incorporating an additional cooling port downstream between the injector and nozzle throat in the NASA Fast Track chamber. A numerical model of the chamber was developed for the analysis. The analysis did not model ablation but instead correlated the initial ablation rate with the initial nozzle wall temperature distribution. The results of this study provide guidance in the development of a potentially lighter, second generation ablative rocket nozzle which maintains desired performance levels.
Ongoing Fixed Wing Research within the NASA Langley Aeroelasticity Branch
NASA Technical Reports Server (NTRS)
Bartels, Robert; Chwalowski, Pawel; Funk, Christie; Heeg, Jennifer; Hur, Jiyoung; Sanetrik, Mark; Scott, Robert; Silva, Walter; Stanford, Bret; Wiseman, Carol
2015-01-01
The NASA Langley Aeroelasticity Branch is involved in a number of research programs related to fixed wing aeroelasticity and aeroservoelasticity. These ongoing efforts are summarized here, and include aeroelastic tailoring of subsonic transport wing structures, experimental and numerical assessment of truss-braced wing flutter and limit cycle oscillations, and numerical modeling of high speed civil transport configurations. Efforts devoted to verification, validation, and uncertainty quantification of aeroelastic physics in a workshop setting are also discussed. The feasibility of certain future civil transport configurations will depend on the ability to understand and control complex aeroelastic phenomena, a goal that the Aeroelasticity Branch is well-positioned to contribute through these programs.
NASA Astrophysics Data System (ADS)
Sorensen, Ira Joseph
A primary objective of the effort reported here is to develop a radiometric instrument modeling environment to provide complete end-to-end numerical models of radiometric instruments, integrating the optical, electro-thermal, and electronic systems. The modeling environment consists of a Monte Carlo ray-trace (MCRT) model of the optical system coupled to a transient, three-dimensional finite-difference electrothermal model of the detector assembly with an analytic model of the signal-conditioning circuitry. The environment provides a complete simulation of the dynamic optical and electrothermal behavior of the instrument. The modeling environment is used to create an end-to-end model of the CERES scanning radiometer, and its performance is compared to that of an operational CERES total channel as a benchmark. A further objective of this effort is to formulate an efficient design environment for radiometric instruments. To this end, the modeling environment is combined with evolutionary search algorithms known as genetic algorithms (GAs) to develop a methodology for optimal instrument design using high-level radiometric instrument models. GAs are applied to the design of the optical system and detector system separately, and to both as an aggregate function, with positive results.
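A minimal real-coded genetic algorithm of the kind referenced above can be sketched as follows; the tournament-free elitist selection, blend crossover, and Gaussian mutation choices are illustrative assumptions, not the instrument-design GA itself.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, generations=100, seed=1):
    """Minimal real-coded genetic algorithm: keep the best quarter of
    the population (elitism), create children by blend crossover of
    elite parents, and apply a clamped Gaussian mutation to one gene.
    `fitness` is minimized; `bounds` lists (lo, hi) per gene."""
    rng = random.Random(seed)

    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 4]               # survivors
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)               # two elite parents
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # blend crossover
            i = rng.randrange(len(child))             # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] +
                                   rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = children
    return min(pop, key=fitness)
```

Because the fitness function is just a callable, the same loop can wrap any expensive merit function, which is how a high-level instrument model would be plugged in as the objective.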
Choosing appropriate subpopulations for modeling tree canopy cover nationwide
Gretchen G. Moisen; John W. Coulston; Barry T. Wilson; Warren B. Cohen; Mark V. Finco
2012-01-01
In prior national mapping efforts, the country has been divided into numerous ecologically similar mapping zones, and individual models have been constructed for each zone. Additionally, a hierarchical approach has been taken within zones to first mask out areas of nonforest, then target models of tree attributes within forested areas only. This results in many models...
3D nozzle flow simulations including state-to-state kinetics calculation
NASA Astrophysics Data System (ADS)
Cutrone, L.; Tuttafesta, M.; Capitelli, M.; Schettino, A.; Pascazio, G.; Colonna, G.
2014-12-01
In supersonic and hypersonic flows, thermal and chemical non-equilibrium is one of the fundamental aspects that must be taken into account for the accurate characterization of the plasma. In this paper, we present an optimized methodology to approach plasma numerical simulation by state-to-state kinetics calculations in a fully 3D Navier-Stokes CFD solver. Numerical simulations of an expanding flow are presented aimed at comparing the behavior of state-to-state chemical kinetics models with respect to the macroscopic thermochemical non-equilibrium models that are usually used in the numerical computation of high temperature hypersonic flows. The comparison is focused both on the differences in the numerical results and on the computational effort associated with each approach.
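The state-to-state approach evolves individual level populations through a master equation rather than a single vibrational temperature, which is where its extra computational cost comes from. The toy relaxation sketch below uses illustrative two-level rates, not the nozzle-flow kinetics of the paper.

```python
def relax_populations(n0, rates, dt=1.0e-9, steps=20000):
    """Toy state-to-state master equation for level populations n_i:

        dn_i/dt = sum_j ( rates[j][i] * n_j - rates[i][j] * n_i )

    integrated with explicit Euler; rates[i][j] is the i -> j transition
    frequency in 1/s. A macroscopic non-equilibrium model would instead
    evolve a single vibrational temperature. Illustrative rates only."""
    n = list(n0)
    m = len(n)
    for _ in range(steps):
        dn = [0.0] * m
        for i in range(m):
            for j in range(m):
                if i != j:
                    dn[i] += rates[j][i] * n[j] - rates[i][j] * n[i]
        n = [x + dt * d for x, d in zip(n, dn)]
    return n
```

The gain/loss structure conserves the total population exactly, and with constant rates the populations relax to the ratio fixed by detailed balance; in a real state-to-state solver the same right-hand side, with hundreds of levels and temperature-dependent rates, is coupled to the Navier-Stokes equations at every cell.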
ULTRASONIC STUDIES OF THE FUNDAMENTAL MECHANISMS OF RECRYSTALLIZATION AND SINTERING OF METALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
TURNER, JOSEPH A.
2005-11-30
The purpose of this project was to develop a fundamental understanding of the interaction of an ultrasonic wave with complex media, with specific emphases on recrystallization and sintering of metals. A combined analytical, numerical, and experimental research program was implemented. Theoretical models of elastic wave propagation through these complex materials were developed using stochastic wave field techniques. The numerical simulations focused on finite element wave propagation solutions through complex media. The experimental efforts were focused on corroboration of the models developed and on the development of new experimental techniques. The analytical and numerical research allows the experimental results to be interpreted quantitatively.
Modeling of Powder Bed Manufacturing Defects
NASA Astrophysics Data System (ADS)
Mindt, H.-W.; Desmaison, O.; Megahed, M.; Peralta, A.; Neumann, J.
2018-01-01
Powder bed additive manufacturing offers unmatched capabilities. The deposition resolution achieved is extremely high, enabling the production of innovative functional products and materials. Achieving the desired final quality is, however, hampered by many potential defects that have to be managed during the manufacturing process. Defects observed in products manufactured via powder bed fusion have been studied experimentally. In this effort we have relied on experiments reported in the literature and, when experimental data were not sufficient, we have performed additional experiments, providing an extended foundation for defect analysis. There is great interest in reducing the effort and cost of additive manufacturing process qualification and certification using integrated computational materials engineering. A prerequisite is, however, that numerical methods can indeed capture defects. A multiscale multiphysics platform is developed and applied to predict and explain the origin of several defects that have been observed experimentally during laser-based powder bed fusion processes. The models utilized are briefly introduced. The ability of the models to capture the observed defects is verified. The root cause of the defects is explained by analyzing the numerical results, thus confirming the ability of numerical methods to provide a foundation for rapid process qualification.
REGRESSION MODELS THAT RELATE STREAMS TO WATERSHEDS: COPING WITH NUMEROUS, COLLINEAR PREDICTORS
GIS efforts can produce a very large number of watershed variables (climate, land use/land cover and topography, all defined for multiple areas of influence) that could serve as candidate predictors in a regression model of reach-scale stream features. Invariably, many of these ...
Particle Engulfment and Pushing By Solidifying Interfaces
NASA Technical Reports Server (NTRS)
2003-01-01
The study of particle behavior at solid/liquid interfaces (SLIs) is at the center of the Particle Engulfment and Pushing (PEP) research program. Interactions of particles with SLIs have been of interest since the 1960s, starting with geological observations, i.e., frost heaving. Ever since, this field of research has become significant to such diverse areas as metal matrix composite materials, fabrication of superconductors, and inclusion control in steels. The PEP research effort is geared towards understanding the fundamental physics of the interaction between particles and a planar SLI. Experimental work, including 1-g and mu-g experiments, accompanies the development of analytical and numerical models. The experimental work comprised substantial groundwork with aluminum (Al) and zinc (Zn) matrices containing spherical zirconia particles, mu-g experiments with metallic Al matrices, and the use of transparent organic metal-analogue materials. The modeling efforts have grown from the initial steady-state analytical model to dynamic models, accounting for the initial acceleration of a particle at rest by an advancing SLI. To gain a more comprehensive understanding, numerical models were developed to account for the influence of the thermal and solutal fields. Current efforts are geared towards coupling the diffusive 2-D front-tracking model with a fluid flow model to account for differences in the physics of interaction between 1-g and mu-g environments. A significant amount of this theoretical investigation has been and is being performed by co-investigators at NASA MSFC.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.
1990-01-01
This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers.
Multiple control strategies for prevention of avian influenza pandemic.
Ullah, Roman; Zaman, Gul; Islam, Saeed
2014-01-01
We present the prevention of avian influenza pandemic by adjusting multiple control functions in the human-to-human transmittable avian influenza model. First we show the existence of the optimal control problem; then, using both analytical and numerical techniques, we investigate the cost-effective control effects for the prevention of transmission of the disease. To do this, we use three control functions: the effort to reduce the number of contacts with humans infected with mutant avian influenza, the antiviral treatment of infected individuals, and the effort to reduce the number of infected birds. We completely characterize the optimal control and compute the numerical solution of the optimality system using an iterative method.
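The iterative solution of an optimality system like the one above is typically a forward-backward sweep: integrate the state forward, the Pontryagin adjoint backward, then relax the control toward the optimality condition. The sketch below applies that idea to a one-state toy problem (not the avian influenza model itself), with dynamics and cost chosen only to keep the example self-contained:

```python
def fbs_optimal_control(x0=1.0, T=1.0, c=0.5, n=200, iters=300, relax=0.1):
    """Forward-backward sweep for the toy problem
        minimize J = integral of (x^2 + c*u^2) dt,  dx/dt = x - u,  x(0) = x0.
    Pontryagin gives adjoint lam' = -(2x + lam), lam(T) = 0, and u = lam/(2c)."""
    dt = T / n
    u = [0.0] * (n + 1)
    x = [x0] * (n + 1)
    for _ in range(iters):
        for k in range(n):                       # forward sweep: state
            x[k + 1] = x[k] + dt * (x[k] - u[k])
        lam = [0.0] * (n + 1)                    # backward sweep: adjoint
        for k in range(n, 0, -1):
            lam[k - 1] = lam[k] + dt * (2.0 * x[k] + lam[k])
        # relaxed update toward the optimality condition u = lam/(2c)
        u = [(1.0 - relax) * uk + relax * lk / (2.0 * c)
             for uk, lk in zip(u, lam)]
    cost = sum((x[k] ** 2 + c * u[k] ** 2) * dt for k in range(n))
    return u, cost

u_opt, cost = fbs_optimal_control()
```

In the paper's setting the state is a vector (susceptible, infected, recovered compartments for birds and humans) and there are three controls, but the sweep structure is the same; the under-relaxation factor is what keeps the iteration from oscillating.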
Fu, Pengcheng; Johnson, Scott M.; Carrigan, Charles R.
2012-01-31
This paper documents our effort to use a fully coupled hydro-geomechanical numerical test bed to study the use of low hydraulic pressure to stimulate geothermal reservoirs with existing fracture networks. In this low-pressure stimulation strategy, fluid pressure is lower than the minimum in situ compressive stress, so the fractures are not completely open, but permeability improvement can be achieved through shear dilation. We found that in this low-pressure regime, the coupling between the fluid phase and the rock solid phase becomes very simple, and the numerical model can achieve a low computational cost. Using this modified model, we study the behavior of a single fracture and a random fracture network.
Three-D Flow Analysis of the Alternate SSME HPOT TAD
NASA Technical Reports Server (NTRS)
Kubinski, Cheryl A.
1993-01-01
This paper describes the results of numerical flow analyses performed in support of design development of the Space Shuttle Main Engine Alternate High Pressure Oxidizer Turbine turn-around duct (TAD). The flow domain has been modeled using a 3D, Navier-Stokes, general-purpose flow solver. The goal of this effort is to achieve an alternate TAD exit flow distribution which closely matches that of the baseline configuration. 3D Navier-Stokes CFD analyses were employed to evaluate numerous candidate geometry modifications to the TAD flowpath in order to achieve this goal. The design iterations are summarized, along with a description of the computational model, the numerical results, and the conclusions based on these calculations.
John W. van de Lindt; Pouria Bahmani; Mikhail Gershfeld; Gary Mochizuki; Xiaoyun Shao; Steven E. Pryor; Weichiang Pang; Michael D. Symans; Jingjing Tian; Ershad Ziaei; Elaina N. Jennings; Douglas Rammer
2014-01-01
There are thousands of soft-story wood-frame buildings in California which have been recognized as a disaster preparedness problem, with concerted mitigation efforts underway in many cities throughout the state. The vast majority of those efforts are based on numerical modelling, often with half-century-old data in which assumptions have to be made based on engineering...
NASA Technical Reports Server (NTRS)
Smith, S. D.
1984-01-01
The overall contractual effort and the theory and numerical solution for the Reacting and Multi-Phase (RAMP2) computer code are described. The code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields. Fundamental equations for steady flow of reacting gas-particle mixtures, method of characteristics, mesh point construction, and numerical integration of the conservation equations are considered herein.
The NASA-Langley Wake Vortex Modelling Effort in Support of an Operational Aircraft Spacing System
NASA Technical Reports Server (NTRS)
Proctor, Fred H.
1998-01-01
Two numerical modelling efforts, one using a large eddy simulation model and the other a numerical weather prediction model, are underway in support of NASA's Terminal Area Productivity program. The large-eddy simulation model (LES) has a meteorological framework and permits the interaction of wake vortices with environments characterized by crosswind shear, stratification, humidity, and atmospheric turbulence. Results from the numerical simulations are being used to assist in the development of algorithms for an operational wake-vortex aircraft spacing system. A mesoscale weather forecast model is being adapted for providing operational forecasts of winds, temperature, and turbulence parameters to be used in the terminal area. This paper describes the goals and modelling approach, as well as achievements obtained to date. Simulation results will be presented from the LES model for both two and three dimensions. The 2-D model is found to be generally valid for studying wake vortex transport, while the 3-D approach is necessary for realistic treatment of decay via interaction of wake vortices and atmospheric boundary layer turbulence. Meteorology is shown to have an important effect on vortex transport and decay. Presented are results showing that wake vortex transport is unaffected by uniform fog or rain, but wake vortex transport can be strongly affected by nonlinear vertical change in the ambient crosswind. Both simulation and observations show that atmospheric vortices decay from the outside with minimal expansion of the core. Vortex decay and the onset of three-dimensional instabilities are found to be enhanced by the presence of ambient turbulence.
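The 2-D transport regime discussed above can be caricatured with a point-vortex model: a counter-rotating pair descends under mutual induction at Γ/(2πb) while being carried by the crosswind. The circulation, spacing, and wind values below are illustrative, and decay, stratification, and ground effect are all ignored:

```python
import math

def advect_vortex_pair(gamma=300.0, b0=30.0, crosswind=2.0, dt=0.1, t_end=20.0):
    """Toy 2-D point-vortex sketch of wake transport: a counter-rotating pair
    of circulation -/+ gamma (m^2/s) spaced b0 (m) apart in a uniform
    crosswind (m/s). Each vortex is advected by the other's induced velocity
    plus the ambient wind."""
    pos = [(-b0 / 2.0, 0.0), (b0 / 2.0, 0.0)]   # (x, z) of each vortex
    circ = [-gamma, gamma]                      # signs chosen so the pair descends
    for _ in range(int(round(t_end / dt))):
        vel = []
        for i, (xi, zi) in enumerate(pos):
            u, w = crosswind, 0.0               # ambient wind plus...
            for j, (xj, zj) in enumerate(pos):
                if i == j:
                    continue
                dx, dz = xi - xj, zi - zj
                r2 = dx * dx + dz * dz
                # ...2-D point-vortex induction (u, w) = circ/(2 pi r^2)*(-dz, dx)
                u += circ[j] * (-dz) / (2.0 * math.pi * r2)
                w += circ[j] * dx / (2.0 * math.pi * r2)
            vel.append((u, w))
        pos = [(x + u * dt, z + w * dt) for (x, z), (u, w) in zip(pos, vel)]
    return pos
```

With the defaults the pair descends at γ/(2πb₀) ≈ 1.6 m/s and drifts laterally with the wind, which is the behaviour a spacing algorithm has to predict; capturing decay requires the 3-D LES treatment the abstract describes.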
Translating Sustainability: The Design of a Secondary Charter School
ERIC Educational Resources Information Center
Hodgkinson, Todd Michael
2011-01-01
Although numerous efforts have been made to enact the concept of sustainability in schools around the world, a single, replicable model of sustainability education fails to exist. Without a replicable model to follow or adapt, educators looking to enact the concept of sustainability are left to their own devices for deciding what this orientation…
NASA Astrophysics Data System (ADS)
Veldkamp, A.; Baartman, J. E. M.; Coulthard, T. J.; Maddy, D.; Schoorl, J. M.; Storms, J. E. A.; Temme, A. J. A. M.; van Balen, R.; van De Wiel, M. J.; van Gorp, W.; Viveen, W.; Westaway, R.; Whittaker, A. C.
2017-06-01
The development and application of numerical models to investigate fluvial sedimentary archives has increased during the last decades, resulting in sustained growth in the number of scientific publications with the keywords 'fluvial models', 'fluvial process models' and 'fluvial numerical models'. In this context we compile and review the current contributions of numerical modelling to the understanding of fluvial archives. In particular, recent advances, current limitations, previous unexpected results and future perspectives are all discussed. Numerical modelling efforts have demonstrated that fluvial systems can display non-linear behaviour with often unexpected dynamics causing significant delay, amplification, attenuation or blurring of externally controlled signals in their simulated record. Numerical simulations have also demonstrated that fluvial records can be generated by intrinsic dynamics without any change in external controls. Many other model applications demonstrate that fluvial archives, specifically of large fluvial systems, can be convincingly simulated as a function of the interplay of (palaeo) landscape properties and extrinsic climate, base level and crustal controls. All discussed models can, after some calibration, produce believable matches with real world systems, suggesting that equifinality - where a given end state can be reached through many different pathways starting from different initial conditions and physical assumptions - plays an important role in fluvial records and their modelling. The overall future challenge lies in the development of new methodologies for a more independent validation of system dynamics and research strategies that allow the separation of intrinsic and extrinsic record signals using combined fieldwork and modelling.
NASA Astrophysics Data System (ADS)
Leung, L. R.; Thornton, P. E.; Riley, W. J.; Calvin, K. V.
2017-12-01
Towards the goal of understanding the contributions from natural and managed systems to current and future greenhouse gas fluxes and carbon-climate and carbon-CO2 feedbacks, efforts have been underway to improve representations of the terrestrial, river, and human components of the ACME earth system model. Broadly, our efforts include implementation and comparison of approaches to represent the nutrient cycles and nutrient limitations on ecosystem production, extending the river transport model to represent sediment and riverine biogeochemistry, and coupling of human systems such as irrigation, reservoir operations, and energy and land use with the ACME land and river components. Numerical experiments have been designed to understand how terrestrial carbon, nitrogen, and phosphorus cycles regulate climate system feedbacks and the sensitivity of the feedbacks to different model treatments, to examine key processes governing sediment and biogeochemistry in the rivers and their role in the carbon cycle, and to explore the impacts of human systems in perturbing the hydrological and carbon cycles and their interactions. This presentation will briefly introduce the ACME modeling approaches and discuss preliminary results and insights from numerical experiments that lay the foundation for improving understanding of the integrated climate-biogeochemistry-human system.
Atmospheric Research 2011 Technical Highlights
NASA Technical Reports Server (NTRS)
2012-01-01
The 2011 Technical Highlights describes the efforts of all members of Atmospheric Research. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Comparison of numerical model simulations and SFO wake vortex windline measurements
DOT National Transportation Integrated Search
2003-06-23
To provide quantitative support for the Simultaneous Offset Instrument Approach (SOIA) procedure, an extensive data collection effort was undertaken at San Francisco International Airport by the Federal Aviation Administration (FAA, U.S. Dept. of Tra...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baptista, António M.
This work focuses on the numerical modeling of Columbia River estuarine circulation and associated modeling-supported analyses conducted as an integral part of a multi-disciplinary and multi-institutional effort led by NOAA's Northwest Fisheries Science Center. The overall effort is aimed at: (1) retrospective analyses to reconstruct historic bathymetric features and assess effects of climate and river flow on the extent and distribution of shallow water, wetland and tidal-floodplain habitats; (2) computer simulations using a 3-dimensional numerical model to evaluate the sensitivity of salmon rearing opportunities to various historical modifications affecting the estuary (including channel changes, flow regulation, and diking of tidal wetlands and floodplains); (3) observational studies of present and historic food web sources supporting selected life histories of juvenile salmon as determined by stable isotope, microchemistry, and parasitology techniques; and (4) experimental studies in Grays River in collaboration with Columbia River Estuary Study Taskforce (CREST) and the Columbia Land Trust (CLT) to assess effects of multiple tidal wetland restoration projects on various life histories of juvenile salmon and to compare responses to observed habitat-use patterns in the mainstem estuary. From the above observations, experiments, and additional modeling simulations, the effort will also (5) examine effects of alternative flow-management and habitat-restoration scenarios on habitat opportunity and the estuary's productive capacity for juvenile salmon. The underlying modeling system is part of the SATURN coastal-margin observatory [1]. SATURN relies on 3D numerical models [2, 3] to systematically simulate and understand baroclinic circulation in the Columbia River estuary-plume-shelf system [4-7] (Fig. 1).
Multi-year simulation databases of circulation are produced as an integral part of SATURN, and have multiple applications in understanding estuary/plume variability, the role of the estuary and plume on salmon survival, and functional changes in the estuary-plume system in response to climate and human activities.
Numerical simulation of magmatic hydrothermal systems
Ingebritsen, S.E.; Geiger, S.; Hurwitz, S.; Driesner, T.
2010-01-01
The dynamic behavior of magmatic hydrothermal systems entails coupled and nonlinear multiphase flow, heat and solute transport, and deformation in highly heterogeneous media. Thus, quantitative analysis of these systems depends mainly on numerical solution of coupled partial differential equations and complementary equations of state (EOS). The past 2 decades have seen steady growth of computational power and the development of numerical models that have eliminated or minimized the need for various simplifying assumptions. Considerable heuristic insight has been gained from process-oriented numerical modeling. Recent modeling efforts employing relatively complete EOS and accurate transport calculations have revealed dynamic behavior that was damped by linearized, less accurate models, including fluid property control of hydrothermal plume temperatures and three-dimensional geometries. Other recent modeling results have further elucidated the controlling role of permeability structure and revealed the potential for significant hydrothermally driven deformation. Key areas for future research include incorporation of accurate EOS for the complete H2O-NaCl-CO2 system, more realistic treatment of material heterogeneity in space and time, realistic description of large-scale relative permeability behavior, and intercode benchmarking comparisons. Copyright 2010 by the American Geophysical Union.
Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multi-physics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight for possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
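The construct-validate-use workflow for surrogates can be sketched in a few lines. The "expensive simulation" here is a cheap stand-in function, and the polynomial form and tolerance are illustrative placeholders rather than the Bayesian-validated framework of the paper:

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly spectral-element run (illustrative response)."""
    return np.sin(2.0 * x) + 0.5 * x ** 2

def build_surrogate(n_train=12, degree=5):
    """Fit a cheap polynomial surrogate on a handful of expensive samples,
    then validate it on points the fit never saw before trusting it."""
    x_train = np.linspace(0.0, 2.0, n_train)
    coeffs = np.polyfit(x_train, expensive_simulation(x_train), degree)
    x_val = np.linspace(0.05, 1.95, 20)          # independent validation points
    err = np.max(np.abs(np.polyval(coeffs, x_val) - expensive_simulation(x_val)))
    return coeffs, err

coeffs, err = build_surrogate()
# the validated surrogate replaces the simulator in a dense design sweep
x_grid = np.linspace(0.0, 2.0, 2001)
x_best = x_grid[np.argmin(np.polyval(coeffs, x_grid))]
```

The point of the validation step is exactly the paper's: the surrogate is only trusted in the optimization loop after its error on held-out evaluations has been quantified.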
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amber Shrivastava; Brian Williams; Ali S. Siahpush
2014-06-01
There have been significant efforts by the heat transfer community to investigate the melting phenomenon of materials. These efforts have included the analytical development of equations to represent melting, numerical development of computer codes to assist in modeling the phenomena, and collection of experimental data. The understanding of the melting phenomenon has application in several areas of interest, for example, the melting of a Phase Change Material (PCM) used as a thermal storage medium as well as the melting of the fuel bundle in a nuclear power plant during an accident scenario. The objective of this research is two-fold. First, a numerical investigation, using computational fluid dynamics (CFD), of melting with internal heat generation for a vertical cylindrical geometry is presented. Second, to the best of the authors' knowledge, there are very few engineering experimental results available for the case of melting with Internal Heat Generation (IHG). An experiment was performed to produce such data using resistive, or Joule, heating as the IHG mechanism. The numerical results were compared against the experimental results and showed favorable correlation. Uncertainties in the numerical and experimental analysis are discussed. Based on the numerical and experimental analysis, recommendations are made for future work.
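The enthalpy method is a common way to model melting with internal heat generation numerically. The 1-D sketch below (illustrative property values, insulated boundaries, not the paper's vertical cylindrical CFD model) shows the basic bookkeeping: track volumetric enthalpy, recover temperature through the latent-heat plateau, and read the liquid fraction off the enthalpy field:

```python
def melt_with_ihg(q=2.0e6, slab=0.05, n=50, t_end=105.0, dt=0.01,
                  rho=1000.0, cp=2000.0, k=0.5, lf=3.0e5, t0=-30.0):
    """Explicit enthalpy-method sketch of 1-D melting driven by internal
    (Joule-like) heat generation q [W/m^3]; both faces insulated, melting
    point at 0 C. Property values are illustrative, not a specific PCM."""
    dx = slab / n
    rcp, rlf = rho * cp, rho * lf
    h = [rcp * t0] * n                 # volumetric enthalpy; h = 0 at melt onset
    for _ in range(int(round(t_end / dt))):
        # recover temperature from enthalpy (isothermal across the mushy zone)
        T = [hi / rcp if hi < 0.0 else (0.0 if hi < rlf else (hi - rlf) / rcp)
             for hi in h]
        # explicit conduction with mirrored (insulated) boundary cells, plus IHG
        h = [hi + dt * (q + k * (T[max(i - 1, 0)] - 2.0 * T[i]
                                 + T[min(i + 1, n - 1)]) / dx ** 2)
             for i, hi in enumerate(h)]
    # liquid fraction: share of latent heat absorbed, averaged over cells
    return sum(min(max(hi / rlf, 0.0), 1.0) for hi in h) / n
```

With these insulated boundaries the result has a closed-form check: sensible heating to the melting point takes rho*cp*|t0|/q seconds, and the remaining generated energy converts directly to melt fraction, which is how the sketch can be verified against an energy balance.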
Laboratory and theoretical models of planetary-scale instabilities and waves
NASA Technical Reports Server (NTRS)
Hart, John E.; Toomre, Juri
1990-01-01
Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. In the past it has been impossible to accurately model the effects of sphericity on these motions in the laboratory because of the invariant relationship between the uni-directional terrestrial gravity and the rotation axis of an experiment. Researchers studied motions of rotating convecting liquids in spherical shells using electrohydrodynamic polarization forces to generate radial gravity, and hence centrally directed buoyancy forces, in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. Recent efforts at interpretation led to numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. In addition, efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument led to theoretical and numerical models of baroclinic instability. Rather surprising properties were discovered, which may be useful in generating rational (rather than artificially truncated) models for nonlinear baroclinic instability and baroclinic chaos.
Alcohol Abuse Prevention: A Comprehensive Guide for Youth Organizations.
ERIC Educational Resources Information Center
Boys' Clubs of America, New York, NY.
This guide, the culmination of a three year Project TEAM effort by the Boys' Clubs of America, describes numerous strategies for developing an alcohol abuse prevention program. The core of this guide consists of program models developed by the Boys' Club project at seven pilot sites. The models presented cover the following areas: peer leadership,…
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-11-25
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
Alisa A. Wade; Kevin S. McKelvey; Michael K. Schwartz
2015-01-01
Resistance-surface-based connectivity modeling has become a widespread tool for conservation planning. The current ease with which connectivity models can be created, however, masks the numerous untested assumptions underlying both the rules that produce the resistance surface and the algorithms used to locate low-cost paths across the target landscape. Here we present...
Spray combustion model improvement study, 1
NASA Technical Reports Server (NTRS)
Chen, C. P.; Kim, Y. M.; Shang, H. M.
1993-01-01
This study involves the development of numerical and physical modeling in spray combustion. These modeling efforts are mainly motivated to improve the physical submodels of turbulence, combustion, atomization, dense spray effects, and group vaporization. The present mathematical formulation can be easily implemented in any time-marching multiple pressure correction methodology such as the MAST code. A sequence of validation cases includes nonevaporating, evaporating, and burning dense sprays.
Laboratory for Atmospheres: 2006 Technical Highlights
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
2007-01-01
The 2006 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2009 Technical Highlights
NASA Technical Reports Server (NTRS)
Cote, Charles E.
2010-01-01
The 2009 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2005 Technical Highlights
NASA Technical Reports Server (NTRS)
2006-01-01
The 2005 Technical highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2007 Technical Highlights
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
2008-01-01
The 2007 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
Laboratory for Atmospheres 2010 Technical Highlights
NASA Technical Reports Server (NTRS)
2011-01-01
The 2010 Technical Highlights describes the efforts of all members of the Laboratory for Atmospheres. Their dedication to advancing Earth Science through conducting research, developing and running models, designing instruments, managing projects, running field campaigns, and numerous other activities, is highlighted in this report.
NASA Astrophysics Data System (ADS)
Moradi, A.; Smits, K. M.
2014-12-01
A promising energy storage option to compensate for daily and seasonal energy offsets is to inject and store heat generated from renewable energy sources (e.g., solar energy) in the ground, oftentimes referred to as soil borehole thermal energy storage (SBTES). It is widely recognized that the movement of water vapor is closely coupled to thermal processes; however, their mutual interactions are rarely considered in most soil water modeling efforts or in practical applications. The validation of numerical models that are designed to capture these processes is difficult due to the scarcity of experimental data, limiting the testing and refinement of heat and water transfer theories. A common assumption in most SBTES modeling approaches is to treat the soil as a purely conductive medium with constant hydraulic and thermal properties. This simplified approach can be improved upon by better understanding the coupled processes at play. Consequently, developing new modeling techniques, along with suitable experimental tools, to capture this coupling is critical to the efficient design and implementation of SBTES systems. The goal of this work is to better understand heat and mass transfer processes for SBTES. In this study, we implemented a fully coupled numerical model that solves for heat, liquid water, and water vapor flux and allows for non-equilibrium liquid/gas phase change. This model was then used to investigate the influence of different hydraulic and thermal parameterizations on SBTES system efficiency. A two-dimensional tank apparatus was used with a series of soil moisture, temperature, and soil thermal property sensors. Four experiments were performed with different test soils. Experimental results provide evidence of thermally induced moisture flow, which was also confirmed by numerical results.
Numerical results showed that for the test conditions applied here, moisture flow is influenced more by thermal gradients than by hydraulic gradients. The results also demonstrate that convective fluxes are higher than conductive fluxes, indicating that moisture flow contributes more to the overall heat flux than conduction.
Progress in Validation of Wind-US for Ramjet/Scramjet Combustion
NASA Technical Reports Server (NTRS)
Engblom, William A.; Frate, Franco C.; Nelson, Chris C.
2005-01-01
Validation of the Wind-US flow solver against two sets of experimental data involving high-speed combustion is attempted. First, the well-known Burrows-Kurkov supersonic hydrogen-air combustion test case is simulated, and the sensitivity of ignition location and combustion performance to key parameters is explored. Second, a numerical model is developed for simulation of an X-43B candidate, full-scale, JP-7-fueled internal flowpath operating in ramjet mode. Numerical results using an ethylene-air chemical kinetics model are directly compared against previously existing pressure-distribution data along the entire flowpath, obtained in direct-connect testing conducted at NASA Langley Research Center. Comparisons to derived quantities such as burn efficiency and thermal throat location are also made. Reasonable to excellent agreement with experimental data is demonstrated for key parameters in both simulation efforts. Additional Wind-US features needed to improve simulation efforts are described herein, including maintaining stagnation conditions at inflow boundaries for multi-species flow. An open issue regarding the sensitivity of isolator unstart to key model parameters is briefly discussed.
Contribution of the Recent AUSM Schemes to the Overflow Code: Implementation and Validation
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Buning, Pieter G.
2000-01-01
We present results of a recent collaborative effort between the authors to implement the numerical flux scheme AUSM+, together with its recent developments, into a widely used NASA code, OVERFLOW. This paper is intended to give thorough and systematic documentation of the solutions of default test cases using the AUSM+ scheme. Hence we address various aspects of the numerical solutions, such as accuracy, convergence rate, and effects of turbulence models, over a variety of geometries and speed regimes. We briefly describe the numerical schemes employed in the calculations, including the capability of solving for low-speed flows and multiphase flows by employing the concept of numerical speed of sound. As a bonus, this low-Mach-number formulation also enhances convergence to steady solutions even for flows at transonic speed. Calculations for complex 3D turbulent flows were performed with several turbulence models, and the results display excellent agreement with measured data.
A Pythonic Approach for Computational Geosciences and Geo-Data Processing
NASA Astrophysics Data System (ADS)
Morra, G.; Yuen, D. A.; Lee, S. M.
2016-12-01
Computational methods and data analysis play a constantly increasing role in Earth Sciences; however, students and professionals must climb a steep learning curve before reaching a level that allows them to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present here a series of examples entirely written in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow, and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modelling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters to obtain a desired simulation. We show that by using Python, undergraduate and graduate students can learn advanced numerical technologies with minimal dedicated effort, which in turn encourages them to develop more numerical tools and quickly progress in their computational abilities. We also show how Python allows combining modeling with machine learning like LEGO pieces, thereby simplifying the transition towards a new kind of scientific geo-modelling. The conclusion is that Python is an ideal tool for creating an infrastructure for geosciences that allows users to quickly develop tools, reuse techniques, and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
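As a minimal example of the kind of Python teaching material the abstract describes, the sketch below solves steady one-dimensional Darcy flow through a homogeneous porous column with NumPy; the grid size and boundary pressures are arbitrary illustrative choices.

```python
# Steady 1D Darcy flow in a homogeneous porous column: the pressure obeys
# d2p/dx2 = 0 with fixed inlet/outlet pressures, solved by Jacobi iteration.
# Grid size and boundary values are illustrative choices.
import numpy as np

n = 51                       # grid points
p = np.zeros(n)
p[0], p[-1] = 1.0, 0.0       # fixed pressure at inlet and outlet

# Jacobi iteration: each interior node relaxes to the mean of its neighbours.
for _ in range(20000):
    p[1:-1] = 0.5 * (p[:-2] + p[2:])

# For a homogeneous medium the converged solution is a linear pressure drop.
print(np.allclose(p, np.linspace(1.0, 0.0, n), atol=1e-6))
```

A dozen lines like these let a student verify a known analytic result before moving on to heterogeneous permeability fields, which is the pedagogical progression the abstract advocates.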
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.
Center for Extended Magnetohydrodynamics Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos, Jesus
This researcher participated in the DOE-funded Center for Extended Magnetohydrodynamics Modeling (CEMM), a multi-institutional collaboration led by the Princeton Plasma Physics Laboratory with Dr. Stephen Jardin as the overall Principal Investigator. This project developed advanced simulation tools to study the non-linear macroscopic dynamics of magnetically confined plasmas. The collaborative effort focused on the development of two large numerical simulation codes, M3D-C1 and NIMROD, and their application to a wide variety of problems. Dr. Ramos was responsible for theoretical aspects of the project, deriving consistent sets of model equations applicable to weakly collisional plasmas and devising test problems for verification of the numerical codes. This activity was funded for twelve years.
Use of hydrologic and hydrodynamic modeling for ecosystem restoration
Obeysekera, J.; Kuebler, L.; Ahmed, S.; Chang, M.-L.; Engel, V.; Langevin, C.; Swain, E.; Wan, Y.
2011-01-01
Planning and implementation of unprecedented projects for restoring the greater Everglades ecosystem are underway, and the hydrologic and hydrodynamic modeling of restoration alternatives has become essential for the success of restoration efforts. In view of the complex nature of the South Florida water resources system, regional-scale (system-wide) hydrologic models have been developed and used extensively for the development of the Comprehensive Everglades Restoration Plan. In addition, numerous subregional-scale hydrologic and hydrodynamic models have been developed and are being used for evaluating project-scale water management plans associated with urban, agricultural, and inland coastal ecosystems. The authors provide a comprehensive summary of models of all scales, as well as the next generation of models under development to meet the future needs of ecosystem restoration efforts in South Florida. The multiagency efforts to develop and apply models have allowed the agencies to understand the complex hydrologic interactions, quantify appropriate performance measures, and use new technologies in simulation algorithms, software development, and GIS/database techniques to meet the future modeling needs of the ecosystem restoration programs. Copyright © 2011 Taylor & Francis Group, LLC.
Mechanical testing of bones: the positive synergy of finite-element models and in vitro experiments.
Cristofolini, Luca; Schileo, Enrico; Juszczyk, Mateusz; Taddei, Fulvia; Martelli, Saulo; Viceconti, Marco
2010-06-13
Bone biomechanics have been extensively investigated in the past both with in vitro experiments and numerical models. In most cases either approach is chosen, without exploiting synergies. Both experiments and numerical models suffer from limitations relative to their accuracy and their respective fields of application. In vitro experiments can improve numerical models by: (i) preliminarily identifying the most relevant failure scenarios; (ii) improving the model identification with experimentally measured material properties; (iii) improving the model identification with accurately measured actual boundary conditions; and (iv) providing quantitative validation based on mechanical properties (strain, displacements) directly measured from physical specimens being tested in parallel with the modelling activity. Likewise, numerical models can improve in vitro experiments by: (i) identifying the most relevant loading configurations among a number of motor tasks that cannot be replicated in vitro; (ii) identifying acceptable simplifications for the in vitro simulation; (iii) optimizing the use of transducers to minimize errors and provide measurements at the most relevant locations; and (iv) exploring a variety of different conditions (material properties, interface, etc.) that would require enormous experimental effort. By reporting an example of successful investigation of the femur, we show how a combination of numerical modelling and controlled experiments within the same research team can be designed to create a virtuous circle where models are used to improve experiments, experiments are used to improve models and their combination synergistically provides more detailed and more reliable results than can be achieved with either approach singularly.
Heating and Large Scale Dynamics of the Solar Corona
NASA Technical Reports Server (NTRS)
Schnack, Dalton D.
2000-01-01
The effort was concentrated in the following areas: the coronal heating mechanism; unstructured adaptive grid algorithms; and numerical modeling of magnetic reconnection in the MRX experiment, including the effects of toroidal magnetic field and finite pressure, of ohmic heating and vertical magnetic field, and of dynamic mesh adaptation.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.
1992-01-01
The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of the various modern methods based on the Probability Density Function (PDF) methods for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and also on LES of such flows.
Perspectives On Dilution Jet Mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.
1990-01-01
NASA recently completed a program of measurements and modeling of the mixing of transverse jets with a ducted crossflow, motivated by the need to design or tailor the temperature pattern at the combustor exit in gas turbine engines. The objectives of the program were to identify the dominant physical mechanisms governing mixing, to extend empirical models to provide near-term predictive capability, and to compare numerical code calculations with data to guide future analysis improvement efforts.
AEETES - A solar reflux receiver thermal performance numerical model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, R.E. Jr.
1994-02-01
Reflux solar receivers for dish-Stirling electric power generation systems are currently being investigated by several companies and laboratories. In support of these efforts, the AEETES thermal performance numerical model has been developed to predict the thermal performance of pool-boiler and heat-pipe reflux receivers. The formulation of the AEETES numerical model, which is applicable to axisymmetric geometries with asymmetric incident fluxes, is presented in detail. Thermal efficiency predictions agree to within 4.1% with test data from on-sun tests of a pool-boiler reflux receiver. Predicted absorber and sidewall temperatures agree with thermocouple data to within 3.3 and 7.3%, respectively. The importance of accounting for the asymmetric incident fluxes is demonstrated in comparisons with predictions using azimuthally averaged variables. The predicted receiver heat losses are characterized in terms of convective, solar radiative, infrared radiative, and conductive heat transfer mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael S. Bruno
This report summarizes the research efforts on the DOE-supported research project Percussion Drilling (DE-FC26-03NT41999), which aims to significantly advance the fundamental understanding of the physical mechanisms involved in combined percussion and rotary drilling, and thereby facilitate more efficient and lower-cost drilling and exploration of hard-rock reservoirs. The project has been divided into multiple tasks: literature reviews, analytical and numerical modeling, full-scale laboratory testing and model validation, and final report delivery. The literature reviews document the history, pros and cons, and rock-failure physics of percussion drilling in the oil and gas industries. Based on the current understanding, a conceptual drilling model is proposed for the modeling efforts. Both analytical and numerical approaches are deployed to investigate drilling processes such as drillbit penetration with compression, rotation, and percussion; rock response with stress propagation, damage accumulation, and failure; and debris transportation inside the annulus after the debris is disintegrated from the rock. For rock mechanics modeling, a dynamic numerical tool has been developed to describe rock damage and failure, including rock crushing by compressive bit load, rock fracturing by both shearing and tensile forces, and rock weakening by repetitive compression-tension loading. Besides multiple failure criteria, the tool also includes a damping algorithm to dissipate oscillation energy and a fatigue/damage algorithm to update rock properties during each impact. From the model, Rate of Penetration (ROP) and rock failure history can be estimated. For cuttings transport in the annulus, a 3D numerical particle-flow model has been developed with the aid of analytical approaches. The tool can simulate cuttings movement at particle scale under laminar or turbulent fluid flow conditions and evaluate the efficiency of cuttings removal.
To calibrate the modeling efforts, a series of full-scale fluid hammer drilling tests, as well as single impact tests, have been designed and executed. Both Berea sandstone and Mancos shale samples are used. In the single impact tests, three impacts are sequentially loaded at the same rock location to investigate rock response to repetitive loadings. The crater depth and width are measured, as well as the displacement and force in the rod and the force in the rock. Various pressure differences across the rock-indentor interface (i.e., bore pressure minus pore pressure) are used to investigate the pressure effect on rock penetration. For the hammer drilling tests, an industrial fluid hammer is used to drill under both underbalanced and overbalanced conditions. Besides calibrating the modeling tool, the data and cuttings collected from the tests suggest several other important applications. For example, different rock penetrations during single impact tests may reveal why a fluid hammer behaves differently with diverse rock types and under various pressure conditions at the hole bottom. On the other hand, the shape of the cuttings from the fluid hammer tests, compared to those from traditional rotary drilling methods, may help to identify the dominant failure mechanism that percussion drilling relies on. If so, encouraging such a failure mechanism may improve hammer performance. The project is summarized in this report. Instead of compiling the information contained in the previous quarterly and other technical reports, this report focuses on descriptions of tasks, findings, and conclusions, as well as on efforts to promote percussion drilling technologies to industry, including site visits, presentations, and publications. As a part of the final deliverables, the 3D numerical model for rock mechanics is also attached.
A numerical model for thermal energy storage systems utilising encapsulated phase change materials
NASA Astrophysics Data System (ADS)
Jacob, Rhys; Saman, Wasim; Bruno, Frank
2016-05-01
In an effort to reduce the cost of thermal energy storage for concentrated solar power plants, a thermocline storage concept was investigated. Two systems were considered: a sensible-only system and an encapsulated phase change system. Both have the potential to reduce the storage tank volume and/or the cost of the filler material, thereby reducing the cost of the system compared to current two-tank molten salt systems. The objective of the current paper is to create a numerical model capable of designing and simulating the aforementioned thermocline storage concepts in the open-source programming language Python. The results of the current study are compared to previous numerical results and are found to be in good agreement.
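A heavily simplified sketch of a thermocline charging calculation of this general kind is shown below. It uses a single-temperature upwind advection model, omitting the fluid/filler split and the PCM latent-heat term of the full model, and all parameter values are illustrative.

```python
# Minimal 1D thermocline charging sketch: a hot front is advected down the
# tank with explicit upwind differencing. Single-temperature form only;
# no fluid/filler exchange or PCM latent heat. Parameters are illustrative.
import numpy as np

n, dz = 100, 0.01            # cells, cell height [m] (1 m tall tank)
u, dt = 0.001, 1.0           # interstitial velocity [m/s], time step [s]
T = np.full(n, 290.0)        # initial (cold) tank temperature [K]
T_in = 565.0                 # hot inlet temperature [K]

c = u * dt / dz              # Courant number (must be <= 1 for stability)
for _ in range(500):         # charge for 500 s
    T[1:] = T[1:] - c * (T[1:] - T[:-1])   # upwind advection
    T[0] = T_in                            # hot fluid enters at the top cell

# After 500 s the front has travelled ~u*t = 0.5 m, i.e. half the tank:
# cells well above the front are hot, cells well below remain cold.
print(T[10] > 500.0, T[90] < 300.0)
```

The smeared front produced by the upwind scheme plays the role of the thermocline; a design study of the kind described in the abstract would track how steep this front stays over repeated charge/discharge cycles.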
Annual Research Briefs - 2000: Center for Turbulence Research
NASA Technical Reports Server (NTRS)
2000-01-01
This report contains the 2000 annual progress reports of the postdoctoral Fellows and visiting scholars of the Center for Turbulence Research (CTR). It summarizes the research efforts undertaken under the core CTR program. Last year, CTR sponsored sixteen resident Postdoctoral Fellows, nine Research Associates, and two Senior Research Fellows, hosted seven short term visitors, and supported four doctoral students. The Research Associates are supported by the Departments of Defense and Energy. The reports in this volume are divided into five groups. The first group largely consists of the new areas of interest at CTR. It includes efficient algorithms for molecular dynamics, stability in protoplanetary disks, and experimental and numerical applications of evolutionary optimization algorithms for jet flow control. The next group of reports is in experimental, theoretical, and numerical modeling efforts in turbulent combustion. As more challenging computations are attempted, the need for additional theoretical and experimental studies in combustion has emerged. A pacing item for computation of nonpremixed combustion is the prediction of extinction and re-ignition phenomena, which is currently being addressed at CTR. The third group of reports is in the development of accurate and efficient numerical methods, which has always been an important part of CTR's work. This is the tool development part of the program which supports our high fidelity numerical simulations in such areas as turbulence in complex geometries, hypersonics, and acoustics. The final two groups of reports are concerned with LES and RANS prediction methods. There has been significant progress in wall modeling for LES of high Reynolds number turbulence and in validation of the v(exp 2) - f model for industrial applications.
Unsteady numerical simulations of the stability and dynamics of flames
NASA Technical Reports Server (NTRS)
Kailasanath, K.; Patnaik, G.; Oran, E. S.
1995-01-01
In this report we describe the research performed at the Naval Research Laboratory in support of the NASA Microgravity Science and Applications Program over the past three years (from Feb. 1992) with emphasis on the work performed since the last microgravity combustion workshop. The primary objective of our research is to develop an understanding of the differences in the structure, stability, dynamics and extinction of flames in earth gravity and in microgravity environments. Numerical simulations, in which the various physical and chemical processes can be independently controlled, can significantly advance our understanding of these differences. Therefore, our approach is to use detailed time-dependent, multi-dimensional, multispecies numerical models to perform carefully designed computational experiments. The basic issues we have addressed, a general description of the numerical approach, and a summary of the results are described in this report. More detailed discussions are available in the papers published which are referenced herein. Some of the basic issues we have addressed recently are (1) the relative importance of wall losses and gravity on the extinguishment of downward-propagating flames; (2) the role of hydrodynamic instabilities in the formation of cellular flames; (3) effects of gravity on burner-stabilized flames, and (4) effects of radiative losses and chemical-kinetics on flames near flammability limits. We have also expanded our efforts to include hydrocarbon flames in addition to hydrogen flames and to perform simulations in support of other on-going efforts in the microgravity combustion sciences program. Modeling hydrocarbon flames typically involves a larger number of species and a much larger number of reactions when compared to hydrogen. In addition, more complex radiation models may also be needed. 
In order to efficiently compute such complex flames recent developments in parallel computing have been utilized to develop a state-of-the-art parallel flame code. This is discussed below in some detail after a brief discussion of the numerical models.
Effort to Accelerate MBSE Adoption and Usage at JSC
NASA Technical Reports Server (NTRS)
Wang, Lui; Izygon, Michel; Okron, Shira; Garner, Larry; Wagner, Howard
2016-01-01
This paper describes the authors' experience in adopting Model Based System Engineering (MBSE) at the NASA/Johnson Space Center (JSC). Since 2009, NASA/JSC has been applying MBSE using the Systems Modeling Language (SysML) to a number of advanced projects. Models integrate views of the system from multiple perspectives, capturing the system design information for multiple stakeholders. This method has allowed engineers to better control changes, improve traceability from requirements to design, and manage the numerous interactions between components. As the project progresses, the models become the official source of information and are used by multiple stakeholders. Three major types of challenges that hamper the adoption of MBSE technology are described. These challenges are addressed by a multipronged approach that includes educating the main stakeholders, implementing an organizational infrastructure that supports the adoption effort, defining a set of modeling guidelines to help engineers in their modeling effort, providing a toolset that supports the generation of valuable products, and providing a library of reusable models. JSC project case studies are presented to illustrate how the proposed approach has been successfully applied.
NASA Technical Reports Server (NTRS)
Kim, Quiesup
2001-01-01
Key elements of space qualification of opto-electric and photonic optical devices were overviewed. Efforts were concentrated on the reliability concerns of the devices needed for potential applications in space environments. The ultimate goal for this effort is to gradually establish enough data to develop a space qualification plan of newly developed specific photonic parts using empirical and numerical models to assess the life-time and degradation of the devices for potential long term space missions.
An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described. The ultimate goal is to model an entire engine working space. Parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model have been solved using the Crank-Nicolson finite-difference scheme. Model predictions are compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement is obtained for flow both in circular tubes and between parallel plates. The computational heat transfer results are in good agreement with the analytical heat transfer results for parallel plates.
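The Crank-Nicolson scheme mentioned above can be sketched on a simpler problem, transient 1D conduction between two plates held at a fixed temperature; the grid and material values below are illustrative, not the engine conditions of the paper.

```python
# Crank-Nicolson for 1D transient conduction: average the implicit and
# explicit discretizations of d T/dt = alpha d2T/dx2, giving
# (I - r/2 L) T_new = (I + r/2 L) T_old. Values are illustrative.
import numpy as np

n, alpha, dx, dt = 21, 1e-5, 0.001, 0.01   # nodes, diffusivity, spacing, step
r = alpha * dt / dx**2                      # CN is stable for any r

# Second-difference (Laplacian) matrix and the CN system matrices.
L = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A = np.eye(n) - 0.5 * r * L
B = np.eye(n) + 0.5 * r * L
A[0, :] = 0.0; A[0, 0] = 1.0      # Dirichlet rows: wall temperatures prescribed
A[-1, :] = 0.0; A[-1, -1] = 1.0

T = np.zeros(n)
T[0] = T[-1] = 100.0              # both walls hot, interior initially cold
for _ in range(5000):
    rhs = B @ T
    rhs[0], rhs[-1] = 100.0, 100.0
    T = np.linalg.solve(A, rhs)

# At long times the interior relaxes to the uniform wall temperature.
print(np.allclose(T, 100.0, atol=1e-3))
```

Checking the long-time limit against the known analytical solution is the same kind of verification step the abstract reports for the oscillating-flow cases.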
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
The single-zone numerical model of homogeneous charge compression ignition engine performance
NASA Astrophysics Data System (ADS)
Fedyanov, E. A.; Itkis, E. M.; Kuzmin, V. N.; Shumskiy, S. N.
2017-02-01
The single-zone model of methane-air mixture combustion in a Homogeneous Charge Compression Ignition (HCCI) engine was developed. Initial modeling efforts resulted in the selection of the detailed kinetic reaction mechanism most appropriate for the conditions of the HCCI process. The model was then extended to simulate the performance of a four-stroke engine and supplemented with physically reasonable adjusting functions. Validation of the calculations against experimental data showed acceptable agreement.
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed thermo-fluid environments and global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The use of the tuned porosity model enables an efficient simulation of the entire NTTCA system and evaluation of its performance during the design cycle.
NASA Technical Reports Server (NTRS)
Benedetti, Angela; Baldasano, Jose M.; Basart, Sara; Benincasa, Francesco; Boucher, Olivier; Brooks, Malcolm E.; Chen, Jen-Ping; Colarco, Peter R.; Gong, Sunlin; Huneeus, Nicolas;
2014-01-01
Over the last few years, numerical prediction of dust aerosol concentration has become prominent at several research and operational weather centres due to growing interest from diverse stakeholders, such as solar energy plant managers, health professionals, aviation and military authorities, and policymakers. Dust prediction in numerical weather prediction-type models faces a number of challenges owing to the complexity of the system. At the centre of the problem is the vast range of scales required to fully account for all of the physical processes related to dust. Another limiting factor is the paucity of suitable dust observations available for model evaluation and assimilation. This chapter discusses in detail the numerical prediction of dust, with examples from systems that are currently providing dust forecasts in near real-time or are part of international efforts to establish daily provision of dust forecasts based on multi-model ensembles. The various models are introduced and described, along with an overview of the importance of dust prediction activities and a historical perspective. Assimilation and evaluation aspects of dust prediction are also discussed.
Utilization of satellite data and regional scale numerical models in short range weather forecasting
NASA Technical Reports Server (NTRS)
Kreitzberg, C. W.
1985-01-01
Overwhelming evidence was developed in a number of studies of satellite data impact on numerical weather prediction that it is unrealistic to expect satellite temperature soundings to improve detailed regional numerical weather prediction. It is likely that satellite data over the United States would substantially impact mesoscale dynamical predictions if the effort were made to develop a composite moisture analysis system. The horizontal variability of moisture, most clearly depicted in images from satellite water vapor channels, could not be determined from conventional rawinsondes even if that network were increased by a doubling of both the number of sites and the time frequency.
Exoatmospheric intercepts using zero effort miss steering for midcourse guidance
NASA Astrophysics Data System (ADS)
Newman, Brett
The suitability of proportional navigation, or an equivalent zero effort miss formulation, for exoatmospheric intercepts during midcourse guidance, followed by a ballistic coast to the endgame, is addressed. The problem is formulated in terms of relative motion in a general, three-dimensional framework. The proposed guidance law for the commanded thrust vector orientation consists of the sum of two terms: (1) along the line-of-sight unit direction and (2) along the zero effort miss component perpendicular to the line of sight, proportional to the miss itself and a guidance gain. If the guidance law is to be suitable for longer range targeting applications with significant ballistic coasting after burnout, determination of the zero effort miss must account for the different gravitational accelerations experienced by each vehicle. The proposed miss determination techniques employ approximations for the true differential gravity effect and thus are less accurate than a direct numerical propagation of the governing equations, but more accurate than a baseline determination, which assumes equal accelerations for both vehicles. Approximations considered are constant, linear, quadratic, and linearized inverse square models. Theoretical results are applied to a numerical engagement scenario and the resulting performance is evaluated in terms of the miss distances determined from nonlinear simulation.
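The simplest of the approximations listed above, constant differential gravity, can be sketched as follows. The relative state and time-to-go are illustrative numbers, and the split into components along and across the line of sight mirrors the guidance-law structure described in the abstract.

```python
# Zero-effort-miss (ZEM) with the constant differential-gravity approximation:
# propagate the relative state ballistically over the time-to-go, then keep
# the component perpendicular to the line of sight, which the guidance law
# drives to zero. Vectors are target-minus-interceptor; values illustrative.
import numpy as np

def zem_constant_gravity(r, v, dg, t_go):
    """ZEM assuming the relative gravity dg is constant over t_go."""
    zem = r + v * t_go + 0.5 * dg * t_go**2   # ballistic relative miss
    u_los = r / np.linalg.norm(r)             # line-of-sight unit vector
    zem_par = np.dot(zem, u_los) * u_los
    return zem - zem_par                      # cross-LOS component to be nulled

r = np.array([100e3, 0.0, 0.0])      # relative position [m]
v = np.array([-2000.0, 50.0, 0.0])   # relative velocity [m/s]
dg = np.array([0.0, -0.05, 0.0])     # assumed differential gravity [m/s^2]
t_go = 50.0                          # time-to-go [s]

miss = zem_constant_gravity(r, v, dg, t_go)
print(miss)   # only the cross-LOS (y) component survives in this geometry
```

Replacing `dg` with a linear, quadratic, or linearized inverse-square model changes only the ballistic-propagation line, which is the trade the abstract evaluates.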
Numerical Modeling of Propellant Boiloff in Cryogenic Storage Tank
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; Steadman, T. E.; Maroney, J. L.
2007-01-01
This Technical Memorandum (TM) describes the thermal modeling effort undertaken at Marshall Space Flight Center to support the Cryogenic Test Laboratory at Kennedy Space Center (KSC) for a study of insulation materials for cryogenic tanks in order to reduce propellant boiloff during long-term storage. The Generalized Fluid System Simulation program has been used to model boiloff in 1,000-L demonstration tanks built for testing the thermal performance of glass bubbles and perlite insulation. Numerical predictions of boiloff rate and ullage temperature have been compared with the measured data from the testing of demonstration tanks. A satisfactory comparison between measured and predicted data has been observed for both liquid nitrogen and hydrogen tests. Based on the experience gained with the modeling of the demonstration tanks, a numerical model of the liquid hydrogen storage tank at launch complex 39 at KSC was built. The predicted boiloff rate of hydrogen has been found to be in good agreement with observed field data. This TM describes three different models that have been developed during this period of study (March 2005 to June 2006), comparisons with test data, and results of parametric studies.
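As a back-of-the-envelope check on such boiloff predictions (not the GFSSP model itself), the steady-state boiloff rate follows from dividing the tank heat leak by the propellant's latent heat of vaporization. The 50 W leak below is an assumed illustrative value, and the hydrogen latent heat is an approximate handbook figure.

```python
def boiloff_rate(q_leak_w, h_fg_j_per_kg):
    """Steady-state boiloff mass flow [kg/s] for a heat leak q_leak_w [W]
    absorbed entirely as latent heat h_fg_j_per_kg [J/kg]."""
    return q_leak_w / h_fg_j_per_kg

# Liquid hydrogen: h_fg is roughly 446 kJ/kg near 1 atm (approximate value)
mdot = boiloff_rate(50.0, 446e3)   # kg/s lost to a hypothetical 50 W leak
per_day = mdot * 86400             # kg of hydrogen lost per day
```

A real tank model such as the one described must also track ullage temperature and pressure, which this single-equation estimate ignores.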
Advanced Concepts Theory Annual Report 1983.
1984-05-18
variety of theoretical models, tools, and computational strategies to understand, guide, and predict the behavior of high brightness, laboratory x-ray... theoretical models must treat hard and soft x-ray emission from different electron configurations with K, L, and M shells, and they must include... theoretical effort has been devoted to elucidating the effects of opacity on the numerical results, providing a basis for comprehending the trends which appear in the data
Advanced Combustion Numerics and Modeling - FY18 First Quarter Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitesides, R. A.; Killingsworth, N. J.; McNenly, M. J.
This project is focused on early stage research and development of numerical methods and models to improve advanced engine combustion concepts and systems. The current focus is on development of new mathematics and algorithms to reduce the time to solution for advanced combustion engine design using detailed fuel chemistry. The research is prioritized towards the most time-consuming workflow bottlenecks (computer and human) and accuracy gaps that slow ACS program members. Zero-RK, the fast and accurate chemical kinetics solver software developed in this project, is central to the research efforts and continues to be developed to address the current and emerging needs of the engine designers, engine modelers, and fuel mechanism developers.
Numerical tests of local scale invariance in ageing q-state Potts models
NASA Astrophysics Data System (ADS)
Lorenz, E.; Janke, W.
2007-01-01
Much effort has been spent over recent years to achieve a coherent theoretical description of ageing as a non-linear dynamical process. Long supposed to be a consequence of the slow dynamics of glassy systems only, ageing phenomena have also been identified in the phase-ordering kinetics of simple ferromagnets. As a phenomenological approach, Henkel et al. developed a group of local scale transformations under which two-time autocorrelation and response functions should transform covariantly. This work extends previous numerical tests of the predicted scaling functions for the Ising model by Monte Carlo simulations of two-dimensional q-state Potts models with q=3 and 8, which, in equilibrium, undergo temperature-driven phase transitions of second and first order, respectively.
Two-dimensional numerical simulation of a Stirling engine heat exchanger
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Tew, Roy C.; Dudenhoefer, James E.
1989-01-01
The first phase of an effort to develop multidimensional models of Stirling engine components is described; the ultimate goal is to model an entire engine working space. More specifically, parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties; in addition, a constant axial temperature gradient is imposed. The governing equations were solved using the Crank-Nicolson finite-difference scheme. Model predictions were compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement with the analytical solutions was obtained for both flow in circular tubes and flow between parallel plates, and the computed heat transfer results are in good agreement with the analytical results for parallel plates.
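A minimal sketch of the kind of calculation described, oscillating laminar flow between parallel plates advanced with the Crank-Nicolson scheme, is shown below. The geometry, forcing, and parameter values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def crank_nicolson_channel(n=41, nu=1e-3, half_gap=1.0, dt=0.01, steps=2000,
                           omega=2 * np.pi, amp=1.0):
    """Oscillating laminar flow between parallel plates:
    u_t = amp*cos(omega*t) + nu*u_yy, with no-slip walls,
    advanced in time with the Crank-Nicolson scheme."""
    y = np.linspace(-half_gap, half_gap, n)
    dy = y[1] - y[0]
    # second-difference diffusion operator; wall rows stay zero (Dirichlet)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = nu / dy**2 * np.array([1.0, -2.0, 1.0])
    I = np.eye(n)
    lhs = I - 0.5 * dt * A                     # implicit half of the scheme
    u = np.zeros(n)
    for k in range(steps):
        t_mid = (k + 0.5) * dt                 # forcing at the half step
        f = amp * np.cos(omega * t_mid) * np.ones(n)
        f[0] = f[-1] = 0.0                     # keep walls at zero velocity
        u = np.linalg.solve(lhs, (I + 0.5 * dt * A) @ u + dt * f)
    return y, u
```

Crank-Nicolson is unconditionally stable and second-order in time, which is why it suits the oscillating/reversing flows the abstract mentions; the dense solve here is only acceptable for a sketch-sized grid.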
Towards Automatic Processing of Virtual City Models for Simulations
NASA Astrophysics Data System (ADS)
Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.
2016-10-01
Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which such simulations have been carried out in practice required an extremely high, and therefore uneconomical, manual effort to process the models. The different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to remove information unnecessary for a numerical simulation.
Modelling of evaporation of a dispersed liquid component in a chemically active gas flow
NASA Astrophysics Data System (ADS)
Kryukov, V. G.; Naumov, V. I.; Kotov, V. Yu.
1994-01-01
A model has been developed to investigate the evaporation of dispersed liquids in a chemically active gas flow. Major efforts have been directed at the development of algorithms for implementing this model. The numerical experiments demonstrate that significant changes in the composition and temperature of combustion products take place in the boundary layer. This makes it possible to model more accurately the energy release processes in combustion chambers of liquid-propellant rocket engines, gas-turbine engines, and other power devices.
Interoperability of Neuroscience Modeling Software
Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik
2009-01-01
Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374
A New Experiment for Investigating Evaporation and Condensation of Cryogenic Propellants.
Bellur, K; Médici, E F; Kulshreshtha, M; Konduru, V; Tyrewala, D; Tamilarasan, A; McQuillen, J; Leao, J; Hussey, D S; Jacobson, D L; Scherschligt, J; Hermanson, J C; Choi, C K; Allen, J S
2016-03-01
Passive and active technologies have been used to control propellant boil-off, but the current state of understanding of cryogenic evaporation and condensation in microgravity is insufficient for designing the large cryogenic depots critical to long-term space exploration missions. One of the key factors limiting the ability to design such systems is the uncertainty in the accommodation coefficients (evaporation and condensation), which are inputs for kinetic modeling of phase change. A novel, combined experimental and computational approach is being used to determine the accommodation coefficients for liquid hydrogen and liquid methane. The experimental effort utilizes the Neutron Imaging Facility located at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland to image evaporation and condensation of hydrogenated propellants inside metallic containers. The computational effort includes numerical solution of a model for phase change in the contact line and thin film regions, as well as a CFD effort for determining the appropriate thermal boundary conditions for the numerical solution of the evaporating and condensing liquid. Using all three methods, there is the possibility of extracting the accommodation coefficients from the experimental observations. The experiments are the first known observation of liquid hydrogen menisci condensing and evaporating inside aluminum and stainless steel cylinders. The experimental technique, complementary computational thermal model, and meniscus shape determination are reported. The computational thermal model has been shown to accurately track the transient thermal response of the test cells. The meniscus shape determination suggests the presence of a finite, albeit very small, contact angle between liquid hydrogen and aluminum oxide.
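The role of the accommodation coefficient in kinetic phase-change models can be illustrated with the Hertz-Knudsen relation, one common form of such a model. The function and numbers below are an illustrative sketch, not the authors' formulation: the 0.5 coefficient is a placeholder, since the coefficient itself is the unknown the experiment targets.

```python
import math

R_GAS = 8.314462618          # universal gas constant, J/(mol K)

def hertz_knudsen_flux(sigma, p_sat, t_liq, p_vap, t_vap, molar_mass):
    """Net evaporative mass flux [kg/(m^2 s)] from the Hertz-Knudsen relation,
    using a single accommodation coefficient `sigma` for both evaporation
    and condensation.  Positive flux means net evaporation."""
    c = math.sqrt(molar_mass / (2.0 * math.pi * R_GAS))
    return sigma * c * (p_sat / math.sqrt(t_liq) - p_vap / math.sqrt(t_vap))

# Illustrative numbers for hydrogen near 20 K (sigma = 0.5 is a placeholder)
flux = hertz_knudsen_flux(0.5, 1.05e5, 20.4, 1.00e5, 20.4, 2.016e-3)
```

The flux vanishes at equilibrium (vapor pressure equal to the saturation pressure at a common temperature) and scales linearly with the accommodation coefficient, which is why its uncertainty propagates directly into boil-off predictions.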
Numerical description of cavitation on axisymmetric bodies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hickox, C.E.; Hailey, C.E.; Wolfe, W.P.
1988-01-01
This paper reports on ongoing studies which are directed toward the development of predictive techniques for the modeling of steady cavitation on axisymmetric bodies. The primary goal of the modeling effort is the prediction of cavity shape and pressure distribution from which forces and moments can be calculated. Here we present an overview of the modeling techniques developed and compare predictions with experimental data obtained from water tunnel tests for both limited and supercavitation. 14 refs., 4 figs.
NASA Technical Reports Server (NTRS)
Thomas, Andrew C.; Chai, F.; Townsend, D. W.; Xue, H.
2002-01-01
The goals of this project were to acquire, process, quality control, archive, and analyze SeaWiFS chlorophyll fields over the Gulf of Maine and Scotia Shelf region. The focus of the analysis effort was to calculate and quantify the seasonality and interannual variability of SeaWiFS-measured phytoplankton biomass in the study area and to compare these to physical forcing and hydrography. An additional focus was on regional differences within the heterogeneous biophysical regions of the Gulf of Maine / Scotia Shelf. The overall goals were approached through the combined use of SeaWiFS and AVHRR data and the development of a coupled biological-physical numerical model.
Validation of Model Forecasts of the Ambient Solar Wind
NASA Technical Reports Server (NTRS)
Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.
2009-01-01
Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide the first results from this program: the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation of all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD-based models will supplant semi-empirical potential-based models such as the WSA model as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.
NASA Technical Reports Server (NTRS)
Carlson, T. N. (Principal Investigator)
1981-01-01
Efforts were made (1) to bring the image processing and boundary layer model operation into a completely interactive mode and (2) to test a method for determining the surface energy budget and surface moisture availability and thermal inertia on a scale appreciably larger than that of the city. A region a few hundred kilometers on a side centered over southern Indiana was examined.
Combustion Fundamentals Research
NASA Technical Reports Server (NTRS)
1983-01-01
Increased emphasis is placed on fundamental and generic research at Lewis Research Center, with fewer systems development efforts. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long-term technical needs of the aerospace industry. The main thrusts of this combustion fundamentals program are as follows: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numerical techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Stauffer, Philip H.; Reed, Donald T.
The primary objective of the experimental effort described here is to aid in understanding the complex nature of liquid, vapor, and solid transport occurring around heated nuclear waste in bedded salt. In order to gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that (a) hydrological and physiochemical parameters and (b) processes are correctly simulated. The experiments proposed here are designed to study aspects of the system that have not been satisfactorily quantified in prior work. In addition to exploring the complex coupled physical processes in support of numerical model validation, lessons learned from these experiments will facilitate preparations for larger-scale experiments that may utilize similar instrumentation techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steich, D J; Brugger, S T; Kallman, J S
2000-02-01
This final report describes our efforts on the Three-Dimensional Massively Parallel CEM Technologies LDRD project (97-ERD-009). Significant need exists for more advanced time domain computational electromagnetics modeling. Bookkeeping details and modifying inflexible software constitute the vast majority of the effort required to address such needs, and the required effort escalates rapidly as problem complexity increases, as with hybrid meshes requiring hybrid numerics on massively parallel platforms (MPPs). This project attempts to alleviate these limitations by investigating flexible abstractions for these numerical algorithms on MPPs using object-oriented methods, providing a programming environment that insulates the physics from the bookkeeping. The three major design iterations during the project, known as TIGER-I to TIGER-III, are discussed, along with lessons learned during development and implementation. An Application Programming Interface (API) of the object-oriented interface for TIGER-III is included in three appendices, which contain the Utilities, Entity-Attribute, and Mesh libraries developed during the project. The API libraries represent a snapshot of our latest attempt at insulating the physics from the bookkeeping.
Quality effort decision in service supply chain with quality preference based on quantum game
NASA Astrophysics Data System (ADS)
Zhang, Cuihua; Xing, Peng; Wang, Jianwei
2015-04-01
Service quality preference behaviors of both members are considered in a service supply chain (SSC) comprising a service integrator and a service provider facing stochastic demand. Through analysis of service quality cost and revenue, utility functions are established in terms of the service quality effort degree and the service quality preference level in the integrated and decentralized SSC. Nash equilibrium and quantum game methods are used to optimize the models. By comparing the different solutions, the optimal strategies for the SSC with quality preference are obtained. Some numerical examples are then studied, and the changing trend of service quality effort is further analyzed under the influence of the entanglement operator and the quality preferences.
Effect of risk perception on epidemic spreading in temporal networks
NASA Astrophysics Data System (ADS)
Moinet, Antoine; Pastor-Satorras, Romualdo; Barrat, Alain
2018-01-01
Much progress in the understanding of epidemic spreading models has been made thanks to numerous modeling efforts and analytical and numerical studies, considering host populations with very different structures and properties, including complex and temporal interaction networks. Moreover, a number of recent studies have started to go beyond the assumption of an absence of coupling between the spread of a disease and the structure of the contacts on which it unfolds. Models including awareness of the spread have been proposed, to mimic possible precautionary measures taken by individuals that decrease their risk of infection, but they have mostly considered static networks. Here, we adapt such a framework to the more realistic case of temporal networks of interactions between individuals. We study the resulting model by analytical and numerical means on both simple models of temporal networks and empirical time-resolved contact data. Analytical results show that the epidemic threshold is not affected by the awareness but that the prevalence can be significantly decreased. Numerical studies on synthetic temporal networks highlight, however, the presence of very strong finite-size effects, resulting in a significant shift of the effective epidemic threshold in the presence of risk awareness. For empirical contact networks, the awareness mechanism leads as well to a shift in the effective threshold and to a strong reduction of the epidemic prevalence.
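A toy version of the mechanism, SIS dynamics on an activity-driven temporal network with awareness damping the transmission probability, can be simulated as below. All parameter values are illustrative, and this is a sketch of the general idea rather than the authors' model.

```python
import random

def sis_temporal(n=500, activity=0.1, m=2, beta=0.5, mu=0.1,
                 awareness=0.0, steps=200, seed=1):
    """SIS epidemic on an activity-driven temporal network.

    Each time step every node activates with probability `activity` and
    contacts m random partners; transmission along a contact succeeds with
    probability beta*(1 - awareness), so awareness in [0, 1] models
    precautionary behaviour.  Returns the final prevalence (infected fraction).
    """
    rng = random.Random(seed)
    infected = set(rng.sample(range(n), n // 10))   # 10% initial infections
    for _ in range(steps):
        new_inf, recovered = set(), set()
        for u in range(n):
            if rng.random() < activity:             # u activates this step
                for v in rng.sample(range(n), m):
                    if v != u and (u in infected) != (v in infected):
                        if rng.random() < beta * (1.0 - awareness):
                            new_inf.add(v if u in infected else u)
        for u in infected:
            if rng.random() < mu:                   # recover to susceptible
                recovered.add(u)
        infected = (infected | new_inf) - recovered
    return len(infected) / n
```

With these illustrative rates the unaware population sustains an endemic state, while strong awareness pushes the effective transmissibility below threshold and the epidemic dies out, mirroring the prevalence reduction discussed in the abstract.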
2004-11-01
variation in ventilation rates over time and the distribution of ventilation air within a building, and to estimate the impact of envelope air-tightening efforts on infiltration rates. It may be used to determine the indoor air quality performance of a building before construction, and to
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mickens, Ronald E.
2008-12-22
This research examined the following items/issues: the NSFD methodology, technical achievements and applications, dissemination efforts, and research-related professional activities. A list of unresolved issues was also identified that could form the basis for future research on constructing and analyzing NSFD schemes for both ODEs and PDEs.
Enhancing Extension and Research Activities through the Use of Web GIS
ERIC Educational Resources Information Center
Estwick, Noel M.; Griffin, Richard W.; James, Annette A.; Roberson, Samuel G.
2016-01-01
There have been numerous efforts aimed at improving geographic literacy in order to address societal challenges. Extension educators can use geographic information system (GIS) technology to help their clients cultivate spatial thinking skills and solve problems. Researchers can use it to model relationships and better answer questions. A program…
Application of real rock pore-throat statistics to a regular pore network model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakibul, M.; Sarker, H.; McIntyre, D.
2011-01-01
This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high-resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted, and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area (throat radius) was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail, and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields, such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and groundwater flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug by both the pore network model and the experimental procedure. The shapes and sizes of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results deviated from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the need for routine core analysis, which can be time consuming and expensive. A numerical technique is therefore expected to be fast and to produce reliable results; in applied engineering, a quick result with reasonable accuracy is sometimes preferable to a more accurate but time-consuming one. The present work is an effort to check the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.
NASA Astrophysics Data System (ADS)
Lee, Jonghyun; SanSoucie, Michael P.
2017-08-01
Materials research is being conducted using an electromagnetic levitator installed in the International Space Station. Various metallic alloys were tested to elucidate unknown links among structures, processes, and properties. To accomplish the mission of these space experiments, several ground-based activities have been carried out. This article presents some of our ground-based supporting experiments and numerical modeling efforts. Mass evaporation of Fe50Co50, one of the flight compositions, was predicted numerically and validated by tests using an electrostatic levitator (ESL). The density of various compositions within the Fe-Co system was measured with the ESL. These results serve as reference data for the space experiments. The convection inside an electromagnetically levitated droplet was also modeled to predict the flow status, shear rate, and convection velocity under various process parameters, which is essential information for designing and analyzing the space experiments of some flight compositions influenced by convection.
NASA Astrophysics Data System (ADS)
Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun
2017-11-01
Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and materials research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation based on realistic experimental parameters and signal extraction procedures. By directly comparing to experiments as well as to other simulation efforts, our method offers a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.
Palazoğlu, T K; Gökmen, V
2008-04-01
In this study, a numerical model was developed to simulate the frying of potato strips and estimate acrylamide levels in French fries. Heat and mass transfer parameters determined during frying of potato strips, and the formation and degradation kinetic parameters of acrylamide obtained with a sugar-asparagine model system, were incorporated within the model. The effects of reducing sugar content (0.3 to 2.15 g/100 g dry matter), strip thickness (8.5 x 8.5 mm and 10 x 10 mm), and frying time (3, 4, 5, and 6 min) and temperature (150, 170, and 190 degrees C) on the resultant acrylamide level in French fries were investigated both numerically and experimentally. The model appeared to closely estimate the acrylamide contents, and may thereby save considerable time, money, and effort during the stages of process design and optimization.
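Consecutive first-order formation and degradation, the kinetic scheme commonly fitted to acrylamide data of this kind, has a closed-form concentration profile. The sketch below is illustrative only, with hypothetical rate constants rather than the paper's fitted values.

```python
import math

def acrylamide_profile(c0, kf, kd, t):
    """Acrylamide concentration at time t for consecutive first-order
    kinetics: a precursor pool c0 forms acrylamide at rate kf, which
    then degrades at rate kd, giving
    A(t) = c0*kf/(kd - kf) * (exp(-kf*t) - exp(-kd*t))."""
    if abs(kd - kf) < 1e-12:
        # degenerate case kf == kd has the limit A(t) = c0*kf*t*exp(-kf*t)
        return c0 * kf * t * math.exp(-kf * t)
    return c0 * kf / (kd - kf) * (math.exp(-kf * t) - math.exp(-kd * t))
```

The profile rises from zero, peaks at t = ln(kd/kf)/(kd - kf), and decays again, which is the qualitative behaviour that makes frying time and temperature such strong levers on the final acrylamide level.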
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase I consists of a code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principle
NASA Astrophysics Data System (ADS)
Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi
2017-06-01
Behaviour analysis of the host-vector model for dengue disease without control is based on the value of the basic reproduction number obtained using next-generation matrices. The model is then further developed to involve a preventive control that minimizes contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality system is then solved numerically to investigate the control effort required to reduce the infected class.
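Optimality systems derived from the Pontryagin Minimum Principle are commonly solved with a forward-backward sweep. The sketch below is a hypothetical illustration on a toy single-compartment infection model, not the paper's host-vector system; the dynamics, cost weights, and control bounds are all assumptions.

```python
import numpy as np

# Toy problem (assumed, for illustration only):
#   state:   di/dt = beta*(1-u)*i*(1-i) - gamma*i
#   cost:    J = integral( A*i + c*u^2 ) dt,  with 0 <= u <= 1
beta, gamma, A, c = 0.8, 0.2, 1.0, 0.5
T, N = 20.0, 2000
dt = T / N

u = np.zeros(N + 1)      # control guess
i = np.zeros(N + 1)      # state (infected fraction)
lam = np.zeros(N + 1)    # adjoint (costate)

for sweep in range(50):
    u_old = u.copy()
    # forward: integrate the state with the current control (explicit Euler)
    i[0] = 0.05
    for k in range(N):
        di = beta * (1 - u[k]) * i[k] * (1 - i[k]) - gamma * i[k]
        i[k + 1] = i[k] + dt * di
    # backward: integrate the adjoint from the transversality condition lam(T)=0
    lam[N] = 0.0
    for k in range(N, 0, -1):
        dlam = -(A + lam[k] * (beta * (1 - u[k]) * (1 - 2 * i[k]) - gamma))
        lam[k - 1] = lam[k] - dt * dlam
    # optimality: pointwise Hamiltonian minimizer, clipped to bounds, damped update
    u_new = np.clip(lam * beta * i * (1 - i) / (2 * c), 0.0, 1.0)
    u = 0.5 * (u_new + u_old)
    if np.max(np.abs(u - u_old)) < 1e-6:
        break
```

Each sweep integrates the state forward, the adjoint backward, and updates the control from the pointwise minimum of the Hamiltonian; the 0.5 damping factor is a common convergence aid.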
Abrams, Robert H.; Loague, Keith; Kent, Douglas B.
1998-01-01
The work reported here is the first part of a larger effort focused on efficient numerical simulation of redox zone development in contaminated aquifers. The sequential use of various electron acceptors, which is governed by the energy yield of each reaction, gives rise to redox zones. The large difference in energy yields between the various redox reactions leads to systems of equations that are extremely ill-conditioned. These equations are very difficult to solve, especially in the context of coupled fluid flow, solute transport, and geochemical simulations. We have developed a general, rational method to solve such systems where we focus on the dominant reactions, compartmentalizing them in a manner that is analogous to the redox zones that are often observed in the field. The compartmentalized approach allows us to easily solve a complex geochemical system as a function of time and energy yield, laying the foundation for our ongoing work in which we couple the reaction network, for the development of redox zones, to a model of subsurface fluid flow and solute transport. Our method (1) solves the numerical system without invoking a redox parameter, (2) improves the numerical stability of redox systems by choosing which compartment and thus which reaction network to use based upon the concentration ratios of key constituents, (3) simulates the development of redox zones as a function of time without the use of inhibition factors or switching functions, and (4) can reduce the number of transport equations that need to be solved in space and time. We show through the use of various model performance evaluation statistics that the appropriate compartment choice under different geochemical conditions leads to numerical solutions without significant error. The compartmentalized approach described here facilitates the next phase of this effort where we couple the redox zone reaction network to models of fluid flow and solute transport.
Results of the Workshop on Impact Cratering: Bridging the Gap Between Modeling and Observations
NASA Technical Reports Server (NTRS)
Herrick, Robert (Editor); Pierazzo, Elisabetta (Editor)
2003-01-01
On February 7-9, 2003, approximately 60 scientists gathered at the Lunar and Planetary Institute in Houston, Texas, for a workshop devoted to improving knowledge of the impact cratering process. We (co-conveners Elisabetta Pierazzo and Robert Herrick) both focus research efforts on studying the impact cratering process, but the former specializes in numerical modeling while the latter draws inferences from observations of planetary craters. Significant work has been done in several key areas of impact studies over the past several years, but in many respects there seems to be a disconnect between the groups employing different approaches, in particular modeling versus observations. The goal in convening this workshop was to bring together these disparate groups for an open dialogue aimed at answering outstanding questions about the impact process and setting future research directions. We were successful in getting participation from most of the major research groups studying the impact process. Participants gathered from five continents with research specialties ranging from numerical modeling to field geology, and from small-scale experimentation and geochemical sample analysis to seismology and remote sensing. With the assistance of the scientific advisory committee (Bevan French, Kevin Housen, Bill McKinnon, Jay Melosh, and Mike Zolensky), the workshop was divided into a series of sessions devoted to different aspects of the cratering process. Each session was opened by two invited talks, one given by a specialist in numerical or experimental modeling approaches, and the other by a specialist in geological, geophysical, or geochemical observations. Shorter invited and contributed talks filled out the sessions, which were then concluded with an open discussion time.
All modelers were requested to address the question of what observations would better constrain their models, and all observationists were requested to discuss how their observations can constrain modeling efforts.
Predictive Capability Maturity Model for computational modeling and simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
Modeling of Turbulent Free Shear Flows
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.; DeBonis, James R.; Georgiadis, Nicolas J.
2013-01-01
The modeling of turbulent free shear flows is crucial to the simulation of many aerospace applications, yet often receives less attention than the modeling of wall boundary layers. Thus, while turbulence model development in general has proceeded very slowly in the past twenty years, progress for free shear flows has been even slower. This paper highlights some of the fundamental issues in modeling free shear flows for propulsion applications, presents a review of past modeling efforts, and identifies areas where further research is needed. Among the topics discussed are differences between planar and axisymmetric flows, developing versus self-similar regions, the effect of compressibility and the evolution of compressibility corrections, the effect of temperature on jets, and the significance of turbulent Prandtl and Schmidt numbers for reacting shear flows. Large eddy simulation greatly reduces the amount of empiricism in the physical modeling, but is sensitive to a number of numerical issues. This paper includes an overview of the importance of numerical scheme, mesh resolution, boundary treatment, sub-grid modeling, and filtering in conducting a successful simulation.
NASA Astrophysics Data System (ADS)
Born, A.; Stocker, T. F.
2014-12-01
The long, high-resolution and largely undisturbed depositional record of polar ice sheets is one of the greatest resources in paleoclimate research. The vertical profile of isotopic and other geochemical tracers provides a full history of depositional and dynamical variations. Numerical simulations of this archive could afford great advances both in the interpretation of these tracers and in improving ice sheet models themselves, as successful implementations in oceanography and atmospheric dynamics have shown. However, due to the slow advection velocities, tracer modeling in ice sheets is particularly prone to numerical diffusion, thwarting efforts that employ straightforward solutions. Previous attempts to circumvent this issue follow conceptually and computationally extensive approaches that augment traditional Eulerian models of ice flow with a semi-Lagrangian tracer scheme (e.g. Clarke et al., QSR, 2005). Here, we propose a new vertical discretization for ice sheet models that eliminates numerical diffusion entirely. Vertical motion through the model mesh is avoided by mimicking the real-world ice flow as a thinning of underlying layers (see figure). A new layer is added to the surface at equidistant time intervals (isochronally), so each layer is uniquely identified with an age. Horizontal motion follows the shallow ice approximation using an implicit numerical scheme. Vertical diffusion of heat, which is physically desirable, is also solved implicitly. A simulation of a two-dimensional section through the Greenland ice sheet will be discussed.
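The layer bookkeeping behind such an isochronal grid can be sketched in a few lines. This is an assumption-based illustration of the idea only, not the authors' model (which also handles horizontal flow and heat diffusion); the accumulation and thinning rates are invented.

```python
# Isochronal vertical grid sketch: each accumulation interval adds a new
# surface layer, and vertical motion appears only as thinning of buried
# layers. No mass crosses layer interfaces, so every layer keeps its
# deposition age exactly -- zero numerical diffusion of the age tracer.
accum = 0.3       # ice-equivalent accumulation per interval [m] (assumed)
thin = 0.002      # fractional dynamic thinning per interval (assumed)
n_steps = 500

thickness = []    # thickness[k] belongs to the layer deposited at step k
for step in range(n_steps):
    thickness = [h * (1.0 - thin) for h in thickness]  # buried layers thin
    thickness.append(accum)                            # new surface layer

# age of each layer, in accumulation intervals (surface layer has age 0)
ages = [n_steps - 1 - k for k in range(len(thickness))]
```

Because a layer's identity never changes, the depth-age relationship falls out of the geometry directly, with no tracer advection equation to solve.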
NASA Astrophysics Data System (ADS)
Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef
2013-01-01
The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ˜100-200M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing on the binary parameters.
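The overlap (match) statistic quoted above is a noise-weighted inner product between two waveforms, maximized over extrinsic parameters. A minimal flat-noise sketch, maximizing only over circular time shifts, is shown below; real matched-filter overlaps weight by the detector noise power spectral density and also maximize over phase, and this is not the NRAR analysis code.

```python
import numpy as np

# Toy chirp and a time-shifted copy of it (assumed signals, white noise).
n = 4096
t = np.linspace(0.0, 1.0, n, endpoint=False)
h1 = np.sin(2 * np.pi * (30 * t + 40 * t ** 2))
h2 = np.roll(h1, 37)   # identical waveform, circularly shifted in time

# Cross-correlation over all circular time shifts via the FFT, then
# normalize: O = max_dt <h1, h2(dt)> / sqrt(<h1,h1> <h2,h2>).
H1, H2 = np.fft.rfft(h1), np.fft.rfft(h2)
corr = np.fft.irfft(H1 * np.conj(H2), n)
overlap = np.max(np.abs(corr)) / np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))
```

For these identical-up-to-shift signals the overlap recovers 1; model-vs-numerical-waveform comparisons like those in the abstract report how far this quantity falls below 1 after maximization.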
NASA Technical Reports Server (NTRS)
Rubinstein, R. (Editor); Rumsey, C. L. (Editor); Salas, M. D. (Editor); Thomas, J. L. (Editor); Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Advances in turbulence modeling are needed in order to calculate high Reynolds number flows near the onset of separation and beyond. To this end, the participants in this workshop made the following recommendations. (1) A national/international database and standards for turbulence modeling assessment should be established. Existing experimental data sets should be reviewed and categorized. Advantage should be taken of other efforts already underway, such as that of the European Research Community on Flow, Turbulence, and Combustion (ERCOFTAC) consortium. Carefully selected "unit" experiments will be needed, as well as advances in instrumentation, to fill the gaps in existing datasets. A high priority should be given to documenting existing turbulence model capabilities in a standard form, including numerical implementation issues such as grid quality and resolution. (2) NASA should support long-term research on Algebraic Stress Models and Reynolds Stress Models. The emphasis should be placed on improving the length-scale equation, since it is the least understood and is a key component of two-equation and higher models. Second priority should be given to the development of improved near-wall models. Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) would provide valuable guidance in developing and validating new Reynolds-averaged Navier-Stokes (RANS) models. Although not the focus of this workshop, DNS, LES, and hybrid methods currently represent viable approaches for analysis on a limited basis. Therefore, although computer limitations require the use of RANS methods for realistic configurations at high Reynolds number in the foreseeable future, a balanced effort in turbulence modeling development, validation, and implementation should include these approaches as well.
NASA Astrophysics Data System (ADS)
Pickett, Leon, Jr.
Past research has conclusively shown that long fiber structural composites possess superior specific energy absorption characteristics as compared to steel and aluminum structures. However, destructive physical testing of composites is very costly and time consuming. As a result, numerical solutions are desirable as an alternative to experimental testing. Up until this point, very little numerical work has been successful in predicting the energy absorption of composite crush structures. This research investigates the ability to use commercially available numerical modeling tools to approximate the energy absorption capability of long-fiber composite crush tubes. This study is significant because it provides a preliminary analysis of the suitability of LS-DYNA to numerically characterize the crushing behavior of a dynamic axial impact crushing event. Composite crushing theory suggests that there are several crushing mechanisms occurring during a composite crush event. This research evaluates the capability and suitability of employing LS-DYNA to simulate the dynamic crush event of an E-glass/epoxy cylindrical tube. The model employed is the composite "progressive failure model", a much more limited failure model when compared to the experimental failure events which naturally occur. This numerical model employs (1) matrix cracking, (2) compression, and (3) fiber breakage failure modes only. The motivation for the work comes from the need to reduce the significant cost associated with experimental trials. This research chronicles some preliminary efforts to better understand the mechanics essential in pursuit of this goal. The immediate goal is to begin to provide deeper understanding of a composite crush event and ultimately create a viable alternative to destructive testing of composite crush tubes.
ERIC Educational Resources Information Center
Herrera, Oriel A.; Fuller, David A.
2011-01-01
Remote experimentation laboratories (REL) are systems based on real equipment that allow students to carry out a laboratory practice through the Internet on the computer. In engineering, there have been numerous initiatives to implement REL over recent years, given the fundamental role of laboratory activities. However, in the past efforts have…
Dimensional Analysis in Mathematical Modeling Systems: A Simple Numerical Method
1991-02-01
US Army Ballistic Research Laboratories, Aberdeen Proving Ground, MD, August 1975. [18] Hürlimann, T., and J. Kohlas, "LPL: A Structured Language...such systems can prove that (a^2 + ab + b^2 + ba) = (a + b)^2. With some effort, since the laws of physical algebra are a minor variant on those of
ERIC Educational Resources Information Center
Law, Wing-Wah
2013-01-01
Since the early 20th century, numerous scholars have proposed theories and models describing, interpreting, and suggesting the development paths countries have taken or should take. None of these, however, can fully explain China's efforts, mainly through education and citizenship education, to modernize itself and foster a modern citizenry since…
The Polygonal Model: A Simple Representation of Biomolecules as a Tool for Teaching Metabolism
ERIC Educational Resources Information Center
Bonafe, Carlos Francisco Sampaio; Bispo, Jose Ailton Conceição; de Jesus, Marcelo Bispo
2018-01-01
Metabolism involves numerous reactions and organic compounds that the student must master to understand adequately the processes involved. Part of biochemical learning should include some knowledge of the structure of biomolecules, although the acquisition of such knowledge can be time-consuming and may require significant effort from the student.…
Application of LANDSAT TM images to assess circulation and dispersion in coastal lagoons
NASA Technical Reports Server (NTRS)
Kjerfve, B.; Jensen, J. R.; Magill, K. E.
1986-01-01
The main objectives are formulated around a four-pronged work approach consisting of tasks related to: image processing and analysis of LANDSAT Thematic Mapper imagery; numerical modeling of circulation and dispersion; hydrographic and spectral radiation field sampling/ground truth data collection; and special efforts to focus the investigation on turbid coastal/estuarine fronts.
Higgs Boson: god particle or divine comedy?
NASA Astrophysics Data System (ADS)
Rangacharyulu, Chary
2013-10-01
While particle physicists around the world rejoice at the announcement of the discovery of the Higgs particle as a momentous event, it is also an opportune moment to assess the physicists' conception of nature. Particle theorists, in their ingenious efforts to unravel mysteries of the physical universe at a very fundamental level, resort to the macroscopic many-body theoretical methods of solid state physicists. Their efforts render the universe a superconductor of correlated quasi-particle pairs. Experimentalists, devoted to ascertaining the elementary constituents and symmetries, depend heavily on numerical simulations based on those models and conform to theoretical slang in planning and interpretation of measurements, to the extent that the boundaries between theory/modeling and experiment are blurred. Is it possible that they are meandering in Dante's Inferno?
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing at the questions: (i) How well do models capture binary's late-inspiral where they lack a-priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
Numerical and flight simulator test of the flight deterioration concept
NASA Technical Reports Server (NTRS)
Mccarthy, J.; Norviel, V.
1982-01-01
Manned flight simulator response to theoretical wind shear profiles was studied in an effort to calibrate fixed-stick and pilot-in-the-loop numerical models of jet transport aircraft on approach to landing. Results of the study indicate that both fixed-stick and pilot-in-the-loop models overpredict the deleterious effects of aircraft approaches when compared to pilot performance in the manned simulator. Although the pilot-in-the-loop model does a better job than does the fixed-stick model, the study suggests that the pilot-in-the-loop model is suitable for use in meteorological predictions of adverse low-level wind shear along approach and departure courses to identify situations in which pilots may find difficulty. The model should not be used to predict the success or failure of a specific aircraft. It is suggested that the pilot model be used as part of a ground-based Doppler radar low-level wind shear detection and warning system.
Improvements in continuum modeling for biomolecular systems
NASA Astrophysics Data System (ADS)
Yu, Qiao; Ben-Zhuo, Lu
2016-01-01
Modeling of biomolecular systems plays an essential role in understanding biological processes, such as ionic flow across channels, protein modification or interaction, and cell signaling. The continuum model described by the Poisson-Boltzmann (PB)/Poisson-Nernst-Planck (PNP) equations has made great contributions towards simulation of these processes. However, the model has shortcomings in its commonly used form and cannot capture (or cannot accurately capture) some important physical properties of the biological systems. Considerable efforts have been made to improve the continuum model to account for discrete particle interactions and to make progress in numerical methods to provide accurate and efficient simulations. This review will summarize recent main improvements in continuum modeling for biomolecular systems, with focus on the size-modified models, the coupling of the classical density functional theory and the PNP equations, the coupling of polar and nonpolar interactions, and numerical progress. Project supported by the National Natural Science Foundation of China (Grant No. 91230106) and the Chinese Academy of Sciences Program for Cross & Cooperative Team of the Science & Technology Innovation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorne, P.D.; Chamness, M.A.; Vermeul, V.R.
This report documents work conducted during fiscal year 1994 to develop an improved three-dimensional conceptual model of ground-water flow in the unconfined aquifer system across the Hanford Site for the Ground-Water Surveillance Project, which is managed by Pacific Northwest Laboratory. The main objective of the ongoing effort to develop an improved conceptual model of ground-water flow is to provide the basis for improved numerical models that will be capable of accurately predicting the movement of radioactive and chemical contaminant plumes in the aquifer beneath Hanford. More accurate ground-water flow models will also be useful in assessing the impacts of changes in facilities and operations. For example, decreasing volumes of operational waste-water discharge are resulting in a declining water table in parts of the unconfined aquifer. In addition to supporting numerical modeling, the conceptual model also provides a qualitative understanding of the movement of ground water and contaminants in the aquifer.
Analysis of free turbulent shear flows by numerical methods
NASA Technical Reports Server (NTRS)
Korst, H. H.; Chow, W. L.; Hurt, R. F.; White, R. A.; Addy, A. L.
1973-01-01
Studies are described in which the effort was essentially directed to classes of problems where the phenomenologically interpreted effective transport coefficients could be absorbed by, and subsequently extracted from (by comparison with experimental data), appropriate coordinate transformations. The transformed system of differential equations could then be solved without further specifications or assumptions by numerical integration procedures. An attempt was made to delineate different regimes for which specific eddy viscosity models could be formulated. In particular, this would account for the carryover of turbulence from attached boundary layers, the transitory adjustment, and the asymptotic behavior of initially disturbed mixing regions. Such models were subsequently used in seeking solutions for the prescribed two-dimensional test cases, yielding a better insight into overall aspects of the exchange mechanisms.
Numerical Modeling of Propellant Boil-Off in a Cryogenic Storage Tank
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; Steadman, T. E.; Maroney, J. L.; Sass, J. P.; Fesmire, J. E.
2007-01-01
A numerical model to predict boil-off of stored propellant in large spherical cryogenic tanks has been developed. Accurate prediction of tank boil-off rates for different thermal insulation systems was the goal of this collaboration effort. The Generalized Fluid System Simulation Program, integrating flow analysis and conjugate heat transfer for solving complex fluid system problems, was used to create the model. Calculation of tank boil-off rate requires simultaneous simulation of heat transfer processes among liquid propellant, vapor ullage space, and tank structure. The reference tank for the boil-off model was the 850,000 gallon liquid hydrogen tank at Launch Complex 39B (LC-39B) at Kennedy Space Center, which is under study for future infrastructure improvements to support the Constellation program. The methodology employed in the numerical model was validated using a sub-scale model and tank. Experimental test data from a 1/15th scale version of the LC-39B tank using both liquid hydrogen and liquid nitrogen were used to anchor the analytical predictions of the sub-scale model. Favorable correlations between sub-scale model and experimental test data have provided confidence in full-scale tank boil-off predictions. These methods are now being used in the preliminary design for other cases, including future launch vehicles.
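At its simplest, a steady-state boil-off estimate balances the insulation heat leak against the latent heat of vaporization of the stored liquid. The numbers below are illustrative assumptions for a large liquid-hydrogen tank, not values from the report, which models the full conjugate heat transfer problem.

```python
# Back-of-envelope boil-off estimate (all values assumed for illustration):
# a steady heat leak Q through the insulation vaporizes liquid at m_dot = Q/h_fg.
Q = 2.5e3        # total heat leak into the tank [W] (assumed)
h_fg = 446.0e3   # latent heat of vaporization of LH2 [J/kg], approximate
rho = 70.8       # LH2 liquid density [kg/m^3], approximate
V = 3218.0       # 850,000 gal expressed in m^3

m_dot = Q / h_fg                        # boil-off mass rate [kg/s]
daily_loss_kg = m_dot * 86400.0         # mass lost per day [kg]
daily_loss_pct = 100.0 * daily_loss_kg / (rho * V)   # % of full tank per day
```

A full model like the one in the abstract replaces the single assumed Q with coupled conduction through the tank structure, ullage convection, and radiation, but the latent-heat balance above is the quantity it ultimately predicts.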
NASA Astrophysics Data System (ADS)
Soni, Hardik N.; Chauhan, Ashaba D.
2018-03-01
This study models a joint pricing, inventory, and preservation decision-making problem for deteriorating items subject to stochastic demand and promotional effort. Generalized price-dependent stochastic demand, time-proportional deterioration, and partial backlogging rates are used to model the inventory system. The objective is to find the optimal pricing, replenishment, and preservation technology investment strategies that maximize the total profit per unit time. Based on the partial backlogging and lost sale cases, we first deduce the criterion for optimal replenishment schedules for any given price and technology investment cost. Second, we show that the total profit per unit time is a concave function of price and of preservation technology cost, respectively. Finally, some numerical examples and the results of a sensitivity analysis are used to illustrate the features of the proposed model.
Studies of Solar Wind Interaction and Ionospheric Processes at Venus and Mars
NASA Technical Reports Server (NTRS)
Bogan, Denis (Technical Monitor); Nagy, Andrew F.
2003-01-01
This is the final report summarizing the work done during the last three years under NASA Grant NAG5-8946. Our efforts centered on a systematic development of a new generation of three dimensional magneto-hydrodynamic (MHD) numerical code, which models the interaction processes of the solar wind or fast flowing magnetospheric plasma with 'non-magnetic' solar system bodies (e.g. Venus, Mars, Europa, Titan). We have also worked on a number of different, more specific and discrete studies, as various opportunities arose. In the next few pages we briefly summarize these efforts.
Research on the control of large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1983-01-01
The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrix; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been and will continue to be the development of efficient numerical algorithms for analysis, control, and identification of large space structures. The major consideration in the development of the algorithms has been the large number of equations that must be handled by the algorithm as well as sensitivity of the algorithms to numerical errors.
Numerical simulation of swept-wing flows
NASA Technical Reports Server (NTRS)
Reed, Helen L.
1991-01-01
Efforts of the last six months to computationally model the transition process characteristics of flow over swept wings are described. Specifically, the crossflow instability and crossflow/Tollmien-Schlichting wave interactions are analyzed through the numerical solution of the full 3D Navier-Stokes equations including unsteadiness, curvature, and sweep. This approach is chosen because of the complexity of the problem and because it appears that linear stability theory is insufficient to explain the discrepancies between different experiments and between theory and experiment. The leading edge region of a swept wing is considered in a 3D spatial simulation with random disturbances as the initial conditions.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high-dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to the numerical demons arising from underflow and round-off errors. Although few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that the precision of floating-point arithmetic can become a threshold on likelihood values and on the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating-point number that a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME: the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable.
While it is generally assumed that AM is a bias-free estimator that will always approach the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations that estimate Bayesian model evidence.
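The underflow failure mode described above is easy to reproduce: once log-likelihoods drop below about -708, exp() underflows to exactly zero in double precision, and a naively computed arithmetic-mean estimate collapses. The standard remedy is to average in log space with the log-sum-exp trick. A minimal sketch with illustrative values (not the study's estimators or data):

```python
import numpy as np

def log_mean_exp(log_values):
    """Average exp(log_values) in log space, avoiding arithmetic underflow."""
    m = np.max(log_values)
    return m + np.log(np.mean(np.exp(log_values - m)))

# Illustrative log-likelihoods far below the double-precision underflow
# threshold (~ -708), so exp() returns exactly zero for every term:
log_L = np.array([-800.0, -805.0, -810.0])

naive = np.log(np.mean(np.exp(log_L)))  # mean of zeros -> log(0) -> -inf
stable = log_mean_exp(log_L)            # finite log of the arithmetic mean
```

The stable version subtracts the maximum before exponentiating, so at least one term is exp(0) = 1 and the mean never underflows.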
Reduced order model of a blended wing body aircraft configuration
NASA Astrophysics Data System (ADS)
Stroscher, F.; Sika, Z.; Petersson, O.
2013-12-01
This paper describes the full development process of a numerical simulation model for the ACFA2020 (Active Control for Flexible 2020 Aircraft) blended wing body (BWB) configuration. Its requirements are the prediction of aeroelastic and flight dynamic response in the time domain, with relatively small model order. Further, the model had to be parameterized with regard to multiple fuel filling conditions as well as flight conditions. Considerable effort was devoted by several project partners to high-order aerodynamic analysis for the subsonic and transonic regimes. The integration of the unsteady aerodynamic databases was one of the key issues in aeroelastic modeling.
Modeling interfacial fracture in Sierra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang
2013-09-01
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
Fluid dynamic mechanisms and interactions within separated flows
NASA Astrophysics Data System (ADS)
Dutton, J. C.; Addy, A. L.
1990-02-01
The significant results of a joint research effort investigating the fundamental fluid dynamic mechanisms and interactions within high-speed separated flows are presented in detail. The results have been obtained through analytical and numerical approaches, but with primary emphasis on experimental investigations of missile and projectile base flow-related configurations. The objectives of the research program focus on understanding the component mechanisms and interactions which establish and maintain high-speed separated flow regions. The analytical and numerical efforts have centered on unsteady plume-wall interactions in rocket launch tubes and on predictions of the effects of base bleed on transonic and supersonic base flowfields. The experimental efforts have considered the development and use of a state-of-the-art two component laser Doppler velocimeter (LDV) system for experiments with planar, two-dimensional, small-scale models in supersonic flows. The LDV experiments have yielded high quality, well documented mean and turbulence velocity data for a variety of high-speed separated flows including initial shear layer development, recompression/reattachment processes for two supersonic shear layers, oblique shock wave/turbulent boundary layer interactions in a compression corner, and two-stream, supersonic, near-wake flow behind a finite-thickness base.
Integration of Local Observations into the One Dimensional Fog Model PAFOG
NASA Astrophysics Data System (ADS)
Thoma, Christina; Schneider, Werner; Masbou, Matthieu; Bott, Andreas
2012-05-01
The numerical prediction of fog requires a very high vertical resolution of the atmosphere. Owing to the prohibitive computational effort of high resolution three dimensional models, operational fog forecasting is usually done by means of one dimensional fog models. An important condition for a successful fog forecast with one dimensional models is the proper integration of observational data into the numerical simulations. The goal of the present study is to introduce new methods for the consideration of these data in the one dimensional radiation fog model PAFOG. First, it will be shown how PAFOG may be initialized with observed visibilities. Second, a nudging scheme will be presented for the inclusion of measured temperature and humidity profiles in the PAFOG simulations. The new features of PAFOG have been tested by comparing the model results with observations of the German Meteorological Service. A case study will be presented that reveals the importance of including local observations in the model calculations. Numerical results obtained with the modified PAFOG model show a distinct improvement of fog forecasts regarding the times of fog formation and dissipation as well as the vertical extent of the investigated fog events. However, model results also reveal that a further improvement of PAFOG might be possible if several empirical model parameters are optimized. This tuning can only be realized by comprehensive comparisons of model simulations with corresponding fog observations.
A review of laboratory and numerical modelling in volcanology
NASA Astrophysics Data System (ADS)
Kavanagh, Janine L.; Engwell, Samantha L.; Martin, Simon A.
2018-04-01
Modelling has been used in the study of volcanic systems for more than 100 years, building upon the approach first applied by Sir James Hall in 1815. Informed by observations of volcanological phenomena in nature, including eye-witness accounts of eruptions, geophysical or geodetic monitoring of active volcanoes, and geological analysis of ancient deposits, laboratory and numerical models have been used to describe and quantify volcanic and magmatic processes that span orders of magnitude in time and space. We review the use of laboratory and numerical modelling in volcanological research, focussing on sub-surface and eruptive processes including the accretion and evolution of magma chambers, the propagation of sheet intrusions, the development of volcanic flows (lava flows, pyroclastic density currents, and lahars), volcanic plume formation, and ash dispersal. When first introduced into volcanology, laboratory experiments and numerical simulations marked a transition in approach from broadly qualitative to increasingly quantitative research. These methods are now widely used in volcanology to describe the physical and chemical behaviours that govern volcanic and magmatic systems. Creating simplified models of highly dynamical systems enables volcanologists to simulate and potentially predict the nature and impact of future eruptions. These tools have provided significant insights into many aspects of the volcanic plumbing system and eruptive processes. The largest scientific advances in volcanology have come from a multidisciplinary approach, applying developments in diverse fields such as engineering and computer science to study magmatic and volcanic phenomena. A global effort in the integration of laboratory and numerical volcano modelling is now required to tackle key problems in volcanology, and points towards the importance of benchmarking exercises and the need for protocols to be developed so that models are routinely tested against real world data.
The Managerial Activities and Leadership Roles of Five Achieving the Dream Leader College Presidents
ERIC Educational Resources Information Center
Mace, Teresa Marie Taylor
2013-01-01
A significant increase in community colleges' (CC) presidential retirements is resulting in a huge loss of critical knowledge and experience. Recognition of this has led to numerous efforts and initiatives to prepare future community college leaders. These efforts have included numerous attempts to identify the competencies, skills, and leadership…
A Clash of Symbols: An Analysis of Competing Images and Arguments in the AIDS Controversy.
ERIC Educational Resources Information Center
Gilder, Eric
Efforts to contain the spread of Acquired Immune Deficiency Syndrome (AIDS) have been slowed by numerous arguing factions, political, religious, and medical, all of which perceive the AIDS epidemic through a different set of symbols. The images can be more easily understood using Kenneth Boulding's Threat, Integry, and Exchange (or TIE) model. The…
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
Until today, a large number of competing computer models has been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations: first, estimating the impact of model structural uncertainty by employing several alternative representations for each simulated process; second, exploring the influence of landscape discretization and of parameterization from multiple datasets and user decisions; and third, employing several numerical solvers for the integration of the governing ordinary differential equations to study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and require less computational effort than the more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
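The explicit-versus-implicit trade-off noted in the abstract can be illustrated on the linear test equation dy/dt = -k*y, a textbook example rather than one of the study's hydrological process equations: the explicit step is cheaper but only conditionally stable, while the implicit step costs a solve but is stable for any step size.

```python
import numpy as np

def explicit_euler(k, y0, dt, n):
    # One multiply-add per step: cheap, but stable only for dt < 2/k.
    y = y0
    for _ in range(n):
        y = y + dt * (-k * y)
    return y

def implicit_euler(k, y0, dt, n):
    # Unconditionally stable; each step solves y_new = y + dt*(-k*y_new),
    # which is trivial here but a nonlinear solve for general right-hand sides.
    y = y0
    for _ in range(n):
        y = y / (1.0 + dt * k)
    return y

k, y0 = 10.0, 1.0
exact = y0 * np.exp(-k * 1.0)            # true solution at t = 1
ex = explicit_euler(k, y0, 0.01, 100)    # dt = 0.01 < 2/k = 0.2: stable
im = implicit_euler(k, y0, 0.01, 100)
```

With dt pushed past 2/k the explicit iterate oscillates and diverges, while the implicit one still decays, which is why stiff problems force either tiny explicit steps or the costlier implicit machinery.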
NASA Astrophysics Data System (ADS)
Becker, T. W.
2011-12-01
I present results from ongoing, NSF-CAREER funded educational and research efforts that center on making numerical tools in seismology and geodynamics more accessible to a broader audience. The goal is not only to train students in quantitative, interdisciplinary research, but also to make methods more easily accessible to practitioners across disciplines. I describe the two main efforts that were funded, the Solid Earth Research and Teaching Environment (SEATREE, geosys.usc.edu/projects/seatree/), and a new Numerical Methods class. SEATREE is a modular and user-friendly software framework to facilitate using solid Earth research tools in the undergraduate and graduate classroom and for interdisciplinary scientific collaboration. We use only open-source software, and most programming is done in the Python computer language. We strive to make use of modern software design and development concepts while remaining compatible with traditional scientific coding and existing, legacy software. Our goals are to provide a fully contained yet transparent package that lets users operate in an easy, graphically supported "black box" mode, while also allowing them to look under the hood, for example to conduct numerous forward models to explore parameter space. SEATREE currently has several implemented modules, including global mantle flow, 2D phase velocity tomography, and 2D mantle convection, and was used at the University of Southern California, Los Angeles, and at a 2010 CIDER summer school tutorial. SEATREE was developed in collaboration with engineering and computer science undergraduate students, some of whom have gone on to work on Earth Science projects. In the long run, we envision SEATREE contributing to new ways of sharing scientific research and making (numerical) experiments truly reproducible again. The other project is a set of lecture notes and Matlab exercises on Numerical Methods in solid Earth, focusing on finite difference and element methods.
The class has been taught several times at USC to a broad audience of Earth science students with very diverse levels of exposure to math and physics. Our goal is to bring everyone up to speed and empower students, and we have seen structural geology students with very little exposure to math go on to construct their own numerical models of P-T-t paths in a core-complex setting. This exemplifies the goal of teaching students both to put together simple numerical models from scratch and, perhaps more importantly, to truly understand the basic concepts, capabilities, and pitfalls of the more powerful community codes that are increasingly being used. SEATREE and the Numerical Methods class material are freely available at geodynamics.usc.edu/~becker.
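The kind of from-scratch finite-difference model such a class builds toward can be sketched in a few lines; the following 1D explicit diffusion solver is illustrative only and not taken from the course material. It uses the classic second-order stencil with a time step below the explicit stability limit.

```python
import numpy as np

# Minimal 1D explicit finite-difference diffusion solver (illustrative).
nx, nt = 51, 500
dx = 1.0 / (nx - 1)
kappa = 1.0
dt = 0.4 * dx * dx / kappa   # below the explicit stability limit 0.5*dx^2/kappa

T = np.zeros(nx)             # temperature on [0, 1]; ends held at T = 0
T[nx // 2] = 1.0             # initial hot spot in the middle

for _ in range(nt):
    # Second-order central difference in space, forward Euler in time:
    T[1:-1] += dt * kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
```

Because the step satisfies the stability criterion, the update coefficients stay non-negative and the solution remains positive and symmetric about the initial hot spot as it spreads.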
Dynamic stresses in a Francis model turbine at deep part load
NASA Astrophysics Data System (ADS)
Weber, Wilhelm; von Locquenghien, Florian; Conrad, Philipp; Koutnik, Jiri
2017-04-01
A comparison of numerically obtained dynamic stresses in a Francis model turbine at deep part load with experimental ones is presented. Due to the change in the electrical power mix towards a larger share of new renewable energy sources, Francis turbines are forced to operate at deep part load in order to compensate for the stochastic nature of wind and solar power and to ensure grid stability. Extending the operating range towards deep part load requires improved understanding of the harsh flow conditions and their impact on material fatigue of hydraulic components, in order to ensure a long lifetime of the power unit. In this paper, pressure loads on a model turbine runner from an unsteady two-phase computational fluid dynamics simulation at deep part load are used for the calculation of mechanical stresses by finite element analysis. Therewith, the stress distribution over time is determined. Since only a few runner rotations are simulated due to the enormous numerical cost, more effort has to be spent on the evaluation procedure in order to obtain objective results. By comparing the numerical results with measured strains, the accuracy of the whole simulation procedure is verified.
Analytical solutions for coagulation and condensation kinetics of composite particles
NASA Astrophysics Data System (ADS)
Piskunov, Vladimir N.
2013-04-01
The processes of formation of composite particles consisting of a mixture of different materials are essential for many practical problems: for analysis of the consequences of accidental releases into the atmosphere; for simulation of precipitation formation in clouds; and for description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for numerical simulation of these processes require optimization of computational methods and verification of numerical programs. The kinetic equations of composite particle formation are given in this work in a concise, impurity-integrated form. Coagulation, condensation, and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases. The general laws for the redistribution of impurity fractions were defined. The results can be applied to develop numerical algorithms that considerably reduce the simulation effort, as well as to verify numerical programs for calculating the formation kinetics of composite particles in problems of practical importance.
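The constant coagulation kernel is one of the classic model cases with a closed-form solution, and it shows how such analytical results verify a numerical integrator: the total number density obeys dN/dt = -(K/2) N^2 with solution N(t) = N0 / (1 + K N0 t / 2). The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

# Constant-kernel Smoluchowski coagulation, zeroth moment (total number):
#   dN/dt = -(K/2) * N^2,   N(t) = N0 / (1 + K*N0*t/2)
K, N0 = 1.0e-9, 1.0e6   # illustrative kernel constant and initial number density

def analytic(t):
    return N0 / (1.0 + 0.5 * K * N0 * t)

def numeric(t_end, steps):
    # Explicit Euler integration of the moment equation, for verification.
    dt = t_end / steps
    N = N0
    for _ in range(steps):
        N += dt * (-0.5 * K * N * N)
    return N

t = 2000.0   # one coagulation half-life for these parameters: N -> N0/2
rel_err = abs(numeric(t, 20000) - analytic(t)) / analytic(t)
```

Agreement of the integrator with the analytical curve over several half-lives is exactly the kind of verification check the abstract describes before a code is applied to realistic multi-component cases.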
Integrating multiple distribution models to guide conservation efforts of an endangered toad
Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.
The Utility of Behavioral Economics in Expanding the Free-Feed Model of Obesity
Rasmussen, Erin B.; Robertson, Stephen H.; Rodriguez, Luis R.
2016-01-01
Animal models of obesity are numerous and diverse in terms of identifying specific neural and peripheral mechanisms related to obesity; however, they are limited when it comes to behavior. The standard behavioral measure of food intake in most animal models occurs in a free-feeding environment. While easy and cost-effective for the researcher, the free-feeding environment omits some of the most important features of obesity-related food consumption—namely, properties of food availability, such as effort and delay to obtaining food. Behavioral economics expands behavioral measures of obesity animal models by identifying such behavioral mechanisms. First, economic demand analysis allows researchers to understand the role of effort in food procurement, and how physiological and neural mechanisms are related. Second, studies on delay discounting contribute to a growing literature that shows that sensitivity to delayed food- and food-related outcomes is likely a fundamental process of obesity. Together, these data expand the animal model in a manner that better characterizes how environmental factors influence food consumption. PMID:26923097
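Economic demand analysis of the kind mentioned above commonly fits an exponential demand equation relating consumption to unit price (effort per reinforcer). A minimal sketch of one widely used formulation, with hypothetical parameter values rather than data from any study cited here:

```python
import numpy as np

# Exponential demand: log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * C) - 1)
#   Q0:    consumption at zero cost
#   alpha: rate of decline in consumption with price ("essential value")
#   C:     unit price, e.g. lever presses required per food pellet
def demand(C, Q0, alpha, k=2.0):
    return 10.0 ** (np.log10(Q0) + k * (np.exp(-alpha * Q0 * C) - 1.0))

costs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # hypothetical price schedule
Q = demand(costs, Q0=100.0, alpha=0.001)       # predicted consumption
```

Fitting alpha across conditions (e.g. lean vs. obesity-prone animals) is how demand analysis quantifies the role of effort that the abstract describes.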
An Improved Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other, more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focusing on the question of absorption of solar radiation by gases and aerosols.
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.; Schifer, Nicholas A.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.
NASA Astrophysics Data System (ADS)
Wimer, N. T.; Mackoweicki, A. S.; Poludnenko, A. Y.; Hoffman, C.; Daily, J. W.; Rieker, G. B.; Hamlington, P.
2017-12-01
Results are presented from a joint computational and experimental research effort focused on understanding and characterizing wildland fire spread at small scales (roughly 1m-1mm) using direct numerical simulations (DNS) with chemical kinetics mechanisms that have been calibrated using data from high-speed laser diagnostics. The simulations are intended to directly resolve, with high physical accuracy, all small-scale fluid dynamic and chemical processes relevant to wildland fire spread. The high fidelity of the simulations is enabled by the calibration and validation of DNS sub-models using data from high-speed laser diagnostics. These diagnostics have the capability to measure temperature and chemical species concentrations, and are used here to characterize evaporation and pyrolysis processes in wildland fuels subjected to an external radiation source. The chemical kinetics code CHEMKIN-PRO is used to study and reduce complex reaction mechanisms for water removal, pyrolysis, and gas phase combustion during solid biomass burning. Simulations are then presented for a gaseous pool fire coupled with the resulting multi-step chemical reaction mechanisms, and the results are connected to the fundamental structure and spread of wildland fires. It is anticipated that the combined computational and experimental approach of this research effort will provide unprecedented access to information about chemical species, temperature, and turbulence during the entire pyrolysis, evaporation, ignition, and combustion process, thereby permitting more complete understanding of the physics that must be represented by coarse-scale numerical models of wildland fire spread.
Development Of Maneuvering Autopilot For Flight Tests
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Walker, R. A.
1992-01-01
Report describes recent efforts to develop automatic control system operating under supervision of pilot and making airplane follow prescribed trajectories during flight tests. Report represents additional progress on this project. Gives background information on technology of control of test-flight trajectories; presents mathematical models of airframe, engine and command-augmentation system; focuses on mathematical modeling of maneuvers; addresses design of autopilots for maneuvers; discusses numerical simulation and evaluation of results of simulation of eight maneuvers under control of simulated autopilot; and presents summary and discussion of future work.
Particle-gas dynamics in the protoplanetary nebula
NASA Technical Reports Server (NTRS)
Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.
1991-01-01
In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.
Understanding Slat Noise Sources
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.
2003-01-01
Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.
Computational analysis of the SSME fuel preburner flow
NASA Technical Reports Server (NTRS)
Wang, T. S.; Farmer, R. C.
1986-01-01
A computational fluid dynamics model which simulates the steady state operation of the SSME fuel preburner is developed. Specifically, the model will be used to quantify the flow factors which cause local hot spots in the fuel preburner in order to recommend experiments whereby the control of undesirable flow features can be demonstrated. The results of a two year effort to model the preburner are presented. In this effort to investigate the fuel preburner flowfield, the appropriate transport equations were numerically solved for both an axisymmetric and a three-dimensional configuration. Continuum's VAST (Variational Solution of the Transport equations) code, in conjunction with the CM-1000 Engineering Analysis Workstation and the NASA/Ames CYBER 205, was used to perform the required calculations. It is concluded that the preburner operational anomalies are not due to steady state phenomena and must, therefore, be related to transient operational procedures.
High Power MPD Thruster Development at the NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
LaPointe, Michael R.; Mikellides, Pavlos G.; Reddy, Dhanireddy (Technical Monitor)
2001-01-01
Propulsion requirements for large platform orbit raising, cargo and piloted planetary missions, and robotic deep space exploration have rekindled interest in the development and deployment of high power electromagnetic thrusters. Magnetoplasmadynamic (MPD) thrusters can effectively process megawatts of power over a broad range of specific impulse values to meet these diverse in-space propulsion requirements. As NASA's lead center for electric propulsion, the Glenn Research Center has established an MW-class pulsed thruster test facility and is refurbishing a high-power steady-state facility to design, build, and test efficient gas-fed MPD thrusters. A complementary numerical modeling effort based on the robust MACH2 code provides a well-balanced program of numerical analysis and experimental validation leading to improved high power MPD thruster performance. This paper reviews the current and planned experimental facilities and numerical modeling capabilities at the Glenn Research Center and outlines program plans for the development of new, efficient high power MPD thrusters.
Wellbore Seal Repair Using Nanocomposite Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stormont, John
2016-08-31
Nanocomposite wellbore repair materials have been developed, tested, and modeled through an integrated program of laboratory testing and numerical modeling. Numerous polymer-cement nanocomposites were synthesized as candidate wellbore repair materials using various combinations of base polymers and nanoparticles. Based on tests of bond strength to steel and cement, ductility, stability, flowability, and penetrability in openings of 50 microns and less, we identified Novolac epoxy reinforced with multi-walled carbon nanotubes and/or alumina nanoparticles as a superior wellbore seal material compared to conventional microfine cements. A system was developed for testing damaged and repaired wellbore specimens comprised of a cement sheath cast on a steel casing. The system allows independent application of confining pressures and casing pressures while gas flow is measured through the specimens along the wellbore axis. Repair with the nanocomposite epoxy base material was successful in dramatically reducing the flow through flaws of various sizes and types, and in restoring the specimen to a condition comparable to intact. In contrast, repair of damaged specimens with microfine cement was less effective, and the repair degraded with application of stress. Post-test observations confirm the complete penetration and sealing of flaws using the nanocomposite epoxy base material. A number of modeling efforts have supported the material development and testing efforts. We have modeled the steel-repair material interface behavior in detail during slant shear tests, which we used to characterize bond strength of candidate repair materials. A numerical model of the laboratory testing of damaged wellbore specimens was developed. This investigation found that microannulus permeability can satisfactorily be described by a joint model.
Finally, a wellbore model has been developed that can be used to evaluate the response of the wellbore system (casing, cement, and microannulus), including the use of either cement or a nanocomposite in the microannulus to represent a repaired system. This wellbore model was successfully coupled with a field-scale model of CO2 injection, to enable predictions of stresses and strains in the wellbore subjected to subsurface changes (i.e. domal uplift) associated with fluid injection.
NASA Technical Reports Server (NTRS)
Knupp, Kevin R.
1988-01-01
Described is work performed under NASA Grant NAG8-654 for the period 15 March to 15 September 1988. This work entails primarily data analysis and numerical modeling efforts related to the 1986 Satellite Precipitation and Cloud Experiment (SPACE). In the following, the SPACE acronym is used along with the acronym COHMEX, which represents the encompassing Cooperative Huntsville Meteorological Experiment. Progress made during the second half of the first year of the study included: (1) installation and testing of the RAMS numerical modeling system on the Alabama CRAY X-MP/24; (2) a start on the analysis of the mesoscale convective system (MCS) of the 13 July 1986 COHMEX case; and (3) a cursory examination of a small MCS that formed over the COHMEX region on 15 July 1986. Details of each of these individual tasks are given.
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is ongoing to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides a direct measure of the model's predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters that reconcile test with analysis. The process is illustrated using simulated experiment data.
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows with the ultimate goal of numerically modeling complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they employ error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. Such schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
ERIC Educational Resources Information Center
D'Amico, Antonella; Passolunghi, Maria Chiara
2009-01-01
We report a two-year longitudinal study aimed at investigating the rate of access to numerical and non-numerical information in long-term memory and the functioning of automatic and effortful cognitive inhibition processes in children with arithmetical learning disabilities (ALDs). Twelve children with ALDs, of age 9.3 years, and twelve…
NASA Astrophysics Data System (ADS)
Basith, Abdul; Prakoso, Yudhono; Kongko, Widjo
2017-07-01
A tsunami model using high-resolution geometric data is indispensable in tsunami mitigation efforts, especially in tsunami-prone areas; such data are one of the factors that determine the accuracy of numerical tsunami modeling. Sadeng Port is a new infrastructure on the southern coast of Java which could potentially be hit by a massive tsunami originating in the seismic gap. This paper discusses validation and error estimation of a tsunami model created using high-resolution geometric data at Sadeng Port. Model validation uses the wave height of the 2006 Pangandaran tsunami recorded by the Sadeng tide gauge. The tsunami model will then be used for numerical modeling based on earthquake-tsunami parameters derived from the seismic gap. The validation results using a t-test (Student) show that the modeled and observed tsunami heights at the Sadeng tide gauge are statistically equal at the 95% confidence level; the RMSE and NRMSE values are 0.428 m and 22.12%, while the difference in tsunami wave travel time is 12 minutes.
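The error measures cited above are standard, and a minimal sketch of their computation follows. The sample series are hypothetical, and normalizing the RMSE by the observed range is an assumed convention, since the abstract does not state which normalization was used.

```python
import numpy as np

def rmse(model, obs):
    """Root-mean-square error between modeled and observed series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def nrmse(model, obs):
    """RMSE normalized by the observed range (one common convention)."""
    obs = np.asarray(obs, float)
    return rmse(model, obs) / float(obs.max() - obs.min())

# Illustrative wave-height series (metres), not the Pangandaran record:
modeled  = [0.9, 1.4, 2.1, 1.2]
observed = [1.0, 1.5, 2.0, 1.0]
```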
An Introduction to Numerical Control. Problems for Numerical Control Part Programming.
ERIC Educational Resources Information Center
Campbell, Clifton P.
This combination text and workbook is intended to introduce industrial arts students to numerical control part programming. Discussed in the first section are the impact of numerical control, training efforts, numerical control in established programs, related information for drafting, and the Cartesian Coordinate System and dimensioning…
Evaluation of the UnTRIM model for 3-D tidal circulation
Cheng, R.T.; Casulli, V.; ,
2001-01-01
A family of numerical models, known as the TRIM models, shares the same modeling philosophy for solving the shallow water equations. A characteristic analysis of the shallow water equations points out that the numerical stability is controlled by the gravity-wave terms in the momentum equations and by the transport terms in the continuity equation. A semi-implicit finite-difference scheme has been formulated so that these terms and the vertical diffusion terms are treated implicitly and the remaining terms explicitly to control the numerical stability, and the computations are carried out over a uniform finite-difference computational mesh without invoking horizontal or vertical coordinate transformations. An unstructured-grid version of the TRIM model is introduced, UnTRIM (pronounced "you trim"), which preserves these basic numerical properties and the modeling philosophy, except that the computations are carried out over an unstructured orthogonal grid. The unstructured grid offers flexibility in representing complex study areas, so that fine grid resolution can be placed in regions of interest and coarse grids used to cover the remaining domain. Thus, the computational effort is concentrated in areas of importance, and an overall computational saving can be achieved because the total number of grid points is dramatically reduced. To use this modeling approach, an unstructured grid mesh must be generated to properly reflect the properties of the domain of the investigation. The new modeling flexibility in grid structure is accompanied by new challenges associated with grid generation. To take full advantage of this flexibility, model grid generation should be guided by insights into the physics of the problem, and the insights needed may require a higher degree of modeling skill.
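The semi-implicit idea described above can be illustrated on a linearized, one-dimensional, flat-bottom channel with closed ends: treating the gravity-wave terms implicitly yields a tridiagonal system for the new free surface, removing the gravity-wave stability restriction. This is a structural sketch of the scheme, not TRIM or UnTRIM code, and all names are our own.

```python
import numpy as np

def semi_implicit_step(eta, u, H, g, dt, dx):
    """One step of a linearized 1-D shallow-water model.

    eta : free surface at N cell centres; u : velocity at N+1 faces,
    with u[0] = u[-1] = 0 (closed basin). The gravity-wave terms are
    taken implicitly in both equations, so the new eta satisfies a
    tridiagonal system that is stable for any gravity-wave Courant
    number; transport terms (dropped here) would stay explicit.
    """
    n = eta.size
    lam = g * H * (dt / dx) ** 2
    # Explicit part of the continuity equation
    rhs = eta - H * dt / dx * (u[1:] - u[:-1])
    # Tridiagonal system with zero-flux (closed-wall) boundaries
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, idx] = 1.0 + 2.0 * lam
    A[idx[:-1], idx[:-1] + 1] = -lam
    A[idx[1:], idx[1:] - 1] = -lam
    A[0, 0] = A[-1, -1] = 1.0 + lam   # no flux through the end walls
    eta_new = np.linalg.solve(A, rhs)
    # Back-substitute to update the interior face velocities
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] - g * dt / dx * (eta_new[1:] - eta_new[:-1])
    return eta_new, u_new
```

Because the walls are closed and the implicit operator has zero-sum off-diagonals, the step conserves total volume exactly, which is a convenient sanity check.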
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stauffer, Philip H.; Chu, Shaoping; Miller, Terry A.
This report consists of four major sections, including this introductory section. Section 2 provides an overview of previous investigations related to the development of the current site-scale model. The methods and data used to develop the 3-D groundwater model and the techniques used to distill that model into a form suitable for use in the GoldSim models are discussed in Section 3. Section 4 presents the results of the model development effort and discusses some of the uncertainties involved. Eight attachments that provide details about the components and data used in this groundwater pathway model are also included with this report. The groundwater modeling effort reported here is a revision of the work that was conducted in 2005 (Stauffer et al., 2005a) in support of the 2008 Area G performance assessment and composite analysis (LANL, 2008). The revision effort was undertaken primarily to incorporate new geologic information that has been collected since 2003 at, and in the vicinity of, Area G. The new data were used to create a more accurate geologic framework model (GFM) that forms the basis of the numerical modeling of the site's long-term performance. The groundwater modeling uses mean hydrologic properties of the geologic strata underlying Area G; this revision includes an evaluation of the impacts that natural variability in these properties may have on the model projections.
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
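The interfacing problem SNIP addresses — transferring temperatures between two meshes that share no node-to-element correlation — can be illustrated with a generic inverse-distance interpolation. This is a stand-in for the idea, not the SNIP algorithm itself, and all names are ours.

```python
import numpy as np

def map_temperatures(thermal_xyz, thermal_T, structural_xyz, p=2.0, k=4):
    """Interpolate thermal-node temperatures onto structural nodes.

    Uses inverse-distance weighting over the k nearest thermal nodes;
    a generic sketch of the mesh-to-mesh transfer step when the two
    models have no shared node numbering.
    """
    thermal_xyz = np.asarray(thermal_xyz, float)
    thermal_T = np.asarray(thermal_T, float)
    out = np.empty(len(structural_xyz))
    for i, x in enumerate(np.asarray(structural_xyz, float)):
        d = np.linalg.norm(thermal_xyz - x, axis=1)
        if d.min() < 1e-12:            # coincident node: copy directly
            out[i] = thermal_T[d.argmin()]
            continue
        near = np.argsort(d)[:k]
        w = 1.0 / d[near] ** p
        out[i] = np.dot(w, thermal_T[near]) / w.sum()
    return out
```

The interpolated temperatures would then be applied as thermal loads on the structural model, the role SINDA output plays for NASTRAN in SNIP.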
2011-04-01
NavyFOAM has been developed using an open-source CFD software toolkit (OpenFOAM) that draws heavily upon object-oriented programming. The numerical methods and the physical models in the original version of OpenFOAM have been upgraded in an effort to improve the accuracy and robustness of the code.
Application of modern radiative transfer tools to model laboratory quartz emissivity
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.
2005-08-01
Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.
NASA Astrophysics Data System (ADS)
Masson, V.; Le Moigne, P.; Martin, E.; Faroux, S.; Alias, A.; Alkama, R.; Belamari, S.; Barbu, A.; Boone, A.; Bouyssel, F.; Brousseau, P.; Brun, E.; Calvet, J.-C.; Carrer, D.; Decharme, B.; Delire, C.; Donier, S.; Essaouini, K.; Gibelin, A.-L.; Giordani, H.; Habets, F.; Jidane, M.; Kerdraon, G.; Kourzeneva, E.; Lafaysse, M.; Lafont, S.; Lebeaupin Brossier, C.; Lemonsu, A.; Mahfouf, J.-F.; Marguinaud, P.; Mokhtari, M.; Morin, S.; Pigeon, G.; Salgado, R.; Seity, Y.; Taillefer, F.; Tanguy, G.; Tulet, P.; Vincendon, B.; Vionnet, V.; Voldoire, A.
2013-07-01
SURFEX is a new externalized land and ocean surface platform that describes the surface fluxes and the evolution of four types of surfaces: nature, town, inland water and ocean. It is mostly based on pre-existing, well-validated scientific models that are continuously improved. The motivation for the building of SURFEX is to use strictly identical scientific models in a high range of applications in order to mutualise the research and development efforts. SURFEX can be run in offline mode (0-D or 2-D runs) or in coupled mode (from mesoscale models to numerical weather prediction and climate models). An assimilation mode is included for numerical weather prediction and monitoring. In addition to momentum, heat and water fluxes, SURFEX is able to simulate fluxes of carbon dioxide, chemical species, continental aerosols, sea salt and snow particles. The main principles of the organisation of the surface are described first. Then, a survey is made of the scientific module (including the coupling strategy). Finally, the main applications of the code are summarised. The validation work undertaken shows that replacing the pre-existing surface models by SURFEX in these applications is usually associated with improved skill, as the numerous scientific developments contained in this community code are used to good advantage.
Mathematical modeling of infectious disease dynamics
Siettos, Constantinos I.; Russo, Lucia
2013-01-01
In recent years, an intensive worldwide effort has been speeding up the development of a global surveillance network for combating pandemics of emergent and re-emergent infectious diseases. Scientists from different fields, extending from medicine and molecular biology to computer science and applied mathematics, have teamed up for rapid assessment of potentially urgent situations. Toward this aim, mathematical modeling plays an important role in efforts that focus on predicting, assessing, and controlling potential outbreaks. To better understand and model the contagion dynamics, the impact of numerous variables, ranging from the micro host–pathogen level to host-to-host interactions, as well as prevailing ecological, social, economic, and demographic factors across the globe, has to be analyzed and thoroughly studied. Here, we present and discuss the main approaches that are used for the surveillance and modeling of infectious disease dynamics. We present the basic concepts underpinning their implementation and practice, and for each category we give an annotated list of representative works. PMID:23552814
Mixed Phase Modeling in GlennICE with Application to Engine Icing
NASA Technical Reports Server (NTRS)
Wright, William B.; Jorgenson, Philip C. E.; Veres, Joseph P.
2011-01-01
A capability for modeling ice crystals and mixed phase icing has been added to GlennICE. Modifications have been made to the particle trajectory algorithm and energy balance to model this behavior. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to four mixed phase ice accretions performed in the Cox icing tunnel in order to calibrate an ice erosion model. A sample ice ingestion case was performed using the Energy Efficient Engine (E3) model in order to illustrate current capabilities. Engine performance characteristics were supplied using the Numerical Propulsion System Simulation (NPSS) model for this test case.
A Comparison of the Forecast Skills among Three Numerical Models
NASA Astrophysics Data System (ADS)
Lu, D.; Reddy, S. R.; White, L. J.
2003-12-01
Three numerical weather forecast models, MM5, COAMPS, and WRF, operated in a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, have been chosen for a study of their forecast skill against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length, and spatial resolution. The AVN global dataset has been ingested as initial conditions. A grid resolution of 27 km is chosen, representative of current mesoscale models. Forecasts 36 h in length are performed, with output at 12 h intervals. The key parameters used to evaluate forecast skill include 12 h accumulated precipitation, sea level pressure, wind, surface temperature, and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12 h rainfall observations, whereas other statistical measures such as Mean Error (ME), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
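The precipitation skill scores named above follow the standard contingency-table definitions (hits, false alarms, misses). A minimal sketch, with illustrative series rather than the study's data:

```python
import numpy as np

def contingency(forecast, observed, threshold):
    """2x2 counts of forecast/observed exceedance of a rain threshold."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = int(np.sum(f & o))
    false_alarms = int(np.sum(f & ~o))
    misses = int(np.sum(~f & o))
    return hits, false_alarms, misses

def threat_score(hits, false_alarms, misses):
    """TS (critical success index): hits over all flagged-or-observed."""
    return hits / (hits + false_alarms + misses)

def bias_score(hits, false_alarms, misses):
    """BS: forecast event frequency over observed event frequency."""
    return (hits + false_alarms) / (hits + misses)

def mean_error(forecast, observed):
    """ME for continuous fields such as temperature or pressure."""
    return float(np.mean(np.asarray(forecast) - np.asarray(observed)))
```

A TS of 1 and BS of 1 indicate a perfect threshold forecast; BS above 1 indicates over-forecasting of the event.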
Erikson, Li H.; Wright, Scott A.; Elias, Edwin; Hanes, Daniel M.; Schoellhamer, David H.; Largier, John; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.
2013-01-01
Sediment exchange at large energetic inlets is often difficult to quantify due to complex flows, massive amounts of water and sediment exchange, and environmental conditions limiting long-term data collection. In an effort to better quantify such exchange, this study investigated the use of suspended sediment concentrations (SSC) measured at an offsite location as a surrogate for sediment exchange at the tidally dominated Golden Gate inlet in San Francisco, CA. A numerical model was calibrated and validated against water and suspended sediment flux measured during a spring–neap tide cycle across the Golden Gate. The model was then run for five months, and net exchange was calculated on a tidal time-scale and compared to SSC measurements at the Alcatraz monitoring site located in Central San Francisco Bay ~ 5 km from the Golden Gate. Numerically modeled tide-averaged flux across the Golden Gate compared well (r2 = 0.86, p-value
A comparison of WEC control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, David G.; Bacelli, Giorgio; Coe, Ryan Geoffrey
2016-04-01
The operation of Wave Energy Converter (WEC) devices can pose many challenging problems to the Water Power Community. A key research question is how to significantly improve the performance of these WEC devices through improving the control system design. This report summarizes an effort to analyze and improve the performance of WECs through the design and implementation of control systems. Controllers were selected to span the WEC control design space with the aim of building a more comprehensive understanding of different controller capabilities and requirements. To design and evaluate these control strategies, a model-scale test-bed WEC was designed for both numerical and experimental testing (see Section 1.1). Seven control strategies have been developed and applied on a numerical model of the selected WEC. This model is capable of performing at a range of levels, spanning from a fully-linear realization to varying levels of nonlinearity. The details of this model and its ongoing development are described in Section 1.2.
Banger, Kamaljit; Yuan, Mingwei; Wang, Junming; Nafziger, Emerson D.; Pittelkow, Cameron M.
2017-01-01
Meeting crop nitrogen (N) demand while minimizing N losses to the environment has proven difficult despite significant field research and modeling efforts. To improve N management, several real-time N management tools have been developed with a primary focus on enhancing crop production. However, no coordinated effort exists to simultaneously address sustainability concerns related to N losses at field- and regional-scales. In this perspective, we highlight the opportunity for incorporating environmental effects into N management decision support tools for United States maize production systems by integrating publicly available crop models with grower-entered management information and gridded soil and climate data in a geospatial framework specifically designed to quantify environmental and crop production tradeoffs. To facilitate advances in this area, we assess the capability of existing crop models to provide in-season N recommendations while estimating N leaching and nitrous oxide emissions, discuss several considerations for initial framework development, and highlight important challenges related to improving the accuracy of crop model predictions. Such a framework would benefit the development of regional sustainable intensification strategies by enabling the identification of N loss hotspots which could be used to implement spatially explicit mitigation efforts in relation to current environmental quality goals and real-time weather conditions. Nevertheless, we argue that this long-term vision can only be realized by leveraging a variety of existing research efforts to overcome challenges related to improving model structure, accessing field data to enhance model performance, and addressing the numerous social difficulties in delivery and adoption of such tool by stakeholders. PMID:28804490
Temporal model of an optically pumped co-doped solid state laser
NASA Technical Reports Server (NTRS)
Wangler, T. G.; Swetits, J. J.; Buoncristiani, A. M.
1993-01-01
Currently, research is being conducted on the optical properties of materials associated with the development of solid state lasers in the two micron region. In support of this effort, a mathematical model describing the energy transfer in a holmium laser sensitized with thulium is developed. In this paper, we establish some qualitative properties of the solution of the model, such as non-negativity, boundedness, and integrability. A local stability analysis is then performed from which conditions for asymptotic stability are attained. Finally, we report on our numerical analysis of the system and how it compares with experimental results.
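The qualitative properties examined in the paper (non-negativity, boundedness, approach to steady state) can be probed numerically on a toy two-level sensitizer-activator system integrated with classical RK4. The equations and rate constants below are illustrative placeholders, not the Tm-Ho model of the paper.

```python
import numpy as np

def rk4(f, y0, t0, t1, n):
    """Classical 4th-order Runge-Kutta integration of y' = f(t, y)."""
    t, y, h = t0, np.asarray(y0, float), (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Toy sensitizer (n1) -> activator (n2) energy transfer with pumping
# and decay; the rate constants are illustrative, not Tm/Ho values.
PUMP, K_ET, G1, G2 = 1.0, 0.5, 0.2, 0.3

def rates(t, n):
    n1, n2 = n
    return np.array([PUMP - K_ET * n1 - G1 * n1,   # pumped, transfers, decays
                     K_ET * n1 - G2 * n2])         # receives transfer, decays
```

For this linear toy system the populations stay non-negative and relax to the steady state n1 = PUMP/(K_ET + G1), n2 = K_ET·n1/G2, mirroring the non-negativity and asymptotic-stability properties established analytically in the paper.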
MODELING AND ANALYSIS OF FISSION PRODUCT TRANSPORT IN THE AGR-3/4 EXPERIMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humrickhouse, Paul W.; Collin, Blaise P.; Hawkes, Grant L.
In this work we describe the ongoing modeling and analysis efforts in support of the AGR-3/4 experiment. AGR-3/4 is intended to provide data to assess fission product retention and transport (e.g., diffusion coefficients) in fuel matrix and graphite materials. We describe a set of pre-test predictions that incorporate the results of detailed thermal and fission product release models into a coupled 1D radial diffusion model of the experiment, using diffusion coefficients reported in the literature for Ag, Cs, and Sr. We make some comparisons of the predicted Cs profiles to preliminary measured data for Cs and find these to be reasonable, in most cases within an order of magnitude. Our ultimate objective is to refine the diffusion coefficients using AGR-3/4 data, so we identify an analytical method for doing so and demonstrate its efficacy via a series of numerical experiments using the model predictions. Finally, we discuss development of a post-irradiation examination plan informed by the modeling effort and simulate some of the heating tests that are tentatively planned.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
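For orientation, the classical two-sample change-in-ratio estimator — the special case that this model generalizes — can be written in a few lines. This is the textbook (Paulik-Robson) form, and the numbers in the usage note are hypothetical.

```python
def cir_estimate(p1, p2, removed_x, removed_y):
    """Classical two-sample change-in-ratio population estimate.

    p1, p2       : proportion of subclass x in samples taken before
                   and after a known removal
    removed_x/_y : known numbers of x- and y-class animals removed
    Returns (N1_hat, Nx1_hat): estimated initial total population
    size and initial subclass-x size. The paper's generalization,
    which uses sampling-effort information, is more elaborate.
    """
    R = removed_x + removed_y
    n1 = (removed_x - R * p2) / (p1 - p2)
    return n1, p1 * n1
```

For example, if the x-subclass proportion drops from p1 = 0.4 to p2 = 0.25 after removing 200 x-class animals (and none of class y), the estimator returns an initial population of about 1000, of which about 400 are class x.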
NASA Astrophysics Data System (ADS)
Velazquez, Antonio; Swartz, R. Andrew
2013-04-01
Renewable energy sources like wind are important technologies, useful in alleviating the current fossil-fuel crisis. Capturing wind energy in a more efficient way has resulted in the emergence of more sophisticated designs of wind turbines, particularly Horizontal-Axis Wind Turbines (HAWTs). To promote efficiency, traditional finite element methods have been widely used to characterize the aerodynamics of these types of multi-body systems and improve their design. Given their aeroelastic behavior, tapered-swept blades offer the potential to optimize energy capture and decrease fatigue loads. Nevertheless, modeling such complex geometries requires huge computational effort, necessitating tradeoffs between faster computation times at lower cost, and reliability and numerical accuracy. Indeed, the computational cost and the numerical effort invested, using traditional FE methods, to reproduce dependable aerodynamics of these complex-shape beams are sometimes prohibitive. A condensed Spinning Finite Element (SFE) method is presented in this study, aimed at alleviating this issue by modeling wind-turbine rotor blades with tapered-swept cross-section variations of arbitrary order via Lagrangian equations. Axial-flexural-torsional coupling is carried out on axial deformation, torsion, in-plane bending, and out-of-plane bending using super-convergent elements. In this study, special attention is paid to the case of damped yaw effects, expressed within the described skew-symmetric damped gyroscopic matrix. Dynamics of the model are analyzed by performing modal analysis with complex-number eigenfrequencies. By means of mass, damped gyroscopic, and stiffness (axial-flexural-torsional coupling) matrix condensation (order reduction), numerical analysis is carried out for several prototypes with different tapered, swept, and curved variation intensities, and for a practical range of spinning velocities at different rotation angles.
A convergence study for the resulting natural frequencies is performed to evaluate the dynamic collateral effects of tapered-swept blade profiles in spinning motion using this new model. Stability analysis in boundary conditions of the postulated model is achieved to test the convergence and integrity of the mathematical model. The proposed framework presumes to be particularly suitable to characterize models with complex-shape cross-sections at low computation cost.
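The modal analysis with complex eigenfrequencies described above amounts to a quadratic eigenvalue problem (λ²M + λG + K)φ = 0, where G collects the damped gyroscopic terms. A standard state-space linearization is sketched below with generic matrices and a one-degree-of-freedom check; these are not the paper's condensed SFE matrices.

```python
import numpy as np

def modal_frequencies(M, G, K):
    """Complex eigenvalues of (lam^2 M + lam G + K) phi = 0.

    M, G, K : mass, damped-gyroscopic, and stiffness matrices.
    Linearized to the first-order form z' = A z with
    A = [[0, I], [-M^-1 K, -M^-1 G]]; the eigenvalues of A are the
    complex modal frequencies (imaginary part = damped natural
    frequency, real part = decay/growth rate).
    """
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ G]])
    return np.linalg.eigvals(A)
```

For an undamped single mass-spring (m = 1, k = 4) this returns the expected purely imaginary pair ±2i; a nonzero gyroscopic or damping matrix shifts the eigenvalues off the imaginary axis, which is how stability can be assessed across spin rates.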
Computational methods for yeast prion curing curves.
Ridout, Martin S
2008-10-01
If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
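One widely used family of numerical Laplace-inversion methods is Gaver-Stehfest; the abstract does not specify which inversion algorithm the paper employs, so the sketch below is an illustrative choice, checked against a transform with a known inverse.

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest coefficients V_i for even N (N=12-16 is typical)."""
    half = N // 2
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * math.factorial(2 * k) /
                  (math.factorial(half - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        V.append((-1) ** (i + half) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s).

    Evaluates F only at real s = i*ln(2)/t, which is what makes the
    approach cheap; accuracy is good for smooth, non-oscillatory f.
    """
    ln2_t = math.log(2.0) / t
    V = stehfest_weights(N)
    return ln2_t * sum(V[i - 1] * F(i * ln2_t) for i in range(1, N + 1))
```

For example, F(s) = 1/(s + 1) inverts to e^{-t} to several digits; a curing-curve model would supply its transform in place of F, so alternative model variants can be compared with minimal programming effort, as the paper emphasizes.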
Using large-scale genome variation cohorts to decipher the molecular mechanism of cancer.
Habermann, Nina; Mardin, Balca R; Yakneen, Sergei; Korbel, Jan O
2016-01-01
Characterizing genomic structural variations (SVs) in the human genome remains challenging, and there is a growing interest to understand somatic SVs occurring in cancer, a disease of the genome. A havoc-causing SV process known as chromothripsis scars the genome when localized chromosome shattering and repair occur in a one-off catastrophe. Recent efforts led to the development of a set of conceptual criteria for the inference of chromothripsis events in cancer genomes and to the development of experimental model systems for studying this striking DNA alteration process in vitro. We discuss these approaches, and additionally touch upon current "Big Data" efforts that employ hybrid cloud computing to enable studies of numerous cancer genomes in an effort to search for commonalities and differences in molecular DNA alteration processes in cancer. Copyright © 2016. Published by Elsevier SAS.
Parametric modeling studies of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng
2013-11-01
The Sydney piloted jet flame series (Flames L, B, and M) features thinner reaction zones and hence poses greater modeling challenges than the Sandia piloted jet flames (Flames D, E, and F). Recently, the Sydney flames have received renewed interest because of these challenges, and several new modeling efforts have emerged. However, no systematic parametric modeling studies of the Sydney flames have been reported. A large set of modeling computations of the Sydney flames is presented here using the coupled large eddy simulation (LES)/probability density function (PDF) method. Parametric studies are performed to gain insight into the model performance, its sensitivity, and the effect of numerics.
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focussing on the question of absorption of solar radiation by gases and aerosols.
Extra-dimensional models on the lattice
Knechtli, Francesco; Rinaldi, Enrico
2016-08-05
In this paper we summarize the ongoing effort to study extra-dimensional gauge theories with lattice simulations. In these models the Higgs field is identified with extra-dimensional components of the gauge field. The Higgs potential is generated by quantum corrections and is protected from divergences by the higher-dimensional gauge symmetry. Dimensional reduction to four dimensions can occur through compactification or localization. Gauge-Higgs unification models are often studied using perturbation theory. Numerical lattice simulations are used to go beyond these perturbative expectations and to include nonperturbative effects. We describe the known perturbative predictions and their fate in the strongly-coupled regime for various extra-dimensional models.
Dynamic Analysis of Sounding Rocket Pneumatic System Revision
NASA Technical Reports Server (NTRS)
Armen, Jerald
2010-01-01
The recent fusion of decades of advancements in mathematical models, numerical algorithms, and curve-fitting techniques marked the beginning of a new era in the science of simulation, which is becoming indispensable to the study of rockets and to aerospace analysis. For the pneumatic system, which is the main focus of this paper, particular emphasis is placed on the effects of compressible flow in the attitude control system of a sounding rocket.
Community-Oriented Policing and Counterinsurgency: A Conceptual Model
2007-01-01
between security and reform, ideas on how to manage assistance to police forces, how to evaluate the impact of police development assistance and makes...its own history, demographics, cultural and economic mix, region, tax base, management, civic leadership, public perception, and numerous other...percent.97 Efforts to curb crime have included the training and employment of 1,500 Special Police Officers in Delhi who perform some of the functions a
Antarctic glacial history from numerical models and continental margin sediments
Barker, P.F.; Barrett, P.J.; Cooper, A. K.; Huybrechts, P.
1999-01-01
The climate record of glacially transported sediments in prograded wedges around the Antarctic outer continental shelf, and their derivatives in continental rise drifts, may be combined to produce an Antarctic ice sheet history, using numerical models of ice sheet response to temperature and sea-level change. Examination of published models suggests several preliminary conclusions about ice sheet history. The ice sheet's present high sensitivity to sea-level change at short (orbital) periods was developed gradually as its size increased, replacing a declining sensitivity to temperature. Models suggest that the ice sheet grew abruptly to 40% (or possibly more) of its present size at the Eocene-Oligocene boundary, mainly as a result of its own temperature sensitivity. A large but more gradual middle Miocene change was externally driven, probably by development of the Antarctic Circumpolar Current (ACC) and Polar Front, provided that a few million years' delay can be explained. The Oligocene ice sheet varied considerably in size and areal extent, but the late Miocene ice sheet was more stable, though significantly warmer than today's. This difference probably relates to the confining effect of the Antarctic continental margin. Present-day numerical models of ice sheet development are sufficient to guide current sampling plans, but sea-ice formation, polar wander, basal topography and ice streaming can be identified as factors meriting additional modelling effort in the future.
2015-01-01
Energetic carrying capacity of habitats for wildlife is a fundamental concept used to better understand population ecology and prioritize conservation efforts. However, carrying capacity can be difficult to estimate accurately and simplified models often depend on many assumptions and few estimated parameters. We demonstrate the complex nature of parameterizing energetic carrying capacity models and use an experimental approach to describe a necessary parameter, a foraging threshold (i.e., density of food at which animals no longer can efficiently forage and acquire energy), for a guild of migratory birds. We created foraging patches with different fixed prey densities and monitored the numerical and behavioral responses of waterfowl (Anatidae) and depletion of foods during winter. Dabbling ducks (Anatini) fed extensively in plots and all initial densities of supplemented seed were rapidly reduced to 10 kg/ha and other natural seeds and tubers combined to 170 kg/ha, despite different starting densities. However, ducks did not abandon or stop foraging in wetlands when seed reduction ceased approximately two weeks into the winter-long experiment nor did they consistently distribute according to ideal-free predictions during this period. Dabbling duck use of experimental plots was not related to initial seed density, and residual seed and tuber densities varied among plant taxa and wetlands but not plots. Herein, we reached several conclusions: 1) foraging effort and numerical responses of dabbling ducks in winter were likely influenced by factors other than total food densities (e.g., predation risk, opportunity costs, forager condition), 2) foraging thresholds may vary among foraging locations, and 3) the numerical response of dabbling ducks may be an inconsistent predictor of habitat quality relative to seed and tuber density. 
We describe the implications for habitat conservation objectives of using different foraging thresholds in energetic carrying capacity models and suggest that scientists reevaluate the assumptions of these models used to guide habitat conservation. PMID:25790255
NASA Technical Reports Server (NTRS)
Schaeffler, Norman W.; Allan, Brian G.; Lienard, Caroline; LePape, Arnaud
2010-01-01
A combined computational and experimental effort has been undertaken to study fuselage drag reduction on a generic, non-proprietary rotorcraft fuselage by the application of active flow control. Fuselage drag reduction is an area of research interest to both the United States and France, and it is being worked collaboratively as a task under the United States/France Memorandum of Agreement on Helicopter Aeromechanics. In the first half of this task, emphasis is placed on the US generic fuselage, the ROBIN-mod7, with the experimental work conducted on the US side and complementary US and French CFD analysis of the baseline and controlled cases. Fuselage simulations were made using Reynolds-averaged Navier-Stokes flow solvers with multiple turbulence models. Comparisons were made to experimental data for numerical simulations of the isolated fuselage and for the fuselage as installed in the tunnel, which includes modeling of the tunnel contraction, walls, and support fairing. The numerical simulations show good agreement with the experimental data when the tunnel and model support are included. The isolated-fuselage simulations compare well to each other; however, there is a positive shift in the centerline pressure when compared to the experiment. The computed flow separation locations on the rear ramp region differed only slightly with and without the tunnel walls and model support. For the simulations, the flow control slots were placed at several locations around the flow separation lines as a series of eight slots that formed a nearly continuous U-shape. The numerical simulations yielded an estimated 35% fuselage drag reduction for a steady-blowing flow control configuration and a 26% drag reduction for an unsteady zero-net-mass flow control configuration.
Simulations with steady blowing show a delayed flow separation at the rear ramp of the fuselage that increases the surface pressure acting on the ramp, thus decreasing the overall fuselage pressure drag.
Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.
2015-01-01
Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise(TradeMark) grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVoto, Douglas J.
2017-10-19
As maximum device temperatures approach 200 °C in continuous operation, sintered silver materials promise to maintain bonds at these high temperatures without excessive degradation rates. A detailed characterization of the thermal performance and reliability of sintered silver materials and processes has been initiated for the next year. Future steps in crack modeling include efforts to simulate crack propagation directly using the extended finite element method (XFEM), a numerical technique that uses the partition-of-unity method to model discontinuities such as cracks in a system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.
Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and accuracy of results are evaluated to assess which modeling strategy is the most suitable given its eventual application.
Numerical Modeling of Arsenic Mobility during Reductive Iron-Mineral Transformations.
Rawson, Joey; Prommer, Henning; Siade, Adam; Carr, Jackson; Berg, Michael; Davis, James A; Fendorf, Scott
2016-03-01
Millions of individuals worldwide are chronically exposed to hazardous concentrations of arsenic from contaminated drinking water. Despite massive efforts toward understanding the extent and underlying geochemical processes of the problem, numerical modeling and reliable predictions of future arsenic behavior remain a significant challenge. One of the key knowledge gaps concerns a refined understanding of the mechanisms that underlie arsenic mobilization, particularly under the onset of anaerobic conditions, and the quantification of the factors that affect this process. In this study, we focus on the development and testing of appropriate conceptual and numerical model approaches to represent and quantify the reductive dissolution of iron oxides, the concomitant release of sorbed arsenic, and the role of iron-mineral transformations. The initial model development in this study was guided by data and hypothesized processes from a previously reported [1], well-controlled column experiment in which arsenic desorption from ferrihydrite-coated sands by variable loads of organic carbon was investigated. Using the measured data as constraints, we provide a quantitative interpretation of the processes controlling arsenic mobility during the microbial reductive transformation of iron oxides. Our analysis suggests that the observed arsenic behavior is primarily controlled by a combination of reductive dissolution of ferrihydrite, arsenic incorporation into or co-precipitation with freshly transformed iron minerals, and partial arsenic redox transformations.
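The study's actual reactive-transport model is not reproduced in the abstract; as a purely illustrative sketch of the process chain it describes (first-order reductive dissolution of ferrihydrite, proportional release of sorbed arsenic, and partial re-incorporation into secondary minerals), one might write something like the following. All rate constants, the proportional-release assumption, and the function name are hypothetical.

```python
def reductive_release(fe0, as_sorbed0, k_red, f_reinc, dt, nsteps):
    """Toy box model: fe = ferrihydrite mass, as_s = sorbed As, as_aq = aqueous As.
    Each step reduces a first-order fraction of the Fe oxide, releases sorbed As
    in proportion, and re-incorporates a fraction f_reinc into secondary minerals."""
    fe, as_s, as_aq = fe0, as_sorbed0, 0.0
    for _ in range(nsteps):
        dfe = k_red * fe * dt                         # Fe oxide reduced this step
        released = as_s * dfe / fe if fe > 0 else 0.0
        fe -= dfe
        as_s -= released
        as_aq += released * (1.0 - f_reinc)           # fraction escaping to solution
    return fe, as_aq
```

With f_reinc = 0 this reduces to aqueous As tracking the reduced Fe fraction exactly; a nonzero f_reinc mimics the co-precipitation sink the abstract identifies.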
Sabol, Thomas A.; Springer, Abraham E.
2013-01-01
Seepage erosion and mass failure of emergent sandy deposits along the Colorado River in Grand Canyon National Park, Arizona, are a function of the elevation of groundwater in the sandbar, fluctuations in river stage, the exfiltration of water from the bar face, and the slope of the bar face. In this study, a generalized three-dimensional numerical model was developed to predict the time-varying groundwater level, within the bar face region of a freshly deposited eddy sandbar, as a function of river stage. Model verification from two transient simulations demonstrates the ability of the model to predict groundwater levels within the onshore portion of the sandbar face across a range of conditions. Use of this generalized model is applicable across a range of typical eddy sandbar deposits in diverse settings. The ability to predict the groundwater level at the onshore end of the sandbar face is essential for both physical and numerical modeling efforts focusing on the erosion and mass failure of eddy sandbars downstream of Glen Canyon Dam along the Colorado River.
Numerical modeling of the solar wind flow with observational boundary conditions
Pogorelov, N. V.; Borovikov, S. N.; Burlaga, L. F.; ...
2012-11-20
In this paper we describe our group's efforts to develop a self-consistent, data-driven model of the solar wind (SW) interaction with the local interstellar medium. The motion of plasma in this model is described with the MHD approach, while the transport of neutral atoms is addressed by either kinetic or multi-fluid equations. The model and its implementation in the Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) are continuously tested and validated by comparing our results with other models and spacecraft measurements. In particular, it was successfully applied to explain an unusual SW behavior discovered by the Voyager 1 spacecraft, i.e., the development of a substantial negative radial velocity component and a flow turning in the transverse direction, while the latitudinal velocity component goes to very small values. We explain recent SW velocity measurements at Voyager 1 in the context of our 3-D MHD modeling. We also present a comparison of different turbulence models in their ability to reproduce the SW temperature profile from Voyager 2 measurements. Lastly, the boundary conditions obtained at 50 solar radii from data-driven numerical simulations are used to model a CME event throughout the heliosphere.
Numerical and Experimental Studies on Impact Loaded Concrete Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo
2006-07-01
An experimental set-up has been constructed for medium-scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). Loading and structural behavior, such as the collapse mechanism and the damage grade, are predicted by simple analytical methods and by the non-linear FE method. In the so-called Riera method, the behavior of the missile material is assumed to be rigid-plastic or rigid visco-plastic. Using elastic-plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, the impact force time history, the velocity of the missile rear end, and the missile shortening during the impact were typically recorded for comparison. (authors)
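The Riera method mentioned above has a compact form: the rigid rear portion of the missile decelerates under the crush load, and the force on the target is that crush load plus the momentum flux of the material being arrested, F = P_c + mu v^2. A minimal sketch for a uniform missile follows; the parameter values in the check are illustrative, not from the VTT tests.

```python
def riera_impact(M0, L, Pc, v0, dt=1e-4):
    """Rigid-plastic Riera model for a uniform missile.

    M0 : total missile mass, L : length, Pc : constant crush (buckling) load,
    v0 : impact velocity. Returns a list of (time, target force) samples."""
    mu = M0 / L              # mass per unit length
    x, v, t = 0.0, v0, 0.0   # crushed length, velocity of uncrushed part, time
    history = []
    while v > 0.0 and x < L:
        M = M0 - mu * x              # remaining (uncrushed) mass
        F = Pc + mu * v**2           # force transmitted to the target
        history.append((t, F))
        v += (-Pc / M) * dt          # rigid rear portion decelerates
        x += max(v, 0.0) * dt
        t += dt
    return history
```

Because the crush load is constant here, the force history simply decays with the velocity; a real fuselage has a position-dependent Pc(x) and mu(x).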
An Uncertainty Structure Matrix for Models and Simulations
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Blattnig, Steve R.; Hemsch, Michael J.; Luckring, James M.; Tripathi, Ram K.
2008-01-01
Software that is used for aerospace flight control and to display information to pilots and crew is expected to be correct and credible at all times. This type of software is typically developed under strict management processes, which are intended to reduce defects in the software product. However, modeling and simulation (M&S) software may exhibit varying degrees of correctness and credibility, depending on a large and complex set of factors. These factors include its intended use, the known physics and numerical approximations within the M&S, and the referent data set against which the M&S correctness is compared. The correctness and credibility of an M&S effort are closely correlated with the uncertainty management (UM) practices that are applied to it. This paper describes an uncertainty structure matrix for M&S, which provides a set of objective descriptions of the possible states of UM practices within a given M&S effort. The columns of the uncertainty structure matrix contain UM elements or practices that are common across most M&S efforts, and the rows describe the potential levels of achievement in each of the elements. A practitioner can quickly consult the matrix to determine where an M&S effort falls based on a common set of UM practices that are described in absolute terms applicable to virtually any M&S effort. The matrix can also be used to plan the steps and resources that would be needed to improve the UM practices for a given M&S effort.
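A matrix of this kind is essentially a lookup table of UM elements against achievement levels. A minimal sketch of how a practitioner might encode and query such a matrix follows; the element names and level labels here are hypothetical placeholders, not the paper's actual entries.

```python
# Hypothetical UM elements (matrix columns) and achievement levels (rows).
UM_ELEMENTS = ["verification", "validation", "input uncertainty", "output uncertainty"]
LEVELS = {0: "ad hoc", 1: "documented", 2: "quantified", 3: "independently reviewed"}

def assess(effort):
    """effort: dict mapping each UM element to its achieved level (0-3).
    Returns per-element labels and the weakest element, i.e. the first
    candidate for improvement planning."""
    labels = {e: LEVELS[effort[e]] for e in UM_ELEMENTS}
    weakest = min(UM_ELEMENTS, key=lambda e: effort[e])
    return labels, weakest
```

The "weakest element" query mirrors the paper's suggested use of the matrix for planning improvement steps.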
NASA Technical Reports Server (NTRS)
Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily;
2013-01-01
The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has been shown to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how it yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011), a collaborative and coordinated implementation strategy for an NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and discusses the complementary skill associated with individual models.
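The core arithmetic of a multi-model ensemble forecast is an equal-model-weight average: each model's ensemble-member mean is computed first, so models contributing more members do not dominate. A minimal sketch follows; the equal-weighting convention is an assumption for illustration, and operational NMME products involve additional bias correction.

```python
import numpy as np

def multi_model_mean(forecasts):
    """forecasts: dict mapping model name -> array-like of shape (members, ...).
    Averages each model's member mean, then averages across models, so every
    model carries equal weight regardless of its ensemble size."""
    model_means = [np.asarray(members).mean(axis=0) for members in forecasts.values()]
    return np.mean(model_means, axis=0)
```

Contrast this with pooling all members into one bag, which would weight models by member count.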
Schmoldt, D.L.; Peterson, D.L.; Keane, R.E.; Lenihan, J.M.; McKenzie, D.; Weise, D.R.; Sandberg, D.V.
1999-01-01
A team of fire scientists and resource managers convened 17-19 April 1996 in Seattle, Washington, to assess the effects of fire disturbance on ecosystems. Objectives of this workshop were to develop scientific recommendations for future fire research and management activities. These recommendations included a series of numerically ranked scientific and managerial questions and responses focusing on (1) links among fire effects, fuels, and climate; (2) fire as a large-scale disturbance; (3) fire-effects modeling structures; and (4) managerial concerns, applications, and decision support. At the present time, understanding of fire effects and the ability to extrapolate fire-effects knowledge to large spatial scales are limited, because most data have been collected at small spatial scales for specific applications. Although we clearly need more large-scale fire-effects data, it will be more expedient to concentrate efforts on improving and linking existing models that simulate fire effects in a georeferenced format while integrating empirical data as they become available. A significant component of this effort should be improved communication between modelers and managers to develop modeling tools to use in a planning context. Another component of this modeling effort should improve our ability to predict the interactions of fire and potential climatic change at very large spatial scales. The priority issues and approaches described here provide a template for fire science and fire management programs in the next decade and beyond.
Joint physical and numerical modeling of water distribution networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.
2009-01-01
This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network, conducted during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume-based large eddy simulation turbulence models. Macroscopic mixing rules based on a flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. A preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.
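EPANET's default water-quality assumption at a pipe junction is complete, instantaneous mixing: the outlet concentration equals the flow-weighted average of the inflow concentrations. The EPANET-BAM variant mentioned above relaxes exactly this assumption. A minimal sketch of the complete-mixing rule:

```python
def complete_mixing(inflows):
    """EPANET's default junction model: complete and instantaneous mixing.

    inflows: list of (flow_rate, concentration) pairs for all incoming pipes.
    Returns the single concentration leaving the junction."""
    total_q = sum(q for q, _ in inflows)
    return sum(q * c for q, c in inflows) / total_q
```

Experiments such as those in this report show that at cross junctions the real outlet concentrations can deviate substantially from this flow-weighted average, which motivates the bulk-advective-mixing correction.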
NASA Astrophysics Data System (ADS)
van den Heever, S. C.; Grant, L. D.; Drager, A. J.
2017-12-01
Cold pools play a significant role in convective storm initiation, organization and longevity. Given their role in convective life cycles, recent efforts have been focused on improving the representation of cold pool processes within weather forecast models, as well as on developing cold pool parameterizations in order to better represent their impacts within global climate models. Understanding the physical processes governing cold pool formation, intensity and dissipation is therefore critical to these efforts. Cold pool characteristics are influenced by numerous factors, including those associated with precipitation formation and evaporation, variations in the environmental moisture and shear, and land surface interactions. The focus of this talk will be on the manner in which the surface characteristics and associated processes impact cold pool genesis and dissipation. In particular, the results from high-resolution modeling studies focusing on the role of sensible and latent heat fluxes, soil moisture and SST will be presented. The results from a recent field campaign examining cold pools over northern Colorado will also be discussed.
NOAA Atmospheric Sciences Modeling Division support to the US Environmental Protection Agency
NASA Astrophysics Data System (ADS)
Poole-Kober, Evelyn M.; Viebrock, Herbert J.
1991-07-01
During FY-1990, the Atmospheric Sciences Modeling Division provided meteorological research and operational support to the U.S. Environmental Protection Agency. Basic meteorological operational support consisted of applying dispersion models and conducting dispersion studies and model evaluations. The primary research effort was the development and evaluation of air quality simulation models using numerical and physical techniques supported by field studies. Modeling emphasis was on the dispersion of photochemical oxidants and particulate matter on urban and regional scales, dispersion in complex terrain, and the transport, transformation, and deposition of acidic materials. Highlights included expansion of the Regional Acid Deposition Model/Engineering Model family to consist of the Tagged Species Engineering Model, the Non-Depleting Model, and the Sulfate Tracking Model; completion of the Acid-MODES field study; completion of the RADM2.1 evaluation; completion of the atmospheric processes section of the National Acid Precipitation Assessment Program 1990 Integrated Assessment; conduct of the first field study to examine the transport and entrainment processes of convective clouds; development of a Regional Oxidant Model-Urban Airshed Model interface program; conduct of an international sodar intercomparison experiment; incorporation of building wake dispersion in numerical models; conduct of wind-tunnel simulations of stack-tip downwash; and initiation of the publication of SCRAM NEWS.
Risk-sensitivity and the mean-variance trade-off: decision making in sensorimotor control
Nagengast, Arne J.; Braun, Daniel A.; Wolpert, Daniel M.
2011-01-01
Numerous psychophysical studies suggest that the sensorimotor system chooses actions that optimize the average cost associated with a movement. Recently, however, violations of this hypothesis have been reported in line with economic theories of decision-making that not only consider the mean payoff, but are also sensitive to risk, that is, the variability of the payoff. Here, we examine the hypothesis that risk-sensitivity in sensorimotor control arises as a mean-variance trade-off in movement costs. We designed a motor task in which participants could choose between a sure motor action that resulted in a fixed amount of effort and a risky motor action that resulted in a variable amount of effort that could be either lower or higher than the fixed effort. By changing the mean effort of the risky action while experimentally fixing its variance, we determined indifference points at which participants chose equiprobably between the sure, fixed-effort option and the risky, variable-effort option. Depending on whether participants accepted a variable effort with a mean that was higher than, lower than, or equal to the fixed effort, they could be classified as risk-seeking, risk-averse, or risk-neutral. Most subjects were risk-sensitive in our task, consistent with a mean-variance trade-off in effort, thereby underlining the importance of risk-sensitivity in computational models of sensorimotor control. PMID:21208966
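In a mean-variance account, the subjective cost of a risky effort is its mean plus a risk premium proportional to its variance, and the indifference point reveals the sign of that premium. A minimal sketch of the classification logic described above; variable names are ours, not the authors'.

```python
def mean_variance_cost(mean, var, theta):
    """Risk-sensitive cost of an effort distribution: theta > 0 penalizes
    variability (risk aversion for costs), theta < 0 rewards it."""
    return mean + theta * var

def classify_risk_attitude(fixed_effort, indifference_mean, tol=1e-9):
    """At the indifference point the risky option (mean `indifference_mean`,
    fixed variance) is valued equally to the sure `fixed_effort`. Accepting a
    HIGHER mean risky effort implies risk seeking when the payoff is a cost."""
    if indifference_mean > fixed_effort + tol:
        return "risk-seeking"
    if indifference_mean < fixed_effort - tol:
        return "risk-averse"
    return "risk-neutral"
```

Note the sign convention: because effort is a cost rather than a reward, tolerating a higher-mean risky option signals a preference for variability.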
NASA Astrophysics Data System (ADS)
Mandache, C.; Khan, M.; Fahr, A.; Yanishevsky, M.
2011-03-01
Probability of detection (PoD) studies are broadly used to determine the reliability of specific nondestructive inspection procedures, as well as to provide data for damage-tolerance life estimations and the calculation of inspection intervals for critical components. They require inspections on a large set of samples, a fact that makes these statistical assessments time-consuming and costly. Physics-based numerical simulations of nondestructive testing inspections could be used as a cost-effective alternative to empirical investigations. They realistically predict the inspection outputs as functions of the input characteristics related to the test piece, transducer, and instrument settings, which are subsequently used to partially substitute for and/or complement inspection data in PoD analysis. This work focuses on the numerical modelling aspects of eddy current testing for the bolt-hole inspections of wing box structures typical of the Lockheed Martin C-130 Hercules and P-3 Orion aircraft, found in the air force inventories of many countries. Boundary-element-based numerical modelling software was employed to predict the eddy current signal responses when varying inspection parameters related to probe characteristics, crack geometry, and test piece properties. Two demonstrator exercises were used for eddy current signal prediction when lowering the driver probe frequency and changing the material's electrical conductivity, followed by discussion and examination of the implications of using simulated data in PoD analysis. Despite some simplifying assumptions, the modelled eddy current signals were found to provide results similar to the actual inspections. It is concluded that physics-based numerical simulations have the potential to partially substitute for or complement inspection data required for PoD studies, reducing the cost, time, effort, and resources necessary for a full empirical PoD assessment.
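PoD analyses of hit/miss data commonly fit a log-logistic (or log-normal) curve in flaw size, and simulated signals enter such a fit exactly as measured ones do. A minimal sketch of the log-logistic PoD model and the inverse used for quantities like a90; the parameterization follows common practice, not necessarily the cited study.

```python
import math

def pod_loglogistic(a, mu, sigma):
    """Log-logistic PoD model for flaw size a > 0, as commonly used in
    hit/miss reliability analysis: PoD rises from 0 to 1 with ln(a)."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

def a_at_pod(p, mu, sigma):
    """Inverse: the flaw size detected with probability p (e.g. p=0.9 for a90)."""
    return math.exp(mu + sigma * math.log(p / (1.0 - p)))
```

In practice mu and sigma are estimated from the hit/miss data (e.g. by maximum likelihood), and confidence bounds on a90 give the widely quoted a90/95 figure.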
NASA Astrophysics Data System (ADS)
Juez, C.; Battisacco, E.; Schleiss, A. J.; Franca, M. J.
2016-06-01
The artificial replenishment of sediment is used as a method to re-establish sediment continuity downstream of a dam. However, the impact of this technique on the hydraulic conditions, and the resulting bed morphology, is yet to be understood. Several numerical tools have been developed in recent years for modeling sediment transport and morphology evolution which can be used for this application. These models range from 1D to 3D approaches: the former are overly simplistic for the simulation of such a complex geometry, while the latter often require a prohibitive computational effort. 2D models, however, are computationally efficient and in these cases may already provide sufficiently accurate predictions of the morphology evolution caused by sediment replenishment in a river. Here, the 2D shallow water equations in combination with the Exner equation are solved by means of a weakly coupled strategy. The classical friction approach used for reproducing bed channel roughness has been modified to take into account the morphological effect of replenishment, which provokes a fining of the channel bed. Computational outcomes are compared with four sets of experimental data obtained from several replenishment configurations studied in the laboratory. The experiments differ in terms of placement volume and configuration. A set of analysis parameters is proposed for the experimental-numerical comparison, with particular attention to the spreading, covered surface and travel distance of the placed replenishment grains. The numerical tool is reliable in reproducing the overall tendency shown by the experimental data. The effect of roughness fining is better reproduced with the approach proposed herein. However, it is also highlighted that the sediment clusters found in the experiments are not well reproduced numerically in the regions of the channel with a limited number of sediment grains.
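The weak coupling of hydrodynamics and the Exner bed-evolution equation described above can be sketched in a few lines: the flow field is held fixed while the bed is updated from the divergence of the sediment flux. The Grass-type flux law, the 1D setting, and all parameter values below are illustrative assumptions, not the paper's actual closure:

```python
import numpy as np

def exner_step(zb, u, dx, dt, A=0.001, porosity=0.4):
    """One weakly coupled Exner update: the bed elevation zb evolves
    from the divergence of a Grass-type sediment flux q_s = A * u**3,
    with the flow field u held fixed during the bed update."""
    qs = A * u**3                      # sediment flux from local velocity
    dqs_dx = np.gradient(qs, dx)       # flux divergence (central differences)
    return zb - dt / (1.0 - porosity) * dqs_dx

# Uniform flow transports sediment uniformly, so the bed is unchanged.
zb0 = np.zeros(50)
u_uniform = np.full(50, 1.0)
zb1 = exner_step(zb0, u_uniform, dx=1.0, dt=0.1)
```

A decelerating flow (decreasing u, hence decreasing q_s) produces a negative flux divergence and therefore deposition, which is the qualitative behavior a replenishment tongue exhibits as it spreads.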
NASA Technical Reports Server (NTRS)
1974-01-01
Observations and research progress of the Smithsonian Astrophysical Observatory are reported. Satellite tracking networks (ground stations) are discussed and equipment (Baker-Nunn cameras) used to observe the satellites is described. The improvement of the accuracy of a laser ranging system of the ground stations is discussed. Also, research efforts in satellite geodesy (tides, gravity anomalies, plate tectonics) are discussed. The use of data processing for geophysical data is examined, and a data base for the Earth and Ocean Physics Applications Program is proposed. Analytical models of the earth's motion (computerized simulation) are described and the computation (numerical integration and algorithms) of satellite orbits affected by the earth's albedo, using computer techniques, is also considered. Research efforts in the study of the atmosphere are examined (the effect of drag on satellite motion), and models of the atmosphere based on satellite data are described.
SIGNUM: A Matlab, TIN-based landscape evolution model
NASA Astrophysics Data System (ADS)
Refice, A.; Giachetta, E.; Capolongo, D.
2012-08-01
Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional code efforts to plug in to more user-friendly data analysis and/or visualization tools to ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift which can be used to simulate changes in base level, thrust and faulting, as well as effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure, which makes it easy to modify and upgrade the simulated physical processes to suit virtually any user's needs. The code is conceived as an open-source project, and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper the simulation code is presented together with some simple examples of surface evolution, and guidelines for development of new modules and algorithms are proposed.
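The hillslope diffusion and fluvial incision processes named above follow a widely used LEM core, dz/dt = kappa * d2z/dx2 - K * A^m * S^n. The following 1D sketch illustrates one explicit time step of that core (SIGNUM itself is Matlab and TIN-based; the profile, drainage-area proxy, and parameter values here are illustrative assumptions):

```python
import numpy as np

def lem_step(z, area, dx, dt, kappa=0.01, K=1e-4, m=0.5, n=1.0):
    """One explicit landscape-evolution step on a 1D profile:
    linear hillslope diffusion plus stream-power fluvial incision,
    dz/dt = kappa * d2z/dx2 - K * A**m * S**n."""
    d2z = np.gradient(np.gradient(z, dx), dx)   # curvature (diffusion driver)
    slope = np.abs(np.gradient(z, dx))          # local slope S
    return z + dt * (kappa * d2z - K * area**m * slope**n)

# A triangular ridge lowers at its crest under diffusion,
# and incision removes material wherever the slope is nonzero.
x = np.linspace(0.0, 1000.0, 101)
z0 = 100.0 - np.abs(x - 500.0) * 0.1            # triangular ridge
A = np.maximum(np.abs(x - 500.0), 1.0)          # crude drainage-area proxy
z1 = lem_step(z0, A, dx=10.0, dt=10.0)
```

Real LEMs route drainage area over the mesh rather than prescribing it; the proxy above only stands in for that step.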
Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event
NASA Astrophysics Data System (ADS)
Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.
2010-12-01
The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originating at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
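The diffusive shock acceleration mechanism assumed here has a well-known test-particle result that ties the accelerated spectrum to the shock compression ratio alone; as a reminder of the standard relation (not a statement of the full PATH implementation):

```latex
f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1}, \qquad r = \frac{u_1}{u_2},
```

so a strong shock (r approaching 4) yields q = 4, i.e., a differential intensity N(E) proportional to E^{-2} at relativistic energies. Oblique shock geometry and the mixed seed population modeled in the paper modify this baseline spectrum.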
Numerical Prediction of SERN Performance using WIND code
NASA Technical Reports Server (NTRS)
Engblom, W. A.
2003-01-01
Computational results are presented for the performance and flow behavior of single-expansion ramp nozzles (SERNs) during overexpanded operation and transonic flight. Three-dimensional Reynolds-Averaged Navier Stokes (RANS) results are obtained for two vehicle configurations, including the NASP Model 5B and ISTAR RBCC (a variant of X-43B) using the WIND code. Numerical predictions for nozzle integrated forces and pitch moments are directly compared to experimental data for the NASP Model 5B, and adequate-to-excellent agreement is found. The sensitivity of SERN performance and separation phenomena to freestream static pressure and Mach number is demonstrated via a matrix of cases for both vehicles. 3-D separation regions are shown to be induced by either lateral (e.g., sidewall) shocks or vertical (e.g., cowl trailing edge) shocks. Finally, the implications of this work to future preliminary design efforts involving SERNs are discussed.
Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster
Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi
2017-01-01
Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement, through simulating the variation tendency of the ocean wave. Although efforts have been devoted to improving the performance of MASNUM from various aspects in existing work, there remains considerable unexplored room for further performance improvement. In this paper, we aim to improve the performance of the propagation solver and data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as algorithm redesign, load distribution optimization, parallel I/O and data access optimization. The experimental results demonstrate that our approach achieves higher performance compared to the state-of-the-art work, about a 3.5x speedup without degrading the prediction accuracy. In addition, the parameter sensitivity analysis shows our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutland, Christopher J.
2009-04-26
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.
The 15 August 2007 Peru tsunami runup observations and modeling
NASA Astrophysics Data System (ADS)
Fritz, Hermann M.; Kalligeris, Nikos; Borrero, Jose C.; Broncano, Pablo; Ortega, Erick
2008-05-01
On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites. Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights were measured. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. The coast of Peru has experienced numerous deadly and destructive tsunamis throughout history, which highlights the importance of ongoing tsunami awareness and education efforts to ensure successful self-evacuation.
Large-Eddy Simulation of Aeroacoustic Applications
NASA Technical Reports Server (NTRS)
Pruett, C. David; Sochacki, James S.
1999-01-01
This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999. By request, the grant period was extended at no cost until October 6, 1999. This grant and its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.
Coupling the System Analysis Module with SAS4A/SASSYS-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanning, T. H.; Hu, R.
2016-09-30
SAS4A/SASSYS-1 is a simulation tool used to perform deterministic analysis of anticipated events as well as design basis and beyond design basis accidents for advanced reactors, with an emphasis on sodium fast reactors. SAS4A/SASSYS-1 has been under development and in active use for nearly forty-five years, and is currently maintained by the U.S. Department of Energy under the Office of Advanced Reactor Technology. Although SAS4A/SASSYS-1 contains a very capable primary and intermediate system modeling component, PRIMAR-4, it also has some shortcomings: outdated data management and code structure makes extension of the PRIMAR-4 module somewhat difficult. The user input format for PRIMAR-4 also limits the number of volumes and segments that can be used to describe a given system. The System Analysis Module (SAM) is a fairly new code development effort being carried out under the U.S. DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM is being developed with advanced physical models, numerical methods, and software engineering practices; however, it is currently somewhat limited in the system components and phenomena that can be represented. For example, component models for electromagnetic pumps and multi-layer stratified volumes have not yet been developed. Nor is there support for a balance of plant model. Similarly, system-level phenomena such as control-rod driveline expansion and vessel elongation are not represented. This report documents fiscal year 2016 work that was carried out to couple the transient safety analysis capabilities of SAS4A/SASSYS-1 with the system modeling capabilities of SAM under the joint support of the ART and NEAMS programs. The coupling effort was successful and is demonstrated by evaluating an unprotected loss of flow transient for the Advanced Burner Test Reactor (ABTR) design.
There are differences between the stand-alone SAS4A/SASSYS-1 simulations and the coupled SAS/SAM simulations, but these are mainly attributed to the limited maturity of the SAM development effort. The severe accident modeling capabilities in SAS4A/SASSYS-1 (sodium boiling, fuel melting and relocation) will continue to play a vital role for a long time. Therefore, the SAS4A/SASSYS-1 modernization effort should remain a high priority task under the ART program to ensure continued participation in domestic and international SFR safety collaborations and design optimizations. On the other hand, SAM provides an advanced system analysis tool, with improved numerical solution schemes, data management, code flexibility, and accuracy. SAM is still in early stages of development and will require continued support from NEAMS to fulfill its potential and to mature into a production tool for advanced reactor safety analysis. The effort to couple SAS4A/SASSYS-1 and SAM is the first step toward the integration of these modeling capabilities.
Numerical Study of Flow Augmented Thermal Management for Entry and Re-Entry Environments
NASA Technical Reports Server (NTRS)
Cheng, Gary C.; Neroorkar, Kshitij D.; Chen, Yen-Sen; Wang, Ten-See; Daso, Endwell O.
2007-01-01
The use of a flow augmented thermal management system for entry and re-entry environments is one method for reducing heat and drag loads. This concept relies on jet penetration from supersonic and hypersonic counterflowing jets that could significantly weaken and disperse the shock-wave system of the spacecraft flow field. The objective of this research effort is to conduct parametric studies of the supersonic flow over a 2.6% scale model of the Apollo capsule, with and without the counterflowing jet, using time-accurate and steady-state computational fluid dynamics simulations. The numerical studies, including different freestream Mach numbers, angles of attack, counterflowing jet mass flow rates, and nozzle configurations, were performed to examine their effect on the drag and heat loads and to explore the counterflowing jet condition. The numerical results were compared with the test data obtained from transonic blow-down wind-tunnel experiments conducted independently at NASA MSFC.
Numerical simulation of the non-Newtonian mixing layer
NASA Technical Reports Server (NTRS)
Azaiez, Jalel; Homsy, G. M.
1993-01-01
This work is a continuing effort to advance our understanding of the effects of polymer additives on the structures of the mixing layer. In anticipation of full nonlinear simulations of the non-Newtonian mixing layer, we examined in a first stage the linear stability of the non-Newtonian mixing layer. The results of this study show that, for a fluid described by the Oldroyd-B model, viscoelasticity reduces the instability of the inviscid mixing layer in a special limit where the ratio (We/Re) is of order 1 where We is the Weissenberg number, a measure of the elasticity of the flow, and Re is the Reynolds number. In the present study, we pursue this project with numerical simulations of the non-Newtonian mixing layer. Our primary objective is to determine the effects of viscoelasticity on the roll-up structure. We also examine the origin of the numerical instabilities usually encountered in the simulations of non-Newtonian fluids.
Localization of a variational particle smoother
NASA Astrophysics Data System (ADS)
Morzfeld, M.; Hodyss, D.; Poterjoy, J.
2017-12-01
Given the success of 4D-variational methods (4D-Var) in numerical weather prediction, and recent efforts to merge ensemble Kalman filters with 4D-Var, we consider a method to merge particle methods and 4D-Var. This leads us to revisit variational particle smoothers (varPS). We study the collapse of varPS in high-dimensional problems and show how it can be prevented by weight localization. We test varPS on the Lorenz'96 model of dimensions n=40, n=400, and n=2000. In our numerical experiments, weight localization prevents the collapse of the varPS, and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods, while it outperforms EnKF with tuned localization and inflation, and the localized standard particle filter. Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearized solutions in near-Gaussian problems.
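The weight-localization idea, in which each state variable is weighted only by nearby observations, can be sketched as follows. The Gaussian taper, the direct-observation operator, and the periodic distance are simplifying assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

def localized_weights(ensemble, obs, obs_err, loc_radius):
    """Per-gridpoint localized particle weights: each state variable
    receives weights computed from nearby observations only, via a
    Gaussian taper on the log-likelihood contributions.
    ensemble: (n_particles, n_dim); obs: (n_dim,) direct observations."""
    n_p, n = ensemble.shape
    idx = np.arange(n)
    # periodic distance between state point i and observation point j
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n - np.abs(idx[:, None] - idx[None, :]))
    taper = np.exp(-0.5 * (dist / loc_radius) ** 2)        # (n, n)
    misfit = (obs[None, :] - ensemble) ** 2 / obs_err ** 2  # (n_p, n)
    log_w = -0.5 * misfit @ taper.T                         # (n_p, n)
    log_w -= log_w.max(axis=0)                              # numerical safety
    w = np.exp(log_w)
    return w / w.sum(axis=0)   # weights normalized at every gridpoint

rng = np.random.default_rng(0)
ens = rng.normal(size=(20, 40))   # 20 particles on a 40-dim state
w = localized_weights(ens, obs=np.zeros(40), obs_err=1.0, loc_radius=4.0)
```

Because each column of w involves only a handful of effective observations, the weight variance no longer grows with the full state dimension, which is the mechanism that prevents collapse.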
Evaluation of simplified stream-aquifer depletion models for water rights administration
Sophocleous, Marios; Koussis, Antonis; Martin, J.L.; Perkins, S.P.
1995-01-01
We assess the predictive accuracy of Glover's (1974) stream-aquifer analytical solutions, which are commonly used in administering water rights, and evaluate the impact of the assumed idealizations on administrative and management decisions. To achieve these objectives, we evaluate the predictive capabilities of the Glover stream-aquifer depletion model against the MODFLOW numerical standard, which, unlike the analytical model, can handle increasing hydrogeologic complexity. We rank-order and quantify the relative importance of the various assumptions on which the analytical model is based, the three most important being: (1) streambed clogging as quantified by streambed-aquifer hydraulic conductivity contrast; (2) degree of stream partial penetration; and (3) aquifer heterogeneity. These three factors relate directly to the multidimensional nature of the aquifer flow conditions. From these considerations, future efforts to reduce the uncertainty in stream depletion-related administrative decisions should primarily address these three factors in characterizing the stream-aquifer process. We also investigate the impact of progressively coarser model grid size on numerically estimating stream leakage and conclude that grid size effects are relatively minor. Therefore, when modeling is required, coarser model grids could be used thus minimizing the input data requirements.
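Glover's analytical solution assessed above gives the fraction of pumping drawn from the stream in closed form. This sketch implements the standard Glover expression for a fully penetrating stream (parameter values and units are illustrative):

```python
from math import erfc, sqrt

def glover_depletion_fraction(d, T, S, t):
    """Glover's analytical stream-depletion fraction for a well pumping
    at a constant rate near a fully penetrating stream in an idealized
    homogeneous aquifer: q/Q = erfc( sqrt(S * d**2 / (4 * T * t)) ).
    d: well-stream distance [m], T: transmissivity [m^2/d],
    S: storativity [-], t: time since pumping began [d]."""
    if t <= 0:
        return 0.0
    return erfc(sqrt(S * d * d / (4.0 * T * t)))

# Depletion grows monotonically toward 1 as pumping continues.
early = glover_depletion_fraction(d=300.0, T=500.0, S=0.2, t=1.0)
late = glover_depletion_fraction(d=300.0, T=500.0, S=0.2, t=1000.0)
```

The idealizations ranked in the paper (no streambed clogging, full penetration, homogeneity) are exactly the assumptions baked into this one-line formula, which is why the MODFLOW comparison is informative.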
"Hot Spots" of Land Atmosphere Coupling
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Dirmeyer, Paul A.; Guo, Zhi-Chang; Bonan, Gordan; Chan, Edmond; Cox, Peter; Gordon, T. C.; Kanae, Shinjiro; Kowalczyk, Eva; Lawrence, David
2004-01-01
Previous estimates of land-atmosphere interaction (the impact of soil moisture on precipitation) have been limited by a severe paucity of relevant observational data and by the model-dependence of the various computational estimates. To counter this limitation, a dozen climate modeling groups have recently performed the same highly-controlled numerical experiment as part of a coordinated intercomparison project. This allows, for the first time ever, a superior multi-model approach to the estimation of the regions on the globe where precipitation is affected by soil moisture anomalies during Northern Hemisphere summer. Such estimation has many potential benefits; it can contribute, for example, to seasonal rainfall prediction efforts.
Models for Type Ia Supernovae and Related Astrophysical Transients
NASA Astrophysics Data System (ADS)
Röpke, Friedrich K.; Sim, Stuart A.
2018-06-01
We give an overview of recent efforts to model Type Ia supernovae and related astrophysical transients resulting from thermonuclear explosions in white dwarfs. In particular we point out the challenges resulting from the multi-physics multi-scale nature of the problem and discuss possible numerical approaches to meet them in hydrodynamical explosion simulations and radiative transfer modeling. We give examples of how these methods are applied to several explosion scenarios that have been proposed to explain distinct subsets or, in some cases, the majority of the observed events. In each case, we comment on some of the successes and shortcomings of these scenarios and highlight important outstanding issues.
A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
Brillouin zone grid refinement for highly resolved ab initio THz optical properties of graphene
NASA Astrophysics Data System (ADS)
Warmbier, Robert; Quandt, Alexander
2018-07-01
Optical spectra of materials can in principle be calculated within numerical frameworks based on Density Functional Theory. The huge numerical effort involved in these methods severely constrains the accuracy achievable in practice. In the case of the THz spectrum of graphene, the primary limitation lies in the density of the reciprocal space sampling. In this letter we develop a non-uniform sampling using grid refinement to achieve a high local sampling density with only moderate numerical effort. The resulting THz electron energy loss spectrum shows a plasmon signal below 50 meV with an ω(q) ∝ √q dispersion relation.
Martyr-Koller, R.C.; Kernkamp, H.W.J.; Van Dam, Anne A.; Mick van der Wegen,; Lucas, Lisa; Knowles, N.; Jaffe, B.; Fregoso, T.A.
2017-01-01
A linked modeling approach has been undertaken to understand the impacts of climate and infrastructure on aquatic ecology and water quality in the San Francisco Bay-Delta region. The Delft3D Flexible Mesh modeling suite is used in this effort for its 3D hydrodynamics, salinity, temperature and sediment dynamics, phytoplankton and water-quality coupling infrastructure, and linkage to a habitat suitability model. The hydrodynamic model component of the suite is D-Flow FM, a new 3D unstructured finite-volume model based on the Delft3D model. In this paper, D-Flow FM is applied to the San Francisco Bay-Delta to investigate tidal, seasonal and annual dynamics of water levels, river flows and salinity under historical environmental and infrastructural conditions. The model is driven by historical winds, tides, ocean salinity, and river flows, and includes federal, state, and local freshwater withdrawals, and regional gate and barrier operations. The model is calibrated over a 9-month period, and subsequently validated for water levels, flows, and 3D salinity dynamics over a 2-year period. Model performance was quantified using several model assessment metrics and visualized through target diagrams. These metrics indicate that the model accurately estimated water levels, flows, and salinity over wide-ranging tidal and fluvial conditions, and the model can be used to investigate detailed circulation and salinity patterns throughout the Bay-Delta. The hydrodynamics produced through this effort will be used to drive affiliated sediment, phytoplankton, and contaminant hindcast efforts and habitat suitability assessments for fish and bivalves. The modeling framework applied here will serve as a baseline to ultimately shed light on potential ecosystem change over the current century.
NASA Astrophysics Data System (ADS)
Liu, Chi; Ye, Rui; Lian, Liping; Song, Weiguo; Zhang, Jun; Lo, Siuming
2018-05-01
In the context of global aging, how to design traffic facilities for populations with different age compositions is of high importance. For this purpose, we propose a model based on the least effort principle to simulate heterogeneous pedestrian flow. In the model, the pedestrian is represented by a three-disc-shaped agent. We add a new parameter to capture pedestrians' preference to avoid changing their direction of movement too quickly. The model is validated against extensive experimental data on unidirectional pedestrian flow. In addition, we investigate the influence of corridor width and the velocity distribution of crowds on unidirectional heterogeneous pedestrian flow. The simulation results show that widening corridors could increase the specific flow for a crowd composed of two kinds of pedestrians with significantly different free velocities. Moreover, compared with a uniform crowd, a crowd composed of pedestrians with large mobility differences requires a wider corridor to attain the same traffic efficiency. This study could be beneficial in providing a better understanding of heterogeneous pedestrian flow, and the quantified outcomes could be applied in traffic facility design.
A classification procedure for the effective management of changes during the maintenance process
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.
1992-01-01
During software operation, maintainers are often faced with numerous change requests. Given available resources such as effort and calendar time, changes, if approved, have to be planned to fit within budget and schedule constraints. In this paper, we address the issue of assessing the difficulty of a change based on known or predictable data. This paper should be considered as a first step towards the construction of customized economic models for maintainers. In it, we propose a modeling approach, based on regular statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated, and is simple for people with limited statistical experience to use. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach is validated on a data set provided by NASA/GSFC which shows it was effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed along with additional steps to improve the results.
NASA Astrophysics Data System (ADS)
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; Dagotto, Elbio
2015-06-01
Lattice spin-fermion models are important to study correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real-space variant of the ED + MC method that allows one to solve spin-fermion problems on lattices with up to 10^3 sites. In this publication, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites, and with some effort on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.
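The core TCA trick can be sketched in a toy form: each Metropolis update of a classical spin requires diagonalizing only a small cluster around the updated site, not the full lattice. The single-band Hamiltonian, couplings, and cluster size below are entirely illustrative, not the models studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
L, Lc, J, beta = 16, 4, 1.0, 5.0     # lattice size, cluster size, coupling, inverse T
theta = rng.uniform(0, np.pi, L)     # classical spin angles on a periodic chain

def cluster_energy(theta, center):
    # Fermion hopping Hamiltonian on a small cluster around `center`,
    # with a toy Hund-like coupling to the classical spins on its sites.
    sites = [(center + d) % L for d in range(-Lc // 2, Lc // 2)]
    n = len(sites)
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = -1.0       # nearest-neighbour hopping
    for i, s in enumerate(sites):
        H[i, i] = -J * np.cos(theta[s])        # coupling to the classical spin
    ev = np.linalg.eigvalsh(H)
    return ev[ev < 0].sum()                    # fill all negative-energy states

# One Metropolis sweep: each update costs only a small-cluster diagonalization
for site in range(L):
    old = theta[site]
    e_old = cluster_energy(theta, site)
    theta[site] = rng.uniform(0, np.pi)
    dE = cluster_energy(theta, site) - e_old
    if dE > 0 and rng.uniform() > np.exp(-beta * dE):
        theta[site] = old                      # reject the move
print(theta.shape)
```

The parallelization described in the abstract distributes these independent small-cluster diagonalizations across processes, which is what makes 10^4-10^5 sites reachable.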
Comparisons of Calculations with PARTRAC and NOREC: Transport of Electrons in Liquid Water
Dingfelder, M.; Ritchie, R. H.; Turner, J. E.; Friedland, W.; Paretzke, H. G.; Hamm, R. N.
2013-01-01
Monte Carlo computer models that simulate the detailed, event-by-event transport of electrons in liquid water are valuable for the interpretation and understanding of findings in radiation chemistry and radiation biology. Because of the paucity of experimental data, such efforts must rely on theoretical principles and considerable judgment in their development. Experimental verification of numerical input is possible to only a limited extent. Indirect support for model validity can be gained from a comparison of details between two independently developed computer codes as well as the observable results calculated with them. In this study, we compare the transport properties of electrons in liquid water using two such models, PARTRAC and NOREC. Both use interaction cross sections based on plane-wave Born approximations and a numerical parameterization of the complex dielectric response function for the liquid. The models are described and compared, and their similarities and differences are highlighted. Recent developments in the field are discussed and taken into account. The calculated stopping powers, W values, and slab penetration characteristics are in good agreement with one another and with other independent sources. PMID:18439039
Vegetation Demographics in Earth System Models: a review of progress and priorities
Fisher, Rosie A.; Koven, Charles D.; Anderegg, William R. L.; ...
2017-09-18
Numerous current efforts seek to improve the representation of ecosystem ecology and vegetation demographic processes within Earth System Models (ESMs). Furthermore, these developments are widely viewed as an important step in developing greater realism in predictions of future ecosystem states and fluxes. Increased realism, however, leads to increased model complexity, with new features raising a suite of ecological questions that require empirical constraints. We review the developments that permit the representation of plant demographics in ESMs, and identify issues raised by these developments that highlight important gaps in ecological understanding. These issues inevitably translate into uncertainty in model projections but also allow models to be applied to new processes and questions concerning the dynamics of real-world ecosystems. We also argue that stronger and more innovative connections to data, across the range of scales considered, are required to address these gaps in understanding. The development of first-generation land surface models as a unifying framework for ecophysiological understanding stimulated much research into plant physiological traits and gas exchange. Constraining predictions at ecologically relevant spatial and temporal scales will require a similar investment of effort and intensified inter-disciplinary communication.
Peters, J.G.
1987-01-01
The Indiana Department of Natural Resources (IDNR) is developing water-management policies designed to assess the effects of irrigation and other water uses on water supply in the basin. In support of this effort, the USGS, in cooperation with IDNR, began a study to evaluate appropriate methods for analyzing the effects of pumping on ground-water levels and streamflow in the basin's glacial aquifer systems. Four analytical models describe drawdown for a nonleaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and partially penetrating well; and an unconfined aquifer and partially penetrating well. Analytical equations, simplifying assumptions, and methods of application are described for each model. In addition to these four models, several other analytical models were used to predict the effects of ground-water pumping on water levels in the aquifer and on streamflow in local areas with up to two pumping wells. Analytical models for a variety of other hydrogeologic conditions are cited. A digital ground-water flow model was used to describe how a numerical model can be applied to a glacial aquifer system. The numerical model was used to predict the effects of six pumping plans in a 46.5-sq-mi area with as many as 150 wells. Water budgets for the six pumping plans were used to estimate the effect of pumping on streamflow reduction. Results of the analytical and numerical models indicate that, in general, the glacial aquifers in the basin are highly permeable. Radial hydraulic conductivity calculated by the analytical models ranged from 280 to 600 ft/day, compared to 210 and 360 ft/day used in the numerical model. Maximum seasonal pumping for irrigation produced a maximum calculated drawdown of only one-fourth of available drawdown and reduced streamflow by as much as 21%.
Analytical models are useful in estimating aquifer properties and predicting local effects of pumping in areas with simple lithology and boundary conditions and with few pumping wells. Numerical models are useful in regional studies with complex hydrogeology and many pumping wells, and they provide detailed water budgets useful for estimating the sources of water in pumping simulations. Numerical models are also useful in constructing flow nets. The choice of which type of model to use is also based on the nature and scope of questions to be answered and on the degree of accuracy required. (Author's abstract)
Changes and challenges in the Software Engineering Laboratory
NASA Technical Reports Server (NTRS)
Pajerski, Rose
1994-01-01
Since 1976, the Software Engineering Laboratory (SEL) has been dedicated to understanding and improving the way in which one NASA organization, the Flight Dynamics Division (FDD), develops, maintains, and manages complex flight dynamics systems. The SEL is composed of three member organizations: NASA/GSFC, the University of Maryland, and Computer Sciences Corporation. During the past 18 years, the SEL's overall goal has remained the same: to improve the FDD's software products and processes in a measured manner. This requires that each development and maintenance effort be viewed, in part, as a SEL experiment which examines a specific technology or builds a model of interest for use on subsequent efforts. The SEL has undertaken many technology studies while developing operational support systems for numerous NASA spacecraft missions.
An overview of C. elegans biology.
Strange, Kevin
2006-01-01
The establishment of Caenorhabditis elegans as a "model organism" began with the efforts of Sydney Brenner in the early 1960s. Brenner's focus was to find a suitable animal model in which the tools of genetic analysis could be used to define molecular mechanisms of development and nervous system function. C. elegans provides numerous experimental advantages for such studies. These advantages include a short life cycle, production of large numbers of offspring, easy and inexpensive laboratory culture, forward and reverse genetic tractability, and a relatively simple anatomy. This chapter will provide a brief overview of C. elegans biology.
Rappel, Wouter-Jan; Loomis, William F.
2009-01-01
During eukaryotic chemotaxis, external chemical gradients guide the crawling motion of cells. This process plays an important role in a large variety of biological systems and has wide ranging medical implications. New experimental techniques including confocal microscopy and microfluidics have advanced our understanding of chemotaxis while numerical modeling efforts are beginning to offer critical insights. In this short review, we survey the current experimental status of the field by dividing chemotaxis into three distinct “modules”: directional sensing, polarity and motility. For each module, we attempt to point out potential new directions of research and discuss how modeling studies interact with experimental investigations. PMID:20648241
NASA Technical Reports Server (NTRS)
Bernhard, R. J.; Bolton, J. S.
1988-01-01
The objectives are: measurement of dynamic properties of acoustical foams and incorporation of these properties in models governing three-dimensional wave propagation in foams; tests to measure sound transmission paths in the HP137 Jetstream 3; and formulation of a finite element energy model. In addition, the effort to develop a numerical/empirical noise source identification technique was completed. The investigation of a design optimization technique for active noise control was also completed. Monthly progress reports which detail the progress made toward each of the objectives are summarized.
Analysis of neutral beam driven impurity flow reversal in PLT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malik, M.A.; Stacey, W.M. Jr.; Thomas, C.E.
1986-10-01
The Stacey-Sigmar impurity transport theory for tokamak plasmas is applied to the analysis of experimental data from the PLT tokamak with a tungsten limiter. The drag term, which is a central piece in the theory, is evaluated from the recently developed gyroviscous theory for radial momentum transfer. An effort is made to base the modeling of the experiment on measured quantities. Where measured data is not available, recourse is made to extrapolation or numerical modeling. The theoretical and the experimental tungsten fluxes are shown to agree very closely within the uncertainties of the experimental data.
Automated procedures for sizing aerospace vehicle structures /SAVES/
NASA Technical Reports Server (NTRS)
Giles, G. L.; Blackburn, C. L.; Dixon, S. C.
1972-01-01
Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development called SAVES is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. Grid generation can be tedious, and special attention must be paid in the numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
Zephyr: Open-source Parallel Seismic Waveform Inversion in an Integrated Python-based Framework
NASA Astrophysics Data System (ADS)
Smithyman, B. R.; Pratt, R. G.; Hadden, S. M.
2015-12-01
Seismic Full-Waveform Inversion (FWI) is an advanced method to reconstruct wave properties of materials in the Earth from a series of seismic measurements. These methods have been developed by researchers since the late 1980s, and now see significant interest from the seismic exploration industry. As researchers move towards implementing advanced numerical modelling (e.g., 3D, multi-component, anisotropic and visco-elastic physics), it is desirable to make use of a modular approach, minimizing the effort of developing a new set of tools for each new numerical problem. SimPEG (http://simpeg.xyz) is an open source project aimed at constructing a general framework to enable geophysical inversion in various domains. In this abstract we describe Zephyr (https://github.com/bsmithyman/zephyr), which is a coupled research project focused on parallel FWI in the seismic context. The software is built on top of Python, Numpy and IPython, which enables very flexible testing and implementation of new features. Zephyr is an open source project, and is released freely to enable reproducible research. We currently implement a parallel, distributed seismic forward modelling approach that solves the 2.5D (two-and-one-half dimensional) viscoacoustic Helmholtz equation at a range of modelling frequencies, generating forward solutions for a given source behaviour, and gradient solutions for a given set of observed data. Solutions are computed in a distributed manner on a set of heterogeneous workers. The researcher's frontend computer may be separated from the worker cluster by a network link to enable full support for computation on remote clusters from individual workstations or laptops. The present codebase introduces a numerical discretization equivalent to that used by FULLWV, a well-known seismic FWI research codebase. This makes it straightforward to compare results from Zephyr directly with FULLWV.
The flexibility introduced by the use of a Python programming environment makes extension of the codebase with new methods much more straightforward. This enables comparison and integration of new efforts with existing results.
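The kind of frequency-domain forward modelling described above can be sketched in one dimension; this is a toy illustration with made-up grid and velocity values, not Zephyr's actual 2.5D discretization. A finite-difference Helmholtz system is assembled and solved with SciPy, with a complex velocity supplying viscoacoustic damping:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# 1D frequency-domain acoustic forward model: (d^2/dx^2 + omega^2/c^2) u = -s
n, dx = 200, 5.0                          # grid points, spacing in metres
c = np.full(n, 2000.0) * (1 - 0.02j)      # complex velocity adds viscoacoustic damping
freq = 10.0                               # modelling frequency in Hz
omega = 2 * np.pi * freq
k2 = (omega / c) ** 2

# Second-order finite-difference Helmholtz operator (Dirichlet boundaries)
main = -2.0 / dx**2 + k2
A = diags([np.full(n - 1, 1.0 / dx**2), main, np.full(n - 1, 1.0 / dx**2)],
          offsets=[-1, 0, 1], format='csc')

s = np.zeros(n, complex)
s[n // 2] = 1.0                           # point source in the middle of the model
u = spsolve(A, -s)                        # monochromatic complex wavefield
print(u.shape)
```

In an FWI loop, one such solve is performed per source and per modelling frequency, and the same factorized operator is reused for the adjoint (gradient) solves.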
The History and Implications of Design Standards for Underwater Breathing Apparatus - 1954 to 2015
2015-02-11
respiratory loading using both simple models of fluid mechanics and experimental evidence. An understanding of the influence of both respiratory ventilatory... fluid dynamics of flow in divers' airways. It allows testing laboratories to make maximum use of all of their testing data, and to present that data in...tireless efforts of numerous military divers at Navy Experimental Diving Unit in Panama City, FL and Naval Medical Research Institute, Bethesda, MD
Nonlinear Geometric Effects in Mechanical Bistable Morphing Structures
NASA Astrophysics Data System (ADS)
Chen, Zi; Guo, Qiaohang; Majidi, Carmel; Chen, Wenzhe; Srolovitz, David J.; Haataja, Mikko P.
2012-09-01
Bistable structures associated with nonlinear deformation behavior, exemplified by the Venus flytrap and slap bracelet, can switch between different functional shapes upon actuation. Despite numerous efforts in modeling such large deformation behavior of shells, the roles of mechanical and nonlinear geometric effects on bistability remain elusive. We demonstrate, through both theoretical analysis and tabletop experiments, that two dimensionless parameters control bistability. Our work classifies the conditions for bistability, and extends the large deformation theory of plates and shells.
Hydroclimate Forecasts in Ethiopia: Benefits, Impediments, and Ways Forward
NASA Astrophysics Data System (ADS)
Block, P. J.
2014-12-01
Numerous hydroclimate forecast models, tools, and guidance exist for application across Ethiopia and East Africa in the agricultural, water, energy, disasters, and economic sectors. This has resulted from concerted local and international interdisciplinary efforts, yet little evidence exists of rapid forecast uptake and use. We will review projected benefits and gains of seasonal forecast application, impediments, and options for the way forward. Specific case studies regarding floods, agricultural-economic links, and hydropower will be reviewed.
Rapid Prediction of Unsteady Three-Dimensional Viscous Flows in Turbopump Geometries
NASA Technical Reports Server (NTRS)
Dorney, Daniel J.
1998-01-01
A program is underway to improve the efficiency of a three-dimensional Navier-Stokes code and generalize it for nozzle and turbopump geometries. Code modifications will include the implementation of parallel processing software, incorporating new physical models and generalizing the multi-block capability to allow the simultaneous simulation of nozzle and turbopump configurations. The current report contains details of code modifications, numerical results of several flow simulations and the status of the parallelization effort.
Studies of Premixed Laminar and Turbulent Flames at Microgravity
NASA Technical Reports Server (NTRS)
Kwon, O. C.; Abid, M.; Porres, J.; Liu, J. B.; Ronney, P. D.; Struk, P. M.; Weiland, K. J.
2003-01-01
Several topics relating to premixed flame behavior at reduced gravity have been studied. These topics include: (1) flame balls; (2) flame structure and stability at low Lewis number; (3) experimental simulation of buoyancy effects in premixed flames using aqueous autocatalytic reactions; and (4) premixed flame propagation in Hele-Shaw cells. Because of space limitations, only topic (1) is discussed here, emphasizing results from experiments on the recent STS-107 Space Shuttle mission, along with numerical modeling efforts.
Experimental and Numerical Study on Supersonic Ejectors Working with R-1234ze(E)
NASA Astrophysics Data System (ADS)
Kracik, Jan; Dvorak, Vaclav; Nguyen Van, Vu; Smierciew, Kamil
2018-06-01
These days, much effort is being put into lowering the consumption of electric energy and incorporating renewable energy sources. Many engineers and designers worldwide are trying to develop environment-friendly technologies, which entails incorporating appropriate devices into them. The object of this paper is to investigate such devices in connection with refrigeration systems; ejectors are one example. The primary interest of this paper is to assess the suitability of a numerical model for an ejector incorporated into a refrigeration system. In the present paper, seven different test runs of the ejector operating with the working fluid R-1234ze(E) are investigated. Some of the investigated cases show good agreement with no significant discrepancies, while other cases do not correspond to the experimental data at all. The ejector has been investigated in both on-design and off-design working modes. A comparison between the experimental and numerical (CFD) data obtained with Ansys Fluent software is presented and discussed for both an ideal and a real gas model. In addition, an enhanced analytical model has been introduced for all runs of the ejector.
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
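A minimal CPU-side sketch of the Jacobi-preconditioned CG idea, using SciPy rather than the paper's CUDA implementation: the system below is a toy banded SPD matrix, analogous in structure (but not in physics) to the finite-difference PBE linear systems discussed above.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Toy 1D finite-difference SPD system A x = b, banded like FD grid operators
n = 500
A = diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
          offsets=[-1, 0, 1], format='csr')
b = np.ones(n)

# Jacobi preconditioner: apply M^{-1} = diag(A)^{-1} as a LinearOperator
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=M, maxiter=5000)   # info == 0 signals convergence
print(info, np.allclose(A @ x, b, atol=1e-3))
```

The Jacobi preconditioner is attractive on GPUs precisely because its application is an elementwise multiply, which maps trivially onto thousands of threads.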
Benchmarking on Tsunami Currents with ComMIT
NASA Astrophysics Data System (ADS)
Sharghi vand, N.; Kanoglu, U.
2015-12-01
Before the 2004 Indian Ocean tsunami, there were no standards for the validation and verification of tsunami numerical models. Even so, numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental, and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the 2011 Japan tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement; the results are discussed.
Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)
Silvestrini, Nicolas
2017-09-01
Numerous studies have assessed cardiovascular (CV) reactivity as a measure of effort mobilization during cognitive tasks. However, the psychological and neural processes underlying effort-related CV reactivity are still relatively unclear. Previous research reliably found that CV reactivity during cognitive tasks is mainly determined by one region of the brain, the dorsal anterior cingulate cortex (dACC), and that this region is systematically engaged during cognitively demanding tasks. The present integrative approach builds on the research on cognitive control and its brain correlates, which shows that dACC function can be related to conflict monitoring and integration of information related to task difficulty and success importance, two key variables in determining effort mobilization. In contrast, evidence also indicates that executive cognitive functioning is processed in more lateral regions of the prefrontal cortex. The resulting model suggests that, when automatic cognitive processes are insufficient to sustain behavior, the dACC determines the amount of required and justified effort according to task difficulty and success importance, which leads to proportional adjustments in CV reactivity and executive cognitive functioning. These propositions are discussed in relation to previous findings on effort-related CV reactivity and cognitive performance, new predictions for future studies, and relevance for other self-regulatory processes. Copyright © 2016 Elsevier B.V. All rights reserved.
An optimal control strategies using vaccination and fogging in dengue fever transmission model
NASA Astrophysics Data System (ADS)
Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan
2017-08-01
This paper discusses a model and an optimal control problem of dengue fever transmission. We classify the model into human and vector (mosquito) population classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we design two optimal control variables in the model: fogging and vaccination. The objective of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We consider vaccination as a control variable because it is one of the measures being developed to reduce the spread of dengue fever. We use Pontryagin's Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
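The qualitative effect of the two controls can be sketched with a simplified four-compartment toy model under constant (non-optimized) control levels; this is not the paper's six-equation system, and all parameters are illustrative:

```python
# Toy host-vector dengue sketch with constant control levels
# (u1 = vaccination rate, u2 = extra vector mortality from fogging).
def simulate(u1, u2, days=200, dt=0.05):
    S, I = 0.99, 0.01          # susceptible / infected human fractions
    Sv, Iv = 0.9, 0.1          # susceptible / infected vector fractions
    bh, bv, g, m = 0.5, 0.4, 0.1, 0.05   # transmission, recovery, vector turnover
    cum = 0.0                  # cumulative new human infections
    for _ in range(int(days / dt)):      # forward Euler time stepping
        new_inf = bh * S * Iv
        dS = -new_inf - u1 * S                  # vaccination removes susceptibles
        dI = new_inf - g * I
        dSv = m - bv * Sv * I - (m + u2) * Sv   # fogging raises vector mortality
        dIv = bv * Sv * I - (m + u2) * Iv
        S += dt * dS; I += dt * dI
        Sv += dt * dSv; Iv += dt * dIv
        cum += dt * new_inf
    return cum

baseline = simulate(0.0, 0.0)
controlled = simulate(0.02, 0.1)
print(controlled < baseline)   # controls reduce cumulative infections
```

The optimal control problem replaces these constant levels with time-varying u1(t), u2(t) chosen via Pontryagin's principle to balance infection reduction against control cost.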
Developing recreational harvest regulations for an unexploited lake trout population
Lenker, Melissa A; Weidel, Brian C.; Jensen, Olaf P.; Solomon, Christopher T.
2016-01-01
Developing fishing regulations for previously unexploited populations presents numerous challenges, many of which stem from a scarcity of baseline information about abundance, population productivity, and expected angling pressure. We used simulation models to test the effect of six management strategies (catch and release; trophy, minimum, and maximum length limits; and protected and exploited slot length limits) on an unexploited population of Lake Trout Salvelinus namaycush in Follensby Pond, a 393-ha lake located in New York State’s Adirondack Park. We combined field and literature data and mark–recapture abundance estimates to parameterize an age-structured population model and used the model to assess the effects of each management strategy on abundance, catch per unit effort (CPUE), and harvest over a range of angler effort (0–2,000 angler-days/year). Lake Trout density (3.5 fish/ha for fish ≥ age 13, the estimated age at maturity) was similar to densities observed in other unexploited systems, but growth rate was relatively slow. Maximum harvest occurred at levels of effort ≤ 1,000 angler-days/year in all the scenarios considered. Regulations that permitted harvest of large postmaturation fish, such as New York’s standard Lake Trout minimum size limit or a trophy size limit, resulted in low harvest and high angler CPUE. Regulations that permitted harvest of small and sometimes immature fish, such as a protected slot or maximum size limit, allowed high harvest but resulted in low angler CPUE and produced rapid declines in harvest with increases in effort beyond the effort consistent with maximum yield. Management agencies can use these results to match regulations to management goals and to assess the risks of different management options for unexploited Lake Trout populations and other fish species with similar life history traits.
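The harvest/CPUE trade-off described above can be sketched with a toy age-structured, fixed-recruitment model under a minimum-age (knife-edge maturity) limit; the parameters below are illustrative, not the Follensby Pond estimates:

```python
import numpy as np

ages = np.arange(1, 31)     # age classes 1..30
M = 0.15                    # natural mortality (per year)
recruits = 1000.0           # fixed annual recruitment at age 1
q = 0.001                   # catchability per angler-day of effort
min_age = 13                # knife-edge harvest at the estimated age at maturity

def equilibrium_yield(effort):
    F = q * effort                      # fishing mortality rate
    N, harvest = recruits, 0.0
    for a in ages:
        Fa = F if a >= min_age else 0.0
        Z = M + Fa                      # total mortality for this age class
        # Baranov catch equation over one year
        harvest += (Fa / Z) * N * (1.0 - np.exp(-Z)) if Fa > 0 else 0.0
        N *= np.exp(-Z)                 # survivors advance to the next age
    return harvest

y500, y2000 = equilibrium_yield(500), equilibrium_yield(2000)
print(y2000 > y500, y2000 / 2000 < y500 / 500)  # more harvest, but lower CPUE
```

Even in this stripped-down form, harvest rises with effort while catch per unit effort falls, the diminishing-returns pattern that the full simulations quantify for each regulation type.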
Solution of the lossy nonlinear Tricomi equation with application to sonic boom focusing
NASA Astrophysics Data System (ADS)
Salamone, Joseph A., III
Sonic boom focusing theory has been augmented with new terms that account for mean flow effects in the direction of propagation and for atmospheric absorption/dispersion due to molecular relaxation of oxygen and nitrogen. The newly derived model equation was implemented numerically in a computer code. The computer code was numerically validated using a spectral solution for nonlinear propagation of a sinusoid through a lossy homogeneous medium. An additional numerical check was performed to verify the linear diffraction component of the code calculations. The computer code was experimentally validated using measured sonic boom focusing data from the NASA-sponsored Superboom Caustic Analysis and Measurement Program (SCAMP) flight test. The code results were in good agreement with both the numerical and experimental validation data. The newly developed code was applied to examine the focusing of a NASA low-boom demonstration vehicle concept. The resulting pressure field was calculated for several supersonic climb profiles. The shaping efforts designed into the signatures were still somewhat evident despite the effects of sonic boom focusing.
Numerical Analysis of AHSS Fracture in a Stretch-bending Test
NASA Astrophysics Data System (ADS)
Luo, Meng; Chen, Xiaoming; Shi, Ming F.; Shih, Hua-Chu
2010-06-01
Advanced High Strength Steels (AHSS) are increasingly used in the automotive industry due to their superior strength and substantial weight reduction advantage. However, their limited ductility gives rise to numerous manufacturing issues. One of them is the so-called 'shear fracture' often observed on tight radii during stamping processes. Since traditional approaches, such as the Forming Limit Diagram (FLD), are unable to predict this type of fracture, efforts have been made to develop failure criteria that can predict shear fractures. In this paper, a recently developed Modified Mohr-Coulomb (MMC) ductile fracture criterion [1] is adopted to analyze the failure behavior of a Dual Phase (DP) steel sheet during stretch bending operations. The plasticity and ductile fracture of the present sheet are fully characterized by the Hill'48 orthotropic model and the MMC fracture model, respectively. Finite element models with three different element types (3D, shell and plane strain) were built for a Stretch Forming Simulator (SFS) test, and numerical simulations with four different R/t ratios (die radius normalized by sheet thickness) were performed. It is shown that the 3D and shell element models can accurately predict the failure location/mode, the upper die load-displacement responses, as well as the wall stress and wrap angle at the onset of fracture for all R/t ratios. Furthermore, a series of parametric studies were conducted on the 3D element model, and the effects of tension level (clamping distance) and tooling friction on the failure modes/locations were investigated.
Optimal harvesting policy of predator-prey model with free fishing and reserve zones
NASA Astrophysics Data System (ADS)
Toaha, Syamsuddin; Rustam
2017-03-01
The present paper deals with optimal harvesting in a predator-prey model of an ecosystem that consists of two zones, a free fishing zone and a prohibited (reserve) zone. The prey population can migrate from the free fishing zone to the prohibited zone and vice versa. The predator and prey populations in the free fishing zone are harvested with constant efforts. The existence of the interior equilibrium point is analyzed and its stability is determined using the Routh-Hurwitz stability test. The stable interior equilibrium point is then related to the problems of maximum profit and of the present value of net revenue. We follow Pontryagin's maximum principle to obtain the optimal harvesting policy for the present value of net revenue. From the analysis, we find a critical point of the efforts that maximizes profit; there also exist conditions on the efforts under which the present value of net revenue is maximal. In addition, the interior equilibrium point is locally asymptotically stable, which means that the optimal harvesting is reached and the unharvested prey, harvested prey, and harvested predator populations remain sustainable. Numerical examples are given to verify the analytical results.
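For a two-species reduction of such a harvested system (the reserve-zone migration terms omitted), the Routh-Hurwitz stability check on the interior equilibrium can be sketched as follows; the model form, parameter values, and harvesting efforts E1, E2 below are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Illustrative harvested predator-prey model (NOT the paper's exact system):
# prey:     dx/dt = x(1 - x) - x z - E1 x
# predator: dz/dt = z(x - d) - E2 z
# E1, E2 are constant harvesting efforts; d is predator mortality.
E1, E2, d = 0.1, 0.1, 0.5

def f(u):
    x, z = u
    return np.array([x * (1 - x) - x * z - E1 * x,
                     z * (x - d) - E2 * z])

# Interior equilibrium from setting both rates to zero:
x_star = d + E2            # from the predator equation
z_star = 1 - x_star - E1   # from the prey equation
eq = np.array([x_star, z_star])

# Routh-Hurwitz for a 2x2 Jacobian: stable iff trace < 0 and det > 0.
eps = 1e-6
J = np.array([(f(eq + eps * e) - f(eq - eps * e)) / (2 * eps)
              for e in np.eye(2)]).T
stable = np.trace(J) < 0 and np.linalg.det(J) > 0
```

With these sample values the equilibrium is (0.6, 0.3) and both Routh-Hurwitz conditions hold, so the harvested populations coexist stably.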
NASA Astrophysics Data System (ADS)
Koch, Stefan; Mitlöhner, Johann
2010-08-01
ERP implementation projects have received enormous attention in recent years, due to their importance for organisations, as well as the costs and risks involved. The estimation of effort and costs associated with new projects is therefore an important topic. Unfortunately, there is still a lack of models that can cope with the special characteristics of these projects. As the main focus lies in adapting and customising a complex system, and even changing the organisation, traditional models like COCOMO cannot easily be applied. In this article, we apply effort estimation based on social choice in this context. Social choice deals with aggregating the preferences of a number of voters into a collective preference, and we apply this idea by substituting project attributes for the voters. Therefore, instead of supplying numeric values for various project attributes, a new project only needs to be placed into a ranking per attribute, necessitating only ordinal values, and the resulting aggregate ranking can be used to derive an estimate. We describe the estimation process using a data set of 39 projects, and compare the results to other approaches proposed in the literature.
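The aggregation step can be sketched with a Borda count, one of the standard social-choice rules, where each project attribute acts as a "voter" that ranks past projects; the attribute names and project labels below are hypothetical.

```python
# Hypothetical mini data set: each attribute ranks projects from
# smallest expected effort to largest (ordinal information only).
rankings = {
    "modules_customised": ["A", "B", "C", "D"],
    "users":              ["B", "A", "C", "D"],
    "interfaces":         ["A", "C", "B", "D"],
}

def borda_aggregate(rankings):
    """Aggregate per-attribute rankings into one collective ranking.
    Borda count: a project scores its position in each ranking,
    and lower total score means lower aggregate effort rank."""
    scores = {}
    for order in rankings.values():
        for pos, proj in enumerate(order):
            scores[proj] = scores.get(proj, 0) + pos
    return sorted(scores, key=lambda p: scores[p])

collective = borda_aggregate(rankings)
```

A new project would be inserted into each per-attribute ranking, and its position in the collective ranking then maps to the known efforts of its neighbours.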
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai
2016-09-01
The driver's bounded rationality has significant influences on micro driving behavior, and researchers have proposed traffic flow models that account for it. However, little effort has been made to explore its effects on trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on his running cost and the system's total cost under three definitions of traffic running cost. The numerical results show that considering the driver's bounded rationality increases both the driver's running cost and the system's total cost under all three definitions.
Marko, Matthew David; Kyle, Jonathan P; Wang, Yuanyuan Sabrina; Terrell, Elon J
2017-01-01
An effort was made to study and characterize the evolution of transient tribological wear in the presence of sliding contact. Sliding contact is often characterized experimentally via the standard ASTM D4172 four-ball test, and these tests were conducted for varying times ranging from 10 seconds to 1 hour, as well as at varying temperatures and loads. A numerical model was developed to simulate the evolution of wear in the elastohydrodynamic regime. This model uses the results of a Monte Carlo study to develop novel empirical equations for wear rate as a function of asperity height and lubricant thickness; these equations closely represented the experimental data and successfully modeled the sliding contact.
Modelling low velocity impact induced damage in composite laminates
NASA Astrophysics Data System (ADS)
Shi, Yu; Soutis, Constantinos
2017-12-01
The paper presents recent progress on modelling low velocity impact induced damage in fibre reinforced composite laminates. It is important to understand the mechanisms of barely visible impact damage (BVID) and how it affects structural performance. To reduce labour intensive testing, the development of finite element (FE) techniques for simulating impact damage becomes essential and recent effort by the composites research community is reviewed in this work. The FE predicted damage initiation and propagation can be validated by Non Destructive Techniques (NDT) that gives confidence to the developed numerical damage models. A reliable damage simulation can assist the design process to optimise laminate configurations, reduce weight and improve performance of components and structures used in aircraft construction.
Model predictive control for spacecraft rendezvous in elliptical orbit
NASA Astrophysics Data System (ADS)
Li, Peng; Zhu, Zheng H.
2018-05-01
This paper studies the control of spacecraft rendezvous with attitude-stable or spinning targets in an elliptical orbit. The linearized Tschauner-Hempel equation is used to describe the motion of the spacecraft, and the problem is formulated via model predictive control. The control objective is to maximize control accuracy and smoothness simultaneously, avoiding unexpected changes or overshoot of the trajectory for safe rendezvous; it is achieved by minimizing weighted summations of control errors and control increments. The effects of the two horizons (control and prediction) in the model predictive control are examined in terms of fuel consumption, rendezvous time and computational effort. The numerical results show the proposed control strategy is effective.
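The "weighted summation of control errors and increments" idea can be sketched with an unconstrained MPC on a one-axis double integrator; the dynamics, horizons, and weights below are illustrative assumptions, not the paper's Tschauner-Hempel formulation.

```python
import numpy as np

# Minimal unconstrained MPC sketch (illustrative 1-D double-integrator
# dynamics, not the paper's Tschauner-Hempel model).
dt = 0.5
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])            # output: relative position

Np, Nc = 30, 8                        # prediction / control horizons
qe, qdu = 1.0, 20.0                   # error weight / increment weight

def predict(x0, u_seq):
    """Simulate outputs y_1..y_Np; input held at u_seq[-1] beyond Nc."""
    x, ys = x0.copy(), []
    for k in range(Np):
        u = u_seq[min(k, Nc - 1)]
        x = A @ x + B.flatten() * u
        ys.append(float(C @ x))
    return np.array(ys)

def mpc_step(x0, u_prev, r=0.0):
    # Linearity: y = y_free + Phi @ u, with Phi built by simulation.
    y_free = predict(x0, np.zeros(Nc))
    Phi = np.column_stack([predict(np.zeros(2), np.eye(Nc)[j])
                           for j in range(Nc)])
    D = np.eye(Nc) - np.eye(Nc, k=-1)  # increments: u_k - u_{k-1}
    b = np.zeros(Nc); b[0] = u_prev
    # Minimise qe*||y - r||^2 + qdu*||D u - b||^2 as stacked least squares.
    M = np.vstack([np.sqrt(qe) * Phi, np.sqrt(qdu) * D])
    v = np.concatenate([np.sqrt(qe) * (r - y_free), np.sqrt(qdu) * b])
    u = np.linalg.lstsq(M, v, rcond=None)[0]
    return u[0]                        # receding horizon: apply first input

# Closed loop: drive relative position from 10 m toward 0.
x, u_prev, hist = np.array([10.0, 0.0]), 0.0, []
for _ in range(80):
    u_prev = mpc_step(x, u_prev)
    x = A @ x + B.flatten() * u_prev
    hist.append(x[0])
```

Raising `qdu` relative to `qe` trades rendezvous speed for smoother input histories, which is exactly the accuracy-versus-smoothness trade-off the abstract describes.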
Numerical methods for the weakly compressible Generalized Langevin Model in Eulerian reference frame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azarnykh, Dmitrii, E-mail: d.azarnykh@tum.de; Litvinov, Sergey; Adams, Nikolaus A.
2016-06-01
A well established approach for the computation of turbulent flow without resolving all turbulent flow scales is to solve a filtered or averaged set of equations, and to model non-resolved scales by closures derived from transported probability density functions (PDF) for velocity fluctuations. Effective numerical methods for PDF transport employ the equivalence between the Fokker–Planck equation for the PDF and a Generalized Langevin Model (GLM), and compute the PDF by transporting a set of sampling particles by GLM (Pope (1985) [1]). The natural representation of GLM is a system of stochastic differential equations in a Lagrangian reference frame, typically solved by particle methods. A representation in an Eulerian reference frame, however, has the potential to significantly reduce computational effort and to allow for the seamless integration into an Eulerian-frame numerical flow solver. GLM in an Eulerian frame (GLMEF) formally corresponds to the nonlinear fluctuating hydrodynamic equations derived by Nakamura and Yoshimori (2009) [12]. Unlike the more common Landau–Lifshitz Navier–Stokes (LLNS) equations, these equations are derived from the underdamped Langevin equation and are not based on a local equilibrium assumption. Similarly to the LLNS equations, the numerical solution of GLMEF requires special considerations. In this paper we investigate different numerical approaches to solving GLMEF with respect to the correct representation of stochastic properties of the solution. We find that a discretely conservative staggered finite-difference scheme, adapted from a scheme originally proposed for turbulent incompressible flow, in conjunction with a strongly stable (for non-stochastic PDE) Runge–Kutta method performs better for GLMEF than schemes adopted from those proposed previously for the LLNS equations. We show that equilibrium stochastic fluctuations are correctly reproduced.
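The "correct representation of stochastic properties" criterion can be illustrated on the simplest Langevin building block: for one velocity component with constant coefficients, the GLM reduces to an Ornstein-Uhlenbeck equation du = -(u/tau) dt + sqrt(2 sigma^2/tau) dW, whose stationary variance is sigma^2. A scheme is fluctuation-correct if it reproduces that variance; the Euler-Maruyama check below is a generic sketch, not the paper's staggered scheme.

```python
import numpy as np

# Euler-Maruyama integration of the OU (underdamped Langevin) equation
#   du = -(u / tau) dt + sqrt(2 sigma^2 / tau) dW,
# then compare the empirical equilibrium variance against sigma^2.
rng = np.random.default_rng(0)
tau, sigma2 = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 5000, 2000   # 50 relaxation times

u = np.zeros(n_paths)
noise_std = np.sqrt(2.0 * sigma2 / tau * dt)
for _ in range(n_steps):
    u += -(u / tau) * dt + noise_std * rng.standard_normal(n_paths)

var_hat = u.var()   # should be close to sigma2 (with O(dt) scheme bias)
```

The small residual bias of Euler-Maruyama (its discrete stationary variance is sigma^2/(1 - dt/(2 tau)), to leading order) is exactly the kind of artifact the paper's comparison of schemes is designed to expose.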
NASA Technical Reports Server (NTRS)
Zobler, L.; Lewis, R.
1988-01-01
The long-term purpose was to contribute to scientific understanding of the role of the planet's land surfaces in modulating the flows of energy and matter which influence the climate, and to quantify and monitor human-induced changes to the land environment that may affect global climate. Highlights of the effort include the following: production of geo-coded, digitized World Soil Data file for use with the Goddard Institute for Space Studies (GISS) climate model; contribution to the development of a numerical physically-based model of ground hydrology; and assessment of the utility of remote sensing for providing data on hydrologically significant land surface variables.
NASA Technical Reports Server (NTRS)
Marchese, Anthony J.; Dryer, Frederick L.
1997-01-01
This program supports the engineering design, data analysis, and data interpretation requirements for the study of initially single component, spherically symmetric, isolated droplet combustion. Experimental emphasis is on the study of simple alcohols (methanol, ethanol) and alkanes (n-heptane, n-decane) as fuels, with time dependent measurements of drop size, flame stand-off, liquid-phase composition, and finally, extinction. Experiments have included bench-scale studies at Princeton, studies in the 2.2-second and 5.18-second drop towers at NASA-LeRC, and both the Fiber Supported Droplet Combustion (FSDC-1, FSDC-2) and free Droplet Combustion Experiment (DCE) studies aboard the shuttle. Test matrix design and data interpretation are performed through spherically-symmetric, time-dependent numerical computations which embody detailed sub-models for physical and chemical processes. The computed burning rate, flame stand-off, and extinction diameter are compared with the respective measurements for each individual experiment. In particular, the data from FSDC-1 and subsequent space-based experiments provide the opportunity to compare all three types of data simultaneously with the computed parameters. Recent numerical efforts are extending the computational tools to consider time dependent, axisymmetric two-dimensional reactive flow situations.
Yeung, S; Genaidy, A; Deddens, J; Shoaf, C; Leung, P
2003-01-01
Aims: To investigate the use of a worker based methodology to assess the physical stresses of lifting tasks on effort expended, and to associate this loading with musculoskeletal outcomes (MO). Methods: A cross sectional study was conducted on 217 male manual handling workers from the Hong Kong area. The effects of four lifting variables (weight of load, horizontal distance, twisting angle, and vertical travel distance) on effort were examined using a linguistic approach (that is, characterising variables with descriptors such as "heavy" for weight of load). The numerical interpretations of the linguistic descriptors were established. In addition, the associations between on the job effort and MO were investigated for 10 body regions including the spine and both upper and lower extremities. Results: MO were prevalent in multiple body regions (range 12–58%); effort was significantly associated with MO in 8 of 10 body regions (age adjusted odds ratios ranged from 1.31 for the low back to 1.71 for the elbows and forearms). The lifting task variables had significant effects on effort, with the weight of load having twice the effect of the other variables; each linguistic descriptor was better described by a range of numerical values than by a single value. Conclusions: The participatory worker based approach to musculoskeletal outcomes is a promising methodology. Further testing of this approach is recommended. PMID:14504360
Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review
NASA Astrophysics Data System (ADS)
Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal
2017-08-01
Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool are reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of the steady state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, and chip formation and progressive flank wear is depicted in this review. An effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that the appropriate model can be selected according to user requirements in hard turning.
Assessment of the National Combustion Code
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing
2007-01-01
The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platforms have greatly facilitated the use of CFD based tools in the development of combustion technology. Further development of verification, validation and uncertainty quantification will have a profound impact on the reliability and utility of these CFD based tools. The objectives of the present effort are to establish a baseline for the National Combustion Code (NCC) against experimental data, as well as to document current capabilities and identify gaps for further improvement.
NASA Technical Reports Server (NTRS)
Pallmann, A. J.
1977-01-01
The paper presents some guidelines of an improved numerical modeling effort developed to investigate the effect of an absorbing and scattering particulate phase on the temperature field of the Mars atmosphere and soil in its diurnal cycle and in response to a time-dependent convective heat transfer. Some guidelines are also formulated for the re-evaluation of Mariner 9 infrared radiometer or spectrometer inverted temperature measurements of the dust-laden atmosphere.
MODELING THE ENDOCRINE CONTROL OF VITELLOGENIN PRODUCTION IN FEMALE RAINBOW TROUT
Sundling, Kaitlin; Craciun, Gheorghe; Schultz, Irvin; Hook, Sharon; Nagler, James; Cavileer, Tim; Verducci, Joseph; Liu, Yushi; Kim, Jonghan; Hayton, William
2015-01-01
The rainbow trout endocrine system is sensitive to changes in annual day length, which is likely the principal environmental cue controlling its reproductive cycle. This study focuses on the endocrine regulation of vitellogenin (Vg) protein synthesis, which is the major egg yolk precursor in this fish species. We present a model of Vg production in female rainbow trout which incorporates a biological pathway beginning with sex steroid estradiol-17β levels in the plasma and concluding with Vg secretion by the liver and sequestration in the oocytes. Numerical simulation results based on this model are compared with experimental data for estrogen receptor mRNA, Vg mRNA, and Vg in the plasma from female rainbow trout over a normal annual reproductive cycle. We also analyze the response of the model to parameter changes. The model is subsequently tested against experimental data from female trout under a compressed photoperiod regime. Comparison of numerical and experimental results suggests the possibility of a time-dependent change in oocyte Vg uptake rate. This model is part of a larger effort that is developing a mathematical description of the endocrine control of reproduction in female rainbow trout. We anticipate that these mathematical and computational models will play an important role in future regulatory toxicity assessments and in the prediction of ecological risk. PMID:24506554
Orion Exploration Flight Test Post-Flight Inspection and Analysis
NASA Technical Reports Server (NTRS)
Miller, J. E.; Berger, E. L.; Bohl, W. E.; Christiansen, E. L.; Davis, B. A.; Deighton, K. D.; Enriquez, P. A.; Garcia, M. A.; Hyde, J. L.; Oliveras, O. M.
2017-01-01
The principal mechanism for developing orbital debris environment models is to make observations of larger pieces of debris, in the range of several centimeters and greater, using radar and optical techniques. For particles smaller than this threshold, breakup models and models of particle migration to returned surfaces in lower orbit are relied upon to quantify the flux. This reliance on models to derive spatial densities of particles that are of critical importance to spacecraft makes the unique nature of EFT-1's returned surface a valuable metric. To this end, detailed post-flight inspections of the returned EFT-1 backshell were performed, and they identified six candidate impact sites that were not present during the pre-flight inspections. This paper describes the post-flight analysis efforts to characterize the EFT-1 mission craters. This effort included ground based testing to understand small particle impact craters in the thermal protection material; the pre- and post-flight inspections; crater analysis using optical, X-ray computed tomography (CT) and scanning electron microscope (SEM) techniques; and numerical simulations.
Habitat suitability index models: Black crappie
Edwards, Elizabeth A.; Krieger, Douglas A.; Bacteller, Mary; Maughan, O. Eugene
1982-01-01
Characteristics and habitat requirements of the black crappie (Pomoxis nigromaculatus) are described in a review of Habitat Suitability Index models. This is one in a series of publications to provide information on the habitat requirements of selected fish and wildlife species. Numerous literature sources have been consulted in an effort to consolidate scientific data on species-habitat relationships. These data have subsequently been synthesized into explicit Habitat Suitability Index (HSI) models. The models are based on suitability indices indicating habitat preferences. Indices have been formulated for variables found to affect the life cycle and survival of each species. Habitat Suitability Index (HSI) models are designed to provide information for use in impact assessment and habitat management activities. The HSI technique is a corollary to the U.S. Fish and Wildlife Service's Habitat Evaluation Procedures.
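The way suitability indices combine into a single HSI score can be sketched with the commonly used geometric-mean aggregation, under which any fully unsuitable variable drives the whole index to zero; the variable names and values below are illustrative, not the black crappie model's actual curves.

```python
# HSI models combine per-variable suitability indices (each scaled 0-1)
# into a single 0-1 score; a common aggregation is the geometric mean.
def hsi(suitability_indices):
    """Geometric mean of suitability indices, each in [0, 1]."""
    prod = 1.0
    for si in suitability_indices:
        prod *= si
    return prod ** (1.0 / len(suitability_indices))

# Hypothetical habitat variables and suitability values:
si = {"turbidity": 0.8, "vegetation_cover": 0.5, "dissolved_oxygen": 1.0}
score = hsi(list(si.values()))
```

A limiting-factor variable with SI = 0 (e.g. anoxic water) yields HSI = 0 regardless of the other indices, which is the behaviour the geometric mean is chosen for.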
NASA Technical Reports Server (NTRS)
Schmidt, H.; Tango, G. J.; Werby, M. F.
1985-01-01
A new matrix method for rapid wave propagation modeling in generalized stratified media, recently applied to numerical simulations in diverse areas of underwater acoustics, solid earth seismology, and nondestructive ultrasonic scattering, is explained and illustrated. A portion of recent efforts jointly undertaken by the NATO SACLANT and NORDA Numerical Modeling groups in developing, implementing, and testing a new fast general-applications wave propagation algorithm, SAFARI, formulated at SACLANT, is summarized. The general-applications SAFARI program uses a Direct Global Matrix approach to multilayer Green's function calculation. A rapid and unconditionally stable solution is readily obtained via simple Gaussian elimination on the resulting sparsely banded block system, precisely analogous to that arising in the Finite Element Method. The resulting gains in accuracy and computational speed allow consideration of much larger multilayered air/ocean/Earth/engineering material media models, for many more source-receiver configurations than previously possible. The validity and versatility of the SAFARI-DGM method is demonstrated by reviewing three practical examples of engineering interest, drawn from ocean acoustics, engineering seismology and ultrasonic scattering.
NASA Astrophysics Data System (ADS)
Guo, Qiaona; Li, Hailong; Boufadel, Michel C.; Liu, Jin
2014-12-01
Oil from the 1989 Exxon Valdez oil spill persists in many gravel beaches in Prince William Sound (Alaska, USA), despite great remedial efforts. A tracer study using lithium at a gravel beach on Knight Island, Prince William Sound, during the summer of 2008 is reported. The tracer injection and transport along a transect were simulated using the two-dimensional numerical model MARUN. Model results successfully reproduced the tracer concentrations observed at wells along the transect. A sensitivity analysis revealed that the estimated parameters are well determined. The simulated spatial distribution of tracer indicated that nutrients applied along the transect for bioremediation purposes would be washed to the sea very quickly (within a semi-diurnal tidal cycle) by virtue of the combination of the two-layered beach structure, the tidal fluctuation and the freshwater flow from inland. Thus, pore-water samples in the transect were found to be clean due to factors other than bioremediation. This may explain why the oil did not persist within the transect.
Finite element formulation of viscoelastic sandwich beams using fractional derivative operators
NASA Astrophysics Data System (ADS)
Galucio, A. C.; Deü, J.-F.; Ohayon, R.
This paper presents a finite element formulation for transient dynamic analysis of sandwich beams with embedded viscoelastic material using fractional derivative constitutive equations. The sandwich configuration is composed of a viscoelastic core (based on Timoshenko theory) sandwiched between elastic faces (based on Euler-Bernoulli assumptions). The viscoelastic model used to describe the behavior of the core is a four-parameter fractional derivative model. Concerning parameter identification, a strategy to estimate the fractional order of the time derivative and the relaxation time is outlined; curve-fitting results show good agreement with experimental data. In order to implement the viscoelastic model into the finite element formulation, the Grünwald definition of the fractional operator is employed. To solve the equation of motion, a direct time integration method based on the implicit Newmark scheme is used. One of the particularities of the proposed algorithm lies in the storage of the displacement history only, considerably reducing the numerical effort related to the non-locality of fractional operators. After validation, numerical applications are presented in order to analyze truncation effects (fading memory phenomena) and solution convergence aspects.
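The Grünwald (Grünwald-Letnikov) definition referred to above replaces the fractional derivative by a weighted sum over the stored history, with weights obtained from a simple recurrence; the sketch below checks the approximation against the known result D^0.5 t = t^0.5 / Gamma(1.5).

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients g_j via the standard recurrence
    g_0 = 1,  g_j = g_{j-1} * (1 - (alpha + 1)/j)."""
    g = [1.0]
    for j in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / j))
    return g

def gl_derivative(f, alpha, t, h):
    """Approximate D^alpha f(t) (lower terminal 0) from the full
    history f(t), f(t - h), ..., f(0) -- the non-locality the paper's
    displacement-history storage addresses."""
    n = int(round(t / h))
    g = gl_weights(alpha, n)
    return sum(g[j] * f(t - j * h) for j in range(n + 1)) / h**alpha

# Known closed form: D^0.5 of f(t) = t equals t^0.5 / Gamma(1.5).
approx = gl_derivative(lambda t: t, 0.5, 1.0, 1e-3)
exact = 1.0 / math.gamma(1.5)
```

The truncation of this history sum is precisely the "fading memory" effect analyzed in the paper's numerical applications.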
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitrescu, Eugene; Humble, Travis S.
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
The Effects of Magnetic Nozzle Configurations on Plasma Thrusters
NASA Technical Reports Server (NTRS)
Turchi, P. J.
1997-01-01
Over the course of eight years, the Ohio State University has performed research in support of electric propulsion development efforts at the NASA Lewis Research Center, Cleveland, OH. This research has been largely devoted to plasma propulsion systems including MagnetoPlasmaDynamic (MPD) thrusters with externally-applied, solenoidal magnetic fields, hollow cathodes, and Pulsed Plasma Microthrusters (PPT's). Both experimental and theoretical work has been performed, as documented in four master's theses, two doctoral dissertations, and numerous technical papers. The present document is the final report for the grant period 5 December 1987 to 31 December 1995, and summarizes all activities. Detailed discussions of each area of activity are provided in appendices: Appendix 1 - Experimental studies of magnetic nozzle effects on plasma thrusters; Appendix 2 - Numerical modeling of applied-field MPD thrusters; Appendix 3 - Theoretical and experimental studies of hollow cathodes; and Appendix 4 -Theoretical, numerical and experimental studies of pulsed plasma thrusters. Especially notable results include the efficacy of using a solenoidal magnetic field downstream of a plasma thruster to collimate the exhaust flow, the development of a new understanding of applied-field MPD thrusters (based on experimentally-validated results from state-of-the art, numerical simulation) leading to predictions of improved performance, an experimentally-validated, first-principles model for orificed, hollow-cathode behavior, and the first time-dependent, two-dimensional calculations of ablation-fed, pulsed plasma thrusters.
Model of dissolution in the framework of tissue engineering and drug delivery.
Sanz-Herrera, J A; Soria, L; Reina-Romo, E; Torres, Y; Boccaccini, A R
2018-05-22
Dissolution phenomena are ubiquitous in biomaterials across many different fields. Despite the advantages of simulation-based design of biomaterials in medical applications, additional efforts are needed to derive reliable models which describe the process of dissolution. A phenomenologically based model for simulating dissolution in biomaterials is introduced in this paper. The model reduces to a set of reaction-diffusion equations implemented in a finite element numerical framework. First, a parametric analysis is conducted in order to explore the role of model parameters on the overall dissolution process. Then, the model is calibrated and validated against a straightforward but rigorous experimental setup. Results show that the mathematical model macroscopically reproduces the main physicochemical phenomena that take place in the tests, corroborating its usefulness for the design of biomaterials in the tissue engineering and drug delivery research areas.
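A minimal one-dimensional sketch of such a reaction-diffusion dissolution model, using explicit finite differences rather than the paper's finite element framework, with illustrative coefficients: solute diffuses while the dissolving phase feeds it toward the saturation concentration c_s at a first-order rate.

```python
import numpy as np

# dc/dt = D d2c/dx2 + k (c_s - c)
# D: diffusivity, k: dissolution rate constant, c_s: saturation.
# All values are illustrative, not the paper's calibrated parameters.
D, k, c_s = 1e-3, 0.5, 1.0
nx, dx = 50, 0.02
dt = 0.4 * dx**2 / D              # within the explicit stability limit
c = np.zeros(nx)                  # initially solute-free fluid

for _ in range(2000):
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = 2 * (c[1] - c[0]) / dx**2      # zero-flux boundaries
    lap[-1] = 2 * (c[-2] - c[-1]) / dx**2   # (mirror nodes)
    c += dt * (D * lap + k * (c_s - c))
```

With a closed (zero-flux) domain the concentration relaxes to saturation everywhere, the expected long-time limit; a finite element treatment, as in the paper, would handle irregular scaffold geometries that this 1-D grid cannot.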
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. KEY FEATURES: * Integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework * Provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making * Emphasizes the role of mathematical modeling in the conduct of science and management * Utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples
Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo
2018-05-10
Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response, rather than combining the response to two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs the heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has important impact on cancer studies.
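The multi-level (ordinal) response modeling can be sketched with the cumulative-logit (proportional odds) form that underlies such hierarchical ordinal logistic models: with K ordered levels, cutpoints a_1 < ... < a_{K-1} and linear predictor eta, P(Y <= k) = 1/(1 + exp(-(a_k - eta))). The gene names, effects, and cutpoints below are hypothetical illustrations, not fitted values from the MM data.

```python
import math

def ordinal_probs(eta, cutpoints):
    """Category probabilities of a cumulative-logit (proportional odds)
    model: differences of successive cumulative probabilities."""
    cum = [1.0 / (1.0 + math.exp(-(a - eta))) for a in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

cutpoints = [-2.0, -0.5, 0.5, 2.0]       # 5 ordered drug-response levels
beta = {"GENE_A": 0.8, "GENE_B": -1.1}   # hypothetical expression effects
x = {"GENE_A": 1.2, "GENE_B": 0.4}       # hypothetical patient profile
eta = sum(beta[g] * x[g] for g in beta)
p = ordinal_probs(eta, cutpoints)        # probability of each level
```

Collapsing these five probabilities into responsive/non-responsive, as earlier binary analyses did, discards exactly the ordering information this model keeps.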
Numerical Modeling of Surface and Volumetric Cooling using Optimal T- and Y-shaped Flow Channels
NASA Astrophysics Data System (ADS)
Kosaraju, Srinivas
2017-11-01
The layout of T- and Y-shaped flow channel networks on a surface can be optimized for minimum pressure drop and pumping power. The results of the optimization are in the form of geometric parameters such as the length and diameter ratios of the stem and branch sections. While these flow channels are optimized for minimum pressure drop, they can also be used for surface and volumetric cooling applications such as heat exchangers, air conditioning and electronics cooling. In this paper, an effort has been made to study the heat transfer characteristics of multiple T- and Y-shaped flow channel configurations using numerical simulations. All configurations are subjected to the same input parameters and heat generation constraints. Comparisons are made with similar results published in the literature.
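The diameter-ratio optimization for such networks can be sketched for a single laminar T-junction: one stem carrying flow Q splits into two branches carrying Q/2 each, Poiseuille pressure drop scales as Q L / D^4, and scanning the branch/stem diameter ratio at fixed total channel volume recovers the known 2^(-1/3) optimum (the Hess-Murray ratio). Lengths, volume, and flow below are illustrative.

```python
import numpy as np

# Stem (length L0, diameter D0, flow Q) splits into two branches
# (length L1, diameter D1 = ratio * D0, flow Q/2 each).
# Laminar pressure drop: dP proportional to Q * L / D^4.
L0, L1, V, Q = 1.0, 0.5, 1.0, 1.0    # illustrative values

def pressure_drop(ratio):
    # Fixed volume: V = (pi/4) * (L0 * D0^2 + 2 * L1 * D1^2)
    D0 = np.sqrt(V / (np.pi / 4 * (L0 + 2 * L1 * ratio**2)))
    D1 = ratio * D0
    return Q * L0 / D0**4 + (Q / 2) * L1 / D1**4

ratios = np.linspace(0.4, 1.2, 801)
best = ratios[np.argmin([pressure_drop(r) for r in ratios])]
# 'best' lands near 2**(-1/3) ~ 0.794, independent of L0, L1.
```

That the optimum is independent of the segment lengths is what makes such geometric ratios useful design outputs for whole T-/Y-networks.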
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehoczky, Sandor L.
1996-01-01
A numerical model of heat transfer using combined conduction, radiation and convection in the AADSF was used to evaluate temperature gradients in the vicinity of the crystal/melt interface for a variety of hot and cold zone set point temperatures, specifically for the growth of mercury cadmium telluride (MCT). Reverse usage of the hot and cold zones was simulated to aid the choice of proper orientation of the crystal/melt interface with respect to the residual acceleration vector, without an actual change of furnace location on board the orbiter. It appears that an additional booster heater will be extremely helpful to ensure the desired temperature gradient when the hot and cold zones are reversed. Further efforts are required to investigate the advantages and disadvantages of a symmetrical furnace design (i.e. with similar lengths of hot and cold zones).
Computational strategies for tire monitoring and analysis
NASA Technical Reports Server (NTRS)
Danielson, Kent T.; Noor, Ahmed K.; Green, James S.
1995-01-01
Computational strategies are presented for the modeling and analysis of tires in contact with pavement. A procedure is introduced for simple and accurate determination of tire cross-sectional geometric characteristics from a digitally scanned image. Three new strategies for reducing the computational effort in the finite element solution of tire-pavement contact are also presented. These strategies take advantage of the observation that footprint loads do not usually stimulate a significant tire response away from the pavement contact region. The finite element strategies differ in their level of approximation and required amount of computer resources. The effectiveness of the strategies is demonstrated by numerical examples of frictionless and frictional contact of the space shuttle Orbiter nose-gear tire. Both an in-house research code and a commercial finite element code are used in the numerical studies.
Modeling quasi-static poroelastic propagation using an asymptotic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, D.W.
2007-11-01
Since the formulation of poroelasticity (Biot 1941) and its reformulation (Rice & Cleary 1976), there have been many efforts to solve the coupled system of equations. Perhaps because of the complexity of the governing equations, most of the work has been directed towards finding numerical solutions. For example, Lewis and co-workers published early papers (Lewis & Schrefler 1978; Lewis et al. 1991) concerned with finite-element methods for computing consolidation and subsidence and with examining the importance of coupling. Other early work dealt with flow in a deformable fractured medium (Narasimhan & Witherspoon 1976; Noorishad et al. 1984). This effort eventually evolved into a general numerical approach for modeling fluid flow and deformation (Rutqvist et al. 2002). As a result of this and other work, numerous coupled, computer-based algorithms have emerged, typically falling into one of three categories: one-way coupling, loose coupling, and full coupling (Minkoff et al. 2003). In one-way coupling, the fluid flow is modeled using a conventional numerical simulator and the resulting change in fluid pressures simply drives the deformation. In loosely coupled modeling, distinct geomechanical and fluid flow simulators are run for a sequence of time steps, and at the conclusion of each step information is passed between the simulators. In full coupling, the fluid flow and geomechanics equations are solved simultaneously at each time step (Lewis & Sukirman 1993; Lewis & Ghafouri 1997; Gutierrez & Lewis 2002). One disadvantage of a purely numerical approach to solving the governing equations of poroelasticity is that it is not clear how the various parameters interact and influence the solution.
Analytic solutions have an advantage in that respect; the relationship between the medium and fluid properties is clear from the form of the solution. Unfortunately, analytic solutions are only available for highly idealized conditions, such as a uniform (Rudnicki 1986) or one-dimensional (Simon et al. 1984; Gajo & Mongiovi 1995; Wang & Kumpel 2003) medium. In this paper I derive an asymptotic, semi-analytic solution for coupled deformation and flow. The approach is similar to trajectory- or ray-based methods used to model elastic and electromagnetic wave propagation (Aki & Richards 1980; Kline & Kay 1979; Kravtsov & Orlov 1990; Keller & Lewis 1995) and, more recently, diffusive propagation (Virieux et al. 1994; Vasco et al. 2000; Shapiro et al. 2002; Vasco 2007). The asymptotic solution is valid in the presence of smoothly-varying, heterogeneous flow properties. The situation I am modeling is that of a formation with heterogeneous flow properties and uniform mechanical properties. The boundaries of the layer may vary arbitrarily and can define discontinuities in both flow and mechanical properties. Thus, using the techniques presented here, it is possible to model a stack of irregular layers with differing mechanical properties. Within each layer the hydraulic conductivity and porosity can vary smoothly but with an arbitrarily large magnitude. The advantages of this approach are that it produces explicit, semi-analytic expressions for the arrival time and amplitude of the Biot slow and fast waves, expressions which are valid in a medium with heterogeneous properties. As shown here, the semi-analytic expressions provide insight into the nature of pressure and deformation signals recorded at an observation point. Finally, the technique requires considerably fewer computer resources than does a fully numerical treatment.
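The three coupling strategies surveyed in this entry can be illustrated with a toy loosely coupled loop. The sketch below is purely schematic: the 1D pressure-diffusion "flow simulator" and the algebraic "geomechanics solver" (`flow_step`, `mech_step`) are hypothetical stand-ins, not code from any of the cited works.

```python
import numpy as np

def flow_step(p, k, dt, dx):
    """One explicit step of 1D pressure diffusion (stand-in flow simulator)."""
    lap = np.zeros_like(p)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    return p + dt * k * lap

def mech_step(p, alpha):
    """Stand-in geomechanics solver: volumetric strain proportional to pressure."""
    return alpha * p

# Loose coupling: advance each simulator in turn, exchanging fields every step.
n, dx, dt, k, alpha = 50, 1.0, 0.1, 1.0, 1e-3
p = np.zeros(n); p[n // 2] = 1.0     # initial pressure pulse
for _ in range(100):
    p = flow_step(p, k, dt, dx)      # flow step with mechanics frozen
    eps = mech_step(p, alpha)        # mechanics driven by the updated pressure
# (one-way coupling would skip the feedback entirely; full coupling would
#  solve for p and eps simultaneously at each step)
```

In a real loosely coupled code each "step" would itself be a full simulator run, and the exchanged information would include stress-dependent porosity and permeability updates rather than a simple proportionality.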
Hindcasting of Storm Surges, Currents, and Waves at Lower Delaware Bay during Hurricane Isabel
NASA Astrophysics Data System (ADS)
Salehi, M.
2017-12-01
Hurricanes are a major threat to coastal communities and infrastructure, including nuclear power plants located in low-lying coastal zones. In response, their sensitive elements should be protected by smart design to withstand the drastic impact of such natural phenomena. Accurate and reliable estimates of hurricane attributes are the first step in that effort. Numerical models have advanced considerably over the past few years and are effective tools for modeling large-scale natural events such as hurricanes. The impact of low-probability hurricanes on the lower Delaware Bay is investigated using the dynamically coupled meteorological, hydrodynamic, and wave components of the Delft3D software. Efforts are made to significantly reduce the computational burden of performing such an analysis for industry, while keeping the same level of accuracy in the area of study (AOS). The model is comprised of overall and nested domains. The overall model domain includes portions of the Atlantic Ocean and the Delaware and Chesapeake bays. The nested model domain includes Delaware Bay, its floodplain, and a portion of the continental shelf. This study is part of a larger modeling effort to study the impact of low-probability hurricanes on sensitive infrastructure located in coastal zones prone to hurricane activity. The AOS is located on the east bank of Delaware Bay, almost 16 miles upstream of its mouth. Model-generated wind speed, significant wave height, water surface elevation, and current are calibrated for hurricane Isabel (2003). The model calibration results agreed reasonably well with field observations. Furthermore, the sensitivity of the surge and wave responses to various hurricane parameters was tested. In line with findings from other researchers, the accuracy of the wind field played a major role in hindcasting the hurricane attributes.
Pozin, N; Montesantos, S; Katz, I; Pichelin, M; Grandmont, C; Vignon-Clementel, I
2017-07-26
In spite of numerous clinical studies, there is no consensus on the benefit that Heliox mixtures can bring to asthmatic patients in terms of work of breathing and ventilation distribution. In this article we use a 3D finite element mathematical model of the lung to study the impact of asthma on effort and ventilation distribution, along with the effect of Heliox compared to air. Lung surface displacement fields extracted from computed tomography medical images are used to prescribe realistic boundary conditions to the model. Asthma is simulated by imposing bronchoconstrictions on some airways of the tracheo-bronchial tree based on statistical laws deduced from the literature. This study illuminates potential mechanisms for patient responsiveness to Heliox when affected by obstructive pulmonary diseases. Responsiveness appears to be a function of the pathology's severity, as well as its distal position in the tracheo-bronchial tree and its geometrical position within the lung. Copyright © 2017 Elsevier Ltd. All rights reserved.
Transform methods for precision continuum and control models of flexible space structures
NASA Technical Reports Server (NTRS)
Lupi, Victor D.; Turner, James D.; Chun, Hon M.
1991-01-01
An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.
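The Fourier-series parameterization of the control inputs described above reduces an infinite-dimensional control history to a finite set of coefficients. A minimal sketch of that step is given below; the horizon `T`, the truncation order `N`, and the bang-bang example profile are illustrative assumptions, not values from the cited algorithm.

```python
import numpy as np

T, N = 2.0, 8                          # control horizon and truncation order (illustrative)
t = np.linspace(0.0, T, 400)
dt = t[1] - t[0]
u = np.where(t < T / 2, 1.0, -1.0)     # example open-loop control profile (bang-bang)

def inner(f, g):
    """Crude quadrature of the inner product over [0, T]."""
    return np.sum(f * g) * dt

# Fourier coefficients of the control input on [0, T]
a0 = inner(u, np.ones_like(t)) / T
ak = [(2.0 / T) * inner(u, np.cos(2 * np.pi * k * t / T)) for k in range(1, N + 1)]
bk = [(2.0 / T) * inner(u, np.sin(2 * np.pi * k * t / T)) for k in range(1, N + 1)]

# Reconstruct the control from its truncated series; an optimizer would instead
# manipulate (a0, ak, bk) directly as the finite set of unknowns.
u_hat = a0 + sum(a * np.cos(2 * np.pi * (k + 1) * t / T) +
                 b * np.sin(2 * np.pi * (k + 1) * t / T)
                 for k, (a, b) in enumerate(zip(ak, bk)))
```

With the control expressed this way, penalties on control effort and structural deformation become algebraic functions of the coefficients, which is what makes a numerical solution tractable.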
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copping, Andrea E.; Yang, Zhaoqing; Voisin, Nathalie
2013-12-01
Final Report for the EPA-sponsored project Snow Caps to White Caps, which provides data products and insight for water resource managers to support their predictions and management actions addressing future changes in water resources (fresh and marine) in the Puget Sound basin. This report details the efforts of a team of scientists and engineers from Pacific Northwest National Laboratory (PNNL) and the University of Washington (UW) to examine the movement of water in the Snohomish Basin, within the watershed and the estuary, under present and future conditions, using a set of linked numerical models.
Direct numerical simulations and modeling of a spatially-evolving turbulent wake
NASA Technical Reports Server (NTRS)
Cimbala, John M.
1994-01-01
Understanding of turbulent free shear flows (wakes, jets, and mixing layers) is important, not only for scientific interest, but also because of their appearance in numerous practical applications. Turbulent wakes, in particular, have recently received increased attention by researchers at NASA Langley. The turbulent wake generated by a two-dimensional airfoil has been selected as the test-case for detailed high-resolution particle image velocimetry (PIV) experiments. This same wake has also been chosen to enhance NASA's turbulence modeling efforts. Over the past year, the author has completed several wake computations, while visiting NASA through the 1993 and 1994 ASEE summer programs, and also while on sabbatical leave during the 1993-94 academic year. These calculations have included two-equation (K-omega and K-epsilon) models, algebraic stress models (ASM), full Reynolds stress closure models, and direct numerical simulations (DNS). Recently, there has been mutually beneficial collaboration of the experimental and computational efforts. In fact, these projects have been chosen for joint presentation at the NASA Turbulence Peer Review, scheduled for September 1994. DNS calculations are presently underway for a turbulent wake at Re_theta = 1000 and at a Mach number of 0.20. (Theta is the momentum thickness, which remains constant in the wake of a two-dimensional body.) These calculations utilize a compressible DNS code written by M. M. Rai of NASA Ames, and modified for the wake by J. Cimbala. The code employs fifth-order accurate upwind-biased finite differencing for the convective terms, fourth-order accurate central differencing for the viscous terms, and an iterative-implicit time-integration scheme. The computational domain for these calculations starts at x/theta = 10, and extends to x/theta = 610.
Fully developed turbulent wake profiles, obtained from experimental data from several wake generators, are supplied at the computational inlet, along with appropriate noise. After some adjustment period, the flow downstream of the inlet develops into a fully three-dimensional turbulent wake. Of particular interest in the present study is the far wake spreading rate and the self-similar mean and turbulence profiles. At the time of this writing, grid resolution studies are underway, and a code is being written to calculate turbulence statistics from these wake calculations; the statistics will be compared to those from the ongoing PIV wake measurements, those of previous experiments, and those predicted by the various turbulence models. These calculations will lead to significant long-term benefits for the turbulence modeling effort. In particular, quantities such as the pressure-strain correlation and the dissipation rate tensor can be easily calculated from the DNS results, whereas these quantities are nearly impossible to measure experimentally. Improvements to existing turbulence models (and development of new models) require knowledge about flow quantities such as these. Present turbulence models do a very good job at prediction of the shape of the mean velocity and Reynolds stress profiles in a turbulent wake, but significantly underpredict the magnitude of the stresses and the spreading rate of the wake. Thus, the turbulent wake is an ideal flow for turbulence modeling research. By careful comparison and analysis of each term in the modeled Reynolds stress equations, the DNS data can show where deficiencies in the models exist; improvements to the models can then be attempted.
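A fifth-order upwind-biased first-derivative approximation of the kind mentioned for the convective terms can be sketched in one dimension. The particular six-point coefficient set below is a standard textbook stencil for a positive advection speed, chosen for illustration; it is not taken from the Rai code.

```python
import numpy as np

def ddx_upwind5(f, dx):
    """Fifth-order upwind-biased first derivative (advection speed > 0),
    using the 6-point stencil i-3 .. i+2 on a periodic array."""
    c = np.array([-2.0, 15.0, -60.0, 20.0, 30.0, -3.0]) / (60.0 * dx)
    # np.roll(f, -s) aligns f[i+s] with index i
    return sum(ck * np.roll(f, -s) for ck, s in zip(c, [-3, -2, -1, 0, 1, 2]))

# Accuracy check on a smooth periodic test function
n = 64
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(ddx_upwind5(np.sin(x), dx) - np.cos(x)))
```

The leading truncation error of this stencil is proportional to dx^5 times the sixth derivative, so the upwind bias contributes a small dissipative term that helps stabilize convective transport.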
Design of a High-Energy, Two-Stage Pulsed Plasma Thruster
NASA Technical Reports Server (NTRS)
Markusic, T. E.; Thio, Y. C. F.; Cassibry, J. T.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Design details of a proposed high-energy (approx. 50 kJ/pulse), two-stage pulsed plasma thruster are presented. The long-term goal of this project is to develop a high-power (approx. 500 kW), high specific impulse (approx. 7500 s), highly efficient (approx. 50%),and mechanically simple thruster for use as primary propulsion in a high-power nuclear electric propulsion system. The proposed thruster (PRC-PPT1) utilizes a valveless, liquid lithium-fed thermal plasma injector (first stage) followed by a high-energy pulsed electromagnetic accelerator (second stage). A numerical circuit model coupled with one-dimensional current sheet dynamics, as well as a numerical MHD simulation, are used to qualitatively predict the thermal plasma injection and current sheet dynamics, as well as to estimate the projected performance of the thruster. A set of further modelling efforts, and the experimental testing of a prototype thruster, is suggested to determine the feasibility of demonstrating a full scale high-power thruster.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Pengcheng; Mcclure, Mark; Shiozawa, Sogo
A series of experiments performed at the Fenton Hill hot dry rock site after stage 2 drilling of the Phase I reservoir provided intriguing field observations on the reservoir's responses to injection and venting under various conditions. Two teams participating in the US DOE Geothermal Technologies Office (GTO)'s Code Comparison Study (CCS) used different numerical codes to model these five experiments with the objective of inferring the hydraulic stimulation mechanism involved. The codes used by the two teams are based on different numerical principles, and the assumptions made were also different, due to intrinsic limitations in the codes and the modelers' personal interpretations of the field observations. Both sets of models were able to reproduce the most important field observations, and both found that it was the combination of the vertical gradient of the fracture opening pressure, the injection volume, and the use/absence of proppant that yielded the different outcomes of the five experiments.
Peru 2007 tsunami runup observations and modeling
NASA Astrophysics Data System (ADS)
Fritz, H. M.; Kalligeris, N.; Borrero, J. C.
2008-05-01
On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed in the immediate aftermath and investigated the tsunami effects at 51 sites. The largest runup heights were measured in a sparsely populated desert area south of the Paracas Peninsula resulting in only 3 tsunami fatalities. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the presence of the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. The coast of Peru has experienced numerous deadly and destructive tsunamis throughout history, which highlights the importance of ongoing tsunami awareness and education efforts in the region. The Peru tsunami is compared against recent mega-disasters such as the 2004 Indian Ocean tsunami and Hurricane Katrina.
3D numerical simulations of oblique droplet impact onto a deep liquid pool
NASA Astrophysics Data System (ADS)
Gelderblom, Hanneke; Reijers, Sten A.; Gielen, Marise; Sleutel, Pascal; Lohse, Detlef; Xie, Zhihua; Pain, Christopher C.; Matar, Omar K.
2017-11-01
We study the fluid dynamics of three-dimensional oblique droplet impact, which results in phenomena that include splashing and cavity formation. An adaptive, unstructured mesh modelling framework is employed here, which can modify and adapt unstructured meshes to better represent the underlying physics of droplet dynamics and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation and a volume-of-fluid-type method for interface capturing based on a compressive control-volume advection method. The framework also features second-order finite-element methods and a force-balanced algorithm for the surface tension implementation, minimising the spurious velocities often found in simulations involving capillary-driven flows. The numerical results generated using this framework are compared with high-speed images of the interfacial shapes of the deformed droplet and the cavity formed upon impact, yielding good agreement. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
How to Overcome Numerical Challenges to Modeling Stirling Engines
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.
2004-01-01
Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for a range of missions, including both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines could be developed if the losses inherent in current designs were better understood. However, such engines are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of understanding of Stirling losses may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
The System Analysis Module (SAM) is an advanced and modern system analysis tool being developed at Argonne National Laboratory under the U.S. DOE Office of Nuclear Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM development aims for advances in physical modeling, numerical methods, and software engineering to enhance its user experience and usability for reactor transient analyses. To facilitate the code development, SAM utilizes an object-oriented application framework (MOOSE), and its underlying meshing and finite-element library (libMesh) and linear and non-linear solvers (PETSc), to leverage modern advanced software environments and numerical methods. SAM focuses on modeling advanced reactor concepts such as SFRs (sodium fast reactors), LFRs (lead-cooled fast reactors), and FHRs (fluoride-salt-cooled high temperature reactors) or MSRs (molten salt reactors). These advanced concepts are distinguished from light-water reactors in their use of single-phase, low-pressure, high-temperature, and low Prandtl number (sodium and lead) coolants. As a new code development, the initial effort has been focused on modeling and simulation capabilities of heat transfer and single-phase fluid dynamics responses in Sodium-cooled Fast Reactor (SFR) systems. The system-level simulation capabilities of fluid flow and heat transfer in general engineering systems and typical SFRs have been verified and validated. This document provides the theoretical and technical basis of the code to help users understand the underlying physical models (such as governing equations, closure models, and component models), system modeling approaches, numerical discretization and solution methods, and the overall capabilities in SAM. As the code is still under ongoing development, this SAM Theory Manual will be updated periodically to keep it consistent with the state of the development.
Synergies Between Grace and Regional Atmospheric Modeling Efforts
NASA Astrophysics Data System (ADS)
Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.
2014-12-01
In the meteorological community, efforts converge towards the implementation of high-resolution (< 12 km) data-assimilating regional climate modelling/monitoring systems based on numerical weather prediction (NWP) cores. This is driven by requirements of improving process understanding, better representation of land surface interactions, atmospheric convection, and orographic effects, and better forecasting on shorter timescales. This is relevant for the GRACE community since (1) these models may provide improved atmospheric mass separation / de-aliasing and smaller topography-induced errors, compared to global (ECMWF-Op, ERA-Interim) data; (2) they inherit high temporal resolution from NWP models; (3) parallel efforts towards improving the land surface component and coupling groundwater models may provide realistic hydrological mass estimates with sub-diurnal resolution; (4) parallel efforts towards re-analyses aim at providing consistent time series; and (5) GRACE, in turn, can help validate the models and aids in the identification of processes needing improvement. A coupled atmosphere - land surface - groundwater modelling system is currently being implemented for the European CORDEX region at 12.5 km resolution, based on the TerrSysMP platform (COSMO-EU NWP, CLM land surface, and ParFlow groundwater models). We report results from Springer et al. (J. Hydromet., accepted) on validating the water cycle in COSMO-EU using GRACE together with precipitation, evapotranspiration, and runoff data, confirming that the model performs favorably in representing the observations. We show that after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows identifying processes needing improvement in the model.
Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter and longer than 1 month, and at spatial scales below and above the ERA resolution. We find that differences between the regional and global models are more pronounced at high frequencies, with magnitudes at sub-grid and larger scales corresponding to 1-3 hPa (1-3 cm EWH); this is relevant for the assessment of post-GRACE concepts.
Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz
2014-01-01
Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412
The field representation language.
Tsafnat, Guy
2008-02-01
The complexity of quantitative biomedical models, and the rate at which they are published, is increasing to a point where managing the information has become all but impossible without automation. International efforts are underway to standardise representation languages for a number of mathematical entities that represent a wide variety of physiological systems. This paper presents the Field Representation Language (FRL), a portable representation of values that change over space and/or time. FRL is an extensible mark-up language (XML) derivative with support for large numeric data sets in Hierarchical Data Format version 5 (HDF5). Components of FRL can be reused through unified resource identifiers (URI) that point to external resources such as custom basis functions, boundary geometries and numerical data. To demonstrate the use of FRL as an interchange format, we present three models that study hyperthermia cancer treatment: a fractal model of liver tumour microvasculature; a probabilistic model simulating the deposition of magnetic microspheres throughout it; and a finite element model of hyperthermic treatment. The microsphere distribution field was used to compute the heat generation rate field around the tumour. We used FRL to convey results from the microsphere simulation to the treatment model. FRL facilitated the conversion of the coordinate systems and approximated the integral over regions of the microsphere deposition field.
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; ...
2015-06-08
Lattice spin-fermion models are important for studying correlated systems in which quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically, while the slow variables, generically referred to as the “spins,” are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The “traveling cluster approximation” (TCA) is a real-space variant of the ED + MC method that allows spin-fermion problems to be solved on lattices with up to 10^3 sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites, and with some effort on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.
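The basic ED + MC setup described in this entry can be sketched for a minimal one-dimensional spin-fermion model: classical angles are sampled by Metropolis Monte Carlo, and at every proposal the fermion sector is diagonalized exactly. The Hamiltonian, coupling, and temperature below are generic illustrations, not the TCA or the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, beta = 8, 2.0, 5.0            # sites, Hund-like coupling, inverse temperature

def fermion_energy(theta):
    """Exact diagonalization of spinless fermions hopping in a classical spin
    background; the spin enters through a site potential -J*cos(theta_i)."""
    H = np.zeros((L, L))
    for i in range(L):
        H[i, (i + 1) % L] = H[(i + 1) % L, i] = -1.0   # nearest-neighbor hopping
        H[i, i] = -J * np.cos(theta[i])                 # coupling to classical spin
    eps = np.linalg.eigvalsh(H)
    return eps[: L // 2].sum()       # fermionic ground-state energy at half filling

# Standard Metropolis MC over the classical angles, with ED at every proposal
theta = rng.uniform(0, 2 * np.pi, L)
E = fermion_energy(theta)
accepted = 0
for step in range(200):
    i = rng.integers(L)
    trial = theta.copy()
    trial[i] = rng.uniform(0, 2 * np.pi)
    E_trial = fermion_energy(trial)
    if rng.random() < np.exp(-beta * (E_trial - E)):
        theta, E, accepted = trial, E_trial, accepted + 1
```

The cost of each sweep is dominated by the O(L^3) diagonalization; the TCA reduces this by diagonalizing only a cluster around the updated spin, which is what makes the parallel reorganization of the paper pay off at 10^4-10^5 sites.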
On sequential data assimilation for scalar macroscopic traffic flow models
NASA Astrophysics Data System (ADS)
Blandin, Sébastien; Couque, Adrien; Bayen, Alexandre; Work, Daniel
2012-09-01
We consider the problem of sequential data assimilation for transportation networks using optimal filtering with a scalar macroscopic traffic flow model. Properties of the distribution of the uncertainty on the true state related to the specific nonlinearity and non-differentiability inherent to macroscopic traffic flow models are investigated, derived analytically and analyzed. We show that nonlinear dynamics, by creating discontinuities in the traffic state, affect the performances of classical filters and in particular that the distribution of the uncertainty on the traffic state at shock waves is a mixture distribution. The non-differentiability of traffic dynamics around stationary shock waves is also proved and the resulting optimality loss of the estimates is quantified numerically. The properties of the estimates are explicitly studied for the Godunov scheme (and thus the Cell-Transmission Model), leading to specific conclusions about their use in the context of filtering, which is a significant contribution of this article. Analytical proofs and numerical tests are introduced to support the results presented. A Java implementation of the classical filters used in this work is available on-line at http://traffic.berkeley.edu for facilitating further efforts on this topic and fostering reproducible research.
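The Godunov scheme referenced above reduces, for concave traffic flux functions, to the sending/receiving (demand/supply) flux of the Cell-Transmission Model. The sketch below uses the Greenshields flux as an illustrative choice of fundamental diagram; the grid and initial data are likewise arbitrary.

```python
import numpy as np

rho_max, v_max = 1.0, 1.0
rho_c = rho_max / 2                      # critical density of the Greenshields flux

def flux(rho):
    """Greenshields fundamental diagram f(rho) = v_max * rho * (1 - rho/rho_max)."""
    return v_max * rho * (1.0 - rho / rho_max)

def godunov_flux(rho_l, rho_r):
    """CTM-style Godunov flux: minimum of upstream demand and downstream supply."""
    demand = flux(np.minimum(rho_l, rho_c))   # sending capacity of the left cell
    supply = flux(np.maximum(rho_r, rho_c))   # receiving capacity of the right cell
    return np.minimum(demand, supply)

# Conservative update sweeps on a ring road with a shock and a rarefaction
n, dx, dt = 100, 0.01, 0.004             # CFL: v_max * dt / dx = 0.4 < 1
rho = np.where(np.arange(n) < n // 2, 0.9, 0.1)
for _ in range(50):
    F = godunov_flux(rho, np.roll(rho, -1))  # flux at the right edge of each cell
    rho = rho - dt / dx * (F - np.roll(F, 1))
```

Because the update is conservative, total density is preserved exactly on the ring, and the scheme keeps the state within [0, rho_max]; the discontinuities it produces at shocks are precisely what makes the filtering distributions discussed in the abstract non-Gaussian.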
NASA Technical Reports Server (NTRS)
Kent, James; Holdaway, Daniel
2015-01-01
A number of geophysical applications require the use of the linearized version of the full model. One such example is in numerical weather prediction, where the tangent linear and adjoint versions of the atmospheric model are required for the 4DVAR inverse problem. The part of the model that represents the resolved scale processes of the atmosphere is known as the dynamical core. Advection, or transport, is performed by the dynamical core. It is a central process in many geophysical applications and is a process that often has a quasi-linear underlying behavior. However, over the decades since the advent of numerical modelling, significant effort has gone into developing many flavors of high-order, shape preserving, nonoscillatory, positive definite advection schemes. These schemes are excellent in terms of transporting the quantities of interest in the dynamical core, but they introduce nonlinearity through the use of nonlinear limiters. The linearity of the transport schemes used in Goddard Earth Observing System version 5 (GEOS-5), as well as a number of other schemes, is analyzed using a simple 1D setup. The linearized version of GEOS-5 is then tested using a linear third order scheme in the tangent linear version.
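The linearity analysis described above can be reproduced in a few lines: an unlimited scheme commutes with superposition, while a flux-limited scheme does not. The third-order upwind stencil and minmod limiter below are generic textbook choices for a 1D periodic setup, not the GEOS-5 transport schemes themselves.

```python
import numpy as np

def step_linear(q, c):
    """One forward-Euler step of linear third-order upwind-biased advection
    (Courant number c > 0) on a periodic grid."""
    qm2, qm1, qp1 = np.roll(q, 2), np.roll(q, 1), np.roll(q, -1)
    return q - c * (2 * qp1 + 3 * q - 6 * qm1 + qm2) / 6.0

def minmod(a, b):
    """Slope limiter: picks the smaller slope when signs agree, else zero."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step_limited(q, c):
    """First-order upwind plus a minmod-limited correction (nonlinear scheme)."""
    dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)   # limited slope
    qface = q + 0.5 * (1 - c) * dq                       # value at the right face
    return q - c * (qface - np.roll(qface, 1))           # conservative update

# Superposition test: does step(a + b) equal step(a) + step(b)?
rng = np.random.default_rng(1)
a, b, c = rng.standard_normal(64), rng.standard_normal(64), 0.4
lin_gap = np.max(np.abs(step_linear(a + b, c) - (step_linear(a, c) + step_linear(b, c))))
lim_gap = np.max(np.abs(step_limited(a + b, c) - (step_limited(a, c) + step_limited(b, c))))
```

Here `lin_gap` is at round-off level while `lim_gap` is not, which is exactly why the limiter must be dropped (or frozen) when building the tangent linear model.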
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris
Lattice spin-fermion models are important for studying correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically while the slow variables, generically referred to as the “spins,” are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The “traveling cluster approximation” (TCA) is a real space variant of the ED + MC method that makes it possible to solve spin-fermion problems on lattices with up to 10^3 sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites and, with some effort, on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.
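For readers unfamiliar with the ED + MC setup being parallelized, the serial loop can be sketched on a toy model: spinless fermions on an 8-site ring coupled to Ising-like classical variables. All parameters and the Hamiltonian are illustrative assumptions, not the TCA or any model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, J, MU = 8, 0.2, 1.0, 0.0  # sites, temperature, coupling, chemical potential

def free_energy(spins):
    # Fermion problem solved by exact diagonalization for a fixed classical
    # configuration: spinless fermions hopping on a ring, with the site
    # energy coupled to the classical variable S_i in {-1, +1}.
    H = np.zeros((L, L))
    for i in range(L):
        H[i, (i + 1) % L] = H[(i + 1) % L, i] = -1.0  # nearest-neighbor hopping
        H[i, i] = J * spins[i]                        # spin-fermion coupling
    eps = np.linalg.eigvalsh(H)
    # Grand-canonical free energy of non-interacting fermions.
    return -T * np.sum(np.log1p(np.exp(-(eps - MU) / T)))

spins = rng.choice([-1, 1], size=L)
F = free_energy(spins)
for _ in range(200):                   # Metropolis sweep over the classical spins
    i = rng.integers(L)
    spins[i] *= -1                     # propose a spin flip
    F_new = free_energy(spins)
    if rng.random() < np.exp(min(0.0, -(F_new - F) / T)):
        F = F_new                      # accept
    else:
        spins[i] *= -1                 # reject
```

Each proposed spin update requires a full diagonalization, O(L^3) work, which is what limits plain ED + MC to small lattices and motivates cluster-based variants such as the TCA.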
ECCD-induced tearing mode stabilization via active control in coupled NIMROD/GENRAY HPC simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.
2012-10-01
Actively controlled electron cyclotron current drive (ECCD) applied within magnetic islands formed by neoclassical tearing modes (NTMs) has been shown to control or suppress these modes. In conjunction with ongoing experimental efforts, the development and verification of integrated numerical models of this mode stabilization process is of paramount importance in determining optimal NTM stabilization strategies for ITER. In the advanced model developed by the SWIM Project, the equations/closures of extended (not reduced) MHD contain new terms arising from 3D (not toroidal or bounce-averaged) RF-induced quasilinear diffusion. The quasilinear operator formulation models the equilibration of driven current within the island using the same extended MHD dynamics which govern the physics of island formation, yielding a more accurate and self-consistent picture of 3D island response to RF drive. Results of computations which model ECRF deposition using ray tracing, assemble the 3D quasilinear operator from ray/profile data, and calculate the resultant forces within the extended MHD code will be presented. We also discuss the efficacy of various numerical active feedback control systems, which gather data from synthetic diagnostics to dynamically trigger and spatially align RF fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.
2014-09-01
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other.
The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary.
Eiben, Bjoern; Hipwell, John H.; Williams, Norman R.; Keshtgar, Mo; Hawkes, David J.
2016-01-01
Surgical treatment for early-stage breast carcinoma primarily necessitates breast conserving therapy (BCT), where the tumour is removed while preserving the breast shape. To date, there have been very few attempts to develop accurate and efficient computational tools that could be used in the clinical environment for pre-operative planning and oncoplastic breast surgery assessment. Moreover, from the breast cancer research perspective, there has been very little effort to model complex mechano-biological processes involved in wound healing. We address this by providing an integrated numerical framework that can simulate the therapeutic effects of BCT over the extended period of treatment and recovery. A validated, three-dimensional, multiscale finite element procedure that simulates breast tissue deformations and physiological wound healing is presented. In the proposed methodology, a partitioned, continuum-based mathematical model for tissue recovery and angiogenesis, and breast tissue deformation is considered. The effectiveness and accuracy of the proposed numerical scheme is illustrated through patient-specific representative examples. Wound repair and contraction numerical analyses of real MRI-derived breast geometries are investigated, and the final predictions of the breast shape are validated against post-operative follow-up optical surface scans from four patients. Mean (standard deviation) breast surface distance errors in millimetres of 3.1 (±3.1), 3.2 (±2.4), 2.8 (±2.7) and 4.1 (±3.3) were obtained, demonstrating the ability of the surgical simulation tool to predict, pre-operatively, the outcome of BCT to clinically useful accuracy. PMID:27466815
Transboundary impacts on regional ground water modeling in Texas
Rainwater, K.; Stovall, J.; Frailey, S.; Urban, L.
2005-01-01
Recent legislation required regional grassroots water resources planning across the entire state of Texas. The Texas Water Development Board (TWDB), the state's primary water resource planning agency, divided the state into 16 planning regions. Each planning group developed plans to manage both ground water and surface water sources and to meet future demands of various combinations of domestic, agricultural, municipal, and industrial water consumers. This presentation describes the challenges in developing a ground water model for the Llano Estacado Regional Water Planning Group (LERWPG), whose region includes 21 counties in the Southern High Plains of Texas. While surface water is supplied to several cities in this region, the vast majority of the regional water use comes from the High Plains aquifer system, often locally referred to as the Ogallala Aquifer. Over 95% of the ground water demand is for irrigated agriculture. The LERWPG had to predict the impact of future water demands, as projected by the TWDB, on the aquifer for the period 2000 to 2050. If detrimental impacts were noted, alternative management strategies had to be proposed. While much effort was spent on evaluating the current status of the ground water reserves, an appropriate numerical model of the aquifer system was necessary to demonstrate future impacts of the predicted withdrawals as well as the effects of the alternative strategies. The modeling effort was completed in the summer of 2000. This presentation concentrates on the political, scientific, and nontechnical issues in this planning process that complicated the modeling effort. Uncertainties in data, most significantly in distribution and intensity of recharge and withdrawals, significantly impacted the calibration and predictive modeling efforts.
Four predictive scenarios, including baseline projections, recurrence of the drought of record, precipitation enhancement, and reduced irrigation demand, were simulated to identify counties at risk of low final ground water storage volume or low levels of satisfied demand by 2050. Copyright © 2005 National Ground Water Association.
Population modeling for furbearer management
Johnson, D.H.; Sanderson, G.C.
1982-01-01
The management of furbearers has become increasingly complex as greater demands are placed on their populations. Correspondingly, needs for information to use in management have increased. Inadequate information leads the manager to err on the conservative side; unless the size of the 'harvestable surplus' is known, the population cannot be fully exploited. Conversely, information beyond what is needed becomes an unaffordable luxury. Population modeling has proven useful for organizing information on numerous game animals. Modeling serves to determine if information of the right kind and proper amount is being gathered; systematizes data collection, data interpretation, and decision making; and permits more effective management and better utilization of game populations. This report briefly reviews the principles of population modeling, describes what has been learned from previous modeling efforts on furbearers, and outlines the potential role of population modeling in furbearer management.
Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process
NASA Astrophysics Data System (ADS)
Macias, Jorge
2017-04-01
In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks were proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained for Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnack, Dalton D.
Final technical report for research performed by Dr. Thomas G. Jenkins in collaboration with Professor Dalton D. Schnack on SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics, DE-FC02-06ER54899, for the period of 8/15/06 - 8/14/11. This report centers on the Slow MHD physics campaign work performed by Dr. Jenkins while at UW-Madison and then at Tech-X Corporation. To make progress on the problem of how RF-induced currents affect magnetic island evolution in toroidal plasmas, a set of research approaches is outlined. Three approaches can be addressed in parallel. These are: (1) Analytically prescribe an additional term in Ohm's law to model the effect of localized ECCD current drive; (2) Introduce an additional evolution equation for the Ohm's law source term. Establish an RF source 'box' where information from the RF code couples to the fluid evolution; and (3) Carry out a more rigorous analytic calculation treating the additional RF terms in a closure problem. These approaches rely on the necessity of reinvigorating the computational modeling efforts of resistive and neoclassical tearing modes with present day versions of the numerical tools. For the RF community, the relevant action item is this: RF ray tracing codes need to be modified so that general three-dimensional spatial information can be obtained. Further, interface efforts between the two codes require work, as does an assessment of the numerical stability properties of the procedures to be used.
Genome Diversity and Evolution in the Budding Yeasts (Saccharomycotina)
Dujon, Bernard A.; Louis, Edward J.
2017-01-01
Considerable progress in our understanding of yeast genomes and their evolution has been made over the last decade with the sequencing, analysis, and comparisons of numerous species, strains, or isolates of diverse origins. The role played by yeasts in natural environments as well as in artificial manufactures, combined with the importance of some species as model experimental systems sustained this effort. At the same time, their enormous evolutionary diversity (there are yeast species in every subphylum of Dikarya) sparked curiosity but necessitated further efforts to obtain appropriate reference genomes. Today, yeast genomes have been very informative about basic mechanisms of evolution, speciation, hybridization, domestication, as well as about the molecular machineries underlying them. They are also irreplaceable to investigate in detail the complex relationship between genotypes and phenotypes with both theoretical and practical implications. This review examines these questions at two distinct levels offered by the broad evolutionary range of yeasts: inside the best-studied Saccharomyces species complex, and across the entire and diversified subphylum of Saccharomycotina. While obviously revealing evolutionary histories at different scales, data converge to a remarkably coherent picture in which one can estimate the relative importance of intrinsic genome dynamics, including gene birth and loss, vs. horizontal genetic accidents in the making of populations. The facility with which novel yeast genomes can now be studied, combined with the already numerous available reference genomes, offer privileged perspectives to further examine these fundamental biological questions using yeasts both as eukaryotic models and as fungi of practical importance. PMID:28592505
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Leung, Martin S. K.
1995-01-01
The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model-order and model-complexity reduction techniques have been investigated. Singular perturbation methods were first attempted and found to be unsatisfactory. The second approach, based on regular perturbation analysis, was subsequently investigated. It also failed because the aerodynamic effects (ignored in the zero order solution) are too large to be treated as perturbations. Therefore, the study demonstrates that perturbation methods alone (both regular and singular perturbations) are inadequate for use in developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation and the analytical method of regular perturbations. The concept of choosing intelligent interpolating functions is also introduced. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing the approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.
NASA Astrophysics Data System (ADS)
Keshtpoor, M.; Carnacina, I.; Yablonsky, R. M.
2016-12-01
Extratropical cyclones (ETCs) are the primary driver of storm surge events along the UK and northwest mainland Europe coastlines. In an effort to evaluate the storm surge risk in coastal communities in this region, a stochastic catalog is developed by perturbing the historical storm seeds of European ETCs to account for 10,000 years of possible ETCs. Numerical simulation of the storm surge generated by the full 10,000-year stochastic catalog, however, is computationally expensive and may take several months to complete with available computational resources. A new statistical regression model is developed to select the major surge-generating events from the stochastic ETC catalog. This regression model is based on the maximum storm surge, obtained via numerical simulations using a calibrated version of the Delft3D-FM hydrodynamic model with a relatively coarse mesh, of 1750 historical ETC events that occurred over the past 38 years in Europe. These numerically-simulated surge values were regressed to the local sea level pressure and the U and V components of the wind field at the location of 196 tide gauge stations near the UK and northwest mainland Europe coastal areas. The regression model suggests that storm surge values in the area of interest are highly correlated to the U- and V-component of wind speed, as well as the sea level pressure. Based on these correlations, the regression model was then used to select surge-generating storms from the 10,000-year stochastic catalog. Results suggest that roughly 105,000 events out of 480,000 stochastic storms are surge-generating events and need to be considered for numerical simulation using a hydrodynamic model. The selected stochastic storms were then simulated in Delft3D-FM, and the final refinement of the storm population was performed based on return period analysis of the 1750 historical event simulations at each of the 196 tide gauges in preparation for Delft3D-FM fine mesh simulations.
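The screening step can be sketched as follows. The coefficients, noise level, and threshold below are synthetic assumptions for illustration, not values fitted in the study; in the study itself the regression is fitted per tide gauge against coarse-mesh Delft3D-FM simulations of the 1750 historical events.

```python
import numpy as np

# Fit a linear regression of peak surge on local wind components (U, V) and
# sea-level pressure, then use the fit to flag which stochastic storms are
# worth a full hydrodynamic simulation.  Data here are synthetic.
rng = np.random.default_rng(1)
n = 1750
u, v = rng.normal(0, 10, n), rng.normal(0, 10, n)   # wind components, m/s
slp = rng.normal(1000, 15, n)                       # sea-level pressure, hPa
surge = 0.04 * u + 0.06 * v - 0.02 * (slp - 1013) + rng.normal(0, 0.05, n)

# Least-squares fit with an intercept term.
X = np.column_stack([u, v, slp - 1013, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, surge, rcond=None)

def is_surge_generating(u_s, v_s, slp_s, threshold=0.5):
    # Screen a candidate storm: predicted surge (metres) above threshold.
    return beta[0] * u_s + beta[1] * v_s + beta[2] * (slp_s - 1013) + beta[3] > threshold
```

Applied to every storm in a stochastic catalog, a screen like this is what reduces 480,000 candidates to the smaller surge-generating subset that must be run through the hydrodynamic model.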
NASA Astrophysics Data System (ADS)
Sholiyi, Olusegun Samuel
As the demand for smaller size, lighter weight, lower loss and cost of communications transmit and receive (T/R) modules increases, there is an urgent need to focus investigation on the major subsystems or components that can improve these parameters. Phase shifters contribute greatly to the cost of T/R modules, and thus this research investigation examines a new way to reduce the weight and cost by miniaturizing the phaser design. Characterization of hexaferrite powders compatible with the sequential multilayer micro-fabrication technology and numerical simulations of a novel rectangular micro-coaxial phase shifter are investigated. This effort aims to integrate ferrite material into a rectangular micro-coaxial waveguide at Ka-band using electromagnetic finite element numerical tools. The proposed technique exploits a rectangular coaxial waveguide with a symmetrically placed inner signal conductor inside an outer conductor connected to the ground. Strontium ferrite-SU8 composite is used as the anisotropic material of choice in the modelled design. Numerical modeling is employed using High Frequency Structure Simulator, HFSS, a 3-D full wave electromagnetic solver, for analyzing the performance of the device. Two model structures were designed for reciprocal and non-reciprocal applications. The first model (Model A) produced a tunable phase shift of almost 60 degrees/cm across 0 to 400 kA/m applied field and at 1800 Gauss. In model B, a non-reciprocal phase shift performance of 20 degrees/cm from a reference phase of 24 degrees at 0 A/m was realized at the same saturation magnetization. A return loss better than 20 dB and an insertion loss less than 1.5 dB were obtained for both models.
Modeling and Analysis of Realistic Fire Scenarios in Spacecraft
NASA Technical Reports Server (NTRS)
Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.
2015-01-01
An accidental fire inside a spacecraft is an unlikely, but very real emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport are unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard space-craft. This one-dimensional model simplifies the heat and mass transfer as well as toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).
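As a flavor of the kind of bound such reduced models provide, a well-mixed oxygen balance gives a hard cap on how much fuel a cabin fire can consume before oxygen drops to a flame-extinction limit. This is a back-of-envelope sketch with illustrative numbers, not the one-dimensional ICES model itself.

```python
# Well-mixed cabin oxygen balance: an upper bound on fuel burned before
# flame extinction.  All numbers are illustrative assumptions.
CABIN_VOLUME = 100.0      # m^3, free air volume
RHO_AIR = 1.2             # kg/m^3 at assumed cabin conditions
Y_O2_START = 0.23         # initial O2 mass fraction of air
Y_O2_EXTINCT = 0.16       # assumed flame-extinction O2 mass fraction
STOICH_O2 = 3.0           # kg of O2 consumed per kg of generic fuel burned

air_mass = RHO_AIR * CABIN_VOLUME                      # 120 kg of air
o2_available = air_mass * (Y_O2_START - Y_O2_EXTINCT)  # 8.4 kg of usable O2
max_fuel_burned = o2_available / STOICH_O2             # ~2.8 kg of fuel
```

A one-dimensional model adds heat and species transport on top of a balance like this, which is why it stays fast enough for parametric studies while FDS-class CFD does not.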
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise Paul
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other.
The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated. AGR-2 input data added.
Modelling the effects of treatment and quarantine on measles
NASA Astrophysics Data System (ADS)
Beay, Lazarus Kalvein
2018-03-01
Treatment and quarantine are efforts to cure patients and to contain the spread of diseases such as measles. The spread of measles can be expressed through mathematical modelling in the form of a nonlinear dynamical system. This study modelled the spread of measles while accounting for the effect of treatment and quarantine on infected individuals. Using the basic reproduction number of the model, the effects of treatment and quarantine in reducing the spread of measles can be analyzed. The basic reproduction number of the model decreases monotonically as the treatment and quarantine rates increase. Numerical simulations were conducted to support the analysis. The results show that treatment and quarantine applied to infectious individuals have a major influence on eliminating measles from the system.
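A minimal illustration of the mechanism (a generic SIR-type sketch with illustrative rates, not the paper's exact system): infected individuals leave the infectious class through recovery (gamma), treatment (tau), or quarantine (q), so for this toy model the basic reproduction number is beta / (gamma + tau + q), which decreases monotonically in both control rates.

```python
# Toy SIR-type measles model with treatment (tau) and quarantine (q) rates.
# All rate values are illustrative assumptions.
def r0(beta, gamma, tau, q):
    # Basic reproduction number: decreasing in both tau and q.
    return beta / (gamma + tau + q)

def simulate(beta, gamma, tau, q, days=200):
    # Forward-Euler integration of the S and I fractions.
    s, i = 0.999, 0.001
    dt = 0.1
    for _ in range(int(days / dt)):
        new_inf = beta * s * i
        s += dt * (-new_inf)
        i += dt * (new_inf - (gamma + tau + q) * i)
    return i  # infectious fraction at the end of the run

base = r0(1.5, 0.1, 0.0, 0.0)         # no controls: R0 = 15, epidemic
controlled = r0(1.5, 0.1, 0.7, 0.9)   # strong controls: R0 < 1, die-out
```

When the combined removal rate pushes R0 below one, the infectious fraction decays to zero, matching the paper's conclusion that sufficient treatment and quarantine eliminate measles from the system.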
Modeling Code Is Helping Cleveland Develop New Products
NASA Technical Reports Server (NTRS)
1998-01-01
Master Builders, Inc., is a 350-person company in Cleveland, Ohio, that develops and markets specialty chemicals for the construction industry. Developing new products involves creating many potential samples and running numerous tests to characterize the samples' performance. Company engineers enlisted NASA's help to replace cumbersome physical testing with computer modeling of the samples' behavior. Since the NASA Lewis Research Center's Structures Division develops mathematical models and associated computation tools to analyze the deformation and failure of composite materials, its researchers began a two-phase effort to modify Lewis' Integrated Composite Analyzer (ICAN) software for Master Builders' use. Phase I has been completed, and Master Builders is pleased with the results. The company is now working to begin implementation of Phase II.
NASA Astrophysics Data System (ADS)
Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent
2014-10-01
Onera, the French aerospace lab, develops and models active imaging systems to understand the relevant physical phenomena affecting these systems' performance. As a consequence, efforts have been devoted to the propagation of a pulse through the atmosphere and to target geometries and surface properties. These imaging systems must operate at night and in all ambient illumination and weather conditions in order to perform strategic surveillance for various worldwide operations. We have implemented codes for 2D and 3D laser imaging systems. As we aim to image a scene in the presence of rain, snow, fog or haze, we introduce such light-scattering effects in our numerical models and compare simulated images with measurements provided by commercial laser scanners.
Numerical modelling of cryogenic propellant behavior in low-G
NASA Technical Reports Server (NTRS)
Hochstein, John I.
1987-01-01
A partial survey is presented of recent research, sponsored by the NASA Lewis Research Center, into the computational modelling of cryogenic propellant behavior in a low gravity environment. This presentation is intended to provide insight into some of the specific problems being studied and into how these studies are part of an integrated plan to develop predictive capabilities. A brief description of the computational models developed to analyze jet induced mixing in cryogenic propellant tankage is presented along with representative results. Similar information is presented for a recent examination of on-orbit self-pressurization. A study of propellant reorientation has recently been initiated and preliminary results are included. The presentation concludes with a list of ongoing efforts and projected goals.
NASA Astrophysics Data System (ADS)
Chapman, Steven W.; Parker, Beth L.; Sale, Tom C.; Doner, Lee Ann
2012-08-01
It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients, and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores the validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high-resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers, which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean-water flushing, and 2) the two-layer analytical solution of Sale et al. (2008) involving a relatively simple scenario with an aquifer and underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization is applied to represent the problem geometry, resolve flow fields, capture advective transport in the sands and diffusive transfer with low permeability layers, and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models, capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making.
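The loading/flushing behavior the tank experiment probes can be sketched with a 1-D explicit finite-difference model of diffusion into and out of a clay layer. Parameters are illustrative assumptions, not the tank's calibrated values.

```python
import numpy as np

# 1-D diffusion into a low-permeability layer while the adjacent aquifer
# carries a plume (loading), then back out after the aquifer is flushed
# clean.  Explicit finite differences; D*DT/DX^2 = 0.4 <= 0.5 for stability.
D = 5.0e-10             # m^2/s, assumed effective diffusion coefficient in clay
DX, DT = 0.005, 2.0e4   # grid spacing (m) and time step (s)
NZ = 40                 # clay nodes; 0.2 m thick layer

def diffuse(c, c_boundary, steps):
    for _ in range(steps):
        c[0] = c_boundary                     # aquifer/clay interface
        lap = np.zeros_like(c)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[-1] = c[-2] - c[-1]               # no-flux far boundary
        c += (D * DT / DX**2) * lap
    return c

c = np.zeros(NZ)
c = diffuse(c, 1.0, 5000)     # ~3 years of loading at unit aquifer concentration
stored = c.sum()              # mass stored in the clay (grid units)
c = diffuse(c, 0.0, 5000)     # ~3 years of clean-water flushing
remaining = c.sum()           # mass still diffusing back out
```

Even after flushing for as long as the loading lasted, a substantial fraction of the stored mass remains and diffuses back out. Resolving the steep near-interface gradients driving this exchange is exactly why the paper's high-resolution discretization matters.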
A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.
Yao, Yijun; Verginelli, Iason; Suuberg, Eric M
2017-05-01
In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than the ones estimated by the numerical model by up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.
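A hedged sketch of the simplest building block behind such layered models: at steady state, purely diffusive 1-D transport through stacked soil layers behaves like electrical resistances in series, so a thin low-diffusivity layer controls the source-to-surface flux. The layer values below are illustrative assumptions, not parameters from the paper.

```python
# Steady 1-D diffusion through soil layers in series: each layer contributes
# a resistance L/D, and the total flux per unit source concentration is the
# reciprocal of the summed resistance.  Values are illustrative.
layers = [                       # (thickness m, effective diffusivity m^2/s)
    (1.0, 1.0e-6),               # sandy vadose zone
    (0.2, 1.0e-8),               # clay lens
    (0.3, 5.0e-7),               # capillary fringe
]

resistance = sum(L / D for L, D in layers)   # s/m, dominated by the clay lens
total_thickness = sum(L for L, _ in layers)
d_eff = total_thickness / resistance         # effective diffusivity, m^2/s
flux_per_unit_conc = 1.0 / resistance        # m/s per unit source concentration
```

Here the 0.2 m clay lens contributes over 90% of the total resistance, which illustrates why vertical heterogeneity, rather than total depth, often governs vapor intrusion attenuation.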
A Comprehensive Review of Existing Risk Assessment Models in Cloud Computing
NASA Astrophysics Data System (ADS)
Amini, Ahmad; Jamil, Norziana
2018-05-01
Cloud computing is a popular paradigm in information technology and computing, as it offers numerous advantages in terms of economic savings and minimal management effort. Although its elasticity and flexibility bring tremendous benefits, it still raises many information security issues due to its unique characteristics that allow ubiquitous computing. Therefore, the vulnerabilities and threats in cloud computing have to be identified, and a proper risk assessment mechanism has to be in place for better cloud computing management. Various quantitative and qualitative risk assessment models have been proposed but, to our knowledge, none of them is suitable for the cloud computing environment. In this paper, we compare and analyse the strengths and weaknesses of existing risk assessment models. We then propose a new risk assessment model that sufficiently addresses all the characteristics of cloud computing, which did not appear in the existing models.
Active control of ECCD-induced tearing mode stabilization in coupled NIMROD/GENRAY HPC simulations
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Kruger, Scott; Held, Eric
2013-10-01
Actively controlled ECCD applied in or near magnetic islands formed by NTMs has been shown to successfully control/suppress these modes, despite uncertainties in island O-point locations (where induced current is most stabilizing) relative to the RF deposition region. Integrated numerical models of the mode stabilization process can resolve these uncertainties and augment experimental efforts to determine optimal ITER NTM stabilization strategies. The advanced SWIM model incorporates RF effects in the equations/closures of extended MHD as 3D (not toroidal or bounce-averaged) quasilinear diffusion coefficients. Equilibration of driven current within the island geometry is modeled using the same extended MHD dynamics governing the physics of island formation, yielding a more accurate, self-consistent picture of island response to RF drive. Additionally, a numerical active feedback control system gathers data from synthetic diagnostics to dynamically trigger and spatially align the RF fields. Computations which model the RF deposition using ray tracing, assemble the 3D QL operator from ray and profile data, calculate the resultant xMHD forces, and dynamically realign the RF to more efficiently stabilize modes are presented; the efficacy of various control strategies is also discussed. Supported by the SciDAC Center for Extended MHD Modeling (CEMM); see also https://cswim.org.
Modeling Giant Sawtooth Modes in DIII-D using the NIMROD code
NASA Astrophysics Data System (ADS)
Kruger, Scott; Jenkins, Thomas; Held, Eric; King, Jacob; NIMROD Team
2014-10-01
Ongoing efforts to model giant sawtooth cycles in DIII-D shot 96043 using NIMROD are summarized. In this discharge, an energetic ion population induced by RF heating modifies the sawtooth stability boundary, supplanting the conventional sawtooth cycle with longer-period giant sawtooth oscillations of much larger amplitude. NIMROD has the unique capability of being able to use both continuum kinetic and particle-in-cell numerical schemes to model the RF-induced hot-particle distribution effects on the sawtooth stability. This capability is used to numerically investigate the role played by the form of the energetic particle distribution, including a possible high-energy tail drawn out by the RF, to study the sawtooth threshold and subsequent nonlinear evolution. Equilibrium reconstructions from the experimental data are used to enable these detailed validation studies. Effects of other parameters on the sawtooth behavior (such as the plasma Lundquist number and hot-particle β-fraction) are also considered. Ultimately, we hope to assess the degree to which NIMROD's extended MHD model correctly simulates the observed linear onset and nonlinear behavior of the giant sawtooth, and to establish its reliability as a predictive modeling tool for these modes. This work was initiated by the late Dr. Dalton Schnack. Equilibria were provided by Dr. A. Turnbull of General Atomics.
Modeling nuclear processes by Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my
2015-04-29
Modelling and simulation are essential parts of the study of dynamic systems behaviour. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted, or in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviour. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, delayed neutron effects, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.
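The reactor point kinetics equations mentioned above can also be sketched outside Simulink; a minimal one-delayed-group explicit-Euler integration, with illustrative (not reactor-specific) constants, shows the structure that the Simulink blocks implement:

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, big_lambda=1e-3,
                   n0=1.0, t_end=1.0, dt=1e-4):
    """One-delayed-group point kinetics, explicit Euler (illustrative):
    dn/dt = ((rho - beta) / Lambda) * n + lam * C
    dC/dt = (beta / Lambda) * n - lam * C
    Returns the relative neutron population n at t_end."""
    n = n0
    c = beta * n0 / (big_lambda * lam)   # start at equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / big_lambda) * n + lam * c
        dc = (beta / big_lambda) * n - lam * c
        n += dt * dn
        c += dt * dc
    return n
```

At zero reactivity the population holds steady, while a small positive step reactivity produces the familiar prompt jump followed by slow delayed-neutron growth.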
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westbrook, C K; Mizobuchi, Y; Poinsot, T J
2004-08-26
Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel, and homogeneous charge compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.
Presenting Numerical Modelling of Explosive Volcanic Eruption to a General Public
NASA Astrophysics Data System (ADS)
Demaria, C.; Todesco, M.; Neri, A.; Blasi, G.
2001-12-01
Numerical modeling of explosive volcanic eruptions has been widely applied, during the last decades, to study pyroclastic flow dispersion along volcano flanks and to evaluate their impact on urban areas. Results from these transient multi-phase and multi-component simulations are often reproduced in the form of computer animations, representing the spatial and temporal evolution of relevant flow variables (such as temperature, or particle concentration). Despite being a sophisticated, technical tool to analyze and share modeling results within the scientific community, these animations look like colorful cartoons of an erupting volcano and are especially suited to being shown to a general public. Thanks to their particular appeal, and to the large interest usually raised by exploding volcanoes, these animations have been presented several times on television and in magazines, and are currently displayed in a permanent exhibition at the Vesuvius Observatory in Naples. This work represents an effort to produce an accompanying tool for these animations, capable of explaining to a large audience the scientific meaning of what could otherwise look like a graphical exercise. In research aimed at the study of dangerous, explosive volcanoes, improving the general understanding of scientific results plays an important role in risk perception. An educated population has a better chance of adopting appropriate behavior, i.e., behavior that could lead, over the long term, to a reduction of the potential risk. In this sense, correct dissemination of scientific results, while improving the population's confidence in the scientific community, should belong to the strategies adopted to mitigate volcanic risk.
Given the relevance of the long-term goal of such a dissemination experiment, this work represents an interdisciplinary effort, combining scientific expertise with specific competence from modern communication science and risk perception studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sovinec, Carl R.
The University of Wisconsin-Madison component of the Plasma Science and Innovation Center (PSI Center) contributed to modeling capabilities and algorithmic efficiency of the Non-Ideal Magnetohydrodynamics with Rotation (NIMROD) Code, which is widely used to model macroscopic dynamics of magnetically confined plasma. It also contributed to the understanding of direct-current (DC) injection of electrical current for initiating and sustaining plasma in three spherical torus experiments: the Helicity Injected Torus-II (HIT-II), the Pegasus Toroidal Experiment, and the National Spherical Torus Experiment (NSTX). The effort was funded through the PSI Center's cooperative agreement with the University of Washington and Utah State University over the period of March 1, 2005 - August 31, 2016. In addition to the computational and physics accomplishments, the Wisconsin effort contributed to the professional education of four graduate students and two postdoctoral research associates. The modeling for HIT-II and Pegasus was directly supported by the cooperative agreement, and contributions to the NSTX modeling were in support of work by Dr. Bickford Hooper, who was funded through a separate grant. Our primary contribution to model development is the implementation of detailed closure relations for collisional plasma. Postdoctoral associate Adam Bayliss implemented the temperature-dependent effects of Braginskii's parallel collisional ion viscosity. As a graduate student, John O'Bryan added runtime options for Braginskii's models and Ji's K2 models of thermal conduction with magnetization effects and thermal equilibration. As a postdoctoral associate, O'Bryan added the magnetization effects for ion viscosity. Another area of model development completed through the PSI-Center is the implementation of Chodura's phenomenological resistivity model. 
Finally, we investigated and tested linear electron parallel viscosity, leveraged by support from the Center for Extended Magnetohydrodynamic Modeling (CEMM). Work on algorithmic efficiency improved NIMROD's element-based computations. We reordered arrays and eliminated a level of looping for computations over the data points that are used for numerical integration over elements. Moreover, the reordering allows fewer and larger communication calls when using distributed-memory parallel computation, thereby avoiding a data starvation problem that limited parallel scaling over NIMROD's Fourier components for the periodic coordinate. Together with improved parallel preconditioning, work that was supported by CEMM, these developments allowed NIMROD's first scaling to over 10,000 processor cores. Another algorithm improvement supported by the PSI Center is nonlinear numerical diffusivities for implicit advection. We also developed the Stitch code to enhance the flexibility of NIMROD's preprocessing. Our simulations of HIT-II considered conditions with and without fluctuation-induced amplification of poloidal flux, but our validation efforts focused on conditions without amplification. A significant finding is that NIMROD reproduces the dependence of the net plasma current on the imposed poloidal flux. The modeling of Pegasus startup from localized DC injectors predicted that development of a tokamak-like configuration occurs through a sequence of current-filament merger events. Comparison of experimentally measured and numerically computed cross-power spectra enhances confidence in NIMROD's simulation of magnetic fluctuations; however, energy confinement remains an open area for further research. Our contributions to the NSTX study include adaptation of the helicity-injection boundary conditions from the HIT-II simulations and support for linear analysis and computation of 3D current-driven instabilities.
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla
2015-11-01
Coalescing binaries of neutron stars and black holes are among the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, −0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ −0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. 
Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit from using SEOBNRv2 waveform templates when focused on neutron star-black hole systems with q ≲ 7 and χBH ∈ [−0.9, +0.6]. For larger black hole spins and/or binary mass ratios, we recommend the models be further investigated as NR simulations in that region of the parameter space become available.
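The mismatches quoted above are built on a noise-weighted overlap between waveforms. A minimal flat-noise sketch (ignoring the detector power spectral density and the maximization over time and phase shifts used in real searches) shows how an accumulating phase error of the kind described in (i) reduces the overlap; the waveforms and dephasing rate below are purely illustrative:

```python
import math

def overlap(h1, h2):
    """Normalized inner product of two discretely sampled waveforms
    (flat-noise approximation); mismatch = 1 - overlap."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

ts = [i * 0.001 for i in range(4000)]
h = [math.sin(200.0 * t) for t in ts]
# Same signal with a slowly accumulating phase error (hypothetical model error)
h_dephased = [math.sin(200.0 * t + 0.1 * t * t) for t in ts]
mismatch = 1.0 - overlap(h, h_dephased)
```

Even a phase drift that stays under two radians over the record produces a mismatch of tens of percent, which is why slowly accumulating early-inspiral dephasing dominates the model disagreements discussed above.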
NASA Astrophysics Data System (ADS)
Bacchi, Vito; Duluc, Claire-Marie; Bertrand, Nathalie; Bardet, Lise
2017-04-01
In recent years, in the context of hydraulic risk assessment, much effort has been put into the development of sophisticated numerical model systems able to reproduce the surface flow field. These numerical models are based on a deterministic approach, and the results are presented in terms of measurable quantities (water depths, flow velocities, etc.). However, the modelling of surface flows involves numerous uncertainties, associated with the numerical structure of the model, with knowledge of the physical parameters that force the system, and with the randomness inherent to natural phenomena. As a consequence, dealing with uncertainties can be a difficult task for both modelers and decision-makers [Ioss, 2011]. In the context of nuclear safety, IRSN assesses studies conducted by operators for different reference flood situations (local rain, small or large watershed flooding, sea levels, etc.), which are defined in guide ASN N°13 [ASN, 2013]. The guide provides some recommendations for dealing with uncertainties, proposing a specific conservative approach to cover hydraulic modelling uncertainties. Depending on the situation, the influencing parameter might be the Strickler coefficient, levee behavior, simplified topographic assumptions, etc. Obviously, identifying the most influential parameter and giving it a penalizing value is challenging and usually questionable. In this context, IRSN has conducted cooperative research activities since 2011 (with Compagnie Nationale du Rhône, the I-CiTy laboratory of Polytech'Nice, the Atomic Energy Commission, and the Bureau de Recherches Géologiques et Minières) to investigate the feasibility and benefits of Uncertainty Analysis (UA) and Global Sensitivity Analysis (GSA) applied to hydraulic modelling. A specific methodology was tested using the computational environment Promethee, developed by IRSN, which allows uncertainty propagation studies to be carried out. 
This methodology was applied with various numerical models and in different contexts, such as river flooding on the Rhône River (Nguyen et al., 2015) and on the Garonne River, the study of local rainfall (Abily et al., 2016), and tsunami generation in the framework of the ANR research project TANDEM. The feedback from these previous studies is analyzed (technical problems, limitations, interesting results, etc.), and perspectives are finally given, together with a discussion of how a probabilistic approach to uncertainties could improve the current deterministic methodology for risk assessment (as well as for other engineering applications).
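The uncertainty-propagation idea described above can be sketched with a plain Monte Carlo over the Strickler coefficient in the Manning-Strickler formula V = K · Rh^(2/3) · S^(1/2); the coefficient range, hydraulic radius, and slope below are hypothetical, and this does not use Promethee itself:

```python
import random

def strickler_velocity(k, r_h, slope):
    """Manning-Strickler mean velocity: V = K * Rh^(2/3) * S^(1/2)."""
    return k * r_h ** (2.0 / 3.0) * slope ** 0.5

def propagate(n=20000, seed=0):
    """Propagate a uniform uncertainty on the Strickler coefficient K
    (hypothetical range 25-40 m^(1/3)/s) through the velocity formula."""
    rng = random.Random(seed)
    samples = [strickler_velocity(rng.uniform(25.0, 40.0), r_h=3.0, slope=1e-3)
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((v - mean) ** 2 for v in samples) / (n - 1)
    return mean, var ** 0.5
```

The resulting output distribution, rather than a single penalized value, is the kind of product a probabilistic approach offers over the deterministic conservative approach of the guide.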
Hemispheric Asymmetries of Magnetosphere-Ionosphere-Thermosphere Dynamics
NASA Astrophysics Data System (ADS)
Perlongo, Nicholas James
The geospace environment, comprised of the magnetosphere-ionosphere-thermosphere system, is a highly variable and non-linearly coupled region. The dynamics of the system are driven primarily by electromagnetic and particle radiation emanating from the Sun that occasionally intensify into what are known as solar storms. Understanding the interaction of these storms with the near Earth space environment is essential for predicting and mitigating the risks associated with space weather that can irreparably damage spacecraft, harm astronauts, disrupt radio and GPS communications, and even cause widespread power outages. The geo-effectiveness of solar storms has hemispheric, seasonal, local time, universal time, and latitudinal dependencies. This dissertation investigates those dependencies through a series of four concentrated modeling efforts. The first study focuses on how variations in the solar wind electric field impact the thermosphere at different times of the day. Idealized simulations using the Global Ionosphere Thermosphere Model (GITM) revealed that perturbations in thermospheric temperature and density were greater when the universal time of storm onset was such that the geomagnetic pole was pointed more towards the sun. This universal time effect was greater in the southern hemisphere where the offset of the geomagnetic pole is larger. The second study presents a model validation effort using GITM and the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIE-GCM) compared to GPS Total Electron Content (TEC) observations. The results were divided into seasonal, regional, and local time bins finding that the models performed best near the poles and on the dayside. Diffuse aurora created by electron loss in the inner magnetosphere is an important input to GITM that has primarily been modeled using empirical relationships. 
In the third study, this was addressed by developing the Hot Electron Ion Drift Integrator (HEIDI) ring current model to include a self-consistent description of the aurora and electric field. The model was then coupled to GITM, allowing for a more physical aurora. Using this new configuration in the fourth study, the ill-constrained electron scattering rate was shown to have a large impact on auroral results. This model was applied to simulate a geomagnetic storm during each solstice. The hemispheric asymmetry and seasonal dependence of the storm-time TEC was investigated, finding that northern hemisphere winter storms are most geo-effective when the North American sector is on the dayside. Overall, the research presented in this thesis strives to accomplish two major goals. First, it describes an advancement of a numerical model of the ring current that can be further developed and used to improve our understanding of the interactions between the ionosphere and magnetosphere. Second, the time and spatial dependencies of the geospace response to solar forcing were discovered through a series of modeling efforts. Despite these advancements, there are still numerous open questions, which are also discussed.
Discrimination of correlated and entangling quantum channels with selective process tomography
Dumitrescu, Eugene; Humble, Travis S.
2016-10-10
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show that selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
Mental energy: Assessing the motivation dimension.
Barbuto, John E
2006-07-01
Content-based theories of motivation may best utilize the meta-theory of work motivation. Process-based theories may benefit most from adopting Locke and Latham's goal-setting approaches and measures. Decision-making theories should utilize the measurement approach operationalized by Ilgen et al. Sustained-effort theories should utilize approaches similar to those used in numerous studies of intrinsic motivation, the measurement of which is typically observational or attitudinal. This paper explored the implications of the four approaches to studying motivation for the newly established model of mental energy. The approach taken to examining motivation informs the measurement of mental energy. Specific recommendations for each approach were developed and provided. As a result of these efforts, it will now be possible to diagnose, measure, and experimentally test for changes in human motivation, which is one of the three major components of mental energy.
Terrestrial Planet Finder: Technology Development Plans
NASA Technical Reports Server (NTRS)
Lindensmith, Chris
2004-01-01
One of humanity's oldest questions is whether life exists elsewhere in the universe. The Terrestrial Planet Finder (TPF) mission will survey stars in our stellar neighborhood to search for planets and perform spectroscopic measurements to identify potential biomarkers in their atmospheres. In response to the recently published President's Plan for Space Exploration, TPF has plans to launch a visible-light coronagraph in 2014, and a separated-spacecraft infrared interferometer in 2016. Substantial funding has been committed to the development of the key technologies that are required to meet these goals for launch in the next decade. Efforts underway through industry and university contracts and at JPL include a number of system and subsystem testbeds, as well as components and numerical modeling capabilities. The science, technology, and design efforts are closely coupled to ensure that requirements and capabilities will be consistent and meet the science goals.
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia G.
1999-01-01
The NASA Lewis Research Center is developing an environment for analyzing and designing aircraft engines: the Numerical Propulsion System Simulation (NPSS). NPSS will integrate multiple disciplines, such as aerodynamics, structure, and heat transfer, and will make use of numerical "zooming" on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS uses the latest computing and communication technologies to capture complex physical processes in a timely, cost-effective manner. The vision of NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Through the NASA/Industry Cooperative Effort agreement, NASA Lewis and industry partners are developing a new engine simulation called the National Cycle Program (NCP). NCP, which is the first step toward NPSS and is its initial framework, supports the aerothermodynamic system simulation process for the full life cycle of an engine. U.S. aircraft and airframe companies recognize NCP as the future industry standard common analysis tool for aeropropulsion system modeling. The estimated potential payoff for NCP is a $50 million/yr savings to industry through improved engineering productivity.
Taylor bubbles at high viscosity ratios: experiments and numerical simulations
NASA Astrophysics Data System (ADS)
Hewakandamby, Buddhika; Hasan, Abbas; Azzopardi, Barry; Xie, Zhihua; Pain, Chris; Matar, Omar
2015-11-01
The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube, often occurring in gas-liquid slug flows in many industrial applications, particularly oil and gas production. The objective of this study is to investigate the fluid dynamics of three-dimensional Taylor bubble rising in highly viscous silicone oil in a vertical pipe. An adaptive unstructured mesh modelling framework is adopted here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rising and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control volume and finite element formulation, a `volume of fluid'-type method for the interface-capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Experimental results for the Taylor bubble shape and rise velocity are presented, together with numerical results for the dynamics of the bubbles. A comparison of the simulation predictions with experimental data available in the literature is also presented to demonstrate the capabilities of our numerical method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.
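For reference against the measured rise velocities above, the classical inviscid estimate for a Taylor bubble in a vertical pipe (Dumitrescu; Davies and Taylor) is U = Fr · sqrt(g · D) with Fr ≈ 0.351. Highly viscous liquids such as the silicone oils studied here rise more slowly, so this is only an upper-bound sketch, not the paper's result:

```python
def taylor_bubble_velocity(diameter, g=9.81, froude=0.351):
    """Inviscid Taylor-bubble rise velocity U = Fr * sqrt(g * D)
    (Dumitrescu / Davies & Taylor); viscous liquids fall below this."""
    return froude * (g * diameter) ** 0.5

# Example: a 50 mm pipe gives roughly a quarter of a metre per second.
u_inviscid = taylor_bubble_velocity(0.05)
```

Comparing measured velocities against this inviscid ceiling is a common first check on both experiments and simulations of the kind described above.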
NASA Astrophysics Data System (ADS)
Wright, D. J.; O'Dea, E.; Cushing, J. B.; Cuny, J. E.; Toomey, D. R.; Hackett, K.; Tikekar, R.
2001-12-01
The East Pacific Rise (EPR) from 9-10° N is currently our best-studied section of fast-spreading mid-ocean ridge. During several decades of investigation it has been explored by the full spectrum of ridge investigators, including chemists, biologists, geologists, and geophysicists. These studies, and those that are ongoing, provide a wealth of observational data, results, and data-driven theoretical (often numerical) studies that have not yet been fully utilized either by research scientists or by professional educators. While the situation is improving, a large amount of data, results, and related theoretical models still exist either in an inert, non-interactive form (e.g., journal publications) or as unlinked and currently incompatible computer data or algorithms. Infrastructure is needed not just for ready access to data, but for linkage of disparate data sets (data to data) as well as data to models, in order to quantitatively evaluate hypotheses, refine numerical simulations, and explore new relations between observables. The prototype of a computational environment and toolset, called the Virtual Research Vessel (VRV), is being developed to provide scientists and educators with ready access to data, results, and numerical models. While this effort is focused on the EPR 9N region, the resulting software tools and infrastructure should be helpful in establishing similar systems for other sections of the global mid-ocean ridge. 
Work in progress includes efforts to develop: (1) a virtual database to incorporate diverse data types with domain-specific metadata into a global schema that allows web queries across different marine geology data sets, and an analogous declarative (database-available) description of tools and models; (2) the ability to move data between GIS and the above DBMS, and tools to encourage data submission to archives; (3) tools for finding and viewing archives, and translating between formats; (4) support for "computational steering" (tool composition) and model coupling (e.g., the ability to run tool compositions locally but access input data from the web, APIs to support coupling such as invoking programs that are running remotely, and help in writing data wrappers to publish programs); (5) support of migration paths for prototyped model coupling; and (6) export of marine geological data and data analysis to the undergraduate classroom (VRV-ET, "Educational Tool"). See the main VRV web site at http://oregonstate.edu/dept/vrv and the VRV-ET web site at: http://www.cs.uoregon.edu/research/vrv-et.
Electrostatic atomization--Experiment, theory and industrial applications
NASA Astrophysics Data System (ADS)
Okuda, H.; Kelly, Arnold J.
1996-05-01
Experimental and theoretical research has been initiated at the Princeton Plasma Physics Laboratory on the electrostatic atomization process in collaboration with Charged Injection Corporation. The goal of this collaboration is to set up a comprehensive research and development program on the electrostatic atomization at the Princeton Plasma Physics Laboratory so that both institutions can benefit from the collaboration. Experimental, theoretical and numerical simulation approaches are used for this purpose. An experiment consisting of a capillary sprayer combined with a quadrupole mass filter and a charge detector was installed at the Electrostatic Atomization Laboratory to study fundamental properties of the charged droplets such as the distribution of charges with respect to the droplet radius. In addition, a numerical simulation model is used to study interaction of beam electrons with atmospheric pressure water vapor, supporting an effort to develop an electrostatic water mist fire-fighting nozzle.
Numerical analysis of wet separation of particles by density differences
NASA Astrophysics Data System (ADS)
Markauskas, D.; Kruggel-Emden, H.
2017-07-01
Wet particle separation is widely used in mineral processing and plastic recycling to separate mixtures of particulate materials into further usable fractions due to density differences. This work presents efforts aiming to numerically analyze the wet separation of particles with different densities. In the current study the discrete element method (DEM) is used for the solid phase while the smoothed particle hydrodynamics (SPH) is used for modeling of the liquid phase. The two phases are coupled by the use of a volume averaging technique. In the current study, simulations of spherical particle separation were performed. In these simulations, a set of generated particles with two different densities is dropped into a rectangular container filled with liquid. The results of simulations with two different mixtures of particles demonstrated how separation depends on the densities of particles.
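A back-of-the-envelope view of why density differences drive the separation (far simpler than the coupled DEM-SPH model used in this work) is the Stokes-regime terminal velocity, whose sign decides whether a particle sinks or floats in the liquid; the material values below are hypothetical:

```python
def stokes_terminal_velocity(rho_p, rho_f, diameter, mu, g=9.81):
    """Stokes-regime terminal velocity of a sphere in a liquid:
    v = (rho_p - rho_f) * g * d^2 / (18 * mu).
    Positive means the particle sinks; negative means it floats.
    Valid only for small particle Reynolds numbers."""
    return (rho_p - rho_f) * g * diameter ** 2 / (18.0 * mu)

# Hypothetical mineral (2500 kg/m^3) and plastic (900 kg/m^3) particles
# of 0.1 mm diameter in water (1000 kg/m^3, mu = 1e-3 Pa s):
v_mineral = stokes_terminal_velocity(2500.0, 1000.0, 1e-4, 1e-3)
v_plastic = stokes_terminal_velocity(900.0, 1000.0, 1e-4, 1e-3)
```

The resolved DEM-SPH simulations capture additionally the crowding, wake, and interface effects that this single-particle estimate ignores.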
NASA Astrophysics Data System (ADS)
Wang, Jia Jie; Wriedt, Thomas; Han, Yi Ping; Mädler, Lutz; Jiao, Yong Chang
2018-05-01
Light scattering by a radially inhomogeneous droplet, modeled as a multilayered sphere, is investigated within the framework of Generalized Lorenz-Mie Theory (GLMT), with particular attention devoted to the analysis of the internal field distribution under shaped beam illumination. To circumvent numerical difficulties in the computation of the internal field for an absorbing or non-absorbing droplet with a large size parameter, a recursive algorithm is proposed by reformulating the equations for the expansion coefficients. Two approaches are proposed for the prediction of the internal field distribution: a rigorous method and an approximation method. The developed computer code is found to be stable over a wide range of size parameters. Numerical computations are carried out to simulate the internal field distributions of a radially inhomogeneous droplet illuminated by a focused Gaussian beam.
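The kind of numerical difficulty that motivates such recursive reformulations can be shown with a hedged sketch (not the paper's GLMT algorithm): upward recurrence for the spherical Bessel functions entering Mie/GLMT expansion coefficients is unstable when the order exceeds the size parameter, so stable codes recur downward and normalize at the end.

```python
# Hedged sketch of a standard stable recursion, not the paper's code:
# spherical Bessel functions j_n(x) computed by downward recurrence,
# normalized using the closed form j_0(x) = sin(x)/x. The padding of the
# starting order is an illustrative choice.
import math

def spherical_jn_downward(n_max, x, pad=15):
    """Return [j_0(x), ..., j_n_max(x)] via downward recurrence."""
    n_start = n_max + pad
    jn = [0.0] * (n_start + 2)
    jn[n_start + 1], jn[n_start] = 0.0, 1e-30  # arbitrary seed, rescaled below
    for n in range(n_start, 0, -1):
        # recurrence: j_{n-1}(x) = (2n+1)/x * j_n(x) - j_{n+1}(x)
        jn[n - 1] = (2 * n + 1) / x * jn[n] - jn[n + 1]
    scale = (math.sin(x) / x) / jn[0]  # normalize against exact j_0
    return [v * scale for v in jn[: n_max + 1]]

js = spherical_jn_downward(5, 2.0)
assert abs(js[0] - math.sin(2.0) / 2.0) < 1e-12
```

The paper's recursive algorithm for layered spheres addresses an analogous loss of precision in the layer-to-layer propagation of expansion coefficients at large size parameters.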
LES, DNS and RANS for the analysis of high-speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Adumitroaie, V.; Colucci, P. J.; Taulbee, D. B.; Givi, P.
1995-01-01
The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds-averaged Navier-Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Aug. 1994 - 31 Jul. 1995, we have focused our efforts on two programs: (1) development of explicit algebraic moment closures for statistical descriptions of compressible reacting flows and (2) development of Monte Carlo numerical methods for LES of chemically reacting flows.
Validated Numerical Models for the Convective Extinction of Fuel Droplets (CEFD)
NASA Technical Reports Server (NTRS)
Gogos, George; Bowen, Brent; Nickerson, Jocelyn S.
2002-01-01
The NASA Nebraska Space Grant (NSGC) & EPSCoR programs have continued their effort to support outstanding research endeavors by funding the Numerical Simulation of the Combustion of Fuel Droplets study at the University of Nebraska at Lincoln (UNL). This team of researchers has developed a transient numerical model to study the combustion of suspended and moving droplets. The engines that propel missiles, jets, and many other devices depend upon combustion. Therefore, data concerning the combustion of fuel droplets is of immediate relevance to aviation and aeronautical personnel, especially those involved in flight operations. The experiments being conducted by Dr. Gogos's and Dr. Nayagam's research teams allow investigators to gather data for comparison with theoretical predictions of burning rates, flame structures, and extinction conditions. The consequent improved fundamental understanding of droplet combustion may contribute to the clean and safe utilization of fossil fuels (Williams, Dryer, Haggard & Nayagam, 1997). The present state of knowledge on convective extinction of fuel droplets derives from experiments conducted under normal-gravity conditions. However, any data obtained with suspended droplets under normal gravity are grossly affected by gravity. The need to obtain experimental data under microgravity conditions is therefore well justified and addresses one of the goals of NASA's Human Exploration and Development of Space (HEDS) microgravity combustion experiment.
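The burning-rate predictions mentioned above are classically summarized by the quasi-steady "d-squared law," which a hedged sketch can illustrate (this is not UNL's transient model, and the burning-rate constant below is an assumed illustrative value, not a measurement):

```python
# Hedged sketch: the classical d^2 law of droplet combustion, in which the
# square of the droplet diameter decreases linearly in time. K is an
# assumed illustrative burning-rate constant, not from the cited work.

def droplet_diameter(d0, K, t):
    """Diameter (m) at time t (s) under the d^2 law; 0 after burnout."""
    d2 = d0**2 - K * t
    return d2**0.5 if d2 > 0.0 else 0.0

d0 = 1e-3              # initial diameter: 1 mm (assumed)
K = 8e-7               # burning-rate constant, m^2/s (assumed)
t_burnout = d0**2 / K  # droplet lifetime predicted by the d^2 law

print(round(t_burnout, 2))  # 1.25 (seconds)
```

Convective flow and gravity-driven buoyancy distort this idealized picture, which is precisely why the microgravity experiments described in the abstract are needed to validate extinction predictions.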
Remote Infrared Thermography for In-Flight Flow Diagnostics
NASA Technical Reports Server (NTRS)
Shiu, H. J.; vanDam, C. P.
1999-01-01
The feasibility of remote in-flight boundary layer visualization via infrared in incompressible flow was established in earlier flight experiments. The past year's efforts focused on refining and determining the extent and accuracy of this technique of remote in-flight flow visualization via infrared. Investigations were made into flow separation visualization, visualization at transonic conditions, shock visualization, post-processing to mitigate banding noise in the NITE Hawk's thermograms, and a numeric model to predict surface temperature distributions. Although further flight tests are recommended, this technique continues to be promising.
MHD Modeling of the Interaction of the Solar Wind With Venus
NASA Technical Reports Server (NTRS)
Steinolfson, R. S.
1996-01-01
The primary objective of this research program is to improve our understanding of the physical processes occurring in the interaction of the solar wind with Venus. This will be accomplished through the use of numerical solutions of the two- and three-dimensional magnetohydrodynamic (MHD) equations and through comparisons of the computed results with available observations. A large portion of this effort involves the study of processes due to the presence of the magnetic field and the effects of mass loading. Published papers are included in the appendix.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shi, J.; Chen, S. S.
2007-01-01
Advances in computing power allow atmospheric prediction models to be run at progressively finer scales of resolution, using increasingly sophisticated physical parameterizations and numerical methods. The representation of cloud microphysical processes is a key component of these models. Over the past decade, both research and operational numerical weather prediction models have started using more complex microphysical schemes that were originally developed for high-resolution cloud-resolving models (CRMs). A recent report to the United States Weather Research Program (USWRP) Science Steering Committee specifically calls for the replacement of implicit cumulus parameterization schemes with explicit bulk schemes in numerical weather prediction (NWP) as part of a community effort to improve quantitative precipitation forecasts (QPF). An improved Goddard bulk microphysical parameterization is implemented into a state-of-the-art next-generation Weather Research and Forecasting (WRF) model. High-resolution model simulations are conducted to examine the impact of microphysical schemes on two different weather events (a midlatitude linear convective system and an Atlantic hurricane). The results suggest that microphysics has a major impact on the organization and precipitation processes associated with a summer midlatitude convective line system. The 3ICE scheme with a cloud ice-snow-hail configuration led to better agreement with observations in terms of the simulated narrow convective line and rainfall intensity. This is because the 3ICE-hail scheme includes a dense precipitating ice (hail) particle with a very fast fall speed (over 10 m/s). For an Atlantic hurricane case, varying the microphysical schemes had no significant impact on the track forecast but did affect the intensity (important for air-sea interaction).
Using SpF to Achieve Petascale for Legacy Pseudospectral Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Jiang, Weiyuan
2014-01-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures.
In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.
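The transpose-based pattern at the heart of such pseudospectral frameworks can be sketched in a few lines. This is not SpF's API: the serial NumPy transpose below merely stands in for the global (MPI) data redistribution, and the two FFT passes play the role of the in-processor kernels that each operate along a locally contiguous axis.

```python
# Hedged sketch of the kernel/transpose decomposition common to
# pseudospectral codes (not SpF itself): a 2D spectral transform is split
# into two 1D kernels separated by a global transpose.
import numpy as np

def spectral_transform_2d(field):
    """2D FFT expressed as two 1D passes with a transpose in between."""
    stage1 = np.fft.fft(field, axis=1)           # kernel 1: local rows
    redistributed = stage1.T.copy()              # stand-in for the global transpose
    stage2 = np.fft.fft(redistributed, axis=1)   # kernel 2: the other axis, now local
    return stage2.T

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))
assert np.allclose(spectral_transform_2d(f), np.fft.fft2(f))
```

In a distributed run, the transpose becomes an all-to-all communication, which is why the abstract identifies global memory transposes as the key scalability bottleneck.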
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang; Yang, Zhaoqing; Kim, Tae Yun
2011-07-20
Through extensive field data collection and analysis efforts conducted since the 1950s, researchers have established an understanding of the characteristic features of circulation in Puget Sound. The pattern ranges from the classic fjordal behavior in some basins, with shallow brackish outflow and compensating inflow immediately below, to the typical two-layer flow observed in many partially mixed estuaries with saline inflow at depth. An attempt at reproducing this behavior by fitting an analytical formulation to past data is presented, followed by the application of a three-dimensional circulation and transport numerical model. The analytical treatment helped identify key physical processes and parameters, but quickly reconfirmed that the response is complex and would require site-specific parameterization to include the effects of sills and interconnected basins. The numerical model of Puget Sound, developed using an unstructured-grid finite-volume method, allowed resolution of the sub-basin geometric features, including the presence of major islands, and site-specific strong advective vertical mixing created by bathymetry and multiple sills. The model was calibrated using available recent short-term oceanographic time series data sets from different parts of the Puget Sound basin. The results are compared against (1) recent velocity and salinity data collected in Puget Sound from 2006 and (2) a composite data set from previously analyzed historical records, mostly from the 1970s. The results highlight the ability of the model to reproduce velocity and salinity profile characteristics, their variations among Puget Sound sub-basins, and tidally averaged circulation. The sensitivity of residual circulation to variations in freshwater inflow and the resulting salinity gradient in fjordal sub-basins of Puget Sound is examined.
NASA Astrophysics Data System (ADS)
Rosolem, R.; Pritchard, J.
2017-12-01
An important aspect for the new generation of hydrologists and water resources managers is the understanding of hydrological processes through the application of numerical environmental models. Despite its importance, teaching numerical modeling to young students in our MSc Water and Environment Management programme has been difficult, for instance due to the wide range of student backgrounds and little or no prior contact with numerical modeling tools. In previous years, this numerical skills concept was introduced as a project assignment in our Terrestrial Hydrometeorology unit. However, previous efforts showed non-optimal engagement by students, often with signs of lack of interest or anxiety. Given our initial experience with this unit, we decided to make substantial changes to the coursework format with the aim of introducing a more efficient learning environment for the students. The changes include: (1) a clear presentation and discussion of the assessment criteria at the beginning of the unit, (2) a stepwise approach in which students use our learning environment to acquire knowledge for individual components of the model step by step, and (3) access to timely and detailed feedback allowing particular steps to be retraced or retested. In order to understand the overall impact on assessment and feedback, we carried out two surveys at the beginning and end of the module. Our results indicate a positive impact on the student learning experience, as the students clearly benefited from the early discussion of assignment criteria and appear to have correctly identified the skills and knowledge required to carry out the assignment. In addition, we observed a substantial increase in the quality of the reports. Our results support the view that student engagement has increased since the changes to the coursework format were introduced.
Interestingly, we also observed a positive impact of the assignment on final exam marks, even for students who did not perform particularly well in the coursework. This indicates that, despite not reaching ideal marks, students were able to use the new learning environment to acquire the key concepts needed for their final exam.
NSR&D FY17 Report: CartaBlanca Capability Enhancements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Christopher Curtis; Dhakal, Tilak Raj; Zhang, Duan Zhong
Over the last several years, particle technology in the CartaBlanca code has matured and has been successfully applied to a wide variety of physical problems. It has been shown that particle methods, especially Los Alamos's dual domain material point method, are capable of computing many problems involving complex physics and chemistry accompanied by large material deformations, where traditional finite element or Eulerian methods encounter significant difficulties. In FY17, the CartaBlanca code was enhanced with new physical models and numerical algorithms. We started out to compute penetration and HE safety problems. Most of the year we focused on TEPLA model improvement, testing against the sweeping wave experiment by Gray et al., because it was found that pore growth and material failure are essentially important for our tasks and needed to be understood for modeling the penetration and can experiments efficiently. We extended the TEPLA model from the point of view of ensemble phase averaging to include the effects of finite deformation. It is shown that the assumed pore growth model in TEPLA is actually an exact result from the theory. Along this line, we then generalized the model to include finite deformations to consider the nonlinear dynamics of large deformation. The interaction between the HE product gas and the solid metal is based on the multi-velocity formulation. Our preliminary numerical results suggest good agreement between the experiment and the numerical results, pending further verification. To improve the parallel processing capabilities of the CartaBlanca code, we are actively working with the Next Generation Code (NGC) project to rewrite selected packages using C++. This work is expected to continue in the following years. This effort also makes the particle technology developed within the CartaBlanca project available to other parts of the laboratory.
Working with the NGC project and rewriting parts of the code has also given us an opportunity to improve our numerical implementation of the method and to take advantage of recent advances in numerical methods, such as multiscale algorithms.
Tide Corrections for Coastal Altimetry: Status and Prospects
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Egbert, Gary D.
2008-01-01
Knowledge of global oceanic tides has markedly advanced over the last two decades, in no small part because of the near-global measurements provided by satellite altimeters, and especially the long and precise Topex/Poseidon time series e.g. [2]. Satellite altimetry in turn places very severe demands on the accuracy of tidal models. The reason is clear: tides are by far the largest contributor to the variance of sea-surface elevation, so any study of non-tidal ocean signals requires removal of this dominant tidal component. Efforts toward improving models for altimetric tide corrections have understandably focused on deep-water, open-ocean regions. These efforts have produced models thought to be generally accurate to about 2 cm rms. Corresponding tide predictions in shelf and near-coastal regions, however, are far less accurate. This paper discusses the status of our current abilities to provide near-global tidal predictions in shelf and near-coastal waters, highlights some of the difficulties that must be overcome, and attempts to divine a path toward some degree of progress. There are, of course, many groups worldwide who model tides over fairly localized shallow-water regions, and such work is extremely valuable for any altimeter study limited to those regions, but this paper considers the more global models necessary for the general user. There have indeed been efforts to patch local and global models together, but such work is difficult to maintain over many updates and can often encounter problems of proprietary or political nature. Such a path, however, might yet prove the most fruitful, and there are now new plans afoot to try again. As is well known, tides in shallow waters tend to be large, possibly nonlinear, and high wavenumber. The short spatial scales mean that current mapping capabilities with (multiple) nadir-oriented altimeters often yield inadequate coverage. 
This necessitates added reliance on numerical hydrodynamic models and data assimilation, which in turn necessitates very accurate bathymetry with high spatial resolution. Nonlinearity means that many additional compound tides and overtides must be accounted for in our predictions, which increases the degree of modeling effort and increases the amounts of data required to disentangle closely aliased tides.
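The harmonic method underlying all such tide predictions can be shown in a brief sketch. This is not any production tide model: the tide height is represented as a sum of constituents with known angular speeds and station-fitted amplitudes and phases, and the M2/S2 values below are assumed for illustration, not taken from a real station.

```python
# Hedged sketch of harmonic tide prediction (illustrative values only).
# Each constituent has an astronomical angular speed (deg/hour) and a
# station-specific amplitude and phase obtained by harmonic analysis.
import math

CONSTITUENTS = {
    # name: (amplitude m, speed deg/hr, phase deg) -- assumed values
    "M2": (1.20, 28.9841042, 45.0),
    "S2": (0.40, 30.0000000, 80.0),
}

def tide_height(t_hours, constituents=CONSTITUENTS):
    """Predicted tidal elevation (m) at time t, relative to mean level."""
    return sum(a * math.cos(math.radians(speed * t_hours - phase))
               for a, speed, phase in constituents.values())

h0 = tide_height(0.0)  # sum of cos(-phase) terms at the reference epoch
```

In shallow water, the nonlinearity discussed above forces many extra compound tides and overtides (M4, MS4, etc.) into this sum, which is why coastal predictions demand more constituents and more data than open-ocean ones.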
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Vandoormaal, J. P.; Turan, A.; Raithby, G. D.
1986-01-01
The objective of the present study is to improve both the accuracy and computational efficiency of existing numerical techniques used to predict viscous recirculating flows in combustors. A review of the status of the study is presented along with some illustrative results. The effort to improve the numerical techniques consists of the following technical tasks: (1) selection of numerical techniques to be evaluated; (2) two-dimensional evaluation of selected techniques; and (3) three-dimensional evaluation of the technique(s) recommended in Task 2.
Djukic, Maja; Fulmer, Terry; Adams, Jennifer G; Lee, Sabrina; Triola, Marc M
2012-09-01
Interprofessional education is a critical precursor to effective teamwork and the collaboration of health care professionals in clinical settings. Numerous barriers have been identified that preclude scalable and sustainable interprofessional education (IPE) efforts. This article describes NYU3T: Teaching, Technology, Teamwork, a model that uses novel technologies such as Web-based learning, virtual patients, and high-fidelity simulation to overcome some of the common barriers and drive implementation of evidence-based teamwork curricula. It outlines the program's curricular components, implementation strategy, evaluation methods, and lessons learned from the first year of delivery and describes implications for future large-scale IPE initiatives. Copyright © 2012 Elsevier Inc. All rights reserved.
A Unified Model of Geostrophic Adjustment and Frontogenesis
NASA Astrophysics Data System (ADS)
Taylor, John; Shakespeare, Callum
2013-11-01
Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.
A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).
NASA Astrophysics Data System (ADS)
Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.
2015-04-01
Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids are published, and they are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.
NASA Technical Reports Server (NTRS)
Moore, James; Marty, Dave; Cody, Joe
2000-01-01
SRS and NASA/MSFC have developed software with unique capabilities to couple bearing kinematic modeling with high-fidelity thermal modeling. The core thermomechanical modeling software was developed by SRS and others in the late 1980s and early 1990s under various contractual efforts. SRS originally developed software that enabled SHABERTH (Shaft Bearing Thermal Model) and SINDA (Systems Improved Numerical Differencing Analyzer) to exchange data autonomously, allowing bearing component temperature effects to propagate into the steady-state bearing mechanical model. A separate contract was issued in 1990 to create a personal computer version of the software. At that time SRS performed major improvements to the code. Both SHABERTH and SINDA were independently ported to the PC and compiled. SRS then integrated the two programs into a single program named SINSHA. This was a major code improvement.
Outdoor Education and the Peel Board of Education.
ERIC Educational Resources Information Center
Shaw, Katherine
1994-01-01
Describes efforts of an advocacy group of parents, outdoor educators, and classroom teachers to preserve outdoor education in Peel in the face of budget cuts. Despite efforts, a task force recommended the elimination of numerous teaching positions, resulting in reduced programming at outdoor education centers. (LP)
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications, and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV), or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features and benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
NASA Astrophysics Data System (ADS)
Paik, Seung Hoon; Kim, Ji Yeon; Shin, Sang Joon; Kim, Seung Jo
2004-07-01
Smart structures incorporating active materials have been designed and analyzed to improve aerospace vehicle performance and vibration/noise characteristics. Helicopter integral blade actuation using embedded anisotropic piezoelectric actuators is one example of these efforts. To design and analyze such integrally-actuated blades, a beam approach based on homogenization methodology has traditionally been used. With this approach, the global behavior of the structure is predicted in an averaged sense. However, the approach has intrinsic limitations in describing local behavior at the level of the constituents. For example, failure analysis of the individual active fibers requires knowledge of the local behavior. A microscopic approach for the analysis of integrally-actuated structures is established in this paper. Piezoelectric fibers and matrices are modeled individually, and the finite element method using three-dimensional solid elements is adopted. Due to the huge size of the resulting finite element meshes, high-performance computing technology is required in the solution process. The present methodology is termed Direct Numerical Simulation (DNS) of the smart structure. As an initial validation effort, the present analytical results are correlated with experiments on a small-scale integrally-actuated blade, the Active Twist Rotor (ATR). Through DNS, the local stress distribution around the interface of fiber and matrix can be analyzed.
Study of the Mutual Interaction Between a Wing Wake and an Encountering Airplane
NASA Technical Reports Server (NTRS)
Walden, A. B.; vanDam, C. P.
1996-01-01
In an effort to increase airport productivity, several wind-tunnel and flight-test programs are currently underway to determine safe reductions in separation standards between aircraft. These programs are designed to study numerous concepts from the characteristics and detection of wake vortices to the wake-vortex encounter phenomenon. As part of this latter effort, computational tools are being developed and utilized as a means of modeling and verifying wake-vortex hazard encounters. The objective of this study is to assess the ability of PMARC, a low-order potential-flow panel method, to predict the forces and moments imposed on a following business-jet configuration by a vortex interaction. Other issues addressed include the investigation of several wake models and their ability to predict wake shape and trajectory, the validity of the velocity field imposed on the following configuration, modeling techniques and the effect of the high-lift system and the empennage. Comparisons with wind-tunnel data reveal that PMARC predicts the characteristics for the clean wing-body following configuration fairly well. Non-linear effects produced by the addition of the high-lift system and empennage, however, are not so well predicted.
NASA Astrophysics Data System (ADS)
Tseng, Chien-Hsun
2015-02-01
The technique of multidimensional wave digital filtering (MDWDF), which builds on a traveling-wave formulation of lumped electrical elements, is successfully applied to the study of the dynamic responses of symmetrically laminated composite plates based on first-order shear deformation theory. The philosophy, applied for the first time in laminate mechanics, relies on the integration of principles from modeling and simulation, circuit theory, and multidimensional digital signal processing to provide a great variety of outstanding features. In particular, the conservation of passivity gives rise to a nonlinear programming problem (NLP) concerning the numerical stability of a multidimensional discrete system. Adopting the augmented Lagrangian genetic algorithm, an effective optimization technique for rapidly exploring the solution spaces of NLP models, numerical stability of the MDWDF network is ensured at all times by satisfying the Courant-Friedrichs-Lewy stability criterion with the least restriction. In particular, the optimum of the NLP leads to the optimality of the network in terms of effectively and accurately predicting the desired fundamental frequency, and thus gives insight into the robustness of the network through the distribution of system energies. To further explore the application of the optimum network, additional numerical examples are presented to achieve a qualitative understanding of the behavior of the laminate system. These investigate various effects of different stacking sequences, stiffness and span-to-thickness ratios, mode shapes, and boundary conditions. Results are scrupulously validated by cross-referencing with earlier published works, which shows that the present method is in excellent agreement with other numerical and analytical methods.
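The stability criterion invoked above can be made concrete with a hedged sketch (not the paper's MDWDF network): for an explicit discrete scheme, the Courant-Friedrichs-Lewy (CFL) condition bounds the Courant number C = c*dt/dx by a scheme-dependent limit. The wave speed and grid values below are assumptions for illustration.

```python
# Hedged sketch of a CFL stability check for an explicit discrete scheme
# (illustrative parameters; the limit of 1.0 is scheme-dependent).

def courant_number(c, dt, dx):
    """Courant number C = c * dt / dx for wave speed c."""
    return c * dt / dx

def is_cfl_stable(c, dt, dx, limit=1.0):
    """True if the explicit update satisfies the CFL condition."""
    return courant_number(c, dt, dx) <= limit

# Example: assumed wave speed 500 m/s on a 1 cm grid:
print(is_cfl_stable(c=500.0, dt=1e-5, dx=0.01))  # C = 0.5 -> True
print(is_cfl_stable(c=500.0, dt=5e-5, dx=0.01))  # C = 2.5 -> False
```

The paper's contribution is to treat this constraint not as a fixed check but as part of a nonlinear program, letting the optimizer find network parameters that satisfy it with the least restriction.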
Metal Ion Modeling Using Classical Mechanics
2017-01-01
Metal ions play significant roles in numerous fields including chemistry, geochemistry, biochemistry, and materials science. With computational tools increasingly becoming important in chemical research, methods have emerged to effectively face the challenge of modeling metal ions in the gas, aqueous, and solid phases. Herein, we review both quantum and classical modeling strategies for metal ion-containing systems that have been developed over the past few decades. This Review focuses on classical metal ion modeling based on unpolarized models (including the nonbonded, bonded, cationic dummy atom, and combined models), polarizable models (e.g., the fluctuating charge, Drude oscillator, and the induced dipole models), the angular overlap model, and valence bond-based models. Quantum mechanical studies of metal ion-containing systems at the semiempirical, ab initio, and density functional levels of theory are reviewed as well with a particular focus on how these methods inform classical modeling efforts. Finally, conclusions and future prospects and directions are offered that will further enhance the classical modeling of metal ion-containing systems. PMID:28045509
NASA Technical Reports Server (NTRS)
Poulos, Gregory S.; Stamus, Peter A.; Snook, John S.
2005-01-01
The Cold Land Processes Experiment (CLPX) emphasized the development of a strong synergism between process-oriented understanding, land surface models, and microwave remote sensing. Our work sought to investigate which topographically generated atmospheric phenomena are most relevant to the CLPX MSAs, for the purpose of evaluating their climatic importance to net local moisture fluxes and snow transport through the use of high-resolution data assimilation and atmospheric numerical modeling techniques. Our task was to create three long-term, scientific-quality atmospheric datasets for quantitative analysis (for all CLPX researchers) and to provide a summary of the meteorologically relevant phenomena of the three MSAs (see Figure) over northern Colorado. Our efforts required the ingest of a variety of CLPX datasets and the execution of an atmospheric and land surface data assimilation system based on the Navier-Stokes equations (the Local Analysis and Prediction System, LAPS, together with an atmospheric numerical weather prediction model as required) at topographically relevant grid spacing (approx. 500 m). The resulting dataset will be analyzed by the CLPX community as part of their larger research goals to determine the relative influence of various atmospheric phenomena on processes relevant to CLPX scientific goals.
Glacier calving, dynamics, and sea-level rise. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, M.F.; Pfeffer, W.T.; Amadei, B.
1998-08-01
The present-day calving flux from Greenland and Antarctica is poorly known, and this accounts for a significant portion of the uncertainty in the current mass balance of these ice sheets. Similarly, the lack of knowledge about the role of calving in glacier dynamics constitutes a major uncertainty in predicting the response of glaciers and ice sheets to changes in climate and thus sea level. Another fundamental problem has to do with incomplete knowledge of glacier areas and volumes, needed for analyses of sea-level change due to changing climate. The authors proposed to develop an improved ability to predict the future contributions of glaciers to sea level by combining work from four research areas: remote sensing observations of calving activity and iceberg flux, numerical modeling of glacier dynamics, theoretical analysis of the calving process, and numerical techniques for modeling flow with large deformations and fracture. These four areas have never been combined into a single research effort on this subject; in particular, calving dynamics have never before been included explicitly in a model of glacier dynamics. A crucial issue that they proposed to address was the general question of how calving dynamics and glacier flow dynamics interact.
Development of computational methods for heavy lift launch vehicles
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Ryan, James S.
1993-01-01
The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.
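The implicit time integration described above pairs lower-upper factorization with symmetric Gauss-Seidel relaxation (the LU-SGS family of schemes). A minimal sketch of the symmetric Gauss-Seidel idea on a small dense system (the actual solver sweeps the factored implicit operator along diagonal planes of a structured grid; this toy system is illustrative):

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x0=None, sweeps=50):
    """Symmetric Gauss-Seidel: one forward sweep followed by one backward
    sweep per iteration, the relaxation pattern used in LU-SGS-type schemes."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):            # forward sweep
            s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
            x[i] = s / A[i, i]
        for i in reversed(range(n)):  # backward sweep
            s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
            x[i] = s / A[i, i]
    return x

# Diagonally dominant test system (convergence is guaranteed here)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = symmetric_gauss_seidel(A, b)
```

The forward/backward symmetry is what makes the scheme attractive for vectorization over diagonal planes, as the abstract notes.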
NASA Technical Reports Server (NTRS)
Rerko, Rodney S.; deGroh, Henry C., III; Beckermann, Christoph; Gray, Hugh R. (Technical Monitor)
2002-01-01
Macrosegregation in metal casting can be caused by thermal and solutal melt convection, and the transport of unattached solid crystals. These free grains can be a result of, for example, nucleation in the bulk liquid or dendrite fragmentation. In an effort to develop a comprehensive numerical model for the casting of alloys, an experimental study has been conducted to generate benchmark data with which such a solidification model could be tested. The specific goal of the experiments was to examine equiaxed solidification in situations where sinking of grains is (and is not) expected. The objectives were: 1) experimentally study the effects of solid transport and thermosolutal convection on macrosegregation and grain size distribution patterns; and 2) provide a complete set of controlled thermal boundary conditions, temperature data, segregation data, and grain size data, to validate numerical codes. The alloys used were Al-1 wt. pct. Cu, and Al-10 wt. pct. Cu with various amounts of the grain refiner TiB2 added. Cylindrical samples were either cooled from the top, or the bottom. Several trends in the data stand out. In attempting to model these experiments, concentrating on experiments that show clear trends or differences is recommended.
High-resolution modeling assessment of tidal stream resource in Western Passage of Maine, USA
NASA Astrophysics Data System (ADS)
Yang, Zhaoqing; Wang, Taiping; Feng, Xi; Xue, Huijie; Kilcher, Levi
2017-04-01
Although significant efforts have been made to assess the maximum potential of tidal stream energy at the system-wide scale, accurate assessment of the tidal stream energy resource at the project design scale requires detailed hydrodynamic simulations using high-resolution three-dimensional (3-D) numerical models. Extended model validation against high-quality measured data is essential to minimize the uncertainties of the resource assessment. Western Passage in the State of Maine has been identified as one of the top-ranking sites for tidal stream energy development in U.S. coastal waters, based on a number of criteria including tidal power density, market value, and transmission distance. This study presents an on-going modeling effort for simulating the tidal hydrodynamics in Western Passage using the 3-D unstructured-grid Finite Volume Community Ocean Model (FVCOM). The model domain covers a large region including the entire Bay of Fundy, with grid resolution varying from 20 m in the Western Passage to approximately 1000 m along the open boundary near the mouth of the Bay of Fundy. Preliminary model validation was conducted using existing NOAA measurements within the model domain. Spatial distributions of tidal power density were calculated, and extractable tidal energy was estimated using a tidal turbine module embedded in FVCOM under different tidal farm scenarios. Additional field measurements to characterize the resource and support model validation are discussed. This study provides an example of high-resolution resource assessment based on the guidance recommended by the International Electrotechnical Commission Technical Specification.
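The tidal power density mapped in studies like this one is conventionally the kinetic power flux per unit of cross-sectional area, 0.5·ρ·U³. A minimal sketch (seawater density assumed; no turbine efficiency or extraction feedback applied, which the full FVCOM turbine module does account for):

```python
def tidal_power_density(speed, rho=1025.0):
    """Kinetic power density of a tidal stream in W/m^2: P = 0.5 * rho * U**3.
    rho is an assumed seawater density; turbine efficiency is ignored."""
    return 0.5 * rho * speed ** 3

# The cubic dependence is why resource maps concentrate on narrow passages:
# a 2 m/s current carries ~4.1 kW per square metre of cross-section.
density_2ms = tidal_power_density(2.0)
```

The cubic velocity dependence means a modest error in simulated current speed triples (fractionally) in the power estimate, which is why the abstract stresses extended model validation.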
Climate, bleaching and connectivity in the Coral Triangle.
NASA Astrophysics Data System (ADS)
Curchitser, E. N.; Kleypas, J. A.; Castruccio, F. S.; Drenkard, E.; Thompson, D. M.; Pinsky, M. L.
2016-12-01
The Coral Triangle (CT) is the apex of marine biodiversity and supports the livelihoods of millions of people. It is also one of the most threatened of all reef regions in the world. We present results from a series of high-resolution, numerical ocean models designed to address physical and ecological questions relevant to the region's coral communities. The hierarchy of models was designed to optimize the model performance in addressing questions ranging from the role of internal tides in larval connectivity to distinguishing the role of interannual variability from decadal trends in thermal stress leading to mass bleaching events. In this presentation we will show how combining ocean circulation with models of larval dispersal leads to new insights into the interplay of physics and ecology in this complex oceanographic region, which can ultimately be used to inform conservation efforts.
Nuclear masses far from stability: the interplay of theory and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.
1985-01-01
Mass models seek, by a variety of theoretical approaches, to reproduce the measured mass surface and to predict unmeasured masses beyond it. Subsequent measurements of these predicted nuclear masses permit an assessment of the quality of the mass predictions from the various models. Since the last comprehensive revision of the mass predictions (in the mid-to-late 1970's) over 300 new masses have been reported. Global analyses of these data have been performed by several numerical and graphical methods. These have identified both the strengths and weaknesses of the models. In some cases failures in individual models are distinctly apparent when the new mass data are plotted as functions of one or more selected physical parameters. Several examples will be given. Future theoretical efforts will also be discussed.
A Semantic Web-Based Methodology for Describing Scientific Research Efforts
ERIC Educational Resources Information Center
Gandara, Aida
2013-01-01
Scientists produce research resources that are useful to future research and innovative efforts. In a typical scientific scenario, the results created by a collaborative team often include numerous artifacts, observations and relationships relevant to research findings, such as programs that generate data, parameters that impact outputs, workflows…
How to Move Away from the Silos of Business Management Education?
ERIC Educational Resources Information Center
Nisula, Karoliina; Pekkola, Samuli
2018-01-01
Business management education is criticized for being too theoretical and fractional. Despite the numerous efforts to build integrated and experiential business curricula, learning is still organized in disciplinary silos. The curriculum integration efforts are carried out in separate sections of the curriculum rather than the core. There are…
Efficient Computation of Info-Gap Robustness for Finite Element Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
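Info-gap robustness, as described here, is the largest uncertainty horizon at which the worst-case performance still satisfies the requirement. A minimal sketch on a scalar toy model, solved by bisection (this is not the report's adjoint method, and all names and numbers are illustrative; the worst-case function is assumed monotone in the horizon):

```python
def robustness(perf, horizon_max=1.0, tol=1e-6):
    """Bisect for the largest horizon alpha in [0, horizon_max] at which the
    worst-case performance still meets the requirement (perf(alpha) <= 0
    means the requirement is met; perf is assumed monotone increasing)."""
    lo, hi = 0.0, horizon_max
    if perf(hi) <= 0.0:
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if perf(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return lo

# Toy scalar model y = k*x with nominal k0 and fractional horizon alpha;
# the requirement is worst-case |y - y_obs| <= crit (illustrative numbers).
k0, x, y_obs, crit = 2.0, 1.0, 2.0, 0.5

def worst_case(alpha):
    lo_v = abs(k0 * (1 - alpha) * x - y_obs)
    hi_v = abs(k0 * (1 + alpha) * x - y_obs)
    return max(lo_v, hi_v) - crit

alpha_hat = robustness(worst_case)  # requirement binds at alpha = 0.25 here
```

Each evaluation of `worst_case` is itself an optimization over the uncertainty set; for finite element models that inner problem is expensive, which is the cost the report's adjoint approach avoids.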
Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L
2015-03-01
Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte, the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity varies with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (e.g., bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be achieved with a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical effort, which can be reduced significantly with the help of analytical models for design optimization and for guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. The model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
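In gradient electrofocusing, each analyte collects where its total velocity (bulk counter-flow plus electrophoretic drift) crosses zero. A minimal sketch for one linear segment of the gradient (the paper's model treats a bilinear, time-scanned gradient; all values below are illustrative, not from the study):

```python
def focus_position(u_bulk, mobility, E0, grad):
    """Zero of the total velocity u_bulk + mobility * E(x)
    for a linear field gradient E(x) = E0 + grad * x."""
    return -(u_bulk / mobility + E0) / grad

# Counter-flow of -30 um/s against a linearly ramped field (toy numbers):
# the analyte parks where electrophoretic drift exactly cancels the flow.
x_star = focus_position(-3e-5, 2e-8, 1000.0, 5000.0)
```

Because `x_star` depends on mobility, analytes with different mobilities park at different positions, which is the separation mechanism; scanning `u_bulk` in time sweeps those focus points past the detector.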
The Role of Wakes in Modelling Tidal Current Turbines
NASA Astrophysics Data System (ADS)
Conley, Daniel; Roc, Thomas; Greaves, Deborah
2010-05-01
The eventual proper development of arrays of Tidal Current Turbines (TCTs) will require a balance that maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but are not practical for application to realistic cases. Some form of 3-D numerical simulation will be required for such applications, and the current project is designed to develop a numerical decision-making tool for use in planning large-scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System, ROMS), which is modified to enable the user to account for the effects of TCTs. In such a framework, where mixing processes are highly parametrized, the fidelity of the quantitative results depends critically on the parameter values utilized. In light of the early stage of TCT development and the lack of field-scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation will discuss efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and the method's limitations will be presented.
Development and validation of a numerical model of the swine head subjected to open-field blasts
NASA Astrophysics Data System (ADS)
Kalra, A.; Zhu, F.; Feng, K.; Saif, T.; Kallakuri, S.; Jin, X.; Yang, K.; King, A.
2017-11-01
A finite element model of the head of a 55-kg Yucatan pig was developed to calculate the incident pressure and corresponding intracranial pressure due to the explosion of 8 lb (3.63 kg) of C4 at three different distances. The results from the model were validated by comparison with experimentally obtained data from five pigs at three different blast overpressure levels: low (150 kPa), medium (275 kPa), and high (400 kPa). The peak intracranial pressures from the numerical model at different locations of the brain, such as the frontal, central, left temporal, right temporal, parietal, and occipital regions, were compared with experimental values. The model was able to predict the peak pressure with reasonable percentage differences. The differences between the simulated and experimental peak incident and intracranial pressure values were found to be less than 2.2 and 29.3%, respectively, at all locations other than the frontal region. Additionally, a series of parametric studies shows that the intracranial pressure was very sensitive to sensor locations, the presence of air bubbles, and reflections experienced during the experiments. Further efforts will be undertaken to correlate the different biomechanical response parameters, such as the intracranial pressure gradient, stress, and strain results obtained from the validated model, with injured brain locations once the histology data become available.
Validation of numerical simulations for nano-aluminum composite solid propellants
NASA Astrophysics Data System (ADS)
Yan, Allen H.
2011-12-01
Nano-aluminum is of interest as an energetic additive in composite solid propellant formulations for its demonstrated ability to increase combustion efficiency and burning rate. However, due to the current cost of nano-aluminum and the safety risks associated with propellant testing, it may not always be practical to spend the time and effort to mix, cast, and thoroughly evaluate the burning rate of a new formulation. To provide an alternative method of determining this parameter, numerical methods have been developed to predict the performance of nano-aluminum composite propellants, but these codes still require thorough validation before application. For this purpose, six propellant compositions were formulated and fully characterized, and burn rates were measured at several pressures between 34.0 and 129.3 atmospheres at room temperature (20°C) and at an elevated temperature of 71.1°C in order to test the code's ability to predict pressure-dependent burn rate and temperature sensitivity. To ensure the most accurate model possible, special emphasis was placed on characterizing the size distribution of the constituent nano-aluminum and ammonium perchlorate powders through optical diffraction or optical imaging techniques. Experimental burn rate agrees with the propellant combustion model to within 5% for a range of formulations and pressures; however, under other conditions the model deviates by as much as 21%. An analysis of the results suggests that the current framework of the numerical model is unable to accurately simulate all of the combustion physics of high-aluminum-content propellants, and suggestions for improvements are identified.
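Pressure-dependent burn rates of the kind measured here are commonly summarized by St. Robert's law, r = a·Pⁿ, fitted in log-log space. A minimal sketch (the synthetic data below are illustrative, not the thesis measurements):

```python
import math

def fit_burn_rate(pressures, rates):
    """Least-squares fit of St. Robert's law r = a * P**n in log-log space,
    returning the coefficient a and pressure exponent n."""
    m = len(pressures)
    lx = [math.log(p) for p in pressures]
    ly = [math.log(r) for r in rates]
    mx, my = sum(lx) / m, sum(ly) / m
    n = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - n * mx)
    return a, n

# Synthetic data generated from r = 0.5 * P**0.4 over the tested
# pressure range (units of atm for P are taken from the abstract).
P = [34.0, 60.0, 90.0, 129.3]
r = [0.5 * p ** 0.4 for p in P]
a, n = fit_burn_rate(P, r)
```

Comparing fitted exponents `n` at 20°C and 71.1°C is one simple way to express the temperature sensitivity the abstract mentions.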
NASA Astrophysics Data System (ADS)
Pickett, Derek Kyle
Due to an increased interest in sustainable energy, biodiesel has become much more widely used in the last several years. Glycerin, one major waste component in biodiesel production, can be converted into a hydrogen rich synthesis gas to be used in an engine generator to recover energy from the biodiesel production process. This thesis contains information detailing the production, testing, and analysis of a unique synthesis generator rig at the University of Kansas. Chapter 2 gives a complete background of all major components, as well as how they are operated. In addition to component descriptions, methods for operating the system on pure propane, reformed propane, reformed glycerin along with the methodology of data acquisition is described. This chapter will serve as a complete operating manual for future students to continue research on the project. Chapter 3 details the literature review that was completed to better understand fuel reforming of propane and glycerin. This chapter also describes the numerical model produced to estimate the species produced during reformation activities. The model was applied to propane reformation in a proof of concept and calibration test before moving to glycerin reformation and its subsequent combustion. Chapter 4 first describes the efforts to apply the numerical model to glycerin using the calibration tools from propane reformation. It then discusses catalytic material preparation and glycerin reformation tests. Gas chromatography analysis of the reformer effluent was completed to compare to theoretical values from the numerical model. Finally, combustion of reformed glycerin was completed for power generation. Tests were completed to compare emissions from syngas combustion and propane combustion.
Simplified galaxy formation with mesh-less hydrodynamics
NASA Astrophysics Data System (ADS)
Lupi, Alessandro; Volonteri, Marta; Silk, Joseph
2017-09-01
Numerical simulations have become a necessary tool to describe the complex interactions among the different processes involved in galaxy formation and evolution, which are unfeasible to treat analytically. The last decade has seen a great effort by the scientific community to improve the sub-grid physics modelling and the numerical techniques used to make numerical simulations more predictive. Although the recently publicly available code gizmo has proven successful in reproducing galaxy properties when coupled with the model of the MUFASA simulations and the more sophisticated prescriptions of the Feedback In Realistic Environments (FIRE) set-up, it has not yet been tested with delayed-cooling supernova feedback, which still represents a reasonable approach for large cosmological simulations, for which detailed sub-grid models are prohibitive. In order to limit the computational cost and to be able to resolve the disc structure in the galaxies, we perform a suite of zoom-in cosmological simulations at rather low resolution centred on a sub-L* galaxy with a halo mass of 3 × 10^11 M⊙ at z = 0, to investigate the ability of this simple model, coupled with the new hydrodynamic method of gizmo, to reproduce observed galaxy scaling relations (stellar to halo mass, stellar and baryonic Tully-Fisher, stellar mass-metallicity and mass-size). We find that the results are in good agreement with the main scaling relations, except for the total stellar mass, which is larger than that predicted by the abundance-matching technique, and the effective sizes of the most massive galaxies in the sample, which are too small.
A Conceptual Framework for SAHRA Integrated Multi-resolution Modeling in the Rio Grande Basin
NASA Astrophysics Data System (ADS)
Liu, Y.; Gupta, H.; Springer, E.; Wagener, T.; Brookshire, D.; Duffy, C.
2004-12-01
The sustainable management of water resources in a river basin requires an integrated analysis of the social, economic, environmental and institutional dimensions of the problem. Numerical models are commonly used for integration of these dimensions and for communication of the analysis results to stakeholders and policy makers. The National Science Foundation Science and Technology Center for Sustainability of semi-Arid Hydrology and Riparian Areas (SAHRA) has been developing integrated multi-resolution models to assess impacts of climate variability and land use change on water resources in the Rio Grande Basin. These models not only couple natural systems such as surface and ground waters, but will also include engineering, economic and social components that may be involved in water resources decision-making processes. This presentation will describe the conceptual framework being developed by SAHRA to guide and focus the multiple modeling efforts and to assist the modeling team in planning, data collection and interpretation, communication, evaluation, etc. One of the major components of this conceptual framework is a Conceptual Site Model (CSM), which describes the basin and its environment based on existing knowledge and identifies what additional information must be collected to develop technically sound models at various resolutions. The initial CSM is based on analyses of basin profile information that has been collected, including a physical profile (e.g., topographic and vegetative features), a man-made facility profile (e.g., dams, diversions, and pumping stations), and a land use and ecological profile (e.g., demographics, natural habitats, and endangered species). Based on the initial CSM, a Conceptual Physical Model (CPM) is developed to guide and evaluate the selection of a model code (or numerical model) for each resolution to conduct simulations and predictions. 
A CPM identifies, conceptually, all the physical processes and engineering and socio-economic activities occurring (or to occur) in the real system that the corresponding numerical models are required to address, such as riparian evapotranspiration responses to vegetation change and groundwater pumping impacts on soil moisture contents. Simulation results from different resolution models and observations of the real system will then be compared to evaluate the consistency among the CSM, the CPMs, and the numerical models, and feedbacks will be used to update the models. In a broad sense, the evaluation of the models (conceptual or numerical), as well as the linkages between them, can be viewed as a part of the overall conceptual framework. As new data are generated and understanding improves, the models will evolve, and the overall conceptual framework is refined. The development of the conceptual framework becomes an on-going process. We will describe the current state of this framework and the open questions that have to be addressed in the future.
Inter-Parietal White Matter Development Predicts Numerical Performance in Young Children
ERIC Educational Resources Information Center
Cantlon, Jessica F.; Davis, Simon W.; Libertus, Melissa E.; Kahane, Jill; Brannon, Elizabeth M.; Pelphrey, Kevin A.
2011-01-01
In an effort to understand the role of interhemispheric transfer in numerical development, we investigated the relationship between children's developing knowledge of numbers and the integrity of their white matter connections between the cerebral hemispheres (the corpus callosum). We used diffusion tensor imaging (DTI) tractography analyses to…
Hybrid CFD/CAA Modeling for Liftoff Acoustic Predictions
NASA Technical Reports Server (NTRS)
Strutzenberg, Louise L.; Liever, Peter A.
2011-01-01
This paper presents development efforts at the NASA Marshall Space Flight Center to establish a hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) simulation system for launch vehicle liftoff acoustic environment analysis. Acoustic prediction engineering tools based on empirical jet acoustic strength and directivity models or scaled historical measurements are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. CFD-based modeling approaches are now able to capture the important details of the vehicle-specific plume flow environment, identify the noise generation sources, and allow assessment of the influence of launch pad geometric details and sound mitigation measures such as water injection. However, CFD methodologies are numerically too dissipative to accurately capture the propagation of the acoustic waves in the large CFD models. The hybrid CFD/CAA approach combines high-fidelity CFD analysis, capable of identifying the acoustic sources, with a fast and efficient Boundary Element Method (BEM) that accurately propagates the acoustic field from the source locations. The BEM approach was chosen for its ability to properly account for reflections and scattering of acoustic waves from launch pad structures. The paper will present an overview of the technology components of the CFD/CAA framework and discuss plans for demonstration and validation against test data.
NASA Technical Reports Server (NTRS)
Grugel, R. N.; Fedoseyev, A. I.; Kim, S.; Curreri, Peter A. (Technical Monitor)
2002-01-01
Gravity-driven thermosolutal convection that arises during controlled directional solidification (DS) of dendritic alloys promotes detrimental macro-segregation (e.g. freckles and steepling) in products such as turbine blades. Considerable time and effort has been spent to investigate this phenomenon experimentally and theoretically; although our knowledge has advanced to the point where convection can be modeled and accurately compared to experimental results, little has been done to minimize its onset and deleterious effects. The experimental work demonstrates that segregation can be minimized and microstructural uniformity promoted when a slow axial rotation is applied to the sample crucible during controlled directional solidification processing. Numerical modeling utilizing continuation and bifurcation methods has been employed to develop accurate physical and mathematical models with the intent of identifying and optimizing processing parameters.
Quantum tunneling with friction
NASA Astrophysics Data System (ADS)
Tokieda, M.; Hagino, K.
2017-05-01
Using the phenomenological quantum friction models introduced by P. Caldirola [Nuovo Cimento 18, 393 (1941), 10.1007/BF02960144] and E. Kanai [Prog. Theor. Phys. 3, 440 (1948), 10.1143/ptp/3.4.440], M. D. Kostin [J. Chem. Phys. 57, 3589 (1972), 10.1063/1.1678812], and K. Albrecht [Phys. Lett. B 56, 127 (1975), 10.1016/0370-2693(75)90283-X], we study quantum tunneling through a one-dimensional potential barrier in the presence of energy dissipation. To this end, we calculate the tunneling probability using a time-dependent wave-packet method. The friction reduces the tunneling probability. We show that the three models provide similar penetrabilities, among which the Caldirola-Kanai model requires the least numerical effort. We also discuss the effect of energy dissipation on quantum tunneling in terms of barrier distributions.
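The time-dependent wave-packet method used here can be illustrated with a frictionless baseline: propagate a Gaussian packet toward a square barrier with a split-operator scheme and read off the transmitted probability. A minimal sketch (ħ = m = 1, all parameters illustrative; the paper's friction models add a dissipative term to the evolution, which lowers this penetrability):

```python
import numpy as np

# Frictionless baseline: Gaussian wave packet incident on a square barrier,
# propagated with a Strang split-operator scheme; hbar = m = 1.
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
dt = 0.05

V = np.where(np.abs(x) < 1.0, 1.0, 0.0)      # barrier: height 1, width 2
k0, sigma, x0 = 1.2, 5.0, -30.0              # mean energy k0**2/2 = 0.72 < 1
psi = np.exp(-(x - x0) ** 2 / (2 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # unit total probability

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
half_V = np.exp(-0.5j * dt * V)              # half potential step
full_K = np.exp(-0.5j * dt * k**2)           # full kinetic step (E = k^2/2)
for _ in range(1200):
    psi = half_V * np.fft.ifft(full_K * np.fft.fft(half_V * psi))

# Penetrability: probability found beyond the barrier at the final time
P_trans = np.sum(np.abs(psi[x > 1.0]) ** 2) * dx
```

Since the packet's mean energy sits below the barrier top, `P_trans` is well below one, and the split-operator update is unitary, so total probability is conserved; a friction term would break that conservation by construction.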
Life cycles of transient planetary waves
NASA Technical Reports Server (NTRS)
Nathan, Terrence
1993-01-01
In recent years there has been an increasing effort devoted to understanding the physical and dynamical processes that govern the global-scale circulation of the atmosphere. This effort has been motivated, in part, by: (1) a wealth of new satellite data; (2) an urgent need to assess the potential impact of chlorofluorocarbons on our climate; (3) an inadequate understanding of the interactions between the troposphere and stratosphere and the role that such interactions play in short- and long-term climate variability; and (4) the realization that addressing changes in our global climate requires understanding the interactions among various components of the earth system. The research currently being carried out represents an effort to address some of these issues through studies that combine radiation, ozone, seasonal thermal forcing, and dynamics. Satellite and ground-based data that are already available are being used to construct basic states for our analytical and numerical models. Significant accomplishments from 1991-1992 are presented and include the following: (1) ozone-dynamics interaction; (2) periodic local forcing and low frequency variability; and (3) steady forcing and low frequency variability.
Genome medicine: gene therapy for the millennium, 30 September-3 October 2001, Rome, Italy.
Gruenert, D C; Novelli, G; Dallapiccola, B; Colosimo, A
2002-06-01
The recent surge of DNA sequence information resulting from the efforts of agencies interested in deciphering the human genetic code has facilitated technological developments that have been critical in the identification of genes associated with numerous disease pathologies. In addition, these efforts have opened the door to the opportunity to develop novel genetic therapies to treat a broad range of inherited disorders. Through a joint effort by the University of Vermont, the University of Rome, Tor Vergata, University of Rome, La Sapienza, and the CSS Mendel Institute, Rome, an international meeting, 'Genome Medicine: Gene Therapy for the Millennium', was organized. This meeting provided a forum for the discussion of scientific and clinical advances stimulated by the explosion of sequence information generated by the Human Genome Project and the implications these advances have for gene therapy. The meeting had six sessions that focused on the functional evaluation of specific genes via biochemical analysis and through animal models, the development of novel therapeutic strategies involving gene targeting, artificial chromosomes, DNA delivery systems and non-embryonic stem cells, and on the ethical and social implications of these advances.
Swain, Eric; Decker, Jeremy
2010-01-01
Numerical modeling is needed to predict environmental temperatures, which affect a number of biota in southern Florida, U.S.A., such as the West Indian manatee (Trichechus manatus), which uses thermal basins for refuge from lethal winter cold fronts. To numerically simulate heat-transport through a dynamic coastal wetland region, an algorithm was developed for the FTLOADDS coupled hydrodynamic surface-water/ground-water model that uses formulations and coefficients suited to the coastal wetland thermal environment. In this study, two field sites provided atmospheric data to develop coefficients for the heat flux terms representing this particular study area. Several methods were examined to represent the heat-flux components used to compute temperature. A Dalton equation was compared with a Penman formulation for latent heat computations, producing similar daily-average temperatures. Simulation of heat-transport in the southern Everglades indicates that the model represents the daily fluctuation in coastal temperatures better than at inland locations, possibly due to the lack of information on the spatial variations in heat-transport parameters such as soil heat capacity and surface albedo. These simulation results indicate that the new formulation is suitable for defining the existing thermohydrologic system and evaluating the ecological effect of proposed restoration efforts in the southern Everglades of Florida.
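As a rough illustration of the latent heat comparison described above, a Dalton-type formulation computes the evaporative flux as a wind function times the vapor-pressure deficit between the water surface and the overlying air. The sketch below uses a Magnus-type saturation curve and illustrative wind-function coefficients, not the calibrated FTLOADDS values:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Magnus-type approximation for saturation vapor pressure (kPa)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def dalton_latent_heat(wind_speed, water_temp_c, air_temp_c, rel_humidity,
                       a=3.0e-9, b=1.5e-9):
    """Dalton-type latent heat flux (W/m2): a wind function times the
    vapor-pressure deficit between the water surface and the air.
    The wind-function coefficients a, b are illustrative placeholders."""
    e_s = saturation_vapor_pressure(water_temp_c)           # surface, kPa
    e_a = rel_humidity * saturation_vapor_pressure(air_temp_c)
    evap_rate = (a + b * wind_speed) * (e_s - e_a)          # m/s of water
    rho_w, lambda_v = 1000.0, 2.45e6                        # kg/m3, J/kg
    return rho_w * lambda_v * evap_rate
```

A Penman formulation would add a radiation-balance term to this aerodynamic term; the abstract reports that both choices produced similar daily-average temperatures.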
Cultural and Technological Issues and Solutions for Geodynamics Software Citation
NASA Astrophysics Data System (ADS)
Heien, E. M.; Hwang, L.; Fish, A. E.; Smith, M.; Dumit, J.; Kellogg, L. H.
2014-12-01
Computational software and custom-written codes play a key role in scientific research and teaching, providing tools to perform data analysis and forward modeling through numerical computation. However, development of these codes is often hampered by the fact that there is no well-defined way for the authors to receive credit or professional recognition for their work through the standard methods of scientific publication and subsequent citation of the work. This in turn may discourage researchers from publishing their codes or making them easier for other scientists to use. We investigate the issues involved in citing software in a scientific context, and introduce features that should be components of a citation infrastructure, particularly oriented towards the codes and scientific culture in the area of geodynamics research. The codes used in geodynamics are primarily specialized numerical modeling codes for continuum mechanics problems; they may be developed by individual researchers, teams of researchers, geophysicists in collaboration with computational scientists and applied mathematicians, or by coordinated community efforts such as the Computational Infrastructure for Geodynamics. Some but not all geodynamics codes are open-source. These characteristics are common to many areas of geophysical software development and use. We provide background on the problem of software citation and discuss some of the barriers preventing adoption of such citations, including social/cultural barriers, insufficient technological support infrastructure, and an overall lack of agreement about what a software citation should consist of. We suggest solutions in an initial effort to create a system to support citation of software and promotion of scientific software development.
SUMMA and Model Mimicry: Understanding Differences Among Land Models
NASA Astrophysics Data System (ADS)
Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.
2016-12-01
Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. 
SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.
Numerical simulation of long-duration blast wave evolution in confined facilities
NASA Astrophysics Data System (ADS)
Togashi, F.; Baum, J. D.; Mestreau, E.; Löhner, R.; Sunshine, D.
2010-10-01
The objective of this research effort was to investigate the quasi-steady flow field produced by explosives in confined facilities. In this effort we modeled tests in which a high explosive (HE) cylindrical charge was hung in the center of a room and detonated. The HEs used for the tests were C-4 and AFX 757. While C-4 is just slightly under-oxidized and is typically modeled as an ideal explosive, AFX 757 includes a significant percentage of aluminum particles, so long-time afterburning and energy release must be considered. The Lawrence Livermore National Laboratory (LLNL)-produced thermo-chemical equilibrium algorithm, “Cheetah”, was used to estimate the remaining burnable detonation products. From these remaining species, the afterburning energy was computed and added to the flow field. Computations of the detonation and afterburn of two HEs in the confined multi-room facility were performed. The results demonstrate excellent agreement with available experimental data in terms of blast wave time of arrival, peak shock amplitude, reverberation, and total impulse (and hence total energy release, via either the detonation or afterburn processes).
Large calculation of the flow over a hypersonic vehicle using a GPU
NASA Astrophysics Data System (ADS)
Elsen, Erich; LeGresley, Patrick; Darve, Eric
2008-12-01
Graphics processing units are capable of impressive computing performance, with up to 518 Gflops of peak performance. Various groups have been using these processors for general-purpose computing; most efforts have focused on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multi-grid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.
NASA Astrophysics Data System (ADS)
Opitz, Florian; Treffinger, Peter
2016-04-01
Electric arc furnaces (EAF) are complex industrial plants whose actual behavior depends upon numerous factors. Due to its energy-intensive operation, the EAF process has always been subject to optimization efforts. For these reasons, several models have been proposed in the literature to analyze and predict different modes of operation. Most of these models focused on the processes inside the vessel itself. The present paper introduces a dynamic, physics-based model of a complete EAF plant which consists of four subsystems: vessel, electric system, electrode regulation, and off-gas system. Furthermore, the solid phase is not treated as homogeneous; instead, a simple spatial discretization is employed. Hence it is possible to simulate the energy input by electric arcs and fossil-fuel burners depending on the state of the melting progress. The model is implemented in the object-oriented, equation-based modeling language Modelica. The simulation results are compared to literature data.
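The melting-progress dependence of the energy input can be caricatured with a lumped two-state energy balance: input power first melts the solid charge, then heats the liquid bath. This is a toy sketch with assumed material constants, far simpler than the subsystem model described above:

```python
def melt_step(mass_solid, bath_mass, t_bath, p_in, dt,
              h_melt=2.7e5, cp=800.0):
    """One step of a lumped EAF energy balance: the net power input
    p_in (W) first melts the remaining solid charge (latent heat
    h_melt, J/kg), then heats the liquid bath (specific heat cp,
    J/(kg K)). All constants are illustrative assumptions."""
    if mass_solid > 0.0:
        melted = min(mass_solid, p_in * dt / h_melt)
        return mass_solid - melted, bath_mass + melted, t_bath
    return 0.0, bath_mass, t_bath + p_in * dt / (cp * bath_mass)
```

Stepping this in a loop reproduces the qualitative two-phase behavior (constant bath temperature while solid remains, then heating), which a spatially discretized solid phase refines further.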
Wang, Sen; Feng, Qihong; Han, Xiaodong
2013-01-01
Due to the long-term fluid-solid interactions in waterflooding, the tremendous variation of oil reservoir formation parameters will lead to the widespread evolution of preferential flow paths, thereby preventing the further enhancement of recovery efficiency because of unstable fingering and premature breakthrough. To improve oil recovery, the characterization of preferential flow paths is essential and imperative. In efforts that have been previously documented, fluid flow characteristics within preferential paths are assumed to obey Darcy's equation. However, the occurrence of non-Darcy flow behavior has been increasingly suggested. To examine this conjecture, the Forchheimer number, with the inertial coefficient estimated from different empirical formulas, is applied as the criterion. Considering a 10% non-Darcy effect, the fluid flow in a preferential path may indeed experience non-Darcy behavior. With the objective of characterizing the preferential path with non-Darcy flow, a hybrid analytical/numerical model has been developed to investigate the pressure transient response, which dynamically couples a numerical model describing the non-Darcy effect of a preferential flow path with an analytical reservoir model. The characteristics of the pressure transient behavior and the sensitivities of corresponding parameters have also been discussed. In addition, an interpretation approach for pressure transient testing is also proposed, in which the Gravitational Search Algorithm is employed as a non-linear regression technique to match measured pressure with this hybrid model. Examples of applications from different oilfields are also presented to illustrate this method. This cost-effective approach provides more accurate characterization of a preferential flow path with non-Darcy flow, which will lay a solid foundation for the design and operation of conformance control treatments, as well as several other Enhanced Oil Recovery projects. PMID:24386224
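The Forchheimer-number criterion mentioned above can be sketched directly: the non-Darcy effect E = Fo/(1 + Fo) measures the fraction of the pressure gradient consumed by inertial flow, and the 10% criterion corresponds to Fo >= 1/9. The parameter values in the test are illustrative, not field data:

```python
def forchheimer_number(k, beta, rho, v, mu):
    """Forchheimer number Fo = k*beta*rho*v/mu (dimensionless), with
    permeability k (m2), inertial coefficient beta (1/m), fluid
    density rho (kg/m3), Darcy velocity v (m/s), viscosity mu (Pa s)."""
    return k * beta * rho * v / mu

def is_non_darcy(fo, threshold=0.10):
    """Non-Darcy effect E = Fo/(1+Fo): the fraction of the pressure
    gradient consumed by the inertial term. The 10% criterion used
    above corresponds to Fo >= 1/9."""
    return fo / (1.0 + fo) >= threshold
```

In practice, beta is the quantity estimated from the different empirical formulas the paper compares, so the classification can change with the chosen correlation.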
Magnetohydrodynamic Convection in the Outer Core and its Geodynamic Consequences
NASA Technical Reports Server (NTRS)
Kuang, Weijia; Chao, Benjamin F.; Fang, Ming
2004-01-01
The Earth's fluid outer core has been in vigorous convection through much of the Earth's history. In addition to generating and maintaining the Earth's time-varying magnetic field (the geodynamo), the core convection also generates mass redistribution in the core and a dynamical pressure field on the core-mantle boundary (CMB). All of these result in various core-mantle interactions and contribute to surface geodynamic observables. For example, electromagnetic core-mantle coupling arises from the finite electrical conductivity of the lower mantle; gravitational interaction occurs between the core and the heterogeneous mantle; mechanical coupling may also occur when the CMB topography is aspherical. Besides changing the mantle rotation via the coupling torques, the mass redistribution in the core produces a spatial-temporal gravity anomaly. Numerical modeling of the core dynamical processes contributes to several geophysical disciplines. It helps explain the physical causes of surface geodynamic observables obtained via space geodetic techniques and other means, e.g. Earth's rotation variation on decadal time scales and secular time-variable gravity. Conversely, identification of the sources of the observables can provide additional insights into the dynamics of the fluid core, leading to better constraints on the physics in the numerical modeling. In the past few years, our core dynamics modeling efforts with our MoSST model have made significant progress in understanding individual geophysical consequences. However, integrated studies are desirable, not only because of more mature numerical core dynamics models, but also because of the inter-correlation among the geophysical phenomena; e.g. mass redistribution in the outer core produces not only time-variable gravity, but also gravitational core-mantle coupling and thus variation of the Earth's rotation.
Such integrated studies are expected to further facilitate multidisciplinary studies of core dynamics and of the interactions of the core with other components of the Earth.
Chapman, Steven W; Parker, Beth L; Sale, Tom C; Doner, Lee Ann
2012-08-01
It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents, where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients, and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores the validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers, which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean water flushing, and 2) the two-layer analytical solution of Sale et al. (2008) involving a relatively simple scenario with an aquifer and underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization is applied to represent problem geometry, resolve flow fields, capture advective transport in the sands and diffusive transfer with low permeability layers, and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models: capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making. Copyright © 2012 Elsevier B.V. All rights reserved.
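The back-diffusion mechanism at the heart of this comparison can be sketched with a minimal explicit finite-difference model of 1-D diffusion into a low-permeability layer (fixed concentration at the top, no-flux at the bottom). This is a schematic of the process, not any of the three cited codes:

```python
def diffuse_1d(conc, diff, dz, dt, steps):
    """Explicit finite-difference solution of 1-D diffusion into a
    low-permeability layer: fixed concentration at the top node (the
    transmissive zone) and a no-flux condition at the bottom.
    Stable only if diff*dt/dz**2 <= 0.5."""
    r = diff * dt / dz ** 2
    assert r <= 0.5, "explicit scheme unstable"
    c = list(conc)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        new[-1] = new[-2]   # no-flux bottom boundary
        c = new             # c[0] (top boundary) held fixed
    return c
```

Resetting the top boundary to zero after a loading period reproduces the slow back-diffusion release that sustains plumes after source depletion; resolving the steep gradients near the interface is exactly why high-resolution discretization is needed.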
Prerequisites for understanding climate-change impacts on northern prairie wetlands
Anteau, Michael J.; Wiltermuth, Mark T.; Post van der Burg, Max; Pearse, Aaron T.
2016-01-01
The Prairie Pothole Region (PPR) contains ecosystems that are typified by an extensive matrix of grasslands and depressional wetlands, which provide numerous ecosystem services. Over the past 150 years the PPR has experienced numerous landscape modifications resulting in agricultural conversion of 75–99 % of native prairie uplands and drainage of 50–90 % of wetlands. There is concern over how and where conservation dollars should be spent within the PPR to protect and restore wetland basins to support waterbird populations that will be robust to a changing climate. However, while hydrological impacts of landscape modifications appear substantial, they are still poorly understood. Previous modeling efforts addressing impacts of climate change on PPR wetlands have yet to fully incorporate interacting or potentially overshadowing impacts of landscape modification. We outlined several information needs for building more informative models to predict climate change effects on PPR wetlands. We reviewed how landscape modification influences wetland hydrology and present a conceptual model to describe how modified wetlands might respond to climate variability. We note that current climate projections do not incorporate cyclical variability in climate between wet and dry periods even though such dynamics have shaped the hydrology and ecology of PPR wetlands. We conclude that there are at least three prerequisite steps to making meaningful predictions about effects of climate change on PPR wetlands. Those evident to us are: 1) an understanding of how physical and watershed characteristics of wetland basins of similar hydroperiods vary across temperature and moisture gradients; 2) a mechanistic understanding of how wetlands respond to climate across a gradient of anthropogenic modifications; and 3) improved climate projections for the PPR that can meaningfully represent potential changes in climate variability including intensity and duration of wet and dry periods. 
Once these issues are addressed, we contend that modeling efforts will better inform and quantify ecosystem services provided by wetlands to meet needs of waterbird conservation and broader societal interests such as flood control and water quality.
Early warnings of hazardous thunderstorms over Lake Victoria
NASA Astrophysics Data System (ADS)
Thiery, Wim; Gudmundsson, Lukas; Bedka, Kristopher; Semazzi, Fredrick H. M.; Lhermitte, Stef; Willems, Patrick; van Lipzig, Nicole P. M.; Seneviratne, Sonia I.
2017-07-01
Weather extremes have harmful impacts on communities around Lake Victoria in East Africa. Every year, intense nighttime thunderstorms cause numerous boating accidents on the lake, resulting in thousands of deaths among fishermen. Operational storm warning systems are therefore crucial. Here we complement ongoing early warning efforts based on numerical weather prediction by presenting a new satellite-data-driven storm prediction system, the prototype Lake Victoria Intense storm Early Warning System (VIEWS). VIEWS derives predictability from the correlation between afternoon land storm activity and nighttime storm intensity on Lake Victoria, and relies on logistic regression techniques to forecast extreme thunderstorms from satellite observations. Evaluation of the statistical model reveals that predictive power is high and independent of the type of input dataset. We then optimise the configuration and show that false alarms also contain valuable information. Our results suggest that regression-based models that are motivated through process understanding have the potential to reduce the vulnerability of local fishing communities around Lake Victoria. The experimental prediction system is publicly available under the MIT licence at http://github.com/wthiery/VIEWS.
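A logistic-regression storm predictor of the kind VIEWS relies on can be sketched in a few lines; the coefficients below are illustrative placeholders, not the fitted VIEWS parameters:

```python
import math

def predict_storm_prob(afternoon_land_activity, b0=-2.0, b1=0.8):
    """Logistic regression linking an afternoon land-storm activity
    index to the probability of an intense nighttime storm over the
    lake. b0 and b1 are illustrative, not the fitted VIEWS values."""
    z = b0 + b1 * afternoon_land_activity
    return 1.0 / (1.0 + math.exp(-z))

def warn(prob, threshold=0.5):
    """Issue a warning when the forecast probability exceeds the
    threshold; tuning the threshold trades hit rate for false alarms."""
    return prob >= threshold
```

The warning threshold is the configuration knob the paper optimises: lowering it raises the hit rate at the cost of more false alarms, which the authors show still carry useful information.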
Kar, T K; Ghosh, Bapan
2012-08-01
In the present paper, we develop a simple two-species prey-predator model in which the predator is partially coupled with alternative prey. The aim is to study the consequences of providing additional food to the predator, as well as the effects of harvesting efforts applied to both species. It is observed that the provision of alternative food to the predator is not always beneficial to the system. A complete picture of the long-run dynamics of the system is discussed based on the effort pair as control parameters. Optimal augmentations of prey and predator biomass at final time have been investigated by optimal control theory. The short- and long-term effects of applying the optimal control are also discussed. Finally, some numerical illustrations are given to verify our analytical results with the help of different sets of parameters. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
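A forward-Euler sketch of such a harvested prey-predator system with an additional-food term for the predator is shown below; the functional forms and parameter values are illustrative assumptions, not those of the paper:

```python
def simulate(prey0, pred0, e1, e2, dt=0.01, steps=10000,
             r=1.0, K=10.0, a=0.5, b=0.3, d=0.4, A=0.2,
             q1=0.1, q2=0.1):
    """Forward-Euler integration of a prey-predator system with
    harvesting efforts e1, e2 (catchabilities q1, q2) and an
    additional-food term A for the predator. Functional forms and
    parameter values are illustrative assumptions."""
    x, y = prey0, pred0
    for _ in range(steps):
        dx = r * x * (1.0 - x / K) - a * x * y - q1 * e1 * x
        dy = b * (a * x + A) * y - d * y - q2 * e2 * y
        x = max(x + dt * dx, 0.0)
        y = max(y + dt * dy, 0.0)
    return x, y
```

Sweeping the effort pair (e1, e2) over a grid of such runs is one way to map out the long-run outcomes (coexistence, predator extinction, system collapse) discussed in the paper.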
Duarte, Adam; Wolcott, Daniel M.; Chow, T. Edwin
2012-01-01
The Aleutian shield fern Polystichum aleuticum is endemic to the Aleutian archipelago of Alaska and is listed as endangered pursuant to the U.S. Endangered Species Act. Despite numerous efforts to discover new populations of this species, only four known populations are documented to date, and information is needed to prioritize locations for future surveys. Therefore, we incorporated topographical habitat characteristics (elevation, slope, aspect, distance from coastline, and anthropogenic footprint) found at known Aleutian shield fern locations into a Geographical Information System (GIS) model to create a habitat suitability map for the entirety of the Andreanof Islands. A total of 18 islands contained 489.26 km2 of highly suitable and moderately suitable habitat when weighting each factor equally. This study reports a habitat suitability map for the endangered Aleutian shield fern using topographical characteristics, which can be used to assist current and future recovery efforts for the species.
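The equally weighted overlay at the core of such a GIS suitability model can be sketched as a weighted sum of rescaled factor scores; the factor scores passed in are assumed to be pre-normalized to [0, 1]:

```python
def suitability(scores, weights=None):
    """Weighted-overlay suitability index. Each factor (e.g. elevation,
    slope, aspect, distance from coastline, anthropogenic footprint)
    must first be rescaled to a 0-1 suitability score; leaving
    weights=None reproduces an equally weighted overlay."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))
```

Applying this cell by cell over the five factor rasters and thresholding the result into classes (e.g. highly/moderately suitable) yields a suitability map of the kind reported in the study.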
Adapting California’s ecosystems to a changing climate
Chornesky, Elizabeth; Ackerly, David; Beier, Paul; Davis, Frank; Flint, Lorraine E.; Lawler, Joshua J.; Moyle, Peter B.; Moritz, Max A.; Scoonover, Mary; Byrd, Kristin B.; Alvarez, Pelayo; Heller, Nicole E.; Micheli, Elisabeth; Weiss, Stuart
2017-01-01
Significant efforts are underway to translate improved understanding of how climate change is altering ecosystems into practical actions for sustaining ecosystem functions and benefits. We explore this transition in California, where adaptation and mitigation are advancing relatively rapidly, through four case studies that span large spatial domains and encompass diverse ecological systems, institutions, ownerships, and policies. The case studies demonstrate the context specificity of societal efforts to adapt ecosystems to climate change and involve applications of diverse scientific tools (e.g., scenario analyses, downscaled climate projections, ecological and connectivity models) tailored to specific planning and management situations (alternative energy siting, wetland management, rangeland management, open space planning). They illustrate how existing institutional and policy frameworks provide numerous opportunities to advance adaptation related to ecosystems and suggest that progress is likely to be greatest when scientific knowledge is integrated into collective planning and when supportive policies and financing enable action.
On the Reprocessing and Reanalysis of Observations for Climate
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Kennedy, John; Dee, Dick; ONeill, Alan
2012-01-01
The long observational record is critical to our understanding of the Earth's climate, but most observing systems were not developed with a climate objective in mind. As a result, tremendous efforts have gone into assessing and reprocessing the data records to improve their usefulness in climate studies. Many challenges remain, such as tracking the improvement of processing algorithms and limited spatial coverage. Reanalyses have fostered significant research, yet reliable global trends in many physical fields are not yet attainable, despite significant advances in data assimilation and numerical modeling. Communication of the strengths, limitations and uncertainties of reprocessed observations and reanalysis data, not only among the community of developers but also with the extended research community, including new generations of researchers and decision makers, is crucial for further advancement of the observational data records. WCRP provides the means to bridge the different motivating objectives on which national efforts focus.
Device research task (processing and high-efficiency solar cells)
NASA Technical Reports Server (NTRS)
1986-01-01
This task has been expanded since the 25th Project Integration Meeting (PIM) to include process research in addition to device research. The objective of this task is to assist the Flat-plate Solar Array (FSA) Project in meeting its near- and long-term goals by identifying and implementing research in the areas of device physics, device structures, measurement techniques, material-device interactions, and cell processing. The research efforts of this task are described and reflect the diversity of device research being conducted. All of the contracts being reported are either completed or near completion and culminate the device research efforts of the FSA Project. Optimization methods and silicon solar cell numerical models, carrier transport and recombination parameters in heavily doped silicon, development and analysis of silicon solar cells of near 20% efficiency, and SiNx passivation of silicon surfaces are discussed.
NASA Astrophysics Data System (ADS)
Yang, Xiang I. A.; Park, George Ilhwan; Moin, Parviz
2017-10-01
Log-layer mismatch refers to a chronic problem found in wall-modeled large-eddy simulation (WMLES) or detached-eddy simulation, where the modeled wall-shear stress deviates from the true one by approximately 15%. Many efforts have been made to resolve this mismatch. The often-used fixes, which are generally ad hoc, include modifying subgrid-scale stress models, adding a stochastic forcing, and moving the LES-wall-model matching location away from the wall. An analysis motivated by the integral wall-model formalism suggests that log-layer mismatch is resolved by the built-in physics-based temporal filtering. In this work we investigate in detail the effects of local filtering on log-layer mismatch. We show that both local temporal filtering and local wall-parallel filtering resolve log-layer mismatch without moving the LES-wall-model matching location away from the wall. Additionally, we look into the momentum balance in the near-wall region to provide an alternative explanation of how log-layer mismatch occurs, one that does not necessarily rely on the numerical-error argument. While filtering resolves log-layer mismatch, the quality of the wall-shear stress fluctuations predicted by WMLES does not improve with our remedy. The wall-shear stress fluctuations are highly underpredicted due to the implied use of LES filtering. However, good agreement can be found when the WMLES data are compared to the direct numerical simulation data filtered at the corresponding WMLES resolutions.
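The local temporal filtering studied here can be illustrated with a discrete exponential filter applied to the LES signal fed to the wall model; the update rule below is a generic sketch, with eps = dt / t_filter assumed small:

```python
def temporally_filtered_input(samples, dt, t_filter):
    """Discrete exponential time filter applied to the LES velocity
    fed to the wall model: u_f(n+1) = (1 - eps)*u_f(n) + eps*u(n+1),
    with eps = dt / t_filter (assumed << 1). A generic sketch of the
    built-in temporal filtering discussed above."""
    eps = dt / t_filter
    u_f = samples[0]
    out = []
    for u in samples:
        u_f = (1.0 - eps) * u_f + eps * u
        out.append(u_f)
    return out
```

The filter damps the high-frequency content of the matching-point velocity, which is the mechanism by which the mismatch is removed without relocating the matching point; it is also why the resulting wall-shear stress fluctuations are smoother than the true signal.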
On the Forward Scattering of Microwave Breast Imaging
Lui, Hoi-Shun; Fhager, Andreas; Persson, Mikael
2012-01-01
Microwave imaging for breast cancer detection has been of significant interest for the last two decades. Recent studies focus on solving the imaging problem using an inverse scattering approach. Efforts have mainly been focused on the development of the inverse scattering algorithms, experimental setup, antenna design and clinical trials. However, the success of microwave breast imaging also heavily relies on the quality of the forward data such that the tumor inside the breast volume is well illuminated. In this work, a numerical study of the forward scattering data is conducted. The scattering behavior of simple breast models under different polarization states and aspect angles of illumination are considered. Numerical results have demonstrated that better data contrast could be obtained when the breast volume is illuminated using cross-polarized components in linear polarization basis or the copolarized components in the circular polarization basis. PMID:22611371
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
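One simple load-management strategy is to treat each PIC patch's particle count as its load and greedily assign patches to the least-loaded rank; this is a generic sketch, not the specific algorithm proposed in the paper:

```python
import heapq

def balance_patches(patch_loads, n_ranks):
    """Greedy load management: sort patches by load (e.g. particle
    count) and repeatedly assign the heaviest unassigned patch to the
    currently least-loaded rank. A generic sketch, not the specific
    algorithm proposed in the paper."""
    heap = [(0.0, r, []) for r in range(n_ranks)]
    heapq.heapify(heap)
    for i, load in sorted(enumerate(patch_loads), key=lambda p: -p[1]):
        total, r, patches = heapq.heappop(heap)
        patches.append(i)
        heapq.heappush(heap, (total + load, r, patches))
    return {r: (total, patches) for total, r, patches in heap}
```

In LWFA runs the load distribution evolves as the accelerated bunch forms, so any such assignment must be recomputed periodically, which is what makes dynamic load balancing a core design concern for a modern PIC code.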
The Finite Strain Johnson Cook Plasticity and Damage Constitutive Model in ALEGRA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Jason James
A finite strain formulation of the Johnson Cook plasticity and damage model and its numerical implementation into the ALEGRA code is presented. The goal of this work is to improve the predictive material failure capability of the Johnson Cook model. The new implementation consists of a coupling of damage and the stored elastic energy, as well as the minimum failure strain criterion for spall included in the original model development. This effort establishes the necessary foundation for a thermodynamically consistent and complete continuum solid material model, for which all intensive properties derive from a common energy. The motivation for developing such a model is to improve upon ALEGRA's present combined model framework. Several applications of the new Johnson Cook implementation are presented. Deformation-driven loading paths demonstrate the basic features of the new model formulation. Use of the model produces good comparisons with experimental Taylor impact data. Localized deformation leading to fragmentation is produced for expanding ring and exploding cylinder applications.
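The undamaged Johnson-Cook flow stress itself has the familiar multiplicative form; the sketch below uses commonly quoted constants for 4340 steel and omits the damage/elastic-energy coupling that is the subject of this work:

```python
import math

def johnson_cook_flow_stress(eps_p, eps_rate, temp,
                             A=792e6, B=510e6, n=0.26, C=0.014, m=1.03,
                             eps_rate_ref=1.0, t_room=298.0,
                             t_melt=1793.0):
    """Undamaged Johnson-Cook flow stress (Pa):
        sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate*)) * (1 - T***m)
    with eps_rate* = eps_rate/eps_rate_ref and homologous temperature
    T* = (T - t_room)/(t_melt - t_room). Default constants are
    commonly quoted values for 4340 steel."""
    t_star = (temp - t_room) / (t_melt - t_room)
    rate_term = 1.0 + C * math.log(max(eps_rate / eps_rate_ref, 1e-12))
    return (A + B * eps_p ** n) * rate_term * (1.0 - t_star ** m)
```

The hardening, rate-sensitivity, and thermal-softening factors are multiplicative, so each can be calibrated separately; the finite strain formulation described above additionally degrades this stress through the damage variable.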
Numerical simulations of relativistic heavy-ion reactions
NASA Astrophysics Data System (ADS)
Daffin, Frank Cecil
Bulk quantities of nuclear matter exist only in the compact bodies of the universe. There the crushing gravitational forces overcome the Coulomb repulsion in massive stellar collapses. Nuclear matter is subjected to high pressures and temperatures as shock waves propagate and burn their way through stellar cores. The bulk properties of nuclear matter are important parameters in the evolution of these collapses, some of which lead to nucleosynthesis. The nucleus is rich in physical phenomena. Above the Coulomb barrier, complex interactions lead to the distortion of, and as collision energies increase, the destruction of the nuclear volume. Of critical importance to the understanding of these events is an understanding of the aggregate microscopic processes which govern them. In an effort to understand relativistic heavy-ion reactions, the Boltzmann-Uehling-Uhlenbeck (BUU) transport equation is used as the framework for a numerical model. In the years since its introduction, the numerical model has been instrumental in providing a coherent, microscopic, physical description of these complex, highly non-linear events. This treatise describes the background leading to the creation of our numerical model of the BUU transport equation, details of its numerical implementation, its application to the study of relativistic heavy-ion collisions, and some of the experimental observables used to compare calculated results to empirical results. The formalism evolves the one-body Wigner phase-space distribution of nucleons in time under the influence of a single-particle nuclear mean field interaction and a collision source term. This is essentially the familiar Boltzmann transport equation whose source term has been modified to address the Pauli exclusion principle.
Two elements of the model allow extrapolation from the study of nuclear collisions to bulk quantities of nuclear matter: the modification of nucleon scattering cross sections in nuclear matter, and the compressibility of nuclear matter. Both are primarily subject to the short-range portion of the inter-nucleon potential, and do not show strong finite-size effects. To that end, several useful observables are introduced and their behavior, as BUU model parameters are changed, is explored. The average directed in-plane transverse momentum distribution in rapidity is the oldest of the observables presented in this work. Its slope at mid-rapidity is called the flow of the event, and it characterizes well the interplay of repulsive and attractive elements of the dynamics of the events. The BUU model has been quite successful in its role of illuminating the physics of intermediate-energy heavy-ion collisions. Though current numerical implementations suffer from some shortcomings, they have nonetheless served the community well.
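A minimal sketch of two ingredients such BUU codes rely on: a Skyrme-type density-dependent mean field, whose parameters set the nuclear-matter compressibility ("soft" vs. "stiff" equation of state), and the Monte Carlo Pauli-blocking test applied to test-particle collisions. The parameter sets are the ones commonly quoted in the BUU literature and are illustrative only.

```python
import numpy as np

def skyrme_mean_field(rho, rho0=0.16, params="soft"):
    """Single-particle potential U(rho) = a*(rho/rho0) + b*(rho/rho0)**g
    in MeV.  The two classic parameter sets fix the compressibility K
    (values as commonly quoted in the BUU literature; illustrative):
      soft : K ~ 200 MeV -> a = -356,  b = 303,  g = 7/6
      stiff: K ~ 380 MeV -> a = -124,  b = 70.5, g = 2
    """
    a, b, g = (-356.0, 303.0, 7.0 / 6.0) if params == "soft" else (-124.0, 70.5, 2.0)
    u = rho / rho0
    return a * u + b * u**g

def pauli_blocked(f1p, f2p, rng):
    """Reject (block) a test-particle collision with probability
    1 - (1-f1')(1-f2'), where f' is the phase-space occupancy of each
    final-state cell; returns True when the collision is Pauli blocked."""
    return rng.random() > (1.0 - f1p) * (1.0 - f2p)
```

Both parameterizations give roughly the same binding at saturation density (about -53 MeV) but diverge at compression, which is exactly the lever used to probe compressibility with flow observables.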
NASA Astrophysics Data System (ADS)
Oetjen, Jan; Engel, Max; Prasad Pudasaini, Shiva; Schüttrumpf, Holger; Brückner, Helmut
2017-04-01
Coasts around the world are affected by high-energy wave events such as storm surges and tsunamis, depending on their regional climatological and geological settings. Focusing on tsunami impacts, we combine the abilities and experience of different scientific fields, aiming at improved insight into near- and onshore tsunami hydrodynamics. We investigate the transport of coarse clasts (so-called boulders) by tsunami impacts through a multi-methodology approach of numerical modelling, laboratory experiments, and sedimentary field records. Coupled numerical hydrodynamic and boulder transport models (BTM) are widely applied for analysing the characteristics of transport by tsunami, such as wave height and flow velocity. Numerical models able to simulate past tsunami events and the corresponding boulder transport patterns with high accuracy and acceptable computational effort can be utilized as powerful forecasting tools predicting the impact of an approaching tsunami on a coast. We have conducted small-scale physical experiments in a tilting flume with realistically shaped boulder models. Utilizing the structure-from-motion technique (Westoby et al., 2012), we reconstructed real boulders from a field study on the island of Bonaire (Lesser Antilles, Caribbean Sea; Engel & May, 2012). The obtained three-dimensional boulder meshes are used to create downscaled replicas of the real boulders for the physical experiments. The results for the irregularly shaped boulders are compared with experiments using regularly shaped boulder models to achieve better insight into the shape-related influence on transport patterns. The numerical model is based on the general two-phase mass flow model by Pudasaini (2012), enhanced for boulder transport simulations. The boulder is implemented using the immersed boundary technique (Peskin, 2002) and the direct forcing approach. In this method, Cartesian grids (fluid and particle phases) and Lagrangian meshes (boulder) are combined.
By applying the immersed boundary method we can compute the interactions between fluid, particles, and an arbitrary boulder shape. We are able to reproduce the exact physical experiment for calibration and verification of the tsunami boulder transport phenomena. First results of the study will be presented. References: Engel, M.; May, S.M.: Bonaire's boulder fields revisited: evidence for Holocene tsunami impact on the Leeward Antilles. Quaternary Science Reviews 54, 126-141, 2012. Peskin, C.S.: The immersed boundary method. Acta Numerica, 479-517, 2002. Pudasaini, S.P.: A general two-phase debris flow model. J. Geophys. Res. Earth Surf. 117, F03010, 2012. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M.: 'Structure-from-Motion' photogrammetry: a low-cost, effective tool for geoscience applications. Geomorphology 179, 300-314, 2012.
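The direct-forcing idea can be illustrated in one dimension: after an unforced predictor step, a body force f = (u_target - u*)/dt restores the prescribed velocity on cells covered by the immersed body. This toy diffusion problem is only a sketch of the technique, not the coupled two-phase boulder model described above.

```python
import numpy as np

nx, nu, dt, dx = 64, 0.1, 1e-3, 1.0 / 64
u = np.ones(nx)                                        # initial stream velocity
solid = (np.arange(nx) > 24) & (np.arange(nx) < 32)    # cells inside the "boulder"
u_body = 0.0                                           # stationary immersed body

for _ in range(200):
    # predictor: explicit diffusion step with periodic boundaries
    u_star = u + nu * dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    # direct forcing: body force that enforces the boundary velocity
    force = np.where(solid, (u_body - u_star) / dt, 0.0)
    u = u_star + dt * force
```

The forcing is applied only on flagged cells, so the fluid solver itself never needs a boundary-fitted grid; this is the property that makes the approach attractive for arbitrary boulder shapes.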
The effect of mathematics anxiety on the processing of numerical magnitude.
Maloney, Erin A; Ansari, Daniel; Fugelsang, Jonathan A
2011-01-01
In an effort to understand the origins of mathematics anxiety, we investigated the processing of symbolic magnitude by high mathematics-anxious (HMA) and low mathematics-anxious (LMA) individuals by examining their performance on two variants of the symbolic numerical comparison task. In two experiments, a numerical distance by mathematics anxiety (MA) interaction was obtained, demonstrating that the effect of numerical distance on response times was larger for HMA than for LMA individuals. These data support the claim that HMA individuals have less precise representations of numerical magnitude than their LMA peers, suggesting that MA is associated with low-level numerical deficits that compromise the development of higher level mathematical skills.
Observational and numerical studies of extreme frontal scale contraction
NASA Technical Reports Server (NTRS)
Koch, Steven E.
1995-01-01
The general objective of this effort is to increase understanding of how frontal scale contraction processes may create and sustain intense mesoscale precipitation along intensifying cold fronts. The five-part project (an expansion of the originally proposed two-part project) employed conventional meteorological data, special mesoscale data, remote sensing measurements, and various numerical models. First an idealized hydrostatic modeling study of the scale contraction effects of differential cloud cover on low-level frontal structure and dynamics was completed and published in a peer-reviewed journal. The second objective was to complete and publish the results from a three dimensional numerical model simulation of a cold front in which differential sensible heating related to cloud coverage patterns was apparently crucial in the formation of a severe frontal squall line. The third objective was to use a nonhydrostatic model to examine the nonlinear interactions between the transverse circulation arising from inhomogeneous cloud cover, the adiabatic frontal circulation related to semi-geostrophic forcing, and diabatic effects related to precipitation processes, in the development of a density current-like microstructure at the leading edge of cold fronts. Although the development of a frontal model that could be used to initialize such a primitive equation model was begun, we decided to focus our efforts instead on a project that could be successfully completed in this short time, due to the lack of prospects for continued NASA funding beyond this first year (our proposal was not accepted for future funding). Thus, a fourth task was added, which was to use the nonhydrostatic model to test tentative hypotheses developed from the most detailed observations ever obtained on a density current (primarily sodar and wind profiler data). 
These simulations were successfully completed, the findings were reported at a scientific conference, and the results have recently been submitted to a peer-reviewed journal. The fifth objective was to complete the analysis of data collected during the Cooperative Oklahoma Profiler Studies (COPS-91) field project, which was supported by NASA. The analysis of the mesoscale surface and sounding data, Doppler radar imagery, and other remote sensing data from a multi-frequency wind profiler, microwave radiometer, and the Radio Acoustic Sounding System has been completed. This study is a unique investigation of processes that caused the contraction of a cold front to a microscale zone exhibiting an undular bore-like structure. Results were reported at a scientific conference and are being prepared for publication. In summary, considerable progress has been achieved under NASA funding in furthering our understanding of frontal scale contraction and density current-gravity wave interaction processes, and in utilizing models and remotely sensed data in such studies.
NASA Astrophysics Data System (ADS)
Pulkkinen, A.
2012-12-01
Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue to play an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are needed also for space weather purposes. Predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As we introduce more localized predictions of the geospace state, one central question is: how predictable are these local quantities? This complex question can be addressed by rigorously measuring model performance against observed data. The space sciences community has made great advances on this topic over the past few years, and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere, and geospace models, respectively. These efforts will help establish benchmarks and thus provide means to measure progress in the field, analogous to the monitoring of improvement in lower-atmospheric weather predictions carried out rigorously since the 1980s. In this paper we will discuss some of the latest advances in predicting local geospace parameters and give an overview of some of the community efforts to rigorously measure model performance. We will also briefly discuss some of the future opportunities for advancing geospace modeling capability.
These will include further development in data assimilation and ensemble modeling (e.g. taking into account uncertainty in the inflow boundary conditions).
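One simple point-wise metric used in such model-validation exercises is the prediction efficiency, sketched here on a synthetic Dst-like signal. The metric choice and the test signal are illustrative assumptions, not a specific community benchmark:

```python
import numpy as np

def prediction_efficiency(obs, pred):
    """PE = 1 - MSE / variance of the observations.  PE = 1 for a
    perfect model; PE <= 0 means the model does no better than simply
    predicting the observed mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.mean((obs - pred) ** 2) / np.var(obs)

# toy Dst-like storm signature and a model that under-predicts amplitude
t = np.linspace(0.0, 10.0, 500)
dst = -50.0 * np.exp(-((t - 4.0) ** 2) / 2.0)
model = 0.8 * dst
pe_model = prediction_efficiency(dst, model)
```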
Numerical Simulation of a High Mach Number Jet Flow
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.
1993-01-01
The recent efforts to develop accurate numerical schemes for transitional and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing the High Speed Civil Transport (HSCT) is contingent upon our understanding and suppression of jet exhaust noise. The radiated sound can be obtained directly by solving the full (time-dependent) compressible Navier-Stokes equations; however, this requires computational storage beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field, where the jet is nonlinear, and then using an acoustic analogy (e.g., Lighthill's) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustic computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulation (DNS) cannot resolve all the scales of motion at Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise; thus, one can model the small scales and calculate the acoustically active scales. Because the large-scale structure in the noise-producing initial region of the jet is wavelike in nature, the net radiated sound is the residual left after cancellation upon integration over space. As such, aeroacoustic computations are highly sensitive to errors in computing the sound sources, and it is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented.
Jet Mach numbers of 1.5 and 2.1 are considered. The Reynolds number in the simulations was about one million. Our numerical model is based on the 2-4 scheme of Gottlieb and Turkel. Bayliss et al. applied the 2-4 scheme in boundary-layer computations, and the scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. The computations were made for the near-nozzle-exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.
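A sketch of the 2-4 scheme (second order in time, fourth order in space) for the linear advection model problem, using the one-sided predictor/corrector stencils attributed to Gottlieb and Turkel; the periodic test problem and parameters are illustrative only. The two one-sided sweeps average to the standard fourth-order central difference, which is where the spatial accuracy comes from.

```python
import numpy as np

def maccormack_24_step(u, c, dt, dx):
    """One step of the 2-4 MacCormack-type scheme for u_t + c u_x = 0
    with periodic boundaries (via np.roll)."""
    # predictor sweep: one-sided forward difference
    dudx_f = (-np.roll(u, -2) + 8.0 * np.roll(u, -1) - 7.0 * u) / (6.0 * dx)
    u_pred = u - c * dt * dudx_f
    # corrector sweep: mirrored one-sided backward difference
    dudx_b = (7.0 * u_pred - 8.0 * np.roll(u_pred, 1) + np.roll(u_pred, 2)) / (6.0 * dx)
    return 0.5 * (u + u_pred - c * dt * dudx_b)

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
c, dt, nsteps = 1.0, 0.25 * dx, 40
u = np.sin(x)
for _ in range(nsteps):
    u = maccormack_24_step(u, c, dt, dx)
exact = np.sin(x - c * nsteps * dt)
```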
Modeling Momentum Transfer from Kinetic Impacts: Implications for Redirecting Asteroids
Stickle, A. M.; Atchison, J. A.; Barnouin, O. S.; ...
2015-05-19
Kinetic impactors are one way to deflect a potentially hazardous object headed for Earth. The Asteroid Impact and Deflection Assessment (AIDA) mission is designed to test the effectiveness of this approach and is a joint effort between NASA and ESA. The NASA-led portion is the Double Asteroid Redirect Test (DART), composed of a ~300-kg spacecraft designed to impact the moon of the binary system 65803 Didymos. The deflection of the moon will be measured by the ESA-led Asteroid Impact Mission (AIM), which will characterize the moon, and from ground-based observations. Because the material properties and internal structure of the target are poorly constrained, however, analytical models and numerical simulations must be used to understand the range of potential outcomes. Here, we describe a modeling effort combining analytical models and CTH simulations to determine possible outcomes of the DART impact. We examine a wide parameter space and provide predictions for crater size, ejecta mass, and momentum transfer following the impact into the moon of the Didymos system. For impacts into "realistic" asteroid types, these models produce craters with diameters on the order of 10 m, an imparted Δv of 0.5-2 mm/s, and a momentum enhancement factor ranging from 1.07 for a highly porous aggregate to 5 for a fully dense rock.
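The quoted Δv range follows from a back-of-envelope momentum balance, Δv = β m U / M, where β is the momentum enhancement factor (β > 1 because crater ejecta carry momentum backward). The spacecraft mass and β range come from the abstract; the impact speed and the moon's mass below are round-number assumptions for illustration only:

```python
def deflection_dv(beta, m_sc, v_impact, m_target):
    """Target velocity change from a kinetic impact, m/s."""
    return beta * m_sc * v_impact / m_target

M_MOON = 4.0e9   # kg, assumed mass of the Didymos moon (illustrative)
U = 6.0e3        # m/s, assumed impact speed (illustrative)

dv_lo = deflection_dv(1.07, 300.0, U, M_MOON)   # porous-aggregate end
dv_hi = deflection_dv(5.0, 300.0, U, M_MOON)    # fully-dense-rock end
```

With these round numbers the two ends of the β range bracket roughly 0.5 to 2 mm/s, consistent with the Δv range quoted above.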
Analyzing extreme sea levels for broad-scale impact and adaptation studies
NASA Astrophysics Data System (ADS)
Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.
2017-12-01
Coastal impact and adaptation assessments require detailed knowledge of extreme sea levels (ESL), because increasing damage due to extreme events is one of the major consequences of sea-level rise (SLR) and climate change. Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future SLR; different scenarios were developed with process-based or semi-empirical models and used for coastal impact studies at various temporal and spatial scales to guide coastal management and adaptation efforts. Uncertainties in future SLR are typically accounted for by analyzing the impacts associated with a range of scenarios and model ensembles. ESL distributions are then displaced vertically according to the SLR scenarios, under the inherent assumption that we have perfect knowledge of the statistics of extremes. However, there is still a limited understanding of present-day ESL, which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) the numerical models used to generate long time series of storm-surge water levels, and (2) the statistical models used for determining present-day ESL exceedance probabilities. There is no universally accepted approach to obtain such values for broad-scale flood risk assessments, and while substantial research has explored SLR uncertainties, we quantify, for the first time globally, the key uncertainties in ESL estimates. We find that contemporary ESL uncertainties exceed those from SLR projections and, assuming that we meet the Paris agreement, the projected SLR itself by the end of the century. Our results highlight the necessity to further improve our understanding of uncertainties in ESL estimates through (1) continued improvement of the numerical and statistical models used to simulate and analyze coastal water levels, and (2) exploitation of the rich observational database and continued data archeology to obtain longer time series and remove model bias.
Finally, ESL uncertainties need to be integrated with SLR uncertainties. Otherwise, important improvements in providing more robust SLR projections are of less benefit for broad-scale impact and adaptation studies and decision processes.
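As a sketch of the statistical-model side of the problem, the following fits a Gumbel (GEV type I) distribution to synthetic annual maxima by the method of moments and evaluates return levels. The distribution choice, fitting method, and all numbers are illustrative assumptions, not the methodology of the study above:

```python
import numpy as np

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments Gumbel fit, a common minimal model for
    extreme-sea-level annual maxima:
    scale = std*sqrt(6)/pi,  loc = mean - 0.5772*scale."""
    x = np.asarray(annual_maxima, float)
    scale = x.std() * np.sqrt(6.0) / np.pi
    loc = x.mean() - 0.5772 * scale
    return loc, scale

def return_level(loc, scale, T):
    """Level exceeded on average once every T years (Gumbel quantile)."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

# synthetic "annual maxima" drawn from a known Gumbel via the inverse CDF
rng = np.random.default_rng(1)
loc_true, scale_true = 1.0, 0.2          # metres, illustrative
p = np.clip(rng.random(5000), 1e-12, 1.0 - 1e-12)
maxima = loc_true - scale_true * np.log(-np.log(p))
loc_hat, scale_hat = gumbel_fit_moments(maxima)
```

The spread between, say, the 50- and 100-year return levels scales directly with the fitted scale parameter, which is one concrete way uncertainty in the statistical model propagates into ESL exceedance estimates.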
NASA Astrophysics Data System (ADS)
Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.
2017-12-01
Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, two simplifying assumptions are typically employed when representing fracture flow and transport: aqueous-phase transport is typically considered insignificant compared to gas-phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air-pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water and full multi-phase transport, in order to test the validity of assuming immobile pore water.
Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.
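The equilibrium-versus-kinetics question can be sketched with a first-order mass-transfer model relaxing toward Henry's-law equilibrium: when the rate constant times the time scale of interest is large, the instantaneous-equilibrium assumption becomes reasonable. All values here are illustrative, not fitted to the Xe experiments:

```python
def dissolve(k, H, c_gas, t_end, dt=1e-3):
    """Explicit-Euler integration of first-order kinetic mass transfer
    toward Henry's-law equilibrium:  dC_aq/dt = k * (H*C_gas - C_aq).
    As k*t_end grows, C_aq approaches the instantaneous-equilibrium
    value H*C_gas."""
    c_aq = 0.0
    for _ in range(int(round(t_end / dt))):
        c_aq += dt * k * (H * c_gas - c_aq)
    return c_aq

H, c_gas = 0.1, 1.0                                   # dimensionless, illustrative
slow = dissolve(k=0.5, H=H, c_gas=c_gas, t_end=1.0)   # k*t = 0.5: far from equilibrium
fast = dissolve(k=50.0, H=H, c_gas=c_gas, t_end=1.0)  # k*t = 50: near equilibrium
```

Comparing the two regimes against breakthrough data is one way to decide when the cheaper instantaneous-equilibrium treatment can be retained.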
The Site-Scale Saturated Zone Flow Model for Yucca Mountain
NASA Astrophysics Data System (ADS)
Al-Aziz, E.; James, S. C.; Arnold, B. W.; Zyvoloski, G. A.
2006-12-01
This presentation provides a reinterpreted conceptual model of the Yucca Mountain site-scale flow system subject to all quality assurance procedures. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain, which is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. This effort started from the ground up with a revised and updated hydrogeologic framework model, which incorporates the latest lithology data, and increased grid resolution that better resolves the hydrogeologic framework, which was updated throughout the model domain. In addition, faults are much better represented using the 250 m × 250 m grid spacing (compared to the previous model's 500 m × 500 m spacing). Data collected since the previous model calibration effort have been included; they comprise all Nye County water-level data through Phase IV of their Early Warning Drilling Program. Target boundary fluxes are derived from the newest (2004) Death Valley Regional Flow System model from the U.S. Geological Survey. A consistent weighting scheme assigns importance to each measured water-level datum and boundary flux extracted from the regional model. The numerical model is calibrated by matching these weighted water-level measurements and boundary fluxes using parameter-estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (hydrologic simulation code FEHM v2.24 and parameter estimation software PEST v5.5) and model setup facilitate efficient calibration of multiple conceptual models. Analyses evaluate the impact of these updates and additional data on the modeled potentiometric surface and the flowpaths emanating from below the repository.
After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the proposed repository and compare them to those from the previous model calibration. Specific discharge at a point 5 km from the repository is also examined and found to be within acceptable uncertainty. The results show that the updated model yields a calibration with smaller residuals than the previous model revision while ensuring that flowpaths follow measured gradients and paths derived from hydrochemical analyses. This work was supported by the Yucca Mountain Site Characterization Office as part of the Civilian Radioactive Waste Management Program, which is managed by the U.S. Department of Energy, Yucca Mountain Site Characterization Project. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
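The weighting scheme enters the calibration through a PEST-style weighted least-squares objective, sketched here with a deliberately trivial one-parameter model and a grid search. The toy model, weights, and search are illustrative only; PEST itself uses gradient-based (Gauss-Marquardt-Levenberg) estimation:

```python
import numpy as np

def weighted_phi(obs, sim, w):
    """PEST-style objective  Phi = sum_i (w_i * (obs_i - sim_i))**2."""
    r = np.asarray(w, float) * (np.asarray(obs, float) - np.asarray(sim, float))
    return float(np.sum(r**2))

# Toy calibration: "heads" from a one-parameter model h_i = a_i / k,
# with water-level targets weighted more heavily than a flux target.
a = np.array([10.0, 20.0, 30.0])
k_true = 2.0
obs = a / k_true
w = np.array([1.0, 1.0, 0.25])        # weights express relative importance

k_grid = np.linspace(0.5, 5.0, 451)
phi = [weighted_phi(obs, a / k, w) for k in k_grid]
k_best = k_grid[int(np.argmin(phi))]
```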
Increasing Work Opportunities for Low-Income Workers through TANF and Economic Development Programs.
ERIC Educational Resources Information Center
Friedman, Pamela
2002-01-01
The numerous layoffs of low-income workers that occurred when the nation's economy slowed in 2001 have created many challenges for local Temporary Assistance for Needy Families (TANF) programs. By increasing collaboration between community economic development and workforce development efforts to serve low-income residents, states and…
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
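One standard procedure of this kind (assumed here for illustration; the memorandum may describe a different one) is the Landau front-fixing transformation, which maps the shrinking slab onto a fixed unit interval so the finite-difference mesh never loses points as the surface recedes. In transformed coordinates the heat equation picks up a pseudo-advective term. The recession rate is prescribed below; in a real ablation model it would come from the surface energy balance:

```python
import numpy as np

# Landau transformation: xi = (x - s(t)) / (L - s(t)) maps the slab
# [s(t), L] onto [0, 1].  Transformed heat equation:
#   theta_t = alpha * theta_xixi / (L - s)**2
#           + sdot * (1 - xi) / (L - s) * theta_xi
alpha, L, sdot = 1.0, 1.0, 0.1        # illustrative units
npts = 21
xi = np.linspace(0.0, 1.0, npts)
dxi = xi[1] - xi[0]
theta = np.zeros(npts)
theta[0] = 1.0                        # hot, receding surface at xi = 0
dt, steps = 5e-4, 1000                # explicit-Euler, stability-limited dt
s = 0.0
for _ in range(steps):
    rem = L - s
    lap = (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dxi**2
    grad = (theta[2:] - theta[:-2]) / (2.0 * dxi)
    theta[1:-1] += dt * (alpha * lap / rem**2
                         + sdot * (1.0 - xi[1:-1]) / rem * grad)
    s += sdot * dt                    # prescribed surface recession
```

The mesh stays at a fixed 21 points throughout, which is the operation-count saving the abstract refers to: no node dropping or re-gridding is needed even when recession is large relative to the slab thickness.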
Hydrologic modeling as a predictive basis for ecological restoration of salt marshes
Roman, C.T.; Garvine, R.W.; Portnoy, J.W.
1995-01-01
Roads, bridges, causeways, impoundments, and dikes in the coastal zone often restrict tidal flow to salt marsh ecosystems. A dike with tide control structures, located at the mouth of the Herring River salt marsh estuarine system (Wellfleet, Massachusetts) since 1908, has effectively restricted tidal exchange, causing changes in marsh vegetation composition, degraded water quality, and reduced abundance of fish and macroinvertebrate communities. Restoration of this estuary by reintroduction of tidal exchange is a feasible management alternative. However, restoration efforts must proceed with caution as residential dwellings and a golf course are located immediately adjacent to and in places within the tidal wetland. A numerical model was developed to predict tide height levels for numerous alternative openings through the Herring River dike. Given these model predictions and knowledge of elevations of flood-prone areas, it becomes possible to make responsible decisions regarding restoration. Moreover, tidal flooding elevations relative to the wetland surface must be known to predict optimum conditions for ecological recovery. The tide height model has a universal role, as demonstrated by successful application at a nearby salt marsh restoration site in Provincetown, Massachusetts. Salt marsh restoration is a valuable management tool toward maintaining and enhancing coastal zone habitat diversity. The tide height model presented in this paper will enable both scientists and resource professionals to assign a degree of predictability when designing salt marsh restoration programs.
NASA Astrophysics Data System (ADS)
Nielsen, Jens C. O.; Li, Xin
2018-01-01
An iterative procedure for numerical prediction of long-term degradation of railway track geometry (longitudinal level) due to accumulated differential settlement of ballast/subgrade is presented. The procedure is based on a time-domain model of dynamic vehicle-track interaction to calculate the contact loads between sleepers and ballast in the short-term, which are then used in an empirical model to determine the settlement of ballast/subgrade below each sleeper in the long-term. The number of load cycles (wheel passages) accounted for in each iteration step is determined by an adaptive step length given by a maximum settlement increment. To reduce the computational effort for the simulations of dynamic vehicle-track interaction, complex-valued modal synthesis with a truncated modal set is applied for the linear subset of the discretely supported track model with non-proportional spatial distribution of viscous damping. Gravity loads and state-dependent vehicle, track and wheel-rail contact conditions are accounted for as external loads on the modal model, including situations involving loss of (and recovered) wheel-rail contact, impact between hanging sleeper and ballast, and/or a prescribed variation of non-linear track support stiffness properties along the track model. The procedure is demonstrated by calculating the degradation of longitudinal level over time as initiated by a prescribed initial local rail irregularity (dipped welded rail joint).
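The outer loop of such a procedure can be sketched as follows. The dynamic-load model (amplification proportional to local geometry error) and the empirical settlement law are illustrative stand-ins for the vehicle-track simulation and ballast/subgrade model described above; the adaptive step picks the number of wheel passages so that no sleeper settles by more than a prescribed increment per iteration:

```python
import numpy as np

def settlement_simulation(n_sleepers=20, n_total=1_000_000, ds_max=0.5e-3,
                          c=1e-9, a=3.0, f_static=100e3, f_ref=100e3):
    """Iterative long-term settlement with an adaptive load-cycle step.
    rate = c*(F/F_ref)**a  (m per cycle) is an assumed empirical law."""
    rng = np.random.default_rng(0)
    irregularity = 1e-3 * rng.random(n_sleepers)   # initial geometry error (m)
    settlement = np.zeros(n_sleepers)
    n = 0
    while n < n_total:
        # "short-term" step: dynamic sleeper loads grow with roughness
        f_dyn = f_static * (1.0 + 50.0 * (irregularity + settlement))
        rate = c * (f_dyn / f_ref) ** a
        # adaptive step length: cap the largest settlement increment
        dn = min(int(ds_max / rate.max()), n_total - n)
        settlement += rate * dn
        n += dn
    return settlement

s = settlement_simulation()
```

The feedback (settlement worsens geometry, which raises dynamic loads, which accelerates settlement) is what makes the adaptive step length worthwhile: early in life large cycle counts can be lumped together, while degraded stretches force finer steps.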
Model reduction of multiscale chemical Langevin equations: a numerical case study.
Sotiropoulos, Vassilios; Contou-Carrere, Marie-Nathalie; Daoutidis, Prodromos; Kaznessis, Yiannis N
2009-01-01
Two very important characteristics of biological reaction networks need to be considered carefully when modeling these systems. First, models must account for the inherent probabilistic nature of systems far from the thermodynamic limit. Often, biological systems cannot be modeled with traditional continuous-deterministic models. Second, models must take into consideration the disparate spectrum of time scales observed in biological phenomena, such as slow transcription events and fast dimerization reactions. In the last decade, significant efforts have been expended on the development of stochastic chemical kinetics models to capture the dynamics of biomolecular systems, and on the development of robust multiscale algorithms, able to handle stiffness. In this paper, the focus is on the dynamics of reaction sets governed by stiff chemical Langevin equations, i.e., stiff stochastic differential equations. These are particularly challenging systems to model, requiring prohibitively small integration step sizes. We describe and illustrate the application of a semianalytical reduction framework for chemical Langevin equations that results in significant gains in computational cost.
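A chemical Langevin equation for a simple birth-death network, integrated by Euler-Maruyama, illustrates the object of study; the clipping needed to keep propensities non-negative hints at the practical difficulties that stiffness magnifies. The network and parameters are illustrative, not the systems analyzed in the paper:

```python
import numpy as np

def cle_birth_death(k1=100.0, k2=1.0, x0=100.0, dt=0.01, steps=20_000, seed=0):
    """Euler-Maruyama integration of the chemical Langevin equation for
    a birth-death network (0 -> X at rate k1, X -> 0 at rate k2*X):
        dX = (k1 - k2*X) dt + sqrt(k1) dW1 - sqrt(k2*X) dW2,
    one Wiener increment per reaction channel.  X is clipped at zero so
    the propensities stay non-negative."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    sq_dt = np.sqrt(dt)
    for i in range(steps):
        a1, a2 = k1, k2 * max(x[i], 0.0)              # reaction propensities
        dw1, dw2 = rng.normal(0.0, sq_dt, 2)
        x[i + 1] = max(x[i] + (a1 - a2) * dt
                       + np.sqrt(a1) * dw1 - np.sqrt(a2) * dw2, 0.0)
    return x

traj = cle_birth_death()
```

For this network the stationary mean is k1/k2 with Poisson-like fluctuations; a stiff system would couple a fast channel to this one and force dt far below what the slow dynamics alone require, which is the regime the reduction framework targets.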
NASA Astrophysics Data System (ADS)
Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.
2009-12-01
During the last decades, a major effort has been dedicated to developing accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and the increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project, an ongoing international collaborative effort organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and the USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, sediments reaching about 400 m depth, surface S-wave velocity of 200 m/s). The prime target is to simulate 8 local earthquakes with magnitudes from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria.
Predictions obtained by one FDM team and the SEM team are close to each other and different from the other predictions (consistent with the ESG2006 exercise, which targeted the Grenoble Valley). Diffraction off the basin edges and induced surface-wave propagation contribute most of the differences between predictions. The differences are particularly large in the elastic models but remain important in models with attenuation. In the validation, predictions are compared with recordings from a local array of 19 surface and borehole accelerometers. The level of agreement is found to be event-dependent. For the largest-magnitude event the agreement is surprisingly good, even at high frequencies.
Towards three-dimensional continuum models of self-consistent along-strike megathrust segmentation
NASA Astrophysics Data System (ADS)
Pranger, Casper; van Dinther, Ylona; May, Dave; Le Pourhiet, Laetitia; Gerya, Taras
2016-04-01
At subduction megathrusts, the propagation of large ruptures may be confined between the up-dip and down-dip limits of the seismogenic zone. This opens a primary role for lateral rupture dimensions in controlling the magnitude and severity of megathrust earthquakes. The goal of this study is to improve our understanding of the ways in which the inherent variability of the subduction interface may influence the degree of interseismic locking, and the propensity of a rupture to propagate over regions of variable slip potential. The global absence of a historical record sufficiently long to base risk assessment on makes us rely on numerical modelling as a way to extend our understanding of the spatio-temporal occurrence of earthquakes. However, the complex interaction of the subduction stress environment, the variability of the subduction interface, and the structure and deformation of the crustal wedge has made it very difficult to construct comprehensive numerical models of megathrust segmentation. We develop and exploit the power of a plastic 3D continuum representation of the subduction megathrust, as well as off-megathrust faulting, to model the long-term tectonic build-up of stresses and their sudden seismic release. The sheer size of the 3D problem, and time scales covering those of tectonics as well as seismology, force us to explore efficient and accurate physical and numerical techniques. We have thus focused our efforts on developing a staggered-grid finite difference code that makes use of the PETSc library for massively parallel computing. The code incorporates a newly developed automatic discretization algorithm, which enables it to handle a wide variety of equations with relative ease.
The different physical and numerical ingredients - like attenuating visco-elasto-plastic materials, frictional weakening and inertially driven seismic release, and adaptive time marching schemes - most of which have been implemented and benchmarked individually - are now combined into one algorithm. We are working towards presenting the first benchmarked 3D dynamic rupture models as an important step towards seismic cycle modelling of megathrust segmentation in a three-dimensional subduction setting with slow tectonic loading, self-consistent fault development, and spontaneous seismicity.
NASA Astrophysics Data System (ADS)
Narula, Manmeet Singh
Innovative concepts using fast flowing thin films of liquid metals (like lithium) have been proposed for the protection of the divertor surface in magnetic fusion devices. However, concerns exist about the possibility of establishing the required flow of liquid metal thin films because of the presence of strong magnetic fields, which can cause flow-disrupting MHD effects. A plan is underway to design liquid lithium based divertor protection concepts for NSTX, a small spherical torus experiment at Princeton. Of these, a promising concept is the use of modularized fast flowing liquid lithium film zones as the divertor (called the NSTX liquid surface module concept, or NSTX LSM). The dynamic response of the liquid metal film flow in a spatially varying magnetic field configuration is still unknown, and unanticipated effects are suspected. The primary goal of the research work reported in this dissertation is to provide qualitative and quantitative information on the liquid metal film flow dynamics under spatially varying magnetic field conditions, typical of the divertor region of a magnetic fusion device. The liquid metal film flow dynamics have been studied through a synergistic experimental and numerical modeling effort. The Magneto Thermofluid Omnibus Research (MTOR) facility at UCLA has been used to design several experiments to study the MHD interaction of liquid gallium films under a scaled NSTX outboard divertor magnetic field environment. A 3D multi-material, free surface MHD modeling capability is under development in collaboration with HyPerComp Inc., an SBIR vendor. This numerical code, called HIMAG, provides a unique capability to model the equations of incompressible MHD with a free surface. Some parts of this modeling capability have been developed in this research work, in the form of subroutines for HIMAG. An extensive code debugging and benchmarking exercise has also been carried out.
Finally, HIMAG has been used to study the MHD interaction of fast flowing liquid metal films under various divertor relevant magnetic field configurations through numerical modeling exercises.
TRANSPORT BY MERIDIONAL CIRCULATIONS IN SOLAR-TYPE STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, T. S.; Brummell, N. H., E-mail: tsw25@soe.ucsc.edu
2012-08-20
Transport by meridional flows has significant consequences for stellar evolution, but is difficult to capture in global-scale numerical simulations because of the wide range of timescales involved. Stellar evolution models therefore usually adopt parameterizations for such transport based on idealized laminar or mean-field models. Unfortunately, recent attempts to model this transport in global simulations have produced results that are not consistent with any of these idealized models. In an effort to explain the discrepancies between global simulations and idealized models, here we use three-dimensional local Cartesian simulations of compressible convection to study the efficiency of transport by meridional flows below a convection zone in several parameter regimes of relevance to the Sun and solar-type stars. In these local simulations we are able to establish the correct ordering of dynamical timescales, although the separation of the timescales remains unrealistic. We find that, even though the generation of internal waves by convective overshoot produces a high degree of time dependence in the meridional flow field, the mean flow has the qualitative behavior predicted by laminar, 'balanced' models. In particular, we observe a progressive deepening, or 'burrowing', of the mean circulation if the local Eddington-Sweet timescale is shorter than the viscous diffusion timescale. Such burrowing is a robust prediction of laminar models in this parameter regime, but has never been observed in any previous numerical simulation. We argue that previous simulations therefore underestimate the transport by meridional flows.
NASA Astrophysics Data System (ADS)
Weatherill, Daniel P.; Stefanov, Konstantin D.; Greig, Thomas A.; Holland, Andrew D.
2014-07-01
Pixellated monolithic silicon detectors operated in a photon-counting regime are useful in spectroscopic imaging applications. Since a high energy incident photon may produce many excess free carriers upon absorption, both energy and spatial information can be recovered by resolving each interaction event. The performance of these devices in terms of both the energy and spatial resolution is in large part determined by the amount of diffusion which occurs during the collection of the charge cloud by the pixels. Past efforts to predict the X-ray performance of imaging sensors have used either analytical solutions to the diffusion equation or simplified Monte Carlo electron transport models. These methods are computationally attractive and highly useful but may be complemented using more physically detailed models based on TCAD simulations of the devices. Here we present initial results from a model which employs a full transient numerical solution of the classical semiconductor equations to model charge collection in device pixels under stimulation from initially Gaussian photogenerated charge clouds, using commercial TCAD software. Realistic device geometries and doping are included. By mapping the pixel response to different initial interaction positions and charge cloud sizes, the charge splitting behaviour of the model sensor under various illuminations and operating conditions is investigated. Experimental validation of the model is presented from an e2v CCD30-11 device under varying substrate bias, illuminated using an Fe-55 source.
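The charge-splitting geometry described here has a simple analytical counterpart: for a Gaussian charge cloud collected without loss, the fraction landing in each rectangular pixel is a product of error-function integrals. A minimal sketch (pixel pitch and cloud parameters are illustrative, not those of the CCD30-11):

```python
import math

def pixel_fraction(x0, y0, sigma, xa, xb, ya, yb):
    """Fraction of an isotropic Gaussian charge cloud centred at
    (x0, y0) with standard deviation sigma that lands inside the
    rectangular pixel [xa, xb] x [ya, yb] (pure diffusion, no loss)."""
    def cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    fx = cdf((xb - x0) / sigma) - cdf((xa - x0) / sigma)
    fy = cdf((yb - y0) / sigma) - cdf((ya - y0) / sigma)
    return fx * fy

def split_map(x0, y0, sigma, pitch=10.0, n=3):
    """Charge sharing over an n x n pixel neighbourhood (pitch in um)."""
    return [[pixel_fraction(x0, y0, sigma,
                            i * pitch, (i + 1) * pitch,
                            j * pitch, (j + 1) * pitch)
             for i in range(n)] for j in range(n)]
```

An event landing at a pixel corner with a large cloud splits four ways; a small cloud at a pixel centre stays isolated. The TCAD models described in the abstract refine exactly this picture by computing sigma (and its field dependence) from device physics rather than assuming it.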
Inspiration & Insight - a tribute to Niels Reeh
NASA Astrophysics Data System (ADS)
Ahlstrom, A. P.; Vieli, A.
2009-12-01
Niels Reeh was highly regarded for his contributions to glaciology, specifically through his rigorous combination of numerical modelling and field observations. In 1966 he began his work on the application of beam mechanics to floating glaciers and ice shelves and throughout his life, Niels retained a strong interest in modelling glacier dynamics. In the early 1980s Niels developed a 3D-model for ice sheets and in the late 1980s an advanced flow-line model. Niels Reeh also took part in the early ice-core drilling efforts in Greenland and later pioneered the concept of retrieving similar records from the surface of the ice-sheet margin. Mass balance of glaciers and ice sheets was another theme in Niels Reeh’s research, with a number of important contributions and insights still used when teaching the subject to students. Niels developed elegant models for ablation and snow densification, notable for their applicability in large-scale ice-sheet models and studied the impact of climate change on ice sheets and glaciers. Niels also took his interest in ice-dynamics and mass balance into remote sensing and worked successfully on methods to utilize radar and laser data from airborne surveys and satellites in glaciology. In this, he pioneered the combination of field experiments, satellite observations and numerical modelling to solve problems on the Greenland Ice Sheet. In this presentation we will attempt to provide an overview of Niels Reeh’s many-facetted career in acknowledgement of his contributions to the field of glaciology.
Impacts of moving bottlenecks on traffic flow
NASA Astrophysics Data System (ADS)
Ou, Hui; Tang, Tie-Qiao
2018-06-01
Bottleneck (especially the moving bottleneck) widely exists in the urban traffic system. However, little effort has been made to study the impacts of the moving bottleneck on traffic flow (especially the evolution and propagation of traffic flow). In this article, we introduce the speed of a moving bottleneck into a traffic flow model, then propose an extended macro traffic flow model with a moving bottleneck, and finally use the proposed model to study the effects of a moving bottleneck on the evolution and propagation of traffic flow under uniform flow and a small perturbation. The numerical results indicate that the moving bottleneck has prominent influences on the evolution of traffic flow under the two typical traffic situations and that the impacts are dependent on the initial density.
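The paper's extended macro model is not reproduced here, but the qualitative effect of a capacity-restricting bottleneck can be illustrated with a first-order Godunov discretization of the classical LWR model with a Greenshields flux. All parameter values below are illustrative:

```python
def greenshields_flux(rho, vf=30.0, rho_max=0.2):
    """Greenshields fundamental diagram: q = vf * rho * (1 - rho/rho_max)."""
    return vf * rho * (1.0 - rho / rho_max)

def godunov_step(rho, dt, dx, xb, cap_factor=0.4, vf=30.0, rho_max=0.2):
    """One Godunov update of the LWR model; the cell face nearest the
    bottleneck position xb has its capacity cut by cap_factor."""
    n = len(rho)
    rho_c = rho_max / 2.0              # critical (capacity) density
    def demand(r):                     # sending capacity of upstream cell
        return greenshields_flux(min(r, rho_c), vf, rho_max)
    def supply(r):                     # receiving capacity of downstream cell
        return greenshields_flux(max(r, rho_c), vf, rho_max)
    flux = [0.0] * (n + 1)
    jb = int(xb / dx)                  # face index at the bottleneck
    for i in range(1, n):
        f = min(demand(rho[i - 1]), supply(rho[i]))
        if i == jb:                    # restricted capacity at the bottleneck
            f = min(f, cap_factor * greenshields_flux(rho_c, vf, rho_max))
        flux[i] = f
    flux[0], flux[n] = flux[1], flux[n - 1]   # simple open boundaries
    return [rho[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]
```

A moving bottleneck amounts to advancing xb by the bottleneck speed each step; a queue (high density) then grows upstream of xb while flow downstream is starved, which is the behaviour the abstract describes qualitatively.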
Automotive Gas Turbine Power System-Performance Analysis Code
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
1997-01-01
An open cycle gas turbine numerical modelling code suitable for thermodynamic performance analysis (i.e., thermal efficiency, specific fuel consumption, cycle state points, working fluid flowrates, etc.) of automotive and aircraft powerplant applications has been generated at the NASA Lewis Research Center's Power Technology Division. This code can be made available to automotive gas turbine preliminary design efforts, either in its present version or, assuming that resources can be obtained to incorporate empirical models for component weight and packaging volume, in a later version that includes the weight-volume estimator feature. The paper contains a brief discussion of the capabilities of the presently operational version of the code, including a listing of input and output parameters and actual sample output listings.
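The kind of cycle-state-point and efficiency bookkeeping such a code performs can be sketched with textbook ideal open Brayton cycle relations. This is not the NASA code; the constant-cp, isentropic-component assumptions and all parameter values are simplifications for illustration:

```python
def brayton_cycle(r_p, T1, T3, gamma=1.4, cp=1005.0, lhv=43.0e6):
    """Ideal open Brayton cycle state points and performance.
    r_p : compressor pressure ratio
    T1  : compressor inlet temperature [K]
    T3  : turbine inlet temperature [K]
    lhv : fuel lower heating value [J/kg]
    Returns (state_points, thermal_efficiency, sfc in kg/kWh)."""
    k = (gamma - 1.0) / gamma
    T2 = T1 * r_p ** k                      # isentropic compression
    T4 = T3 / r_p ** k                      # isentropic expansion
    w_net = cp * ((T3 - T4) - (T2 - T1))    # net specific work [J/kg air]
    q_in = cp * (T3 - T2)                   # heat added in combustor
    eta = w_net / q_in                      # thermal efficiency
    sfc = 3.6e6 / (eta * lhv)               # fuel per kWh of net work
    return {"T1": T1, "T2": T2, "T3": T3, "T4": T4}, eta, sfc
```

For the ideal cycle the efficiency reduces to 1 - r_p**(-(gamma-1)/gamma), independent of turbine inlet temperature; real-cycle codes add component efficiencies, pressure losses, and variable gas properties on top of this skeleton.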
Helmet Sensor - Transfer Function and Model Development
2010-09-01
These events could be exposure to blast events (IEDs), ballistic impacts, and/or blunt impacts. The sensors record orthogonal accelerations and blast...during impact, the measured helmet response will be different from the head response. The objective of this effort is to characterize the...numerical equations or models) that approximate head exposures based on the observed helmet response. The physical testing included ballistic impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit; Deline, Chris
This paper briefly reviews the National Renewable Energy Laboratory's recent efforts on developing all-sky solar irradiance models for solar energy applications. The Fast All-sky Radiation Model for Solar applications (FARMS) utilizes the simulation of clear-sky transmittance and reflectance and a parameterization of cloud transmittance and reflectance to rapidly compute broadband irradiances on horizontal surfaces. FARMS delivers accuracy that is comparable to the two-stream approximation, but it is approximately 1,000 times faster. A FARMS-Narrowband Irradiance over Tilted surfaces (FARMS-NIT) model has been developed to compute spectral irradiances on photovoltaic (PV) panels in 2002 wavelength bands. Further, FARMS-NIT has been extended for bifacial PV panels.
NASA Technical Reports Server (NTRS)
Antar, Basil N.; Witherow, William K.; Paley, Mark S.; Curreri, Peter A. (Technical Monitor)
2001-01-01
This paper presents results from numerical simulations as well as laboratory experiments of buoyancy driven convection in an ampoule under varying heating and gravitational acceleration loadings. The modeling effort in this work resolves the large scale natural convective motion that occurs in the fluid during photodeposition of polydiacetylene films, which is due to energy absorbed by the growth solution from a UV source. Consequently, the growth kinetics of the film are ignored in the model discussed here, and a much simplified ampoule geometry is considered. The objective of this work is to validate the numerical prediction of the strength and structure of buoyancy driven convection that could occur under terrestrial conditions during nonlinear optical film growth. The validation is used to enable a reliable predictive capability on the nature and strength of the convective motion under low gravity conditions. The ampoule geometry is in the form of a parallelepiped with rectangular faces. The numerical results obtained from the solution to the Boussinesq equations show that natural convection will occur regardless of the orientation of the UV source with respect to the gravity vector. The weakest convective motion occurred with the UV beam directed at the top face of the parallelepiped. The strength of the convective motion was found to be almost linearly proportional to the total power of the UV source. Also, it was found that the strength of the convective motion decreased linearly with the gravitational acceleration. The pattern of the convective flow, on the other hand, depended on the source location.
Regulation of Glycan Structures in Animal Tissues
Nairn, Alison V.; York, William S.; Harris, Kyle; Hall, Erica M.; Pierce, J. Michael; Moremen, Kelley W.
2008-01-01
Glycan structures covalently attached to proteins and lipids play numerous roles in mammalian cells, including protein folding, targeting, recognition, and adhesion at the molecular or cellular level. Regulating the abundance of glycan structures on cellular glycoproteins and glycolipids is a complex process that depends on numerous factors. Most models for glycan regulation hypothesize that transcriptional control of the enzymes involved in glycan synthesis, modification, and catabolism determines glycan abundance and diversity. However, few broad-based studies have examined correlations between glycan structures and transcripts encoding the relevant biosynthetic and catabolic enzymes. Low transcript abundance for many glycan-related genes has hampered broad-based transcript profiling for comparison with glycan structural data. In an effort to facilitate comparison with glycan structural data and to identify the molecular basis of alterations in glycan structures, we have developed a medium-throughput quantitative real time reverse transcriptase-PCR platform for the analysis of transcripts encoding glycan-related enzymes and proteins in mouse tissues and cells. The method employs a comprehensive list of >700 genes, including enzymes involved in sugar-nucleotide biosynthesis, transporters, glycan extension, modification, recognition, catabolism, and numerous glycosylated core proteins. Comparison with parallel microarray analyses indicates a significantly greater sensitivity and dynamic range for our quantitative real time reverse transcriptase-PCR approach, particularly for the numerous low abundance glycan-related enzymes. Mapping of the genes and transcript levels to their respective biosynthetic pathway steps allowed a comparison with glycan structural data and provides support for a model where many, but not all, changes in glycan abundance result from alterations in transcript expression of corresponding biosynthetic enzymes. PMID:18411279
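Quantitative RT-PCR transcript levels of the sort gathered here are commonly normalized with the 2^-ΔΔCt method. A minimal sketch, assuming approximately 100% amplification efficiency; this is the standard textbook calculation, not claimed to be the authors' exact quantification pipeline:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target transcript by the 2^-ddCt method:
    normalise the target Ct to a reference gene (dCt), then to a
    control condition (ddCt). Lower Ct means more transcript, so a
    negative ddCt yields a fold change above 1."""
    d_sample = ct_target_sample - ct_ref_sample
    d_control = ct_target_control - ct_ref_control
    ddct = d_sample - d_control
    return 2.0 ** (-ddct)
```

For example, a target two cycles "earlier" relative to baseline corresponds to a four-fold increase in transcript abundance, which is the kind of change the pathway-mapping analysis in the abstract compares against glycan structural data.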
NASA Astrophysics Data System (ADS)
Sousasantos, J.; Kherani, E. A.; Sobral, J. H. A.
2017-02-01
Equatorial plasma bubbles (EPBs), or large-scale plasma-depleted regions, are one of the subjects of great interest in space weather research, since such phenomena have been extensively reported to cause strong degrading effects on transionospheric radio propagation at low latitudes, especially over the Brazilian region, where satellite communication interruptions by EPBs have been frequently registered. One of the most difficult tasks for this field of scientific research is the forecasting of such plasma-depleted structures. This forecasting capability would be of significant help for users of positioning/navigation systems operating in the low-latitude/equatorial region all over the world. Recently, some efforts have been made to assess and improve the capability of predicting EPB events. The purpose of this paper is to present an alternative approach to EPB prediction by means of numerical simulation associated with the ionospheric vertical drift obtained through Digisonde data, focusing on determining beforehand whether ionospheric plasma instability processes will evolve into EPB structures or not. Modulations in the ionospheric vertical motion induced by gravity waves prior to the prereversal enhancement occurrence were used as input in the numerical model. A comparison between the numerical results and the observed EPB phenomena through CCD all-sky image data reveals a considerable coherence and supports the hypothesis of a capability of short-term forecasting.
NASA Technical Reports Server (NTRS)
Hiser, H. W.; Lee, S. S.; Veziroglu, T. N.; Sengupta, S.
1975-01-01
A comprehensive numerical model development program for near-field thermal plume discharge and far-field general circulation in coastal regions is being carried on at the University of Miami Clean Energy Research Institute. The objective of the program is to develop a generalized, three-dimensional, predictive model for thermal pollution studies. Two regions of specific application of the model are the power plant sites in the Biscayne Bay and Hutchinson Island areas along the Florida coastline. Remote sensing from aircraft as well as satellites is used in parallel with in situ measurements to provide information needed for the development and verification of the mathematical model. This paper describes the efforts that have been made to identify problems and limitations of the presently available satellite data and to develop methods for enhancing and enlarging thermal infrared displays for mesoscale sea surface temperature measurements.
Performability evaluation of the SIFT computer
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Furchtgott, D. G.; Wu, L. T.
1979-01-01
Performability modeling and evaluation techniques are applied to the SIFT computer as it might operate in the computational environment of an air transport mission. User-visible performance of the total system (SIFT plus its environment) is modeled as a random variable taking values in a set of levels of accomplishment. These levels are defined in terms of four attributes of total system behavior: safety, no change in mission profile, no operational penalties, and no economic penalties. The base model is a stochastic process whose states describe the internal structure of SIFT as well as relevant conditions of the environment. Base model state trajectories are related to accomplishment levels via a capability function which is formulated in terms of a 3-level model hierarchy. Performability evaluation algorithms are then applied to determine the performability of the total system for various choices of computer and environment parameter values. Numerical results of those evaluations are presented and, in conclusion, some implications of this effort are discussed.
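A toy version of the performability calculation can make the idea concrete: a base model generates state trajectories with known probabilities, a capability function maps each trajectory to an accomplishment level, and performability is the induced probability distribution over levels. The sketch below uses a deliberately simple Bernoulli-fault base model, not the SIFT study's stochastic process or its 3-level model hierarchy:

```python
from itertools import product

def performability(p_fault_per_phase, capability):
    """Performability of a phased mission: the base model here is a
    sequence of independent Bernoulli fault events (one per phase);
    `capability` maps a full fault trajectory to an accomplishment
    level. Returns the probability of each accomplishment level."""
    phases = len(p_fault_per_phase)
    dist = {}
    for traj in product((0, 1), repeat=phases):   # 1 = fault in that phase
        p = 1.0
        for ph, faulted in enumerate(traj):
            p *= p_fault_per_phase[ph] if faulted else 1.0 - p_fault_per_phase[ph]
        level = capability(traj)                  # trajectory -> level
        dist[level] = dist.get(level, 0.0) + p
    return dist
```

With two phases, a 10% fault probability each, and a capability function that grades the mission by fault count, the full-accomplishment level carries probability 0.81, a degraded level 0.18, and the worst level 0.01.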
NASA Technical Reports Server (NTRS)
Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.
2005-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
NASA Technical Reports Server (NTRS)
Martin, Michael A.; Nguyen, Huy H.; Greene, William D.; Seymour, David C.
2003-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
Reverse-Engineering Laboratory Astrophysics: Oxygen Inner-shell Absorption in the ISM
NASA Technical Reports Server (NTRS)
Garcia, J.; Gatuzz, E.; Kallman, T. R.; Mendoza, C.; Gorczyca, T. W.
2017-01-01
The modeling of X-ray spectra from photoionized astrophysical plasmas has been significantly improved due to recent advancements in the theoretical and numerical frameworks, as well as a consolidated and reliable atomic database of inner-shell transitions for all the relevant ions. We discuss these developments and the current state of X-ray spectral modeling in the context of oxygen cold absorption in the interstellar medium (ISM). Unconventionally, we use high-resolution astrophysical observations to accurately determine line positions, and adjust the theoretical models for a comprehensive interpretation of the observed X-ray spectra. This approach has brought to light standing discrepancies in the neutral oxygen absorption-line positions determined from observations and laboratory measurements. We give an overview of our current efforts to devise a definitive model of oxygen photoabsorption that can help to resolve the existing controversy regarding ISM atomic and molecular fractions.
Progress in high-lift aerodynamic calculations
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1993-01-01
The current work presents progress in the effort to numerically simulate the flow over high-lift aerodynamic components, namely, multi-element airfoils and wings in either a take-off or a landing configuration. The computational approach utilizes an incompressible flow solver and an overlaid chimera grid approach. A detailed grid resolution study is presented for flow over a three-element airfoil. Two turbulence models, a one-equation Baldwin-Barth model and a two-equation k-omega model, are compared. Excellent agreement with experiment is obtained for the lift coefficient at all angles of attack, including the prediction of maximum lift when using the two-equation model. Results for two other flap riggings are shown. Three-dimensional results are presented for a wing with a square wing-tip as a validation case. Grid generation and topology is discussed for computing the flow over a T-39 Sabreliner wing with flap deployed and the initial calculations for this geometry are presented.
He, Li; Xu, Zongda; Fan, Xing; Li, Jing; Lu, Hongwei
2017-05-01
This study develops a meta-modeling based mathematical programming approach with flexibility in environmental standards. It integrates numerical simulation, meta-modeling analysis, and fuzzy programming within a general framework. A set of meta-models relating remediation strategies to remediation performance substantially reduces the computational effort of the simulation and optimization process. In order to prevent the occurrence of over-optimistic and pessimistic optimization strategies, a high satisfaction level resulting from the implementation of a flexible standard can indicate the degree to which the environmental standard is satisfied. The proposed approach is applied to a naphthalene-contaminated site in China. Results show that a longer remediation period corresponds to a lower total pumping rate and a stringent risk standard implies a high total pumping rate. The wells located near or in the down-gradient direction to the contaminant sources have the most significant efficiency among all remediation schemes.
A First Step towards a Clinical Decision Support System for Post-traumatic Stress Disorders.
Ma, Sisi; Galatzer-Levy, Isaac R; Wang, Xuya; Fenyö, David; Shalev, Arieh Y
2016-01-01
PTSD is distressful and debilitating, following a non-remitting course in about 10% to 20% of trauma survivors. Numerous risk indicators of PTSD have been identified, but individual-level prediction remains elusive. As an effort to bridge the gap between scientific discovery and practical application, we designed and implemented a clinical decision support pipeline to provide clinically relevant recommendations for trauma survivors. To meet the specific challenge of early prediction, this work uses data obtained within ten days of a traumatic event. The pipeline creates a personalized predictive model for each individual and computes quality metrics for each predictive model. Clinical recommendations are made based on both the prediction of the model and its quality, thus avoiding making potentially detrimental recommendations based on insufficient information or a suboptimal model. The current pipeline outperforms acute stress disorder, a commonly used clinical risk factor for PTSD development, both in terms of sensitivity and specificity.
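The quality-gated recommendation logic and the evaluation metrics can be sketched as follows; the threshold values, the recommendation labels, and the choice of quality metric are illustrative, not those of the deployed pipeline:

```python
def recommend(prob_ptsd, model_quality, p_thresh=0.5, q_thresh=0.7):
    """Gate a clinical recommendation on model quality: only issue a
    recommendation when the personalised model's quality metric (e.g. a
    cross-validated AUC) is adequate; otherwise defer rather than risk
    a detrimental recommendation from a suboptimal model."""
    if model_quality < q_thresh:
        return "defer"
    if prob_ptsd >= p_thresh:
        return "refer for early intervention"
    return "routine follow-up"

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) from binary labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return tp / (tp + fn), tn / (tn + fp)
```

Separating the per-patient prediction from the per-model quality check is what lets the pipeline abstain on patients for whom it lacks sufficient information, rather than forcing a binary call.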
Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trebotich, D
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Modeling complex biological flows in multi-scale systems using the APDEC framework
NASA Astrophysics Data System (ADS)
Trebotich, David
2006-09-01
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Reverse-engineering laboratory astrophysics: Oxygen inner-shell absorption in the ISM
NASA Astrophysics Data System (ADS)
García, J.; Gatuzz, E.; Kallman, T. R.; Mendoza, C.; Gorczyca, T. W.
2017-03-01
The modeling of X-ray spectra from photoionized astrophysical plasmas has been significantly improved due to recent advancements in the theoretical and numerical frameworks, as well as a consolidated and reliable atomic database of inner-shell transitions for all the relevant ions. We discuss these developments and the current state of X-ray spectral modeling in the context of oxygen cold absorption in the interstellar medium (ISM). Unconventionally, we use high-resolution astrophysical observations to accurately determine line positions, and adjust the theoretical models for a comprehensive interpretation of the observed X-ray spectra. This approach has brought to light standing discrepancies in the neutral oxygen absorption-line positions determined from observations and laboratory measurements. We give an overview of our current efforts to devise a definitive model of oxygen photoabsorption that can help to resolve the existing controversy regarding ISM atomic and molecular fractions.
Marino, Dale J
2005-01-01
Physiologically based pharmacokinetic (PBPK) models are mathematical descriptions depicting the relationship between external exposure and internal dose. These models have found great utility for interspecies extrapolation. However, specialized computer software packages, which are not widely distributed, have typically been used for model development and utilization. A few physiological models have been reported using more widely available software packages (e.g., Microsoft Excel), but these tend to include less complex processes and dose metrics. To ascertain the capability of Microsoft Excel and Visual Basic for Applications (VBA) for PBPK modeling, models for styrene, vinyl chloride, and methylene chloride were coded in Advanced Continuous Simulation Language (ACSL), Excel, and VBA, and simulation results were compared. For styrene, differences between ACSL and Excel or VBA compartment concentrations and rates of change were less than +/-7.5E-10 using the same numerical integration technique and time step. Differences using VBA fixed step or ACSL Gear's methods were generally <1.00E-03, although larger differences involving very small values were noted after exposure transitions. For vinyl chloride and methylene chloride, Excel and VBA PBPK model dose metrics differed by no more than -0.013% or -0.23%, respectively, from ACSL results. These differences are likely attributable to different step sizes rather than different numerical integration techniques. These results indicate that Microsoft Excel and VBA can be useful tools for utilizing PBPK models, and given the availability of these software programs, it is hoped that this effort will help facilitate the use and investigation of PBPK modeling.
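The step-size sensitivity noted above is easy to demonstrate on a single compartment with first-order elimination, integrated with a fixed-step Euler scheme (a generic stand-in for the fixed-step solvers compared across ACSL, Excel, and VBA; the compartment count and all parameter values are illustrative, far simpler than a full PBPK model):

```python
import math

def euler_concentration(dose, vd, kel, t_end, dt):
    """Fixed-step Euler integration of a one-compartment model
    dC/dt = -kel * C after an i.v. bolus (C0 = dose / Vd)."""
    c = dose / vd
    for _ in range(round(t_end / dt)):
        c += dt * (-kel * c)
    return c

# Step-size sensitivity: the analytic solution is C0 * exp(-kel * t),
# so halving dt repeatedly should converge toward `exact`.
dose, vd, kel, t_end = 100.0, 50.0, 0.3, 10.0
exact = (dose / vd) * math.exp(-kel * t_end)
coarse = euler_concentration(dose, vd, kel, t_end, 0.5)
fine = euler_concentration(dose, vd, kel, t_end, 0.01)
```

The coarse step visibly undershoots the analytic solution while the fine step tracks it closely, mirroring the abstract's conclusion that residual ACSL-versus-Excel/VBA differences stem from step size rather than from the integration technique itself.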
Dynamic Modeling, Controls, and Testing for Electrified Aircraft
NASA Technical Reports Server (NTRS)
Connolly, Joseph; Stalcup, Erik
2017-01-01
Electrified aircraft have the potential to provide significant benefits for efficiency and emissions reductions. To assess these potential benefits, modeling tools are needed to provide rapid evaluation of diverse concepts and to ensure safe operability and peak performance over the mission. The modeling challenge for these vehicles is the ability to show significant benefits over the current highly refined aircraft systems. The STARC-ABL (single-aisle turbo-electric aircraft with an aft boundary layer propulsor) is a new test proposal that builds upon previous N3-X team hybrid designs. This presentation describes the STARC-ABL concept, the NASA Electric Aircraft Testbed (NEAT) which will allow testing of the STARC-ABL powertrain, and the related modeling and simulation efforts to date. Modeling and simulation includes a turbofan simulation, the Numerical Propulsion System Simulation (NPSS), which has been integrated with NEAT; and a power systems and control model for predicting testbed performance and evaluating control schemes. Model predictions provide good comparisons with testbed data for an NPSS-integrated test of the single-string configuration of NEAT.
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent
2012-01-01
Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first generation FDF models, namely the scalar filtered mass density function (SFMDF) are being implemented into VULCAN, a production-quality RAS and LES solver widely used for design of high speed propulsion flowpaths. 
This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow reduction in the total number of computational cells without loss in accuracy; implementing first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrization, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.
Using Virtualization to Integrate Weather, Climate, and Coastal Science Education
NASA Astrophysics Data System (ADS)
Davis, J. R.; Paramygin, V. A.; Figueiredo, R.; Sheng, Y.
2012-12-01
To better understand and communicate the important roles of weather and climate on the coastal environment, a unique publicly available tool is being developed to support research, education, and outreach activities. This tool uses virtualization technologies to facilitate an interactive, hands-on environment in which students, researchers, and the general public can perform their own numerical modeling experiments. While prior efforts have focused solely on the study of the coastal and estuary environments, this effort incorporates the community-supported weather and climate model (WRF-ARW) into the Coastal Science Educational Virtual Appliance (CSEVA), an education tool used to assist in the learning of coastal transport processes; storm surge and inundation; and evacuation modeling. The Weather Research and Forecasting (WRF) Model is a next-generation, community-developed and -supported, mesoscale numerical weather prediction system designed to be used internationally for research, operations, and teaching. It includes two dynamical solvers (ARW - Advanced Research WRF and NMM - Nonhydrostatic Mesoscale Model) as well as a data assimilation system. WRF-ARW is the ARW dynamics solver combined with other components of the WRF system; it was developed primarily at the National Center for Atmospheric Research (NCAR), with community support provided by NCAR's Mesoscale and Microscale Meteorology (MMM) division. Included with WRF is the WRF Pre-processing System (WPS), a set of programs to prepare input for real-data simulations. The CSEVA is based on the Grid Appliance (GA) framework and is built using virtual machine (VM) and virtual networking technologies. Virtualization supports integration of an operating system, libraries (e.g. Fortran, C, Perl, NetCDF, etc. necessary to build WRF), a web server, numerical models/grids/inputs, pre-/post-processing tools (e.g.
WPS / RIP4 or UPS), graphical user interfaces, "Cloud"-computing infrastructure and other tools into a single ready-to-use package. Thus, the previously onerous task of setting up and compiling these tools is eliminated, and the researcher, educator, or student can focus on using the tools to study the interactions between weather, climate, and the coastal environment. The incorporation of WRF into the CSEVA has been designed to be synergistic with the extensive online tutorials and the biannual tutorials hosted by NCAR. Included are working examples of the idealized test simulations provided with WRF (2D sea breeze and squalls, a large eddy simulation, a Held and Suarez simulation, etc.). To demonstrate the integration of weather, climate, and coastal science education, example applications are being developed that show how the system can be used to couple a coastal and estuarine circulation, transport, and storm surge model with downscaled reanalysis weather and future climate predictions. Documentation, tutorials, and the enhanced CSEVA itself can be found on the web at: http://cseva.coastal.ufl.edu.
Blasting Damage Predictions by Numerical Modeling in Siahbishe Pumped Storage Powerhouse
NASA Astrophysics Data System (ADS)
Eslami, Majid; Goshtasbi, Kamran
2018-04-01
One of the popular methods of underground and surface excavation is blasting. In this method, the loading resulting from blasting can be affected by different geo-mechanical and structural parameters of the rock mass. Several factors cause disturbance in underground structures, among them the explosion itself and the vibration and stress impulses produced by the neighbouring blasting products. In investigating the blasting mechanism one should address the processes which expand with time and cause seismic events. To protect adjoining structures against probable damage, it is very important to model the blasting process prior to any actual operation. Efforts have been made in the present study to demonstrate the potential of numerical methods in predicting the specified parameters in order to prevent probable destruction. For this purpose the blasting process was modeled, according to its natural implementation, in one of the tunnels of the Siahbishe dam with the 3DEC and AUTODYN 3D codes. 3DEC was used for modeling the blasting environment as well as the blast holes, and AUTODYN 3D for modeling the explosion process in the blast hole. In this process the output of AUTODYN 3D, which results from modeling the blast hole and takes the form of stress waves, is entered into 3DEC. For analyzing the amount of damage caused by the blasting operation, the key parameter of Peak Particle Velocity (PPV) was used. Finally, the numerical modeling results were compared with the data recorded by the seismographs planted along the tunnel. As the results indicated, 3DEC and AUTODYN 3D proved appropriate for analyzing such problems. By means of these two software packages one can analyze explosion processes prior to their implementation and closely estimate the resulting damage.
A cellular automaton model accounting for bicycle's group behavior
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Rui, Ying-Xu; Zhang, Jian; Shang, Hua-Yan
2018-02-01
Recently, the bicycle has again become an important traffic tool in China. Due to the merits of the bicycle, group behavior widely exists in urban traffic systems. However, little effort has been made to explore the impacts of group behavior on bicycle flow. In this paper, we propose a CA (cellular automaton) model with group behavior to explore the complex traffic phenomena caused by shoulder group behavior and following group behavior on an open road. The numerical results illustrate that the proposed model can qualitatively describe the impacts of the two kinds of group behavior on bicycle flow and that the effects are related to the mode and size of the group behavior. The results can help us to better understand the impacts of bicycles' group behavior on urban traffic systems and effectively control such group behavior.
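For readers unfamiliar with the CA formalism, the standard single-lane update cycle that such traffic-flow CA models typically extend (acceleration, braking, random slowdown, movement) can be sketched as follows. This is the generic Nagel-Schreckenberg scheme on a periodic ring, not the paper's open-road group-behavior model, and all parameter values are illustrative:

```python
import random

# Generic single-lane Nagel-Schreckenberg-style CA update: each rider
# accelerates toward v_max, brakes to the gap ahead, randomly slows with
# probability p_slow, then moves. Parallel update on a ring of road_len cells.
def ca_step(positions, speeds, road_len, v_max, p_slow, rng):
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max)      # acceleration
        v = min(v, gap)                    # braking: never overrun the rider ahead
        if v > 0 and rng.random() < p_slow:
            v -= 1                         # random slowdown
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_len for i in range(n)]
    return new_positions, new_speeds
```

The paper's shoulder-group and following-group behaviors would enter as additional rules coupling the positions and speeds of neighboring riders.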
Eruptive event generator based on the Gibson-Low magnetic configuration
NASA Astrophysics Data System (ADS)
Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.
2017-08-01
Coronal mass ejections (CMEs), a type of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become more accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on the Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Rastaetter, L.; Kuznetsova, M.; Singer, H.; Balch, C.; Weimer, D.; Toth, G.; Ridley, A.; Gombosi, T.; Wiltberger, M.;
2013-01-01
In this paper we continue the community-wide rigorous modern space weather model validation efforts carried out within the GEM, CEDAR, and SHINE programs. In this particular effort, in coordination among the Community Coordinated Modeling Center (CCMC), the NOAA Space Weather Prediction Center (SWPC), modelers, and the science community, we focus on studying the models' capability to reproduce observed ground magnetic field fluctuations, which are closely related to the geomagnetically induced current phenomenon. One of the primary motivations of the work is to support NOAA SWPC in their selection of the next numerical model that will be transitioned into operations. Six geomagnetic events and 12 geomagnetic observatories were selected for validation. While modeled and observed magnetic field time series are available for all 12 stations, the primary metrics analysis is based on six stations selected to represent high-latitude and mid-latitude locations. Event-based analysis and the corresponding contingency tables were built for each event and each station. The elements in the contingency table were then used to calculate Probability of Detection (POD), Probability of False Detection (POFD), and Heidke Skill Score (HSS) for rigorous quantification of the models' performance. In this paper the summary results of the metrics analyses are reported in terms of POD, POFD, and HSS. More detailed analyses can be carried out using the event-by-event contingency tables provided as an online appendix. An online interface built at CCMC and described in the supporting information is also available for more detailed time series analyses.
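The metrics named above have standard definitions in terms of the 2x2 contingency-table counts (hits, misses, false alarms, correct negatives); a minimal sketch with illustrative counts, not values from the study:

```python
def pod(hits, misses):
    """Probability of Detection: fraction of observed events that were predicted."""
    return hits / (hits + misses)

def pofd(false_alarms, correct_negatives):
    """Probability of False Detection: fraction of non-events falsely predicted."""
    return false_alarms / (false_alarms + correct_negatives)

def hss(hits, false_alarms, misses, correct_negatives):
    """Heidke Skill Score: accuracy relative to random chance (1 = perfect)."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

# Illustrative counts for one hypothetical station/event pair
p1 = pod(hits=80, misses=20)                      # -> 0.8
p2 = pofd(false_alarms=10, correct_negatives=90)  # -> 0.1
s = hss(80, 10, 20, 90)                           # -> 0.7
```

A perfect forecast gives POD = 1, POFD = 0, HSS = 1; a chance forecast gives HSS = 0, which is why HSS is preferred for comparing models across events with different climatologies.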
NASA Astrophysics Data System (ADS)
Benettin, G.; Pasquali, S.; Ponno, A.
2018-05-01
FPU models, in dimension one, are perturbations either of the linear model or of the Toda model; perturbations of the linear model include the usual β-model, perturbations of Toda include the usual α+β model. In this paper we explore and compare two families, or hierarchies, of FPU models, closer and closer to either the linear or the Toda model, by computing numerically, for each model, the maximal Lyapunov exponent χ. More precisely, we consider statistically typical trajectories and study the asymptotics of χ for large N (the number of particles) and small ε (the specific energy E/N), and find, for all models, asymptotic power laws χ ≃ Cε^a, with C and a depending on the model. The asymptotics turns out to be, in general, rather slow, and producing accurate results requires a great computational effort. We also revisit and extend the analytic computation of χ introduced by Casetti, Livi and Pettini, originally formulated for the β-model. With great evidence the theory extends successfully to all models of the linear hierarchy, but not to models close to Toda.
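Power-law asymptotics of the form χ ≃ Cε^a are typically extracted from numerical data by a least-squares fit in log-log coordinates; a small illustrative sketch on synthetic data, not the paper's Lyapunov computations:

```python
import math

def fit_power_law(eps, chi):
    """Least-squares fit of chi ≈ C * eps**a in log-log space; returns (C, a)."""
    xs = [math.log(e) for e in eps]
    ys = [math.log(c) for c in chi]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    a = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - a * xbar), a

# Synthetic check: data generated from chi = 0.5 * eps**2 should be recovered.
eps = [1e-4, 1e-3, 1e-2, 1e-1]
C, a = fit_power_law(eps, [0.5 * e ** 2 for e in eps])
```

On real Lyapunov data the slow convergence the authors mention means the fit must be restricted to the asymptotic (small-ε) range, which is where the heavy computational cost arises.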
Evaluation of Cooling Conditions for a High Heat Flux Testing Facility Based on Plasma-Arc Lamps
Charry, Carlos H.; Abdel-khalik, Said I.; Yoda, Minami; ...
2015-07-31
The new Irradiated Material Target Station (IMTS) facility for fusion materials at Oak Ridge National Laboratory (ORNL) uses an infrared plasma-arc lamp (PAL) to deliver incident heat fluxes as high as 27 MW/m². The facility is being used to test irradiated plasma-facing component materials as part of the joint US-Japan PHENIX program. The irradiated samples are to be mounted on molybdenum sample holders attached to a water-cooled copper rod. Depending on the size and geometry of the samples, several sample holder and copper rod configurations have been fabricated and tested. As part of the effort to design sample holders compatible with the high heat flux (HHF) testing to be conducted at the IMTS facility, numerical simulations have been performed for two different water-cooled sample holder designs using the ANSYS FLUENT 14.0 commercial computational fluid dynamics (CFD) software package. The primary objective of this work is to evaluate the cooling capability of different sample holder designs, i.e., to estimate their maximum allowable incident heat flux values. 2D axisymmetric numerical simulations are performed using the realizable k-ε turbulence model and the RPI nucleate boiling model within ANSYS FLUENT 14.0. The results of the numerical model were compared against the experimental data for two sample holder designs tested in the IMTS facility. The model has been used to parametrically evaluate the effect of various operational parameters on the predicted temperature distributions. The results were used to identify the limiting parameter for safe operation of the two sample holders and the associated peak heat flux limits. The results of this investigation will help guide the development of new sample holder designs.
Geomagnetic inverse problem and data assimilation: a progress report
NASA Astrophysics Data System (ADS)
Aubert, Julien; Fournier, Alexandre
2013-04-01
In this presentation I will present two studies recently undertaken by our group in an effort to bring the benefits of data assimilation to the study of Earth's magnetic field and the dynamics of its liquid iron core, where the geodynamo operates. In the first part I will focus on the geomagnetic inverse problem, which attempts to recover the fluid flow in the core from the temporal variation of the magnetic field (known as the secular variation). Geomagnetic data can be downward continued from the surface of the Earth to the core-mantle boundary, but not further below, since the core is an electrical conductor. Historically, solutions to the geomagnetic inverse problem in such a sparsely observed system were thus found only for flow immediately below the core-mantle boundary. We have recently shown that combining a numerical model of the geodynamo with magnetic observations, through the use of Kalman filtering, now allows us to present solutions for flow throughout the core. In the second part, I will present synthetic tests of sequential geomagnetic data assimilation aimed at evaluating the range over which the future of the geodynamo can be predicted, and our corresponding prospects for refining current geomagnetic predictions. References: Fournier, Aubert, Thébault: Inference on core surface flow from observations and 3-D dynamo modelling, Geophys. J. Int. 186, 118-136, 2011, doi:10.1111/j.1365-246X.2011.05037.x; Aubert, Fournier: Inferring internal properties of Earth's core dynamics and their evolution from surface observations and a numerical geodynamo model, Nonlinear Proc. Geoph. 18, 657-674, 2011, doi:10.5194/npg-18-657-2011; Aubert: Flow throughout the Earth's core inverted from geomagnetic observations and numerical dynamo models, Geophys. J. Int., 2012, doi:10.1093/gji/ggs051
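The Kalman filtering mentioned above combines a model forecast with observations, weighted by their respective uncertainties; the scalar version of the analysis step illustrates the principle (a generic textbook sketch, not the geodynamo implementation):

```python
# Minimal scalar Kalman filter analysis step, illustrating the generic
# forecast/analysis cycle behind sequential data assimilation schemes.
# This is an illustrative textbook sketch, not the geodynamo code.
def kalman_update(x_f, p_f, y, r):
    """Combine forecast x_f (variance p_f) with observation y (variance r)."""
    k = p_f / (p_f + r)            # Kalman gain: trust ratio forecast vs. data
    x_a = x_f + k * (y - x_f)      # analysis state, pulled toward the observation
    p_a = (1.0 - k) * p_f          # analysis variance, reduced by the update
    return x_a, p_a

# With a forecast variance four times the observation variance, the gain is
# 0.8 and the estimate moves most of the way toward the observation.
x_a, p_a = kalman_update(x_f=1.0, p_f=4.0, y=2.0, r=1.0)
```

In the full problem the state is the high-dimensional geodynamo field and the observation operator maps it to surface magnetic data, but the same gain-weighted blend applies component-wise.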
ERIC Educational Resources Information Center
Agus, Mirian; Penna, Maria Pietronilla; Peró-Cebollero, Maribel; Guàrdia-Olmos, Joan
2016-01-01
Research on the graphical facilitation of probabilistic reasoning has been characterised by the effort expended to identify valid assessment tools. The authors developed an assessment instrument to compare reasoning performances when problems were presented in verbal-numerical and graphical-pictorial formats. A sample of undergraduate psychology…
Angles-only, ground-based, initial orbit determination
NASA Astrophysics Data System (ADS)
Taff, L. G.; Randall, P. M. S.; Stansfield, S. A.
1984-05-01
Over the past few years, passive, ground-based, angles-only initial orbit determination has had a thorough analytical, numerical, experimental, and creative re-examination. This report presents the numerical culmination of this effort and contains specific recommendations for which of several techniques one should use on the different subsets of high altitude artificial satellites and minor planets.
A compendium of computational fluid dynamics at the Langley Research Center
NASA Technical Reports Server (NTRS)
1980-01-01
Through numerous summary examples, the scope and general nature of the computational fluid dynamics (CFD) effort at Langley is identified. These summaries will help inform researchers in CFD and line management at Langley of the overall effort. In addition to the in-house efforts, out-of-house CFD work supported by Langley through industrial contracts and university grants is included. Researchers were encouraged to include summaries of work in preliminary and tentative stages of development as well as current research approaching definitive results.
A review of direct numerical simulations of astrophysical detonations and their implications
Parete-Koon, Suzanne T.; Smith, Christopher R.; Papatheodore, Thomas L.; ...
2013-04-11
Multi-dimensional direct numerical simulations (DNS) of astrophysical detonations in degenerate matter have revealed that the nuclear burning is typically characterized by cellular structure caused by transverse instabilities in the detonation front. Type Ia supernova modelers often use one-dimensional DNS of detonations as inputs or constraints for their whole-star simulations. While these one-dimensional studies are useful tools, the true nature of the detonation is multi-dimensional. The multi-dimensional structure of the burning influences the speed, stability, and composition of the detonation and its burning products, and therefore could have an impact on the spectra of Type Ia supernovae. Considerable effort has been expended modeling Type Ia supernovae at densities above 1×10⁷ g·cm⁻³, where the complexities of turbulent burning dominate the flame propagation. However, most full-star models turn the nuclear burning schemes off when the density falls below 1×10⁷ g·cm⁻³ and distributed burning begins. The deflagration-to-detonation transition (DDT) is believed to occur at just these densities, and consequently they are the densities important for studying the properties of the subsequent detonation. In conclusion, this work reviews the status of DNS studies of detonations and their possible implications for Type Ia supernova models. It covers the development of detonation theory from the first simple Chapman-Jouguet (CJ) detonation models to the current models based on the time-dependent, compressible, reactive-flow Euler equations of fluid dynamics.
Deterrence and Risk Preferences in Sequential Attacker-Defender Games with Continuous Efforts.
Payyappalli, Vineet M; Zhuang, Jun; Jose, Victor Richmond R
2017-11-01
Most attacker-defender games consider players as risk neutral, whereas in reality attackers and defenders may be risk seeking or risk averse. This article studies the impact of players' risk preferences on their equilibrium behavior and its effect on the notion of deterrence. In particular, we study the effects of risk preferences in a single-period, sequential game where a defender has a continuous range of investment levels that could be strategically chosen to potentially deter an attack. This article presents analytic results related to the effect of attacker and defender risk preferences on the optimal defense effort level and their impact on the deterrence level. Numerical illustrations and some discussion of the effect of risk preferences on deterrence and the utility of using such a model are provided, as well as sensitivity analysis of continuous attack investment levels and uncertainty in the defender's beliefs about the attacker's risk preference. A key contribution of this article is the identification of specific scenarios in which the defender using a model that takes into account risk preferences would be better off than a defender using a traditional risk-neutral model. This study provides insights that could be used by policy analysts and decisionmakers involved in investment decisions in security and safety. © 2017 Society for Risk Analysis.
Simulation of Wake Vortex Radiometric Detection via Jet Exhaust Proxy
NASA Technical Reports Server (NTRS)
Daniels, Taumi S.
2015-01-01
This paper describes an analysis of the potential of an airborne hyperspectral imaging IR instrument to infer wake vortices via turbine jet exhaust as a proxy. The goal was to determine the requirements for an imaging spectrometer or radiometer to effectively detect the exhaust plume, and by inference, the location of the wake vortices. The effort examines the gas spectroscopy of the major constituents of turbine jet exhaust and their contributions to the modeled detectable radiance. Initially, a theoretical analysis of wake vortex proxy detection by thermal radiation was realized in a series of simulations. The first stage used the SLAB plume model to simulate turbine jet exhaust plume characteristics, including exhaust gas transport dynamics and concentrations. The second stage used these plume characteristics as input to the Line By Line Radiative Transfer Model (LBLRTM) to simulate responses from either an imaging IR hyperspectral spectrometer or a radiometer. These numerical simulations generated thermal imagery that was compared with previously reported wake vortex temperature data. This research is a continuation of an effort to specify the requirements for an imaging IR spectrometer or radiometer to make wake vortex measurements. Results of the two-stage simulation will be reported, including instrument specifications for wake vortex thermal detection. These results will be compared with previously reported results for IR imaging spectrometer performance.
Gorahava, Kaushik K; Rosenberger, Jay M; Mubayi, Anuj
2015-07-01
Visceral leishmaniasis (VL) is the most deadly form of the leishmaniasis family of diseases, which affects numerous developing countries. The Indian state of Bihar has the highest prevalence and mortality rate of VL in the world. Insecticide spraying is believed to be an effective vector control program for controlling the spread of VL in Bihar; however, it is expensive and less effective if not implemented systematically. This study develops and analyzes a novel optimization model for VL control in Bihar that identifies an optimal (best possible) allocation of a chosen insecticide (dichlorodiphenyltrichloroethane [DDT] or deltamethrin) based on the sizes of the human and cattle populations in the region. The model maximizes the insecticide-induced sandfly death rate in human and cattle dwellings while staying within the current state budget for VL vector control efforts. The model results suggest that deltamethrin might not be a good replacement for DDT because insecticide-induced sandfly deaths are 3.72 times higher with DDT, even 90 days post-spray. Different insecticide allocation strategies between the two types of sites (houses and cattle sheds) are suggested based on the state VL-control budget and have direct implications for VL elimination efforts in a resource-limited region. © The American Society of Tropical Medicine and Hygiene.
NASA Astrophysics Data System (ADS)
Vicente, Gilberto A.
An efficient iterative method has been developed to estimate the vertical profile of SO2 and ash clouds from volcanic eruptions by comparing near real-time satellite observations with numerical modeling outputs. The approach uses UV-based SO2 concentration and IR-based ash cloud images, the volcanic ash transport model PUFF, and wind speed, height, and direction information to find the best match between the simulated and observed displays. The method is computationally fast and is being implemented for operational use at the NOAA Volcanic Ash Advisory Center (VAAC) in Washington, DC, USA, to support the Federal Aviation Administration (FAA) effort to detect, track, and measure volcanic ash cloud heights for air traffic safety and management. The presentation will show the methodology, results, statistical analysis, and the SO2 and Aerosol Index input products derived from the Ozone Monitoring Instrument (OMI) onboard the NASA EOS/Aura research satellite and from the Global Ozone Monitoring Experiment-2 (GOME-2) instrument on MetOp-A. The volcanic ash products are derived from the AVHRR instruments on NOAA POES-16, -17, -18, and -19 as well as MetOp-A. The presentation will also show how a VAAC volcanic ash analyst interacts with the system, providing initial-condition inputs such as the location and time of the volcanic eruption, followed by automatic real-time tracking of all available satellite data, subsequent activation of the iterative approach, and the data/product delivery process in numerical and graphical format for operational applications.
NASA Astrophysics Data System (ADS)
Reinen, L. A.; Brenner, K.
2017-12-01
Ongoing efforts to improve undergraduate education in science, technology, engineering, and mathematics (STEM) fields focus on increasing active student participation and decreasing traditional lecture-based teaching. Undergraduate research experiences (UREs), which engage students in the work of STEM professionals, are an example of these efforts. A recent report from the National Academies of Sciences, Engineering, and Medicine (Undergraduate Research Experiences for STEM Students: Successes, Challenges, and Opportunities; 2017) provides characteristics of UREs and indicates that participation in UREs increases student interest and persistence in STEM as well as provides opportunities to broaden student participation in these fields. UREs offer an excellent opportunity to engage students in research using the rapidly evolving technologies used by STEM professionals. In the fall of 2016, students in the Tectonic Landscapes class at Pomona College participated in a course-based URE that combined traditional field mapping methods with analysis of high-resolution topographic data (LiDAR) and 3D numerical modeling to investigate questions of active local faulting. During the first ten weeks students developed skills in: creation of fault maps from both field observations (including GPS) and high-resolution digital elevation models (DEMs); assessment of tectonic activity through analyses of DEMs using hillslope diffusion models and geomorphic indices; and evaluation of fault geometry hypotheses via 3D elastic modeling. Most of these assignments focused on a single research site. While students primarily used Excel, ArcMap, and Poly3D, no previous knowledge of these was required or assumed. Through this iterative approach, students used increasingly complex methods and gained greater ownership of the research process over time.
The course culminated with a 4-week independent research project in which each student investigated a question of their own choosing using skills developed earlier in the course. We will provide details of the course, scaffolding of the technical skills, growing the independence of students in the research process, and discuss early outcomes of student confidence, engagement and retention.
Turbulent circulation above the surface heat source in stably stratified atmosphere
NASA Astrophysics Data System (ADS)
Kurbatskii, A. F.; Kurbatskaya, L. I.
2016-10-01
The 3-level RANS approach for simulating turbulent circulation over a heat island in a stably stratified environment under nearly calm conditions is formulated. The turbulent kinetic energy, its spectral consumption (dissipation), and the variance of turbulent temperature fluctuations are found from differential equations, so that correct modeling of transport processes in the interfacial layer with a counter-gradient heat flux is assured. The three-parameter turbulence RANS approach minimizes the difficulties of simulating turbulent transport in a stably stratified environment and reduces the effort needed for the numerical implementation of the 3-level RANS approach. Numerical simulation of the turbulent structure of penetrative convection over the heat island under stably stratified atmospheric conditions demonstrates that the three-equation model is able to predict the thermal circulation induced by the heat island. The temperature distribution, root-mean-square fluctuations of the turbulent velocity and temperature fields, and the spectral turbulent kinetic energy flux are in good agreement with the experimental data. The model describes such subtle physical effects as the crossing of the vertical temperature profiles of a thermal plume, with the formation of a negative-buoyancy region indicating the development of a dome-shaped "hat" at the top of the plume.
Dynamic Shock Response of an S2 Glass/SC15 Epoxy Woven Fabric Composite Material System
NASA Astrophysics Data System (ADS)
Key, Christopher; Alexander, Scott; Harstad, Eric; Schumacher, Shane
2017-06-01
The use of S2 glass/SC15 epoxy woven fabric composite materials for blast and ballistic protection has been an area of ongoing research over the past decade. In order to accurately model this material system within potential applications under extreme loading conditions, a well-characterized and well-understood anisotropic equation of state (EOS) is needed. This work details both an experimental program and associated analytical modelling efforts which aim to provide a better physical understanding of the anisotropic EOS behavior of this material. Experimental testing focused on planar shock impact tests loading the composite to peak pressures of 15 GPa in both the through-thickness and on-fiber orientations. Test results highlighted the anisotropic response of the material and provided a basis against which the associated numerical micromechanical investigation was compared. Results of the combined experimental and numerical modelling investigation provided insights into not only the constituent material influence on the composite response but also the importance of the geometrical configuration of the plain weave microstructure and the stochastic significance of the microstructural configuration. Sandia National Laboratories is a multi-mission laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005
Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.
2012-01-01
As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head gradient of the MODFLOW continuum model. This approach reduces the practical as well as numerical efforts for simulating turbulence. The original formulation was for large pore aquifers where the onset of turbulence is at low Reynolds numbers (1 to 100) and not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence due to iterative adjustment of the hydraulic conductivity. Modifications to the existing CFPM2 were made by implementing a generalized power function with a user-defined exponent. This allows for matching turbulence in porous media or pipes and eliminates the time steps required for iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
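A minimal sketch of the CFPM2 idea in Python: keep MODFLOW's linear head-gradient form but shrink the hydraulic conductivity above a critical gradient so the resulting discharge follows a power law with a user-defined exponent m. The function and variable names here are assumptions for illustration, not the MODFLOW-2005 source:

```python
def effective_conductivity(K, grad, grad_crit, m=2.0):
    """Reduce hydraulic conductivity so the linear Darcy form
    q = K_eff * grad reproduces a power-law (turbulent) relation
    q ~ grad**(1/m) above a critical gradient.

    Illustrative sketch of the CFPM2 approach; the exact switching
    criterion in the MODFLOW code may differ.
    """
    g = abs(grad)
    if g <= grad_crit:          # laminar regime: ordinary Darcy flow
        return K
    # turbulent regime: K_eff shrinks so q = K_eff*g scales as g**(1/m)
    return K * (grad_crit / g) ** (1.0 - 1.0 / m)

# specific discharge under an increasing head gradient
K, ic = 100.0, 0.01             # conductivity [m/d], onset gradient [-]
for i in (0.005, 0.01, 0.04, 0.16):
    k_eff = effective_conductivity(K, i, ic)
    print(f"grad={i:6.3f}  K_eff={k_eff:7.2f}  q={k_eff * i:6.3f}")
```

With m=2 the discharge grows with the square root of the gradient above the threshold (quadrupling the gradient only doubles q), while below the threshold the classical linear relation is untouched; no iterative conductivity adjustment is needed.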
Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.
Hussain, S A; Perrier, M; Tartakovsky, B
2018-04-01
Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
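The on/off estimation idea can be sketched as follows: during a power-supply disconnection, the voltage across a parallel-RC equivalent circuit decays as V(t) = V0*exp(-t/RC), so a log-linear fit of the transient yields the time constant and, given the resistance, the capacitance. This is a hedged illustration of the approach, not the paper's exact EEC topology or estimation procedure:

```python
import math

def estimate_tau(times, voltages):
    """Least-squares fit of ln V = ln V0 - t/tau; returns tau.

    Assumes a single-exponential relaxation, i.e. one RC branch in the
    equivalent circuit (a simplification for illustration).
    """
    n = len(times)
    y = [math.log(v) for v in voltages]
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) / \
            sum((t - tbar) ** 2 for t in times)
    return -1.0 / slope

# synthetic 'off' transient: R = 10 ohm, C = 0.5 F  ->  tau = 5 s
R, C = 10.0, 0.5
ts = [i * 0.5 for i in range(20)]
vs = [0.8 * math.exp(-t / (R * C)) for t in ts]
tau = estimate_tau(ts, vs)
print(f"tau = {tau:.2f} s, C = tau/R = {tau / R:.2f} F")
```

In a real-time monitoring loop, drifts in the recovered R and C after each on/off cycle would serve as the fault-detection signal described in the abstract.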
Validating a hydrodynamic framework for long-term modelling of the German Bight
NASA Astrophysics Data System (ADS)
Koesters, Frank; Pluess, Andreas; Heyer, Harro; Kastens, Marko; Sehili, Aissa
2010-05-01
The intention of the "AufMod" project is to set up a modelling framework for questions concerning the large-scale, long-term morphodynamic evolution of the German Bight. First, a hydrodynamic model was set up which includes the entire North Sea and a sophisticated representation of the German Bight. In a second step, simulations of sediment transport and morphodynamic changes will be performed. This paper deals with the calibration and validation process for the hydrodynamic model in detail. The starting point for "AufMod" was the need to better understand the morphodynamic processes in the German Bight. Changes in bottom topography need to be predicted to ensure safe and efficient transport through the German waterways leading to coastal ports such as Hamburg and Bremerhaven. Within "AufMod" this question is addressed through the combined effort of compiling a comprehensive sedimentological and bathymetric data set and running different numerical models. The model is based on the numerical method UnTRIM (Casulli and Zanolli, 2002). The model uses an unstructured grid in the horizontal to provide a good representation of the complex topography. The spatial resolution increases from about 20 km in the North Sea to 20 m within the estuaries. The model forcing represents conditions for the year 2006 and consists of wind stress at the surface, water level elevation and salinity at the open boundaries as well as freshwater inflows. Temperature is not taken into account. For model validation, more than 40 hydrodynamic monitoring stations are available for comparing modelled and measured data. The calibration process consists of adapting the tidal components at the open boundaries following the approach of Pluess (2003). The validation process includes the analysis of tidal components of water level elevation and current values as well as an analysis of tidal characteristic values, e.g. tidal low and high water.
Based on these numerical measures, the representation of the underlying physics is quantified by using a skill score. The overall hydrodynamic structure is represented well by the model and will be the starting point for the subsequent morphodynamic experiments. References: Casulli, V. and Zanolli, P. (2002). Semi-Implicit Numerical Modelling of Non-Hydrostatic Free-surface Flows for Environmental Problems. Mathematical and Computer Modelling, 36:1131-1149. Pluess, A. (2003). Das Nordseemodell der BAW zur Simulation der Tide in der Deutschen Bucht. Die Kueste, Heft 67, ISBN 3-8042-1058-9, pp 83-128.
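The abstract does not state which skill score is used; the Willmott (1981) index is one common choice in coastal model validation and illustrates the idea. The station values below are invented for the example:

```python
def willmott_skill(model, obs):
    """Willmott (1981) index of agreement: 1 = perfect, 0 = none.

    One common skill metric in coastal model validation; the AufMod
    project's exact metric is not specified here, so this is
    illustrative only.
    """
    obar = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for m, o in zip(model, obs))
    den = sum((abs(m - obar) + abs(o - obar)) ** 2 for m, o in zip(model, obs))
    return 1.0 - num / den

# modelled vs. measured high/low water levels at a hypothetical gauge [m]
measured = [1.42, -1.38, 1.51, -1.45, 1.39, -1.41]
modelled = [1.40, -1.35, 1.55, -1.43, 1.33, -1.44]
print(f"skill = {willmott_skill(modelled, measured):.3f}")
```

Computing such a score per station across the 40-plus gauges gives a compact map of where the model reproduces the tidal signal well and where calibration effort should concentrate.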
Prioritizing Arctic Observations with Limited Resources
NASA Astrophysics Data System (ADS)
Kelly, B.; Starkweather, S.
2012-12-01
U.S. Federal agencies recently completed a five-year research plan for the Arctic including plans to enhance efforts toward an Arctic Observing Network (AON). Following on numerous national and international planning efforts, the five-year plan identifies nine priority areas including enhancing observing system design, assessing priorities of local residents, and improving data access. AON progress to date has been realized through bottom-up funding decisions and some top-down design optimization approaches, which have resulted in valuable yet ad hoc progress towards Arctic research imperatives. We suggest that advancing AON beyond theoretical design and ad hoc efforts with the engagement of multiple U.S. Federal agencies will require a structured, input-based planning approach to prioritization that recognizes budget realities. Completing a long list of worthy observing efforts appears to be unsustainable and inadequate in responding to the rapid changes taking place in the Arctic. Society would be better served by more rapid implementation of sustained, long-term observations focused on those climate feedbacks with the greatest potential negative impacts. Several emerging theoretical frameworks have pointed to the need to enhance iterative, capacity-building dialog between observationalists, modelers, and stakeholders as a way to identify these broadest potential benefits. We concur and suggest that those dialogs need to be facilitated and sustained over long periods. Efforts to isolate observational programs from process research are, we believe, impeding progress. At the same time, we note that bottom-up funding decisions, while useful for prioritizing process research, are less appropriate to building observing systems.
Lampropoulou, Sofia; Nowicky, Alexander V
2012-03-01
The aim of the study was to examine the reliability and validity of the numerical rating scale (0-10 NRS) for rating perception of effort during isometric elbow flexion in healthy people. 33 individuals (32 ± 8 years) participated in the study. Three re-test measurements within one session and three weekly sessions were undertaken to determine the reliability of the scale. The sensitivity of the scale following 10 min isometric fatiguing exercise of the elbow flexors as well as the correlation of the effort with the electromyographic (EMG) activity of the flexor muscles were tested. Perception of effort was tested during isometric elbow flexion at 10, 30, 50, 70, 90, and 100% MVC. The 0-10 NRS demonstrated an excellent test-retest reliability [intra class correlation (ICC) = 0.99 between measurements taken within a session and 0.96 between 3 consecutive weekly sessions]. Exploratory curve fitting for the relationship between effort ratings and voluntary force, and underlying EMG showed that both are best described by power functions (y = ax^b). There were also strong correlations (range 0.89-0.95) between effort ratings and EMG recordings of all flexor muscles supporting the concurrent criterion validity of the measure. The 0-10 NRS was sensitive enough to detect changes in the perceived effort following fatigue and significantly increased at the level of voluntary contraction used in its assessment (p < 0.001). These findings suggest the 0-10 NRS is a valid and reliable scale for rating perception of effort in healthy individuals. Future research should seek to establish the validity of the 0-10 NRS in clinical settings.
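The reported power-function relationship y = ax^b can be recovered with a simple log-log regression. The effort ratings below are illustrative placeholders, not the study's data:

```python
import math

def fit_power(x, y):
    """Fit y = a*x**b by linear regression in log-log space
    (a Stevens-type power function, as used for effort vs. force)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# hypothetical 0-10 NRS effort ratings at the %MVC levels of the protocol
force = [10, 30, 50, 70, 90, 100]          # %MVC
effort = [0.8, 2.6, 4.5, 6.7, 8.9, 10.0]   # illustrative ratings
a, b = fit_power(force, effort)
print(f"effort ~ {a:.3f} * force^{b:.2f}")
```

An exponent b near 1 would indicate a near-linear growth of perceived effort with force; the study reports only that power functions gave the best fit, not the exponent values.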
Moment Tensor Descriptions for Simulated Explosions of the Source Physics Experiment (SPE)
NASA Astrophysics Data System (ADS)
Yang, X.; Rougier, E.; Knight, E. E.; Patton, H. J.
2014-12-01
In this research we seek to understand damage mechanisms governing the behavior of geo-materials in the explosion source region, and the role they play in seismic-wave generation. Numerical modeling tools can be used to describe these mechanisms through the development and implementation of appropriate material models. Researchers at Los Alamos National Laboratory (LANL) have been working on a novel continuum-based-viscoplastic strain-rate-dependent fracture material model, AZ_Frac, in an effort to improve the description of these damage sources. AZ_Frac has the ability to describe continuum fracture processes, and at the same time, to handle pre-existing anisotropic material characteristics. The introduction of fractures within the material generates further anisotropic behavior that is also accounted for within the model. The material model has been calibrated to a granitic medium and has been applied in a number of modeling efforts under the SPE project. In our modeling, we use a 2D, axisymmetric layered earth model of the SPE site consisting of a weathered layer on top of a half-space. We couple the hydrodynamic simulation code with a seismic simulation code and propagate the signals to distances of up to 2 km. The signals are inverted for time-dependent moment tensors using a modified inversion scheme that accounts for multiple sources at different depths. The inversion scheme is evaluated for its resolving power to determine a centroid depth and a moment tensor description of the damage source. The capabilities of the inversion method to retrieve such information from waveforms recorded on three SPE tests conducted to date are also being assessed.
Nanoscale hotspots due to nonequilibrium thermal transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinha, Sanjiv; Goodson, Kenneth E.
2004-01-01
Recent experimental and modeling efforts have been directed towards the issue of temperature localization and hotspot formation in the vicinity of nanoscale heat generating devices. The nonequilibrium transport conditions which develop around these nanoscale devices results in elevated temperatures near the heat source which can not be predicted by continuum diffusion theory. Efforts to determine the severity of this temperature localization phenomena in silicon devices near and above room temperature are of technological importance to the development of microelectronics and other nanotechnologies. In this work, we have developed a new modeling tool in order to explore the magnitude of the additional thermal resistance which forms around nanoscale hotspots from temperatures of 100-1000K. The models are based on a two fluid approximation in which thermal energy is transferred between ''stationary'' optical phonons and fast propagating acoustic phonon modes. The results of the model have shown excellent agreement with experimental results of localized hotspots in silicon at lower temperatures. The model predicts that the effect of added thermal resistance due to the nonequilibrium phonon distribution is greatest at lower temperatures, but is maintained out to temperatures of 1000K. The resistance predicted by the numerical code can be easily integrated with continuum models in order to predict the temperature distribution around nanoscale heat sources with improved accuracy. Additional research efforts also focused on the measurements of the thermal resistance of silicon thin films at higher temperatures, with a focus on polycrystalline silicon. This work was intended to provide much needed experimental data on the thermal transport properties for micro and nanoscale devices built with this material.
Initial experiments have shown that the exposure of polycrystalline silicon to high temperatures may induce recrystallization and radically increase the thermal transport properties at room temperature. In addition, the defect density was observed to play a major role in the rate of change in thermal resistivity as a function of temperature.
Cognitive effort: A neuroeconomic approach
Braver, Todd S.
2015-01-01
Cognitive effort has been implicated in numerous theories regarding normal and aberrant behavior and the physiological response to engagement with demanding tasks. Yet, despite broad interest, no unifying, operational definition of cognitive effort itself has been proposed. Here, we argue that the most intuitive and epistemologically valuable treatment is in terms of effort-based decision-making, and advocate a neuroeconomics-focused research strategy. We first outline psychological and neuroscientific theories of cognitive effort. Then we describe the benefits of a neuroeconomic research strategy, highlighting how it affords greater inferential traction than do traditional markers of cognitive effort, including self-reports and physiologic markers of autonomic arousal. Finally, we sketch a future series of studies that can leverage the full potential of the neuroeconomic approach toward understanding the cognitive and neural mechanisms that give rise to phenomenal, subjective cognitive effort. PMID:25673005
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
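The kind of contraction that dominates these calculations can be sketched in plain Python. Dense nested loops are used here for clarity; Libtensor instead stores only symmetry-unique blocks and dispatches block-wise matrix multiplies:

```python
from itertools import product

def contract(V, T, n):
    """R[a,b,i,j] = sum_{c,d} V[a,b,c,d] * T[c,d,i,j] -- the shape of
    tensor contraction that dominates coupled-cluster calculations.

    Toy dense implementation over dict-backed tensors; real libraries
    exploit permutational symmetry to skip redundant work.
    """
    R = {}
    for a, b, i, j in product(range(n), repeat=4):
        R[a, b, i, j] = sum(V[a, b, c, d] * T[c, d, i, j]
                            for c, d in product(range(n), repeat=2))
    return R

n = 2
# toy amplitudes with permutational symmetry T[c,d,i,j] = T[d,c,j,i]
T = {k: k[0] + k[1] + 0.1 * (k[2] + k[3]) for k in product(range(n), repeat=4)}
V = {k: 1.0 for k in product(range(n), repeat=4)}
R = contract(V, T, n)
# the inputs' permutational symmetry carries over to the result,
# which is what lets symmetry-aware libraries store fewer blocks
assert all(abs(R[a, b, i, j] - R[b, a, j, i]) < 1e-12
           for a, b, i, j in product(range(n), repeat=4))
print(R[0, 0, 0, 0])  # → 4.0
```

The symmetry check at the end is the point: because only symmetry-unique entries are independent, a blocked library can cut both storage and the number of DGEMM calls, which is exactly the specialization the abstract describes.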
Laboratory and theoretical models of planetary-scale instabilities and waves
NASA Technical Reports Server (NTRS)
Hart, John E.; Toomre, Juri
1991-01-01
Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. The two outstanding problems of interest are: (1) the origins and nature of chaos in baroclinically unstable flows; and (2) the physical mechanisms responsible for high speed zonal winds and banding on the giant planets. The methods used to study these problems, and the insights gained, are useful in more general atmospheric and climate dynamic settings. Because the planetary curvature or beta-effect is crucial in the large scale nonlinear dynamics, the motions of rotating convecting liquids in spherical shells were studied using electrohydrodynamic polarization forces to generate radial gravity and centrally directed buoyancy forces in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. The interpretation and extension of these results have led to the construction of efficient numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. Efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument have led us to develop theoretical and numerical models of baroclinic instability. Some surprising properties of both these models were discovered.
NASA Astrophysics Data System (ADS)
Adhikari, S.; Ivins, E. R.; Larour, E. Y.
2015-12-01
Perturbations in gravitational and rotational potentials caused by climate driven mass redistribution on the earth's surface, such as ice sheet melting and terrestrial water storage, affect the spatiotemporal variability in global and regional sea level. Here we present a numerically accurate, computationally efficient, high-resolution model for sea level. Unlike contemporary models that are based on spherical-harmonic formulation, the model can operate efficiently in a flexible embedded finite-element mesh system, thus capturing the physics operating at km-scale yet capable of simulating geophysical quantities that are inherently of global scale with minimal computational cost. One obvious application is to compute evolution of sea level fingerprints and associated geodetic and astronomical observables (e.g., geoid height, gravity anomaly, solid-earth deformation, polar motion, and geocentric motion) as a companion to a numerical 3-D thermo-mechanical ice sheet simulation, thus capturing global signatures of climate driven mass redistribution. We evaluate some important time-varying signatures of GRACE inferred ice sheet mass balance and continental hydrological budget; for example, we identify dominant sources of ongoing sea-level change at the selected tide gauge stations, and explain the relative contribution of different sources to the observed polar drift. We also report our progress on ice-sheet/solid-earth/sea-level model coupling efforts toward realistic simulation of Pine Island Glacier over the past several hundred years.
Maximizing the accuracy of field-derived numeric nutrient criteria in water quality regulations.
McLaughlin, Douglas B
2014-01-01
High levels of the nutrients nitrogen and phosphorus can cause unhealthy biological or ecological conditions in surface waters and prevent the attainment of their designated uses. Regulatory agencies are developing numeric criteria for these nutrients in an effort to ensure that the surface waters in their jurisdictions remain healthy and productive, and that water quality standards are met. These criteria are often derived using field measurements that relate nutrient concentrations and other water quality conditions to expected biological responses such as undesirable growth or changes in aquatic plant and animal communities. Ideally, these numeric criteria can be used to accurately "diagnose" ecosystem health and guide management decisions. However, the degree to which numeric nutrient criteria are useful for decision making depends on how accurately they reflect the status or risk of nutrient-related biological impairments. Numeric criteria that have little predictive value are not likely to be useful for managing nutrient concerns. This paper presents information on the role of numeric nutrient criteria as biological health indicators, and the potential benefits of sufficiently accurate criteria for nutrient management. In addition, it describes approaches being proposed or adopted in states such as Florida and Maine to improve the accuracy of numeric criteria and criteria-based decisions. This includes a preference for developing site-specific criteria in cases where sufficient data are available, and the use of nutrient concentration and biological response criteria together in a framework to support designated use attainment decisions. Together with systematic planning during criteria development, the accuracy of field-derived numeric nutrient criteria can be assessed and maximized as a part of an overall effort to manage nutrient water quality concerns. © 2013 SETAC.
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
NASA Astrophysics Data System (ADS)
Kou, Jisheng; Sun, Shuyu
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. To our knowledge, this is the first use of diffuse interface modeling based on an equation of state for multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to macroscale bulk fluid motion, since the interface is only nanometers thick. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure, which is consistent with macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of Tolman length, which is a correction of the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components.
Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
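The relation between the classical Young-Laplace result and its Tolman-length correction, as discussed above, can be illustrated numerically. The first-order correction form sigma(r) = sigma_inf / (1 + 2*delta/r) is a standard approximation, not the paper's EOS-based capillarity formulation:

```python
def capillary_pressure(sigma_inf, r, delta=0.0):
    """Capillary pressure across a spherical interface of radius r.

    delta = 0 gives the classical Young-Laplace result p_c = 2*sigma/r;
    a nonzero Tolman length delta applies the first-order curvature
    correction sigma(r) = sigma_inf / (1 + 2*delta/r). Illustrative of
    the relations discussed in the abstract only.
    """
    sigma = sigma_inf / (1.0 + 2.0 * delta / r)
    return 2.0 * sigma / r

# water-like interface: sigma = 0.072 N/m, droplet radius 10 nm
p_yl = capillary_pressure(0.072, 10e-9)                # Young-Laplace
p_tl = capillary_pressure(0.072, 10e-9, delta=0.2e-9)  # Tolman-corrected
print(f"{p_yl / 1e6:.2f} MPa (YL) vs {p_tl / 1e6:.2f} MPa (Tolman)")
```

At a 10 nm radius the correction is already a few percent, which is why nanoscale-interface models such as this one cannot rely on the uncorrected Young-Laplace relation.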
SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models
NASA Astrophysics Data System (ADS)
Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.
2013-12-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. 
In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.
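The core pseudospectral operation that SpF organizes at scale, differentiation by multiplying Fourier coefficients by ik, can be shown in a few lines. A naive O(n^2) DFT is used for self-containment; production codes use parallel FFTs and the global transposes discussed above:

```python
import cmath
import math

def dft(x):
    """Naive forward DFT, X[k] = sum_j x[j] * exp(-2*pi*i*k*j/n)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Naive inverse DFT with the 1/n normalization."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def spectral_derivative(x):
    """Differentiate a periodic signal by multiplying its Fourier
    coefficients by i*k -- the core trick of pseudospectral methods."""
    n = len(x)
    X = dft(x)
    ik = [1j * (k if k <= n // 2 else k - n) for k in range(n)]
    ik[n // 2] = 0  # zero the Nyquist mode so the derivative stays real
    return [v.real for v in idft([c * w for c, w in zip(X, ik)])]

n = 16
grid = [2 * math.pi * j / n for j in range(n)]
d = spectral_derivative([math.sin(t) for t in grid])
# the derivative of sin is cos, recovered to near machine accuracy
assert all(abs(di - math.cos(t)) < 1e-9 for di, t in zip(d, grid))
print("max error:", max(abs(di - math.cos(t)) for di, t in zip(d, grid)))
```

The spectral accuracy visible here (error at machine precision for a smooth field) is what makes PS methods so attractive for dynamo models, and the forward/inverse transform pair is exactly the step whose distributed-memory transpose dominates communication at scale.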
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2005-01-01
Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. However, in the past such measurements have used a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal of this effort is to provide quantitative multiscalar diagnostics in high pressure environments to validate computational combustion codes.
ERIC Educational Resources Information Center
Tucker, Constance R.; Winsor, Denise L.
2013-01-01
In order to increase the number of health care providers in underserved communities, numerous efforts are being made to increase the number of Black students in the health professions. Research supports the idea that individuals from minority populations seek doctors of the same race or culture. In an effort to provide increased health care to…
ERIC Educational Resources Information Center
Sommerfelt, Ole Henning; Vambheim, Vidar
2008-01-01
Numerous educational efforts have been tried in order to address problems of conflicts and violence at various levels of society. These efforts have been effective to various degrees. This article investigates the effectiveness of the Swedish-based peace education project "The dream of the good" (DODG), through its use of…
ERIC Educational Resources Information Center
Hirano, Alison Izawa
2012-01-01
The processes of globalization have an impact on society in numerous ways. As a result, higher education institutions around the world attempt to adjust to these changes through internationalization efforts. Amongst the key stakeholders who play an important role in assuring that these efforts are successful is the faculty because it is this body…
A Change Management Approach to Enhance Facility Maintenance Programs
2014-03-27
dependent on the particular research effort and the researcher's experience. Large groups tend to increase decision quality but can be difficult... consolidate SME opinions on facility maintenance criteria. The Delphi method utilizes numerous questionnaire rounds to capitalize on a group think... effort provides the discussion and conclusions, recommendations, and suggestions for follow-on research.
NASA Astrophysics Data System (ADS)
Barlow, J. E.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.
2016-12-01
Wildfires in the Western United States can alter landscapes by removing vegetation and changing soil properties. These altered landscapes produce more runoff than pre-fire landscapes, which can lead to post-fire flooding that damages infrastructure and impairs natural resources. Resources, structures, historical artifacts, and other assets that could be impacted by increased runoff are considered values at risk. The Automated Geospatial Watershed Assessment tool (AGWA) allows users to quickly set up and execute the Kinematic Runoff and Erosion model (KINEROS2, or K2) in the ESRI ArcMap environment. The AGWA-K2 workflow leverages the visualization capabilities of GIS to facilitate rapid watershed assessments for post-fire planning efforts. A high relative change in peak discharge, as simulated by K2, provides a visual and numeric indicator of channels in the watershed that should be evaluated in more detail, especially if values at risk are within or near those channels. Modeling inundation extent along a channel would provide more specific guidance about risk along that channel. HEC-2 and HEC-RAS can be used for hydraulic modeling efforts at the reach and river-system scale, and these models have been used to address flood boundaries and, accordingly, flood risk. However, data collection and organization for hydraulic models can be time consuming, and therefore a combined hydrologic-hydraulic modeling approach is not often employed for rapid assessments. A simplified approach could streamline this process and provide managers with a simple workflow and tool to perform a quick risk assessment for a single reach. By focusing on a single reach highlighted by a large relative change in peak discharge, data collection efforts can be minimized and the hydraulic computations can be performed to supplement risk analysis.
The incorporation of hydraulic analysis through a suite of Python tools (as outlined by HEC-2) with AGWA-K2 will allow more rapid applications of combined hydrologic-hydraulic modeling. This combined modeling approach is built in the ESRI ArcGIS application to enable rapid model preparation, execution and result visualization for risk assessment in post-fire environments.
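The screening indicator described above, relative change in simulated peak discharge between pre- and post-fire runs, can be sketched in a few lines. The reach identifiers, discharge values, and the 100% flagging threshold below are illustrative assumptions, not actual AGWA/K2 output or an agency-defined criterion.

```python
# Hypothetical pre- and post-fire peak discharges per channel reach (m^3/s),
# standing in for K2 simulation output.
pre_fire_qp = {"reach_01": 2.1, "reach_02": 0.8, "reach_03": 5.4}
post_fire_qp = {"reach_01": 3.0, "reach_02": 4.6, "reach_03": 6.0}

def relative_change(pre: float, post: float) -> float:
    """Percent change in peak discharge, post-fire relative to pre-fire."""
    return 100.0 * (post - pre) / pre

# Flag reaches whose peak discharge more than doubles (>100% increase)
# as candidates for follow-up hydraulic (e.g., HEC-RAS) analysis.
flagged = {
    reach: round(relative_change(pre_fire_qp[reach], post_fire_qp[reach]), 1)
    for reach in pre_fire_qp
    if relative_change(pre_fire_qp[reach], post_fire_qp[reach]) > 100.0
}
print(flagged)  # reach_02 shows a 475% increase in peak discharge
```

In the combined workflow, a flagged reach would then receive the focused data-collection and hydraulic computation step, keeping the rapid-assessment effort limited to a single reach rather than the whole river system.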