Coupling large scale hydrologic-reservoir-hydraulic models for impact studies in data sparse regions
NASA Astrophysics Data System (ADS)
O'Loughlin, Fiachra; Neal, Jeff; Wagener, Thorsten; Bates, Paul; Freer, Jim; Woods, Ross; Pianosi, Francesca; Sheffield, Justin
2017-04-01
As hydraulic modelling moves to increasingly large spatial domains it has become essential to take reservoirs and their operations into account. Large-scale hydrological models have been including reservoirs for at least the past two decades, yet they cannot explicitly model the variations in spatial extent of reservoirs, and many reservoir operations in hydrological models are not undertaken at run time. This requires a hydraulic model, yet to date no continental-scale hydraulic model has directly simulated reservoirs and their operations. In addition to the need to include reservoirs and their operations in hydraulic models as they move to global coverage, there is also a need to link such models to large-scale hydrology models or land surface schemes. This is especially true for Africa, where the number of river gauges has consistently declined since the middle of the twentieth century. In this study we address these two major issues by developing: 1) a coupling methodology for the VIC large-scale hydrological model and the LISFLOOD-FP hydraulic model, and 2) a reservoir module for the LISFLOOD-FP model, which currently includes four sets of reservoir operating rules taken from the major large-scale hydrological models. The Volta Basin, West Africa, was chosen to demonstrate the capability of the modelling framework as it is a large river basin (~400,000 km2) and contains the largest man-made lake in terms of area (8,482 km2), Lake Volta, created by the Akosombo dam. Lake Volta also experiences a seasonal variation in water levels of between two and six metres that creates a dynamic shoreline. In this study, we first run our coupled VIC and LISFLOOD-FP model without explicitly modelling Lake Volta and then compare these results with those from model runs where the dam operations and Lake Volta are included. The results show that we are able to obtain variation in the Lake Volta water levels and that including the dam operations and Lake Volta has significant impacts on the water levels across the domain.
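The operating rules themselves are not given in the abstract; as a rough illustration of the kind of rule such a reservoir module implements, a minimal storage-target release scheme might look like the sketch below. All names and parameter values are hypothetical, not the paper's.

```python
# Minimal sketch of a storage-target reservoir release rule, the kind of
# operating rule used in large-scale hydrological models (hypothetical values).
def reservoir_release(storage, capacity, mean_inflow, dt=86400.0, target_fraction=0.85):
    """Release (m3/s) over a time step dt (s), given storage and capacity (m3)."""
    # Release the long-term mean inflow, scaled by how full the reservoir is
    # relative to its target storage; fuller reservoirs release more.
    release = mean_inflow * storage / (target_fraction * capacity)
    # Do not release more water than is available in this time step.
    return max(0.0, min(release, storage / dt))

print(reservoir_release(storage=1.2e11, capacity=1.5e11, mean_inflow=1200.0))
```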
Woodworth-Jefcoats, Phoebe A; Polovina, Jeffrey J; Dunne, John P; Blanchard, Julia L
2013-03-01
Output from an earth system model is paired with a size-based food web model to investigate the effects of climate change on the abundance of large fish over the 21st century. The earth system model, forced by the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios A2 scenario, combines a coupled climate model with a biogeochemical model including major nutrients, three phytoplankton functional groups, and zooplankton grazing. The size-based food web model includes linkages between two size-structured pelagic communities: primary producers and consumers. Our investigation focuses on seven sites in the North Pacific, each highlighting a specific aspect of projected climate change, and includes top-down ecosystem depletion through fishing. We project declines in large fish abundance ranging from 0 to 75.8% in the central North Pacific and increases of up to 43.0% in the California Current (CC) region over the 21st century in response to changes in phytoplankton size structure and direct physiological effects. We find that fish abundance is especially sensitive to projected changes in large phytoplankton density, and our model projects changes in the abundance of large fish to be of the same order of magnitude as changes in the abundance of large phytoplankton. Thus, studies that address only climate-induced impacts to primary production without including changes to phytoplankton size structure may not adequately project ecosystem responses. © 2012 Blackwell Publishing Ltd.
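For orientation, size-structured models of this family typically evolve an abundance density along body mass with a transport (McKendrick-von Foerster) equation; a schematic form, not necessarily the paper's exact formulation, is:

```latex
\frac{\partial N(w,t)}{\partial t}
  = -\frac{\partial}{\partial w}\!\left[g(w,t)\,N(w,t)\right]
    - \mu(w,t)\,N(w,t)
```

where N(w,t) is the abundance density at body mass w, g the food-dependent growth rate, and μ the mortality rate (predation plus fishing).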
Modeling, Analysis, and Optimization Issues for Large Space Structures
NASA Technical Reports Server (NTRS)
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
NASA Astrophysics Data System (ADS)
Klingbeil, Knut; Lemarié, Florian; Debreu, Laurent; Burchard, Hans
2018-05-01
The state of the art of the numerics of hydrostatic structured-grid coastal ocean models is reviewed here. First, some fundamental differences in the hydrodynamics of the coastal ocean, such as the large surface elevation variation compared to the mean water depth, are contrasted against large-scale ocean dynamics. Then the hydrodynamic equations as they are used in coastal ocean models as well as in large-scale ocean models are presented, including parameterisations for turbulent transports. As steps towards discretisation, coordinate transformations and spatial discretisations based on a finite-volume approach are discussed with focus on the specific requirements for coastal ocean models. As in large-scale ocean models, splitting of internal and external modes is essential also for coastal ocean models, but specific care is needed when drying and flooding of intertidal flats is included. As one obvious characteristic of coastal ocean models, open boundaries occur and need to be treated in a way that correct model forcing from outside is transmitted to the model domain without reflecting waves from the inside. New developments in two-way nesting are also presented. Single processes such as internal inertia-gravity waves, advection and turbulence closure models are discussed with focus on the coastal scales. An overview of existing hydrostatic structured-grid coastal ocean models is given, including their extensions towards non-hydrostatic models. Finally, an outlook on future perspectives is given.
Flexible language constructs for large parallel programs
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Schnabel, Robert
1993-01-01
The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.
A mathematical model of a large open fire
NASA Technical Reports Server (NTRS)
Harsha, P. T.; Bragg, W. N.; Edelman, R. B.
1981-01-01
A mathematical model capable of predicting the detailed characteristics of large, liquid fuel, axisymmetric, pool fires is described. The predicted characteristics include spatial distributions of flame gas velocity, soot concentration and chemical specie concentrations including carbon monoxide, carbon dioxide, water, unreacted oxygen, unreacted fuel and nitrogen. Comparisons of the predictions with experimental values are also given.
Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Varnai, Tamas
2014-01-01
A two-layer model was developed in our earlier studies to estimate the clear sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds for short wavelengths. We use LES/SHDOM simulated 3D radiation fields to validate the two-layer model for reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near clouds, suggesting the physics of this model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We find that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.
Bolt installation tool for tightening large nuts and bolts
NASA Technical Reports Server (NTRS)
Mcdougal, A. R.; Norman, R. M.
1974-01-01
Large bolts and nuts are accurately tightened on structures without damaging torque stresses. There are two models of the bolt installation tool: one rigidly mounted and one hand held. Each model includes a torque-multiplier unit.
Flexible Language Constructs for Large Parallel Programs
Rosing, Matt; Schnabel, Robert
1994-01-01
The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.
Power Scaling Fiber Amplifiers Using Very-Large-Mode-Area Fibers
2016-02-23
...fiber lasers are limited to below 1 kW due to limited mode size and thermal issues, particularly thermal mode instability (TMI). Two comprehensive models... accurately modeling very-large-mode-area fiber amplifiers while simultaneously including thermal lensing and TMI. This model was applied to investigate... expected resilience to TMI. Subject terms: fiber amplifier, high power laser, thermal mode instability, large-mode-area fiber, ytterbium-doped.
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
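The score-weighted simple averaging amounts to the following estimate (notation ours, not the paper's):

```latex
\hat{S}(t) \;=\; \frac{\sum_{i=1}^{625} w_i\, S_i(t)}{\sum_{i=1}^{625} w_i}
```

where S_i(t) is the equivalent sea-level-rise history of ensemble member i and w_i is a weight derived from that run's aggregate model-data misfit score.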
Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.
2017-01-01
Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519
Bardenheier, Barbara H; Bullard, Kai McKeever; Caspersen, Carl J; Cheng, Yiling J; Gregg, Edward W; Geiss, Linda S
2013-09-01
To use structural modeling to test a hypothesized model of causal pathways related to prediabetes among older adults in the U.S. Cross-sectional study of 2,230 older adults (≥ 50 years) without diabetes included in the morning fasting sample of the 2001-2006 National Health and Nutrition Examination Surveys. Demographic data included age, income, marital status, race/ethnicity, and education. Behavioral data included physical activity (metabolic equivalent hours per week for vigorous or moderate muscle strengthening, walking/biking, and house/yard work) and poor diet (refined grains, red meat, added sugars, solid fats, and high-fat dairy). Structural-equation modeling was performed to examine the interrelationships among these variables with family history of diabetes, high blood pressure, BMI, large waist (waist circumference: women, ≥ 35 inches; men, ≥ 40 inches), triglycerides ≥ 200 mg/dL, and total and HDL (≥ 60 mg/dL) cholesterol. After dropping BMI and total cholesterol, our best-fit model included three single factors: socioeconomic position (SEP), physical activity, and poor diet. Large waist had the strongest direct effect on prediabetes (0.279), followed by male sex (0.270), SEP (-0.157), high blood pressure (0.122), family history of diabetes (0.070), and age (0.033). Physical activity had direct effects on HDL (0.137), triglycerides (-0.136), high blood pressure (-0.132), and large waist (-0.067); poor diet had direct effects on large waist (0.146) and triglycerides (0.148). Our results confirmed that, among factors known to be associated with high risk of developing prediabetes, large waist circumference had the strongest direct effect. The direct effect of SEP on prediabetes suggests mediation by some unmeasured factor(s).
NASA Astrophysics Data System (ADS)
Chiaverano, Luciano M.; Robinson, Kelly L.; Tam, Jorge; Ruzicka, James J.; Quiñones, Javier; Aleksa, Katrina T.; Hernandez, Frank J.; Brodeur, Richard D.; Leaf, Robert; Uye, Shin-ichi; Decker, Mary Beth; Acha, Marcelo; Mianzan, Hermes W.; Graham, William M.
2018-05-01
Large jellyfish are important consumers of plankton, fish eggs and fish larvae in heavily fished ecosystems worldwide; yet they are seldom included in fisheries production models. Here we developed a trophic network model with 41 functional groups using ECOPATH re-expressed in a donor-driven, end-to-end format to directly evaluate the efficiency of large jellyfish and forage fish at transferring energy to higher trophic levels, as well as the ecosystem-wide effects of varying jellyfish and forage fish consumption rates and fishing rates, in the Northern Humboldt Current system (NHCS) off of Peru. Large jellyfish were an energy-loss pathway for high trophic-level consumers, while forage fish channelized the production of lower trophic levels directly into production of top-level consumers. A simulated jellyfish bloom resulted in a decline in productivity of all functional groups, including forage fish (12%), with the exception of sea turtles. A modeled increase in forage fish consumption rate by 50% resulted in a decrease in large jellyfish productivity (29%). A simulated increase of 40% in forage fish harvest enhanced jellyfish productivity (24%), while closure of all fisheries caused a decline in large jellyfish productivity (26%) and productivity increases in upper level consumers. These outcomes not only suggest that jellyfish blooms and fisheries have important effects on the structure of the NHCS, but they also support the hypothesis that forage fishing provides a competitive release for large jellyfish. We recommend including jellyfish as a functional group in future ecosystem modeling efforts, including ecosystem-based approaches to fishery management of coastal ecosystems worldwide.
NASA Technical Reports Server (NTRS)
Vontiesenhausen, G. F.
1977-01-01
A program implementation model is presented which covers the in-space construction of certain large space systems from extraterrestrial materials. The model includes descriptions of major program elements and subelements and their operational requirements and technology readiness requirements. It provides a structure for future analysis and development.
Examination of various turbulence models for application in liquid rocket thrust chambers
NASA Technical Reports Server (NTRS)
Hung, R. J.
1991-01-01
There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, Reynolds stress/flux model, zero equation model, one equation model, two equation k-epsilon model, multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The nature of turbulence includes randomness, irregularity, diffusivity, and dissipation. The capabilities of the turbulence models, including physical strengths, weaknesses, and limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the recommendation of the Reynolds stress model.
A Wetter Future For California?
NASA Astrophysics Data System (ADS)
Luptowitz, R.; Allen, R.
2016-12-01
Future California (CA) precipitation projections, including those from the most recent Coupled Model Intercomparison Project (CMIP5), remain uncertain. This uncertainty is related to several factors, including relatively large natural variability, model shortcomings, and CA's location within a transition zone, where mid-latitude regions are expected to become wetter and subtropical regions drier. Here, we use the Community Earth System Model (CESM) Large Ensemble Project driven by the business-as-usual scenario, and find a robust increase in CA precipitation. This implies CMIP5 model differences are the dominant cause of the large range of future CA precipitation projections. The boreal winter season, when most of the CA precipitation increase occurs, is associated with changes in the mean circulation reminiscent of an El Niño teleconnection, including a southeastward shift of the upper level winds, an increase in storm track activity in the east Pacific, and an increase in CA moisture convergence. We further show that warming of tropical eastern Pacific sea surface temperatures, a robust feature in all models, accounts for these changes. Models that better simulate El Niño-CA precipitation teleconnections, including CESM, tend to yield larger and more consistent increases in CA precipitation. Our results show that California will become wetter in a warmer world.
Dynamic analysis of space structures including elastic, multibody, and control behavior
NASA Technical Reports Server (NTRS)
Pinson, Larry; Soosaar, Keto
1989-01-01
The problem is to develop analysis methods, modeling strategies, and simulation tools to predict with assurance the on-orbit performance and integrity of large complex space structures that cannot be verified on the ground. The problem must incorporate large reliable structural models, multi-body flexible dynamics, multi-tier controller interaction, environmental models including 1g and atmosphere, various on-board disturbances, and linkage to mission-level performance codes. All areas are in serious need of work, but the weakest link is multi-body flexible dynamics.
Model verification of large structural systems
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1977-01-01
A methodology was formulated, and a general computer code implemented for processing sinusoidal vibration test data to simultaneously make adjustments to a prior mathematical model of a large structural system, and resolve measured response data to obtain a set of orthogonal modes representative of the test model. The derivation of estimator equations is shown along with example problems. A method for improving the prior analytic model is included.
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
Antenna and Electronics Cost Tradeoffs For Large Arrays
NASA Technical Reports Server (NTRS)
D'Addario, Larry R.
2007-01-01
This viewgraph presentation describes the cost tradeoffs for large antenna arrays. The contents include: 1) Cost modeling for large arrays; 2) Antenna mechanical cost over a wide range of sizes; and 3) Cost of per-antenna electronics.
Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.
2016-01-01
Near-infrared spectroscopy as a rapid and non-destructive analytical technique offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
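As a rough sketch of the hierarchical linear-SVM scheme described above (a coarse classifier routes each spectrum to a group-specific classifier), with all data shapes and labels hypothetical:

```python
# Hierarchical linear-SVM sketch for NIR raw-material ID (hypothetical data).
# X: (n_samples, n_wavelengths) array; coarse, fine: numpy label arrays.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fit_hierarchical_svm(X, coarse, fine):
    """Stage 1 predicts a coarse group (e.g. an excipient family);
    stage 2 resolves the exact compound within that group."""
    top = make_pipeline(StandardScaler(), LinearSVC()).fit(X, coarse)
    sub = {}
    for g in np.unique(coarse):
        idx = coarse == g
        labels = np.unique(fine[idx])
        # Degenerate groups containing a single compound need no second stage.
        sub[g] = (make_pipeline(StandardScaler(), LinearSVC()).fit(X[idx], fine[idx])
                  if len(labels) > 1 else labels[0])
    return top, sub

def predict_hierarchical(top, sub, X):
    out = []
    for x, g in zip(X, top.predict(X)):
        m = sub[g]
        out.append(m.predict(x[None])[0] if hasattr(m, "predict") else m)
    return np.array(out)
```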
Modeling large woody debris recruitment for small streams of the Central Rocky Mountains
Don C. Bragg; Jeffrey L. Kershner; David W. Roberts
2000-01-01
As our understanding of the importance of large woody debris (LWD) evolves, planning for its production in riparian forest management is becoming more widely recognized. This report details the development of a model (CWD, version 1.4) that predicts LWD inputs, including descriptions of the field sampling used to parameterize parts of the model, the theoretical and...
Accurate force field for molybdenum by machine learning large materials data
NASA Astrophysics Data System (ADS)
Chen, Chi; Deng, Zhi; Tran, Richard; Tang, Hanmei; Chu, Iek-Heng; Ong, Shyue Ping
2017-09-01
In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large and long-time scale simulations.
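A minimal sketch of the hyperparameter search described above, using differential evolution from scipy; fit_snap_model is a stub standing in for the real SNAP least-squares fit against DFT energies, forces, and stresses (all names and bounds hypothetical):

```python
# Hyperparameter optimization by differential evolution (scipy).
import numpy as np
from scipy.optimize import differential_evolution

def fit_snap_model(cutoff, force_weight):
    # Stub: returns (fitting error, property-prediction error) for the given
    # hyperparameters; a real implementation would refit the SNAP coefficients.
    return (cutoff - 4.6) ** 2, np.log(force_weight) ** 2

def objective(params):
    fit_err, prop_err = fit_snap_model(*params)
    return fit_err + prop_err  # lower both error types simultaneously

result = differential_evolution(objective,
                                bounds=[(3.5, 5.5),     # cutoff radius (Angstrom)
                                        (0.01, 10.0)],  # force-to-energy weight
                                seed=0)
print(result.x, result.fun)
```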
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental scale river models, mainly designed for integration with global climate models, are of very coarse spatial resolution and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation, and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in the BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model, describing the conceptual hydrological framework, the methods used for representing different hydrological processes in the model, and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
Stochasticity and determinism in models of hematopoiesis.
Kimmel, Marek
2014-01-01
This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
NASA Technical Reports Server (NTRS)
Koenig, D. G.
1984-01-01
Factors influencing effective program planning for V/STOL wind-tunnel testing are discussed. The planning sequence itself, which includes a short checklist of considerations that could enhance the value of the tests, is also described. Each of the considerations (choice of wind tunnel, type of model installation, model development, and test operations) is discussed, and examples of appropriate past and current V/STOL test programs are provided. A short survey of the moderate to large subsonic wind tunnels is followed by a review of several model installations, from two-dimensional to large-scale models of complete aircraft configurations. Model sizing, power simulation, and planning are treated, including three areas in test operations: data-acquisition systems, acoustic measurements in wind tunnels, and flow surveying.
NASA/Howard University Large Space Structures Institute
NASA Technical Reports Server (NTRS)
Broome, T. H., Jr.
1984-01-01
Basic research on the engineering behavior of large space structures is presented. Methods of structural analysis, control, and optimization of large flexible systems are examined. Topics of investigation include the Load Correction Method (LCM) modeling technique, stabilization of flexible bodies by feedback control, mathematical refinement of analysis equations, optimization of the design of structural components, deployment dynamics, and the use of microprocessors in attitude and shape control of large space structures. Information on key personnel, budgeting, support plans and conferences is included.
Integrating resource selection into spatial capture-recapture models for large carnivores
Proffitt, Kelly M.; Goldberg, Joshua; Hebblewhite, Mark; Russell, Robin E.; Jimenez, Ben; Robinson, Hugh S.; Pilgrim, Kristine; Schwartz, Michael K.
2015-01-01
Wildlife managers need reliable methods to estimate large carnivore densities and population trends; yet large carnivores are elusive, difficult to detect, and occur at low densities making traditional approaches intractable. Recent advances in spatial capture-recapture (SCR) models have provided new approaches for monitoring trends in wildlife abundance and these methods are particularly applicable to large carnivores. We applied SCR models in a Bayesian framework to estimate mountain lion densities in the Bitterroot Mountains of west central Montana. We incorporate an existing resource selection function (RSF) as a density covariate to account for heterogeneity in habitat use across the study area and include data collected from harvested lions. We identify individuals through DNA samples collected by (1) biopsy darting mountain lions detected in systematic surveys of the study area, (2) opportunistically collecting hair and scat samples, and (3) sampling all harvested mountain lions. We included 80 DNA samples collected from 62 individuals in the analysis. Including information on predicted habitat use as a covariate on the distribution of activity centers reduced the median estimated density by 44%, the standard deviation by 7%, and the width of 95% credible intervals by 10% as compared to standard SCR models. Within the two management units of interest, we estimated a median mountain lion density of 4.5 mountain lions/100 km2 (95% CI = 2.9, 7.7) and 5.2 mountain lions/100 km2 (95% CI = 3.4, 9.1). Including harvested individuals (dead recovery) did not create a significant bias in the detection process by introducing individuals that could not be detected after removal. However, the dead recovery component of the model did have a substantial effect on results by increasing sample size. The ability to account for heterogeneity in habitat use provides a useful extension to SCR models, and will enhance the ability of wildlife managers to reliably and economically estimate density of wildlife populations, particularly large carnivores.
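In generic SCR notation, the extension described above can be sketched as follows (our notation, with a half-normal detection function, a common SCR choice; the paper's exact specification may differ):

```latex
\log \lambda(\mathbf{s}) = \beta_0 + \beta_1\,\mathrm{RSF}(\mathbf{s}),
\qquad
p_{ij} = p_0 \exp\!\left(-\frac{\lVert \mathbf{x}_j - \mathbf{s}_i \rVert^2}{2\sigma^2}\right)
```

where λ(s) is the intensity of the point process of activity centres at location s (the RSF entering as a density covariate), and p_ij is the probability that individual i with activity centre s_i is detected at sampling location x_j.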
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
A transport model for computer simulation of wildfires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linn, R.
1997-12-31
Realistic self-determining simulation of wildfires is a difficult task because of a large variety of important length scales (including scales on the size of twigs or grass and the size of large trees), imperfect data, complex fluid mechanics and heat transfer, and very complicated chemical reactions. The author uses a transport approach to produce a model that exhibits a self-determining propagation rate. The transport approach allows him to represent a large number of environments such as those with nonhomogeneous vegetation and terrain. He accounts for the microscopic details of a fire with macroscopic resolution by dividing quantities into mean and fluctuating parts similar to what is done in traditional turbulence modeling. These divided quantities include fuel, wind, gas concentrations, and temperature. Reaction rates are limited by the mixing process and not the chemical kinetics. The author has developed a model that includes the transport of multiple gas species, such as oxygen and volatile hydrocarbons, and tracks the depletion of various fuels and other stationary solids and liquids. From this model he develops a simplified local burning model with which he performs a number of simulations that demonstrate that he is able to capture the important physics with the transport approach. With this simplified model he is able to pick up the essence of wildfire propagation, including such features as acceleration when transitioning to upsloping terrain, deceleration of fire fronts when they reach downslopes, and crowning in the presence of high winds.
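The splitting into mean and fluctuating parts referred to here is the standard Reynolds decomposition; for any transported quantity φ (fuel, wind, gas concentration, temperature):

```latex
\phi = \bar{\phi} + \phi', \qquad \overline{\phi'} = 0
```

so that averaging the transport equations leaves correlations of the fluctuating parts to be modeled, as in conventional turbulence closures.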
ERIC Educational Resources Information Center
Andrews, Dee H.; Dineen, Toni; Bell, Herbert H.
1999-01-01
Discusses the use of constructive modeling and virtual simulation in team training; describes a military application of constructive modeling, including technology issues and communication protocols; considers possible improvements; and discusses applications in team-learning environments other than military, including industry and education. (LRW)
The relativistic feedback discharge model of terrestrial gamma ray flashes
NASA Astrophysics Data System (ADS)
Dwyer, Joseph R.
2012-02-01
As thunderclouds charge, the large-scale fields may approach the relativistic feedback threshold, above which the production of relativistic runaway electron avalanches becomes self-sustaining through the generation of backward propagating runaway positrons and backscattered X-rays. Positive intracloud (IC) lightning may force the large-scale electric fields inside thunderclouds above the relativistic feedback threshold, causing the number of runaway electrons, and the resulting X-ray and gamma ray emission, to grow exponentially, producing very large fluxes of energetic radiation. As the flux of runaway electrons increases, ionization eventually causes the electric field to discharge, bringing the field below the relativistic feedback threshold again and reducing the flux of runaway electrons. These processes are investigated with a new model that includes the production, propagation, diffusion, and avalanche multiplication of runaway electrons; the production and propagation of X-rays and gamma rays; and the production, propagation, and annihilation of runaway positrons. In this model, referred to as the relativistic feedback discharge model, the large-scale electric fields are calculated self-consistently from the charge motion of the drifting low-energy electrons and ions, produced from the ionization of air by the runaway electrons, including two- and three-body attachment and recombination. Simulation results show that when relativistic feedback is considered, bright gamma ray flashes are a natural consequence of upward +IC lightning propagating in large-scale thundercloud fields. Furthermore, these flashes have the same time structures, including both single and multiple pulses, intensities, angular distributions, current moments, and energy spectra as terrestrial gamma ray flashes, and produce large current moments that should be observable in radio waves.
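Schematically, if each avalanche cycle of duration τ multiplies the runaway electron population by a feedback factor γ, the flux grows as

```latex
N(t) \;\propto\; \gamma^{\,t/\tau} \;=\; \exp\!\left(\frac{\ln\gamma}{\tau}\,t\right)
```

so the emission grows exponentially once the field exceeds the feedback threshold γ ≥ 1 (our notation, following the runaway-feedback picture described above).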
Tectonic History of the Terrestrial Planets
NASA Technical Reports Server (NTRS)
Solomon, Sean C.
1993-01-01
The topics covered include the following: patterns of deformation and volcanic flows associated with lithospheric loading by large volcanoes on Venus; aspects of modeling the tectonics of large volcanoes on the terrestrial planets; state of stress, faulting, and eruption characteristics of large volcanoes on Mars; origin and thermal evolution of Mars; geoid-to-topography ratios on Venus; a tectonic resurfacing model for Venus; the resurfacing controversy for Venus; and the deformation belts of Lavinia Planitia.
NASA Technical Reports Server (NTRS)
Spinks, Debra (Compiler)
1997-01-01
This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plain asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeroacoustics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free-surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.
Shared and Distinct Rupture Discriminants of Small and Large Intracranial Aneurysms.
Varble, Nicole; Tutino, Vincent M; Yu, Jihnhee; Sonig, Ashish; Siddiqui, Adnan H; Davies, Jason M; Meng, Hui
2018-04-01
Many ruptured intracranial aneurysms (IAs) are small. Clinical presentations suggest that small and large IAs could have different phenotypes. It is unknown if small and large IAs have different characteristics that discriminate rupture. We analyzed morphological, hemodynamic, and clinical parameters of 413 retrospectively collected IAs (training cohort; 102 ruptured IAs). Hierarchical cluster analysis was performed to determine a size cutoff to dichotomize the IA population into small and large IAs. We applied multivariate logistic regression to build rupture discrimination models for small IAs, large IAs, and an aggregation of all IAs. We validated the ability of these 3 models to predict rupture status in a second, independently collected cohort of 129 IAs (testing cohort; 14 ruptured IAs). Hierarchical cluster analysis in the training cohort confirmed that small and large IAs are best separated at 5 mm based on morphological and hemodynamic features (area under the curve=0.81). For small IAs (<5 mm), the resulting rupture discrimination model included undulation index, oscillatory shear index, previous subarachnoid hemorrhage, and absence of multiple IAs (area under the curve=0.84; 95% confidence interval, 0.78-0.88), whereas for large IAs (≥5 mm), the model included undulation index, low wall shear stress, previous subarachnoid hemorrhage, and IA location (area under the curve=0.87; 95% confidence interval, 0.82-0.93). The model for the aggregated training cohort retained all the parameters in the size-dichotomized models. Results in the testing cohort showed that the size-dichotomized rupture discrimination model had higher sensitivity (64% versus 29%) and accuracy (77% versus 74%), marginally higher area under the curve (0.75; 95% confidence interval, 0.61-0.88 versus 0.67; 95% confidence interval, 0.52-0.82), and similar specificity (78% versus 80%) compared with the aggregate-based model. Small (<5 mm) and large (≥5 mm) IAs have different hemodynamic and clinical, but not morphological, rupture discriminants. Size-dichotomized rupture discrimination models performed better than the aggregate model. © 2018 American Heart Association, Inc.
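The rupture discrimination models are multivariate logistic regressions; for the small-IA model, for example, the fitted form is (coefficients not reported in the abstract; symbols ours):

```latex
\operatorname{logit} P(\text{rupture})
  = \beta_0 + \beta_1\,\mathrm{UI} + \beta_2\,\mathrm{OSI}
  + \beta_3\,\mathrm{SAH} + \beta_4\,\mathrm{MIA}
```

where UI is the undulation index, OSI the oscillatory shear index, SAH previous subarachnoid hemorrhage, and MIA the presence of multiple IAs (which enters the small-IA model through its absence).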
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
..., large, non-metallic panels in their designs. In order to provide a level of safety that is equivalent to... with Non-Traditional, Large, Non-Metallic Panels AGENCY: Federal Aviation Administration (FAA), DOT... have novel or unusual design features associated with seats that include non-traditional, large, non...
Hurdles to Overcome to Model Carrington Class Events
NASA Astrophysics Data System (ADS)
Engel, M.; Henderson, M. G.; Jordanova, V. K.; Morley, S.
2017-12-01
Large geomagnetic storms pose a threat to both space- and ground-based infrastructure. In order to help mitigate that threat, a better understanding of the specifics of these storms is required. Various computer models are being used around the world to analyze the magnetospheric environment; however, they are largely inadequate for analyzing large and extreme storm-time environments. Here we report on the first steps towards expanding and robustifying the RAM-SCB inner magnetospheric model, used in conjunction with BATS-R-US and the Space Weather Modeling Framework, in order to simulate storms with Dst > -400. These results will then be used to help expand our modelling capabilities towards including Carrington-class events.
NASA Astrophysics Data System (ADS)
Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy
2017-09-01
An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components, including subgrid-scale turbulence, combustion, soot and radiation models, which are fully coupled. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Several turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers of 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
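The parameters varied in the study enter through the standard Smagorinsky subgrid-scale closure, in which the subgrid viscosity is converted to heat and species diffusivities by the turbulent Prandtl and Schmidt numbers:

```latex
\nu_{\mathrm{sgs}} = (C_s\,\Delta)^2\,\lvert\bar{S}\rvert,
\qquad
\alpha_t = \frac{\nu_{\mathrm{sgs}}}{Pr_t},
\qquad
D_t = \frac{\nu_{\mathrm{sgs}}}{Sc_t}
```

where C_s is the Smagorinsky constant, Δ the filter width, and |S̄| the magnitude of the resolved strain-rate tensor.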
Hierarchical models of very large problems, dilemmas, prospects, and an agenda for the future
NASA Technical Reports Server (NTRS)
Richardson, J. M., Jr.
1975-01-01
Interdisciplinary approaches to the modeling of global problems are discussed in terms of multilevel cooperation. A multilevel regionalized model of the Lake Erie Basin is analyzed along with a multilevel regionalized world modeling project. Other topics discussed include: a stratified model of interacting region in a world system, and the application of the model to the world food crisis in south Asia. Recommended research for future development of integrated models is included.
Model-based diagnosis of large diesel engines based on angular speed variations of the crankshaft
NASA Astrophysics Data System (ADS)
Desbazeille, M.; Randall, R. B.; Guillet, F.; El Badaoui, M.; Hoisnard, C.
2010-07-01
This work aims at monitoring large diesel engines by analyzing the crankshaft angular speed variations. It focuses on a powerful 20-cylinder diesel engine with crankshaft natural frequencies within the operating speed range. First, the angular speed variations are modeled at the crankshaft free end. This includes modeling both the crankshaft dynamical behavior and the excitation torques. As the engine is very large, the first crankshaft torsional modes are in the low frequency range. A model with the assumption of a flexible crankshaft is required. The excitation torques depend on the in-cylinder pressure curve. The latter is modeled with a phenomenological model. Mechanical and combustion parameters of the model are optimized with the help of actual data. Then, an automated diagnosis based on an artificially intelligent system is proposed. Neural networks are used for pattern recognition of the angular speed waveforms in normal and faulty conditions. Reference patterns required in the training phase are computed with the model, calibrated using a small number of actual measurements. Promising results are obtained. An experimental fuel leakage fault is successfully diagnosed, including detection and localization of the faulty cylinder, as well as the approximation of the fault severity.
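A minimal sketch of the pattern-recognition step follows (data shapes hypothetical; as described above, the reference patterns used for training come from the calibrated model rather than random data):

```python
# Classify crank-angle-resolved speed waveforms into engine-condition classes.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder training data: each row is one engine cycle of angular speed
# resampled to 360 crank-angle bins; labels 0 = healthy, 1..20 = faulty cylinder.
X_train = rng.normal(size=(210, 360))
y_train = np.repeat(np.arange(21), 10)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(X_train[:3]))  # in practice: waveforms measured on the engine
```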
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
An Agent Based Collaborative Simplification of 3D Mesh Model
NASA Astrophysics Data System (ADS)
Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro
Large-volume mesh models face challenges in fast rendering and transmission over the Internet. Mesh models obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the Mobile-C development platform. Communication among distributed agents includes grabbing images of the visualized mesh model, annotating grabbed images, and instant messaging. Remote and collaborative simplification can be conducted efficiently over the Internet.
NASA Astrophysics Data System (ADS)
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
NASA Astrophysics Data System (ADS)
Black, R. X.
2017-12-01
We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.
Modeling space-time correlations of velocity fluctuations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2018-07-01
An analytical model for the streamwise velocity space-time correlations in turbulent flows is derived and applied to the special case of velocity fluctuations in large wind farms. The model is based on the Kraichnan-Tennekes random sweeping hypothesis, capturing the decorrelation in time while including a mean wind velocity in the streamwise direction. In the resulting model, the streamwise velocity space-time correlation is expressed as a convolution of the pure space correlation with an analytical temporal decorrelation kernel. Hence, the spatio-temporal structure of velocity fluctuations in wind farms can be derived from the spatial correlations only. We then explore the applicability of the model to predict spatio-temporal correlations in turbulent flows in wind farms. Comparisons of the model with data from a large eddy simulation of flow in a large, spatially periodic wind farm are performed, where needed model parameters such as spatial and temporal integral scales and spatial correlations are determined from the large eddy simulation. Good agreement is obtained between the model and large eddy simulation data showing that spatial data may be used to model the full temporal structure of fluctuations in wind farms.
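Concretely, under the random sweeping hypothesis with a mean streamwise advection velocity U, the model takes the form (notation may differ slightly from the paper's):

```latex
R(\mathbf{r},\tau)
  = \int R(\mathbf{r} - \mathbf{U}\tau - \mathbf{s},\,0)\,
        k(\mathbf{s};\tau)\,\mathrm{d}\mathbf{s},
\qquad
k(\mathbf{s};\tau)
  = \frac{1}{\left(2\pi v^2\tau^2\right)^{3/2}}
    \exp\!\left(-\frac{\lVert\mathbf{s}\rVert^2}{2\,v^2\tau^2}\right)
```

where the temporal decorrelation kernel k is Gaussian with a width set by the sweeping-velocity variance v², so the full space-time correlation follows from the pure space correlation alone.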
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
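As a toy illustration of how a large initial-condition ensemble isolates internal variability, consider the following sketch (synthetic data, not the study's statistical framework): every member shares a forced trend, and the spread of residuals about the ensemble mean estimates the internal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic large ensemble: every member shares the forced trend; members
# differ only through internally generated variability (simplified here to
# white noise). All values are illustrative, not from the study.
years = np.arange(2000, 2100)
forced = 0.02 * (years - years[0])                  # forced warming trend
members = forced + rng.normal(0.0, 0.3, size=(40, years.size))

forced_est = members.mean(axis=0)                   # forced-response estimate
internal = members - forced_est                     # internal variability
sigma_internal = internal.std(axis=0, ddof=1).mean()
print(f"estimated internal variability: {sigma_internal:.3f} (truth: 0.300)")
```

An ensemble of opportunity adds a model dimension; partitioning the total spread into internal and structural parts then requires the kind of simultaneous statistical framework the abstract refers to.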
Kuroshio Pathways in a Climatologically-Forced Model
NASA Astrophysics Data System (ADS)
Douglass, E. M.; Jayne, S. R.; Bryan, F. O.; Peacock, S.; Maltrud, M. E.
2010-12-01
A high resolution ocean model forced with an annually repeating atmosphere is used to examine variability of the Kuroshio, the western boundary current in the North Pacific Ocean. A large meander in the path of the Kuroshio south of Japan develops and disappears in a highly bimodal fashion on decadal time scales. This meander is comparable in timing and spatial extent to an observed feature in the region. Various characteristics of the large meander are examined, including shear, transport and velocity. The many similarities between the model and observations indicate that the meander results from intrinsic oceanic variability, which is represented in this climatologically-forced model. Each large meander is preceded by a smaller "trigger" meander that originates at the southern end of Kyushu, moves up the coast, and develops into the large meander. However, there are also many meanders very similar in character to the trigger meander that do not develop into large meanders. The mechanism that determines which trigger meanders develop into large meanders is as yet undetermined.
USDA-ARS's Scientific Manuscript database
The wheat pathogen Stagonospora nodorum, causal organism of the wheat disease Stagonospora nodorum blotch, has emerged as a model for the Dothideomycetes, a large fungal taxon that includes many important plant pathogens. The initial annotation of the genome assembly included 16 586 nuclear gene mod...
Time dependent turbulence modeling and analytical theories of turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, R.
1993-01-01
By simplifying the direct interaction approximation (DIA) for turbulent shear flow, time dependent formulas are derived for the Reynolds stresses which can be included in two-equation models. The Green's function is treated phenomenologically; however, following Smith and Yakhot, we insist on the short- and long-time limits required by DIA. For small strain rates, perturbative evaluation of the correlation function yields a time dependent theory which includes normal stress effects in simple shear flows. From this standpoint, the phenomenological Launder-Reece-Rodi model is obtained by replacing the Green's function by its long time limit. Eddy damping corrections to short time behavior initiate too quickly in this model; in contrast, the present theory exhibits strong suppression of eddy damping at short times. A time dependent theory for large strain rates is proposed in which large scales are governed by rapid distortion theory while small scales are governed by Kolmogorov inertial range dynamics. At short times and large strain rates, the theory closely matches rapid distortion theory, but at long times it relaxes to an eddy damping model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crater, Jason; Galleher, Connor; Lievense, Jeff
NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.
Large-scale shell-model calculation with core excitations for neutron-rich nuclei beyond 132Sn
NASA Astrophysics Data System (ADS)
Jin, Hua; Hasegawa, Munetake; Tazaki, Shigeru; Kaneko, Kazunari; Sun, Yang
2011-10-01
The structure of neutron-rich nuclei with a few nucleons beyond 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including neutron core excitations, a new effective interaction is determined by employing the extended pairing-plus-quadrupole model with monopole corrections. The model provides a systematic description of the energy levels of A=133-135 nuclei up to high spins and reproduces the available data on electromagnetic transitions. The structure of these nuclei is analyzed in detail, with emphasis on effects associated with core excitations. The results show evidence of hexadecupole correlation in addition to octupole correlation in this mass region. The suggested feature of magnetic rotation in 135Te also occurs in the present shell-model calculation.
Advanced finite element modeling of rotor blade aeroelasticity
NASA Technical Reports Server (NTRS)
Straub, F. K.; Sangha, K. B.; Panda, B.
1994-01-01
An advanced beam finite element has been developed for modeling rotor blade dynamics and aeroelasticity. This element is part of the Element Library of the Second Generation Comprehensive Helicopter Analysis System (2GCHAS). The element allows modeling of arbitrary rotor systems, including bearingless rotors. It accounts for moderately large elastic deflections, anisotropic properties, large frame motion for maneuver simulation, and allows for variable order shape functions. The effects of gravity, mechanically applied and aerodynamic loads are included. All kinematic quantities required to compute airloads are provided. In this paper, the fundamental assumptions and derivation of the element matrices are presented. Numerical results are shown to verify the formulation and illustrate several features of the element.
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
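For concreteness, the elliptic differential filter underlying the model can be sketched in one dimension: the filtered field satisfies (1 - α²∂²/∂x²)ū = u, which is trivial to invert in Fourier space, and the modeled subgrid velocity is the difference u - ū. The minimal sketch below uses synthetic data and an illustrative filter width α (in the actual model, α is set dynamically from the estimated subgrid energetics):

```python
import numpy as np

# Elliptic differential filter in 1D: (1 - alpha^2 d2/dx2) u_bar = u,
# i.e. u_bar_hat = u_hat / (1 + (alpha*k)^2) in Fourier space. The
# subgrid-scale velocity estimate is then u' = u - u_bar.
n, L = 512, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

rng = np.random.default_rng(1)
spectrum = np.exp(-0.1 * np.abs(k))                 # synthetic smooth spectrum
u = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * spectrum))

def elliptic_filter(u, alpha):
    u_hat = np.fft.fft(u)
    return np.real(np.fft.ifft(u_hat / (1.0 + (alpha * k) ** 2)))

alpha = 0.2                                         # illustrative filter width
u_bar = elliptic_filter(u, alpha)
u_sgs = u - u_bar                                   # modeled subgrid velocity
print(f"resolved energy {np.mean(u_bar**2):.4f}, subgrid energy {np.mean(u_sgs**2):.4f}")
```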
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
On the role of minicomputers in structural design
NASA Technical Reports Server (NTRS)
Storaasli, O. O.
1977-01-01
Results are presented of exploratory studies on the use of a minicomputer in conjunction with large-scale computers to perform structural design tasks, including data and program management, use of interactive graphics, and computations for structural analysis and design. An assessment is made of minicomputer use for the structural model definition and checking and for interpreting results. Included are results of computational experiments demonstrating the advantages of using both a minicomputer and a large computer to solve a large aircraft structural design problem.
Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools
NASA Astrophysics Data System (ADS)
Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew
2017-11-01
We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
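A minimal usage sketch of the workflow just described, assuming Halotools is installed and the Bolshoi halo catalog has been downloaded into the local cache (names follow the package documentation; the luminosity threshold and radial bins are illustrative):

```python
import numpy as np
from halotools.sim_manager import CachedHaloCatalog
from halotools.empirical_models import PrebuiltHodModelFactory
from halotools.mock_observables import tpcf

# Populate a dark matter halo catalog with an HOD-style galaxy model,
# then measure the two-point clustering of the resulting mock.
halocat = CachedHaloCatalog(simname="bolshoi", redshift=0)
model = PrebuiltHodModelFactory("zheng07", threshold=-20)
model.populate_mock(halocat)

gals = model.mock.galaxy_table
pos = np.vstack((gals["x"], gals["y"], gals["z"])).T
rbins = np.logspace(-1, 1.25, 15)                   # Mpc/h, illustrative
xi = tpcf(pos, rbins, period=halocat.Lbox)          # real-space clustering
print(xi)
```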
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.
2014-12-01
Often, scientists and small research groups collect data that target specific issues and have limited geographic or temporal range. A large number of such collections together constitute a database of immense value to Earth Science studies. Complexities of integrating these data include heterogeneity in dimensions, coordinate systems, scales, variables, providers, users and contexts. They have been defined as long-tail data. Similarly, we use "long-tail models" to characterize a heterogeneous collection of models and/or modules developed for targeted problems by individuals and small groups, which together provide a large, valuable collection. Complexities of integrating across these models include differing variable names and units for the same concept, model runs at different time steps and spatial resolutions, use of differing naming and reference conventions, etc. The ability to integrate long-tail models and data will provide an opportunity for the interoperability and reusability of communities' resources, where not only can models be combined in a workflow, but each model will be able to discover and (re)use data in the application-specific context of space, time and questions. This capability is essential to represent, understand, predict, and manage heterogeneous and interconnected processes and activities by harnessing the complex, heterogeneous, and extensive set of distributed resources. Because of the staggering production rate of long-tail models and data resulting from advances in computational, sensing, and information technologies, an important challenge arises: how can geoinformatics bring together these resources seamlessly, given the inherent complexity among model and data resources that span various domains? We will present a semantic-based framework to support integration of "long-tail" models and data. This builds on existing technologies including: (i) SEAD (Sustainable Environmental Actionable Data), which supports curation and preservation of long-tail data during its life-cycle; (ii) BrownDog, which enhances the machine interpretability of large unstructured and uncurated data; and (iii) CSDMS (Community Surface Dynamics Modeling System), which "componentizes" models by providing a plug-and-play environment for model integration.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
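To make the setup concrete, here is a minimal real-coded GA sketch on a multi-hill model problem in the spirit of the abstract (this is not the authors' test suite; the hill shapes, population size, and operator rates are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Real-coded GA on a synthetic multi-hill fitness landscape. The number of
# genes, hills, and all operator rates are illustrative; the global optimum
# is the tallest Gaussian hill, and multi-modality can trap the search.
n_genes, n_hills = 4, 6
centers = rng.uniform(0.0, 1.0, size=(n_hills, n_genes))
heights = rng.uniform(0.5, 1.0, size=n_hills)

def fitness(pop):
    d2 = ((pop[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return (heights * np.exp(-d2 / 0.1)).max(axis=1)

pop = rng.uniform(0.0, 1.0, size=(100, n_genes))
for gen in range(200):
    f = fitness(pop)
    a, b = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[a] > f[b])[:, None], pop[a], pop[b])   # tournaments
    mask = rng.random(pop.shape) < 0.5                           # uniform crossover
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    mutate = rng.random(children.shape) < 0.1                    # mutation
    children += mutate * rng.normal(0.0, 0.05, children.shape)
    pop = np.clip(children, 0.0, 1.0)

print(f"best fitness {fitness(pop).max():.4f}; tallest hill {heights.max():.4f}")
```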
Stratosphere-resolving CMIP5 models simulate different changes in the Southern Hemisphere
NASA Astrophysics Data System (ADS)
Rea, Gloria; Riccio, Angelo; Fierli, Federico; Cairo, Francesco; Cagnazzo, Chiara
2018-03-01
This work documents long-term changes in the Southern Hemisphere circulation in the austral spring-summer season in the Coupled Model Intercomparison Project Phase 5 models, showing that those changes are larger in magnitude and closer to ERA-Interim and other reanalyses if models include a dynamical representation of the stratosphere. Specifically, high-top models that include dynamical and, in some cases, chemical feedbacks within the stratosphere better simulate the lower stratospheric cooling observed over 1979-2001 and strongly driven by ozone depletion, when compared to the other models. This occurs because high-top models can fully capture the stratospheric large scale circulation response to the ozone-induced cooling. Interestingly, this difference is also found at the surface for Southern Annular Mode (SAM) changes, even though all model categories tend to underestimate SAM trends over those decades. In this analysis, models including a proper dynamical stratosphere are more sensitive to lower stratospheric cooling in their tropospheric circulation response. After a brief discussion of two RCP scenarios, our study confirms that, at least for large changes in the extratropical regions, stratospheric changes induced by external forcing have to be properly simulated, as they are important drivers of tropospheric climate variations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchfield, M. J.; Moriarty, P. J.; Hao, Y.
The focus of this work is the comparison of the dynamic wake meandering model and large-eddy simulation with field data from the Egmond aan Zee offshore wind plant composed of 36 3-MW turbines. The field data include meteorological mast measurements, SCADA information from all turbines, and strain-gauge data from two turbines. The dynamic wake meandering model and large-eddy simulation are means of computing unsteady wind plant aerodynamics, including the important unsteady meandering of wakes as they convect downstream and interact with other turbines and wakes. Both of these models are coupled to a turbine model such that power and mechanical loads of each turbine in the wind plant are computed. We are interested in how accurately different types of waking (e.g., direct versus partial waking) can be modeled, and how background turbulence level affects these loads. We show that both the dynamic wake meandering model and large-eddy simulation appear to underpredict power and overpredict fatigue loads because of wake effects, but it is unclear whether they are truly in error. This discrepancy may be caused by wind-direction uncertainty in the field data, which tends to make wake effects appear less pronounced.
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
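As an illustration of the core computation, the sketch below fits a single-marker mixed model y = Xβ + g + e with g ~ N(0, σg²K) using the spectral transformation popularized by EMMA/EMMAX-style solvers (rotating by the eigenvectors of the kinship matrix K makes the covariance diagonal, so each likelihood evaluation is cheap). This is a bare-bones ML version for illustration, not the code of any specific package reviewed in the paper:

```python
import numpy as np
from scipy import optimize, stats

def fit_mixed_model(y, X, K):
    """ML fit of y = X b + g + e with g ~ N(0, sg2*K), e ~ N(0, se2*I),
    via the spectral trick: rotating by the eigenvectors of K makes the
    covariance diagonal, so each likelihood evaluation is O(n)."""
    n = len(y)
    S, U = np.linalg.eigh(K)                 # K = U diag(S) U^T
    yt, Xt = U.T @ y, U.T @ X                # rotated data

    def neg_loglik(log_delta):               # delta = se2 / sg2
        d = S + np.exp(log_delta)
        W = Xt / d[:, None]
        b = np.linalg.solve(Xt.T @ W, W.T @ yt)
        r = yt - Xt @ b
        sg2 = (r ** 2 / d).sum() / n         # profiled-out genetic variance
        return 0.5 * (n * np.log(2 * np.pi * sg2) + np.log(d).sum() + n)

    res = optimize.minimize_scalar(neg_loglik, bounds=(-10.0, 10.0), method="bounded")
    d = S + np.exp(res.x)
    W = Xt / d[:, None]
    XtWX = Xt.T @ W
    b = np.linalg.solve(XtWX, W.T @ yt)
    sg2 = (((yt - Xt @ b) ** 2) / d).sum() / n
    se = np.sqrt(np.diag(np.linalg.inv(XtWX)) * sg2)
    return b, 2 * stats.norm.sf(np.abs(b / se))   # effects and Wald p-values

# Synthetic check: kinship from random markers, one causal SNP tested.
rng = np.random.default_rng(0)
n, m = 200, 500
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
Gs = (G - G.mean(0)) / (G.std(0) + 1e-9)
K = Gs @ Gs.T / m                              # marker-based kinship
snp = G[:, 0]
y = 0.4 * snp + rng.multivariate_normal(np.zeros(n), 0.5 * K + 0.5 * np.eye(n))
X = np.column_stack([np.ones(n), snp])         # intercept + tested SNP
b, p = fit_mixed_model(y, X, K)
print(f"SNP effect {b[1]:.3f}, p-value {p[1]:.2e}")
```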
SWAT: Model use, calibration, and validation
USDA-ARS's Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...
Ip, Ryan H L; Li, W K; Leung, Kenneth M Y
2013-09-15
Large scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study investigates the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects: Model (1) assumes no correlation within or across variables; Model (2) assumes no correlation across variables but allows correlations within a variable across different sites; and Model (3) allows all possible correlations among variables (i.e., an unrestricted model). The results suggest that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discuss our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. Copyright © 2013 Elsevier Ltd. All rights reserved.
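The contrast among the three specifications comes down to which error covariances are estimated. Below is a minimal two-step FGLS sketch of an unrestricted SUR system with a step-type intervention effect; the data are synthetic stand-ins for the monitoring series (the actual study used marine water quality records from Hong Kong stations):

```python
import numpy as np

rng = np.random.default_rng(7)

# Unrestricted SUR with an intervention (step) regressor, estimated by
# two-step feasible GLS. T time points, m equations (sites/variables).
T, m, T0 = 120, 3, 60
step = (np.arange(T) >= T0).astype(float)   # intervention indicator
X = np.column_stack([np.ones(T), step])     # same regressors in each equation

true_b = np.array([[5.0, -1.0], [4.0, -0.5], [6.0, -1.5]])
Sigma = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.5], [0.3, 0.5, 1.0]])
E = rng.multivariate_normal(np.zeros(m), Sigma, size=T)
Y = X @ true_b.T + E                        # T x m observations

# Step 1: equation-by-equation OLS to estimate the error covariance.
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ B_ols
S_hat = resid.T @ resid / T

# Step 2: GLS on the stacked system using the estimated covariance.
Si = np.linalg.inv(S_hat)
XtX = np.kron(Si, X.T @ X)                  # stacked normal equations
Xty = (X.T @ Y @ Si).T.reshape(-1)          # stacked right-hand side
b_fgls = np.linalg.solve(XtX, Xty).reshape(m, -1)
print("FGLS intervention effects:", b_fgls[:, 1].round(3))
```

Exploiting the cross-equation error correlations is what gives the unrestricted model its efficiency gain over the restricted specifications.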
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-27
... that incorporate non-traditional, large, non-metallic panels. To provide a level of safety equivalent... With Non-Traditional, Large, Non-Metallic Panels AGENCY: Federal Aviation Administration (FAA), DOT...., will have a novel or unusual design feature associated with seats that include non-traditional, large...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-04
... that incorporate non-traditional, large, non-metallic panels. To provide a level of safety equivalent...; Passenger Seats With Non-Traditional, Large, Non-Metallic Panels AGENCY: Federal Aviation Administration... novel or unusual design feature associated with seats that include non- traditional, large, non-metallic...
When the Test of Mediation is More Powerful than the Test of the Total Effect
O'Rourke, Holly P.; MacKinnon, David P.
2014-01-01
Although previous research has studied power in mediation models, the extent to which the inclusion of a mediator will increase power has not been investigated. First, a study compared analytical power of the mediated effect to the total effect in a single mediator model to identify the situations in which the inclusion of one mediator increased statistical power. Results from the first study indicated that including a mediator increased statistical power in small samples with large coefficients and in large samples with small coefficients, and when coefficients were non-zero and equal across models. Next, a study identified conditions where power was greater for the test of the total mediated effect compared to the test of the total effect in the parallel two mediator model. Results indicated that including two mediators increased power in small samples with large coefficients and in large samples with small coefficients, the same pattern of results found in the first study. Finally, a study assessed analytical power for a sequential (three-path) two mediator model and compared power to detect the three-path mediated effect to power to detect both the test of the total effect and the test of the mediated effect for the single mediator model. Results indicated that the three-path mediated effect had more power than the mediated effect from the single mediator model and the test of the total effect. Practical implications of these results for researchers are then discussed. PMID:24903690
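The first study's headline result is easy to reproduce in miniature: simulate the single-mediator model X → M → Y and compare the power of the joint-significance test of the mediated effect (both a and b significant) with the test of the total effect. A hedged sketch with illustrative coefficient values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def ols_pvalues(Z, y):
    """OLS with classical standard errors; returns two-sided p-values."""
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    df = len(y) - Z.shape[1]
    s2 = resid @ resid / df
    se = np.sqrt(np.diag(np.linalg.inv(Z.T @ Z)) * s2)
    return 2 * stats.t.sf(np.abs(beta / se), df)

def power(n, a, b, c_prime, reps=2000, alpha=0.05):
    """Power of the joint-significance mediation test vs. the total-effect test."""
    hit_med = hit_tot = 0
    for _ in range(reps):
        X = rng.normal(size=n)
        M = a * X + rng.normal(size=n)
        Y = c_prime * X + b * M + rng.normal(size=n)
        ZX = np.column_stack([np.ones(n), X])
        ZXM = np.column_stack([np.ones(n), X, M])
        p_a = ols_pvalues(ZX, M)[1]       # a path: M ~ X
        p_b = ols_pvalues(ZXM, Y)[2]      # b path: Y ~ X + M
        p_c = ols_pvalues(ZX, Y)[1]       # total effect: Y ~ X
        hit_med += (p_a < alpha) and (p_b < alpha)
        hit_tot += p_c < alpha
    return hit_med / reps, hit_tot / reps

# Small sample, large paths, no direct effect: the mediation test tends to
# be more powerful than the total-effect test, matching the pattern above.
p_med, p_tot = power(n=50, a=0.59, b=0.59, c_prime=0.0)
print(f"power(mediated)={p_med:.2f}  power(total)={p_tot:.2f}")
```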
Susan Will-Wolf; Peter Neitlich
2010-01-01
Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...
Data and Model Integration Promoting Interdisciplinarity
NASA Astrophysics Data System (ADS)
Koike, T.
2014-12-01
It is very difficult to reflect accumulated subsystem knowledge in holistic knowledge, and knowledge about a whole system can rarely be introduced into a targeted subsystem. In many cases, knowledge in one discipline is inapplicable to other disciplines, and we are far from resolving cross-disciplinary issues. It is critically important to establish interdisciplinarity so that scientific knowledge can transcend disciplines. We need to share information and develop knowledge interlinkages by building models and exchanging tools, and we need to tackle a large increase in the volume and diversity of data from observing the Earth. The volume of data stored has increased exponentially. Previously, almost all of the large-volume data came from satellites, but model outputs now occupy the largest volume overall. To address the large diversity of data, we should develop an ontology system for technical and geographical terms coupled with a metadata design that follows international standards. In collaboration between Earth-environment scientists and IT groups, we should accelerate data archiving, including data loading, quality checking and metadata registration, and enrich data-searching capability. DIAS (the Data Integration and Analysis System) also enables us to perform integrated research and realize interdisciplinarity. For example, climate change should be addressed in collaboration between climate models; integrated assessment models covering energy, economy, agriculture and health; and models of adaptation, vulnerability, and human settlement and infrastructure. These models identify water as central to these systems. If a water expert can develop an interrelated system including each component, the integrated crisis can be addressed through collaboration across disciplines. To realize this purpose, we are developing a water-related data- and model-integration system called a water cycle integrator (WCI).
Advances in multi-scale modeling of solidification and casting processes
NASA Astrophysics Data System (ADS)
Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang
2011-04-01
The development of the aviation, energy and automobile industries requires advanced integrated product/process R&D systems that can optimize both product and process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in the paper. Dendrite morphology of magnesium and aluminum alloys during solidification is studied using phase field and cellular automaton methods, along with mathematical models of segregation in large steel ingots and microstructure models of unidirectionally solidified turbine blade castings. In addition, some engineering case studies are discussed, including microstructure simulation of aluminum castings for the automobile industry, segregation in large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry.
A High-Resolution Model of the Beaufort Sea Circulation
NASA Astrophysics Data System (ADS)
Hedstrom, K.; Danielson, S. L.; Curchitser, E. N.; Lemieux, J. F.; Kasper, J.
2016-02-01
Configuration of and results from a coupled sea-ice ocean model of the Beaufort Sea shelf at 900 m resolution will be shown. Challenging features of the domain include the large freshwater flux from the Mackenzie River, seasonal land-fast ice, and ice-covered open boundary conditions. A pan-Arctic domain provides boundary fields for both the ocean and sea-ice models (Regional Ocean Modeling System - myroms.org). Both models are forced with river inputs from the ARDAT climatology (Whitefield et al., 2015), which includes heat content as well as flow rate. Coastal discharges are prescribed as lateral inflows distributed over the depth of the ocean-land interface. New in the Beaufort domain is the use of a landfast ice parameterization (Lemieux, 2015), which adds a large bottom stress to the ice when the estimated keel depth approaches that of the ocean.
Global fits of GUT-scale SUSY models with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.
Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling
NASA Astrophysics Data System (ADS)
Saksena, S.; Dey, S.; Merwade, V.
2016-12-01
Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.
Evaluation of grid generation technologies from an applied perspective
NASA Technical Reports Server (NTRS)
Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.
1995-01-01
An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turn-around time for specific grid generation and CFD projects. It is concluded that a single grid generation methodology is not universally suited for all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.
ERIC Educational Resources Information Center
Shin, Tacksoo
2012-01-01
This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…
DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation
Sherfey, Jason S.; Soplata, Austin E.; Ardid, Salva; Roberts, Erik A.; Stanley, David A.; Pittman-Polletta, Benjamin R.; Kopell, Nancy J.
2018-01-01
DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community. PMID:29599715
Higher-level simulations of turbulent flows
NASA Technical Reports Server (NTRS)
Ferziger, J. H.
1981-01-01
The fundamentals of large eddy simulation are considered and the approaches to it are compared. Subgrid scale models and the development of models for the Reynolds-averaged equations are discussed as well as the use of full simulation in testing these models. Numerical methods used in simulating large eddies, the simulation of homogeneous flows, and results from full and large scale eddy simulations of such flows are examined. Free shear flows are considered with emphasis on the mixing layer and wake simulation. Wall-bounded flow (channel flow) and recent work on the boundary layer are also discussed. Applications of large eddy simulation and full simulation in meteorological and environmental contexts are included along with a look at the direction in which work is proceeding and what can be expected from higher-level simulation in the future.
Global Bedload Flux Modeling and Analysis in Large Rivers
NASA Astrophysics Data System (ADS)
Islam, M. T.; Cohen, S.; Syvitski, J. P.
2017-12-01
Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology and the management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The scarcity of bedload measurements is particularly acute in developing countries, where changes in sediment yield are large. The paucity of bedload measurements is the result of 1) the nature of the problem (large spatial and temporal uncertainties), and 2) field costs, including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins >1,000 km2). Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically-based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications in global fluvial geomorphology (e.g., the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product as a valuable resource for the whole scientific community.
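The "lumped relationship" used as a benchmark above is essentially a log-log regression of bedload flux on fluvial and basin parameters. A sketch of that idea on synthetic stand-in data (the study's 113 field measurements are not reproduced here; the exponents and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-in for a lumped empirical bedload model:
# log10(Qb) = b0 + b1*log10(Q) + b2*log10(A), fit by least squares.
n = 113
Q = 10 ** rng.uniform(2, 5, n)               # discharge (m3/s), synthetic
A = 10 ** rng.uniform(3, 6, n)               # drainage area (km2), synthetic
Qb = 1e-4 * Q ** 1.2 * A ** -0.1 * 10 ** rng.normal(0, 0.3, n)  # bedload (kg/s)

Z = np.column_stack([np.ones(n), np.log10(Q), np.log10(A)])
coef, *_ = np.linalg.lstsq(Z, np.log10(Qb), rcond=None)
pred = Z @ coef
ss_res = np.sum((np.log10(Qb) - pred) ** 2)
ss_tot = np.sum((np.log10(Qb) - np.log10(Qb).mean()) ** 2)
print(f"exponents: Q^{coef[1]:.2f}, A^{coef[2]:.2f}, R2={1 - ss_res / ss_tot:.2f}")
```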
Large Animal Models for Foamy Virus Vector Gene Therapy
Trobridge, Grant D.; Horn, Peter A.; Beard, Brian C.; Kiem, Hans-Peter
2012-01-01
Foamy virus (FV) vectors have shown great promise for hematopoietic stem cell (HSC) gene therapy. Their ability to efficiently deliver transgenes to multi-lineage long-term repopulating cells in large animal models suggests they will be effective for several human hematopoietic diseases. Here, we review FV vector studies in large animal models, including the use of FV vectors with the mutant O6-methylguanine-DNA methyltransferase, MGMTP140K to increase the number of genetically modified cells after transplantation. In these studies, FV vectors have mediated efficient gene transfer to polyclonal repopulating cells using short ex vivo transduction protocols designed to minimize the negative effects of ex vivo culture on stem cell engraftment. In this regard, FV vectors appear superior to gammaretroviral vectors, which require longer ex vivo culture to effect efficient transduction. FV vectors have also compared favorably with lentiviral vectors when directly compared in the dog model. FV vectors have corrected leukocyte adhesion deficiency and pyruvate kinase deficiency in the dog large animal model. FV vectors also appear safer than gammaretroviral vectors based on a reduced frequency of integrants near promoters and also near proto-oncogenes in canine repopulating cells. Together, these studies suggest that FV vectors should be highly effective for several human hematopoietic diseases, including those that will require relatively high percentages of gene-modified cells to achieve clinical benefit. PMID:23223198
Finite element modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1983-01-01
Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Topics covered include: reduction methods for large-scale nonlinear analysis, with particular emphasis on the treatment of combined loads and displacement-dependent and nonconservative loadings; the development of simple and efficient mixed finite element models for shell analysis, the identification of equivalent mixed and purely displacement models, and the determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems based on a total Lagrangian description of the deformation.
Not-so-well-tempered neutralino
NASA Astrophysics Data System (ADS)
Profumo, Stefano; Stefaniak, Tim; Stephenson-Haskins, Laurel
2017-09-01
Light electroweakinos, the neutral and charged fermionic supersymmetric partners of the standard model SU (2 )×U (1 ) gauge bosons and of the two SU(2) Higgs doublets, are an important target for searches for new physics with the Large Hadron Collider (LHC). However, if the lightest neutralino is the dark matter, constraints from direct dark matter detection experiments rule out large swaths of the parameter space accessible to the LHC, including in large part the so-called "well-tempered" neutralinos. We focus on the minimal supersymmetric standard model (MSSM) and explore in detail which regions of parameter space are not excluded by null results from direct dark matter detection, assuming exclusive thermal production of neutralinos in the early universe, and illustrate the complementarity with current and future LHC searches for electroweak gauginos. We consider both bino-Higgsino and bino-wino "not-so-well-tempered" neutralinos, i.e. we include models where the lightest neutralino constitutes only part of the cosmological dark matter, with the consequent suppression of the constraints from direct and indirect dark matter searches.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
Deep learning-based fine-grained car make/model classification for visual surveillance
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut
2017-10-01
Fine-grained object recognition is a challenging computer vision problem that has recently been addressed using deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, there exists relatively little annotated data for real world applications, such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual scenarios with the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website that includes fine-grained car models. A state-of-the-art CNN model is trained on the constructed dataset according to these labels. The second domain is the set of images collected from a camera integrated into a traffic surveillance system. These images, numbering over 260K, are gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately sized image patch is cropped from the region of interest provided by the detected license plate location. These sets of images and their labels for more than 30 classes are employed to fine-tune the CNN model already trained on the large scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic field and a large scale dataset with available annotations. Our experimental results on both the validation dataset and the real field show that the proposed methodology performs favorably against training the CNN model from scratch.
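A sketch of the transfer step described above, using VGG-16 as a stand-in backbone since the abstract does not name the exact architecture; NUM_CLASSES and the learning rates are illustrative, and a recent torchvision is assumed:

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage-2 transfer: start from a network trained on the large web-collected
# dataset (approximated here by an ImageNet-pretrained VGG-16), re-initialize
# the last two fully-connected layers for the ~30 surveillance classes, and
# fine-tune the remaining layers at a lower learning rate.
NUM_CLASSES = 30                                  # illustrative class count

model = models.vgg16(weights="IMAGENET1K_V1")     # stand-in stage-1 model
model.classifier[3] = nn.Linear(4096, 4096)       # randomly re-initialized
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

new_ids = {id(p) for p in model.classifier[3].parameters()} \
        | {id(p) for p in model.classifier[6].parameters()}
new_params = [p for p in model.parameters() if id(p) in new_ids]
old_params = [p for p in model.parameters() if id(p) not in new_ids]

optimizer = torch.optim.SGD(
    [{"params": old_params, "lr": 1e-4},          # gentle updates, pretrained
     {"params": new_params, "lr": 1e-3}],         # larger steps, new heads
    momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                   # dummy batch
loss = criterion(model(x), torch.randint(0, NUM_CLASSES, (8,)))
loss.backward()
optimizer.step()
```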
NASA Astrophysics Data System (ADS)
Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.
2009-02-01
We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of the model to include a larger class of electron velocity-field curves. We also discuss the recent reformulation of the model to facilitate its implementation in commercial large-signal high-frequency circuit simulators. Large signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate our model under large signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large signal performance, such as their output power at 1 dB gain compression or their third order intercept points. In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of the 2DEG in the drain access region.
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
NASA Technical Reports Server (NTRS)
Branscome, Lee E.; Bleck, Rainer; Obrien, Enda
1990-01-01
The project objectives are to develop process models to investigate the interaction of planetary and synoptic-scale waves including the effects of latent heat release (precipitation), nonlinear dynamics, physical and boundary-layer processes, and large-scale topography; to determine the importance of latent heat release for temporal variability and time-mean behavior of planetary and synoptic-scale waves; to compare the model results with available observations of planetary and synoptic wave variability; and to assess the implications of the results for monitoring precipitation in oceanic-storm tracks by satellite observing systems. Researchers have utilized two different models for this project: a two-level quasi-geostrophic model to study intraseasonal variability, anomalous circulations and the seasonal cycle, and a 10-level, multi-wave primitive equation model to validate the two-level Q-G model and examine effects of convection, surface processes, and spherical geometry. It explicitly resolves several planetary and synoptic waves and includes specific humidity (as a predicted variable), moist convection, and large-scale precipitation. In the past year researchers have concentrated on experiments with the multi-level primitive equation model. The dynamical part of that model is similar to the spectral model used by the National Meteorological Center for medium-range forecasts. The model includes parameterizations of large-scale condensation and moist convection. To test the validity of results regarding the influence of convective precipitation, researchers can use either one of two different convective schemes in the model, a Kuo convective scheme or a modified Arakawa-Schubert scheme which includes downdrafts. By choosing one or the other scheme, they can evaluate the impact of the convective parameterization on the circulation. In the past year researchers performed a variety of initial-value experiments with the primitive-equation model. Using initial conditions typical of climatological winter conditions, they examined the behavior of synoptic and planetary waves growing in moist and dry environments. Surface conditions were representative of a zonally averaged ocean. They found that moist convection associated with baroclinic wave development was confined to the subtropics.
Calculating Second-Order Effects in MOSFET's
NASA Technical Reports Server (NTRS)
Benumof, Reuben; Zoutendyk, John A.; Coss, James R.
1990-01-01
A collection of mathematical models covers second-order effects in n-channel, enhancement-mode, metal-oxide-semiconductor field-effect transistors (MOSFET's). When the dimensions of circuit elements are relatively large, these effects can be safely neglected. However, as very-large-scale integration of microelectronic circuits leads to MOSFET's shorter or narrower than 2 micrometers, the effects become significant in design and operation. Such computer programs as the widely used "Simulation Program With Integrated Circuit Emphasis, Version 2" (SPICE 2) include many of these effects. In second-order models of the n-channel, enhancement-mode MOSFET, the first-order gate-depletion region is diminished by triangular-cross-section deletions at the ends and augmented by circular-wedge-cross-section bulges on the sides.
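The geometric picture in this abstract (gate-depletion charge shared with the source and drain regions) is commonly quantified with a charge-sharing estimate of the short-channel threshold-voltage shift, in the style of Yau's textbook model. The sketch below uses that approximation with assumed device parameters; it is not the SPICE 2 formulation.

```python
import numpy as np

q    = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_si, eps_ox = 11.7 * eps0, 3.9 * eps0

Na  = 1e23                # substrate doping, m^-3 (1e17 cm^-3, assumed)
tox = 20e-9               # gate oxide thickness, m (assumed)
xj  = 0.2e-6              # source/drain junction depth, m (assumed)
phi = 0.8                 # surface potential drop ~ 2*phi_F, V (assumed)

Cox = eps_ox / tox                              # oxide capacitance, F/m^2
Wd  = np.sqrt(2 * eps_si * phi / (q * Na))      # depletion width, m

def dVt(L):
    """Charge-sharing threshold-voltage shift for channel length L (m)."""
    share = (xj / L) * (np.sqrt(1 + 2 * Wd / xj) - 1)
    return -(q * Na * Wd / Cox) * share

for L in (10e-6, 2e-6, 0.5e-6):
    print(f"L = {L*1e6:4.1f} um  ->  dVt = {dVt(L)*1e3:7.1f} mV")
```

With these assumed numbers the shift is negligible at 10 micrometers but reaches tens to hundreds of millivolts below 2 micrometers, which is the regime the abstract flags as significant.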
NASA Astrophysics Data System (ADS)
Kelkar, S.; Karra, S.; Pawar, R. J.; Zyvoloski, G.
2012-12-01
There has been increasing interest in recent years in developing computational tools for analyzing coupled thermal, hydrological and mechanical (THM) processes that occur in geological porous media. This is mainly due to their importance in applications including carbon sequestration, enhanced geothermal systems, oil and gas production from unconventional sources, degradation of Arctic permafrost, and nuclear waste isolation. Large changes in pressures, temperatures and saturation can result from injection/withdrawal of fluids or emplaced heat sources. These can potentially lead to large changes in the fluid flow and mechanical behavior of the formation, including shear and tensile failure on pre-existing or induced fractures and the associated permeability changes. Consequently, plastic deformation and large changes in material properties such as permeability and porosity can be expected to play an important role in these processes. We describe a general-purpose computational code, FEHM, that has been developed for the purpose of modeling coupled THM processes during multi-phase fluid flow and transport in fractured porous media. The code uses a continuum mechanics approach based on the control-volume finite-element method. It is designed to address spatial scales on the order of tens of centimeters to tens of kilometers. While large deformations are important in many situations, we have adopted the small-strain formulation, as useful insight can be obtained in many problems of practical interest with this approach while remaining computationally manageable. Nonlinearities in the equations and the material properties are handled using a full-Jacobian Newton-Raphson technique. Stress-strain relationships are assumed to follow linear elastic/plastic behavior. The code incorporates several plasticity models, such as von Mises and Drucker-Prager, and also a large suite of models for coupling flow and mechanical deformation via permeability and stresses/deformations. In this work we present several example applications of such models.
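The full-Jacobian Newton-Raphson iteration mentioned above can be sketched on a stand-in two-equation nonlinear system; FEHM's actual residuals couple flow, heat, and deformation, but the update step has this same shape.

```python
import numpy as np

def residual(u):
    # Stand-in coupled nonlinear system R(u) = 0 (not FEHM's equations).
    p, T = u
    return np.array([p**3 + T - 3.0,
                     p + np.exp(T) - 2.0])

def jacobian(u):
    # Full (analytic) Jacobian dR/du.
    p, T = u
    return np.array([[3 * p**2, 1.0],
                     [1.0, np.exp(T)]])

u = np.array([1.0, 0.5])                      # initial guess
for it in range(20):
    R = residual(u)
    if np.linalg.norm(R) < 1e-12:
        break
    u = u - np.linalg.solve(jacobian(u), R)   # full Newton step
print(f"converged in {it} iterations: p = {u[0]:.6f}, T = {u[1]:.6f}")
```

In a THM code the same loop runs once per time step, with the dense solve replaced by a sparse linear solver over all grid unknowns.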
Preliminary results on the dynamics of large and flexible space structures in Halo orbits
NASA Astrophysics Data System (ADS)
Colagrossi, Andrea; Lavagna, Michèle
2017-05-01
The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location in the Earth-Moon system is suggested for a manned cis-lunar infrastructure, a proposal which opens an interesting field of study from the astrodynamics perspective. The literature offers a wide body of research on orbital dynamics under the Three-Body Problem modelling approach, but less of it includes the attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only should the coupled orbit-attitude dynamics be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the large structure flexibility. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements, such as beams, rods or lumped masses linked by springs, build up the space segment. To better investigate the relevance of the flexibility effects, the lumped parameters approach is compared with a distributed parameters semi-analytical technique. A sensitivity analysis of system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design. Furthermore, a case study for a large and flexible space structure in Halo orbits around one of the Earth-Moon collinear Lagrangian points, L1 or L2, is discussed to point out some relevant outcomes for the potential implementation of such a mission.
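A minimal sketch of the Circular Restricted Three-Body Problem that the paper builds on, integrating the planar rotating-frame equations of motion in nondimensional units with an approximate Earth-Moon mass ratio; the coupled attitude and flexibility dynamics studied in the paper are not included, and the initial state is an arbitrary assumption rather than a true Halo orbit.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.01215  # Earth-Moon mass ratio m2/(m1+m2), approximate

def cr3bp(t, s):
    """Planar CR3BP in the rotating frame, nondimensional units."""
    x, y, vx, vy = s
    r1 = np.hypot(x + MU, y)          # distance to the Earth
    r2 = np.hypot(x - 1 + MU, y)      # distance to the Moon
    ax = x + 2 * vy - (1 - MU) * (x + MU) / r1**3 - MU * (x - 1 + MU) / r2**3
    ay = y - 2 * vx - (1 - MU) * y / r1**3 - MU * y / r2**3
    return [vx, vy, ax, ay]

# Assumed initial state in the L1 region (illustrative only).
s0 = [0.82, 0.0, 0.0, 0.15]
sol = solve_ivp(cr3bp, (0.0, 10.0), s0, rtol=1e-10, atol=1e-12)
print("final state:", sol.y[:, -1])
```

Halo orbits proper are three-dimensional periodic solutions of this system, found by differential correction of initial states like the one assumed here.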
Shape determination and control for large space structures
NASA Technical Reports Server (NTRS)
Weeks, C. J.
1981-01-01
An integral operator approach is used to derive solutions to static shape determination and control problems associated with large space structures. Problem assumptions include a linear self-adjoint system model, observations and control forces at discrete points, and performance criteria for the comparison of estimates or controls. Results are illustrated by simulations in the one-dimensional case with a flexible beam model, and in the multidimensional case with a finite element model of a large space antenna. Modal expansions for terms in the solution algorithms are presented, using modes from the static or associated dynamic model. These expansions provide approximate solutions in the event that a closed-form analytical solution to the system boundary value problem is not available.
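The modal-expansion idea can be sketched as follows: estimate a static shape from noisy observations at discrete points by least-squares fitting of a few assumed mode shapes. The integral-operator machinery of the paper is replaced here by ordinary least squares, and the pinned-pinned beam modes are an assumption for illustration.

```python
import numpy as np

L_beam, n_modes = 1.0, 4
x_obs = np.linspace(0.05, 0.95, 12)           # discrete sensor locations

def modes(x):
    """Assumed basis: pinned-pinned beam modes sin(k*pi*x/L)."""
    return np.column_stack([np.sin(k * np.pi * x / L_beam)
                            for k in range(1, n_modes + 1)])

rng = np.random.default_rng(1)
true_coeff = np.array([1.0, 0.3, -0.1, 0.05])
w_obs = modes(x_obs) @ true_coeff + rng.normal(0, 0.01, x_obs.size)

# Least-squares estimate of the modal coefficients from point observations.
coeff, *_ = np.linalg.lstsq(modes(x_obs), w_obs, rcond=None)
print("estimated modal coefficients:", np.round(coeff, 3))

# The expansion then reconstructs the shape estimate anywhere on the beam.
x_fine = np.linspace(0.0, L_beam, 5)
print("shape estimate:", np.round(modes(x_fine) @ coeff, 3))
```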
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1993-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
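One of the proposed library components, regridding a field between two regular latitude-longitude grids, can be sketched with plain bilinear interpolation; this assumed minimal version ignores map projections and missing-data handling, both of which the proposal calls out.

```python
import numpy as np

def regrid_bilinear(field, lat_src, lon_src, lat_dst, lon_dst):
    """Bilinearly interpolate field(lat, lon) onto a new regular grid."""
    iy = np.clip(np.searchsorted(lat_src, lat_dst) - 1, 0, len(lat_src) - 2)
    ix = np.clip(np.searchsorted(lon_src, lon_dst) - 1, 0, len(lon_src) - 2)
    fy = (lat_dst - lat_src[iy]) / (lat_src[iy + 1] - lat_src[iy])
    fx = (lon_dst - lon_src[ix]) / (lon_src[ix + 1] - lon_src[ix])
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    IY, IX = np.meshgrid(iy, ix, indexing="ij")
    return ((1 - FY) * (1 - FX) * field[IY, IX]
            + (1 - FY) * FX * field[IY, IX + 1]
            + FY * (1 - FX) * field[IY + 1, IX]
            + FY * FX * field[IY + 1, IX + 1])

lat_s, lon_s = np.linspace(-90, 90, 73), np.linspace(0, 357.5, 144)
data = np.cos(np.deg2rad(lat_s))[:, None] * np.ones(lon_s.size)  # toy field
lat_d, lon_d = np.linspace(-88, 88, 89), np.linspace(0, 356, 90)
print(regrid_bilinear(data, lat_s, lon_s, lat_d, lon_d).shape)   # (89, 90)
```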
NASA Technical Reports Server (NTRS)
Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.
1999-01-01
A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation in ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models can be as large as 0.3 PSU and 0.4 °C, respectively. Without fresh-water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh-water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, the fresh-water flux exhibits larger spatial fluctuations than the surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model domain yield high spatial correlation of the surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clauss, D.B.
The analyses used to predict the behavior of a 1:8-scale model of a steel LWR containment building subjected to static overpressurization are described and results are presented. Finite strain, large displacement, and nonlinear material properties were accounted for using finite element methods. Three-dimensional models were needed to analyze the penetrations, which included operable equipment hatches, personnel lock representations, and a constrained pipe. It was concluded that the scale model would fail due to leakage caused by large deformations of the equipment hatch sleeves.
NASA Technical Reports Server (NTRS)
Weiberg, James A.; Holzhauser, Curt A.
1961-01-01
Tests were made of a large-scale tilt-wing deflected-slipstream VTOL airplane with blowing-type BLC trailing-edge flaps. The model was tested with flap deflections of 0 deg. without BLC, 50 deg. with and without BLC, and 80 deg. with BLC for wing-tilt angles of 0, 30, and 50 deg. Included are results of tests of the model equipped with a leading-edge flap and the results of tests of the model in the presence of a ground plane.
NASA Technical Reports Server (NTRS)
Schwan, Karsten
1994-01-01
Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.
When the test of mediation is more powerful than the test of the total effect.
O'Rourke, Holly P; MacKinnon, David P
2015-06-01
Although previous research has studied power in mediation models, the extent to which the inclusion of a mediator will increase power has not been investigated. To address this deficit, in a first study we compared the analytical power values of the mediated effect and the total effect in a single-mediator model, to identify the situations in which the inclusion of one mediator increased statistical power. The results from this first study indicated that including a mediator increased statistical power in small samples with large coefficients and in large samples with small coefficients, and when coefficients were nonzero and equal across models. Next, we identified conditions under which power was greater for the test of the total mediated effect than for the test of the total effect in the parallel two-mediator model. These results indicated that including two mediators increased power in small samples with large coefficients and in large samples with small coefficients, the same pattern of results that had been found in the first study. Finally, we assessed the analytical power for a sequential (three-path) two-mediator model and compared the power to detect the three-path mediated effect to the power to detect both the test of the total effect and the test of the mediated effect for the single-mediator model. The results indicated that the three-path mediated effect had more power than the mediated effect from the single-mediator model and the test of the total effect. Practical implications of these results for researchers are then discussed.
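A sketch of the kind of comparison run in the first study, under crude normal approximations: the power of the z test of the total effect c' + ab versus the joint-significance test of the mediated effect ab, with standardized paths and an assumed standard error of 1/sqrt(n) for every coefficient. The parameter values are illustrative, but the output reproduces the qualitative pattern reported above (the mediated-effect test wins for large coefficients in small samples and for small coefficients in large samples).

```python
import numpy as np
from scipy.stats import norm

def power_z(effect, se, alpha=0.05):
    """Two-sided z-test power under a normal approximation."""
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - effect / se) + norm.cdf(-z - effect / se)

def compare(a, b, c_prime, n):
    se = 1 / np.sqrt(n)                 # crude SE for standardized paths (assumed)
    total = c_prime + a * b             # total effect in the single-mediator model
    p_total = power_z(total, se)
    p_mediated = power_z(a, se) * power_z(b, se)  # joint-significance test of a*b
    return p_total, p_mediated

for a, b, n in [(0.4, 0.4, 50), (0.1, 0.1, 1000)]:
    pt, pm = compare(a, b, c_prime=0.0, n=n)
    print(f"a=b={a}, n={n}:  power(total)={pt:.2f}  power(mediated)={pm:.2f}")
```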
NASA Astrophysics Data System (ADS)
Lai, W.; Steinke, R. C.; Ogden, F. L.
2013-12-01
Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents the development of a physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, which is a joint effort between Wyoming and Utah. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain West, including rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, and water management. The ADHydro model uses the explicit finite volume method to solve the PDEs for 2D overland flow and 2D saturated groundwater flow coupled to 1D channel flow. The model has a quasi-3D formulation that couples 2D overland flow and 2D saturated groundwater flow using the 1D Talbot-Ogden finite water-content infiltration and redistribution model. This eliminates difficulties in solving the highly nonlinear 3D Richards equation, while the finite volume Talbot-Ogden infiltration solution is computationally efficient, guaranteed to conserve mass, and allows simulation of the effect of near-surface groundwater tables on runoff generation. The process-level components of the model are being individually tested and validated. The model as a whole will be tested on the Green River basin in Wyoming and ultimately applied to the entire Upper Colorado River basin. ADHydro development has necessitated development of tools for large-scale watershed modeling, including open-source workflow steps to extract hydromorphological information from GIS data, to integrate hydrometeorological and water management forcing input, and to post-process and visualize large output data sets. The ADHydro model will be coupled with relevant components of the NOAH-MP land surface scheme and the WRF mesoscale meteorological model. Model objectives include well-documented Application Programming Interfaces (APIs) to facilitate modifications and additions by others. We will release the model as open-source in 2014 and begin establishing a users' community.
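The explicit finite-volume flavor of such a model can be sketched on the simplest member of the family, a 1D kinematic-wave channel with an assumed power-law rating curve; ADHydro's actual 2D/quasi-3D formulation and the Talbot-Ogden infiltration scheme are far richer than this.

```python
import numpy as np

# 1D kinematic wave: dA/dt + dQ/dx = q_lat, with Q = alpha * A**m (assumed rating).
nx, dx, dt = 100, 50.0, 2.0              # cells, cell size (m), time step (s)
alpha, m = 1.2, 5.0 / 3.0
A = np.full(nx, 0.1)                      # wetted area per cell, m^2
q_lat = np.zeros(nx)
q_lat[:10] = 1e-4                         # lateral inflow near the upstream end, m^2/s

for step in range(2000):
    Q = alpha * A**m                      # discharge at each cell
    flux_in = np.concatenate(([0.0], Q[:-1]))   # upwind flux from the neighbor
    A += dt / dx * (flux_in - Q) + dt * q_lat   # explicit finite-volume update
print(f"peak wetted area {A.max():.3f} m^2 at cell {A.argmax()}")
```

The explicit update is what makes the scheme embarrassingly parallel: each cell needs only its upstream neighbor's state from the previous step, subject to a CFL limit on dt.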
On quantum integrable models related to nonlinear quantum optics. An algebraic Bethe ansatz approach
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
1989-08-01
A unified approach, based on the Bethe ansatz, to a large variety of integrable models in quantum optics is given. Second-harmonic generation, the three-boson interaction, the Dicke model, and some cases of the four-boson interaction are included as special cases of su(2)⊕su(1,1) Gaudin models.
New optical and radio frequency angular tropospheric refraction models for deep space applications
NASA Technical Reports Server (NTRS)
Berman, A. L.; Rockwell, S. T.
1976-01-01
The development of angular tropospheric refraction models for optical and radio frequency usage is presented. The models are compact analytic functions, finite over the entire domain of elevation angle, and accurate over large ranges of pressure, temperature, and relative humidity. Additionally, FORTRAN subroutines for each of the models are included.
Hunting down the best model of inflation with Bayesian evidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Jerome; Ringeval, Christophe; Trotta, Roberto
2011-03-15
We present the first calculation of the Bayesian evidence for different prototypical single field inflationary scenarios, including representative classes of small field and large field models. This approach allows us to compare inflationary models in a well-defined statistical way and to determine the current 'best model of inflation'. The calculation is performed numerically by interfacing the inflationary code FieldInf with MultiNest. We find that small field models are currently preferred, while large field models having a self-interacting potential of power p>4 are strongly disfavored. The class of small field models as a whole has posterior odds of approximately 3:1 when compared with the large field class. The methodology and results presented in this article are an additional step toward the construction of a full numerical pipeline to constrain the physics of the early Universe with astrophysical observations. More accurate data (such as the Planck data) and the techniques introduced here should allow us to identify conclusively the best inflationary model.
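The evidence computation itself is easy to illustrate in one dimension: for each model, integrate likelihood times prior over the parameter and form the posterior odds. The toy likelihoods below are assumptions standing in for FieldInf likelihoods, and a grid sum replaces MultiNest's nested sampling; the example shows the Occam penalty that makes a sharply tuned model lose evidence despite a good best fit.

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 20001)
dtheta = theta[1] - theta[0]

def evidence(loglike, prior):
    """Z = integral of L(theta) * pi(theta) dtheta, by a simple grid sum."""
    return np.sum(np.exp(loglike(theta)) * prior) * dtheta

prior = np.ones_like(theta)                      # flat prior on [0, 1]

# Toy stand-ins: one model fits well over a broad parameter range,
# the other fits equally well but only in a narrow corner of its prior.
loglike_broad  = lambda t: -0.5 * ((t - 0.5) / 0.2) ** 2
loglike_narrow = lambda t: -0.5 * ((t - 0.9) / 0.02) ** 2

Z_broad, Z_narrow = evidence(loglike_broad, prior), evidence(loglike_narrow, prior)
print(f"posterior odds (broad : narrow) = {Z_broad / Z_narrow:.1f}")
```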
Pre-supernova models for massive stars produced with large nuclear reaction network by MESA
NASA Astrophysics Data System (ADS)
Park, Byeongchan; Kwak, Kyujin
2018-04-01
Core-collapse supernovae (CCSNe) are among the most violent phenomena in the universe. A CCSN generates heavy elements and leaves a neutron star behind. It has been known that the physical properties of a CCSN depend on those of its pre-supernova progenitor, such as mass, metallicity (including the distribution of elements), and the density and temperature profiles, which are obtained from stellar evolution calculations. In particular, the production of heavy elements in a CCSN is sensitive to the abundance profiles in the pre-supernova model. In this study, we evolve a massive main-sequence star of 15 Msun and solar metallicity to the pre-supernova stage using two different nuclear reaction networks, small and large. The large network includes more than four times as many isotopes as the small network. Our calculations were done with MESA (Modules for Experiments in Stellar Astrophysics), which allowed us to use the large network containing about a hundred isotopes. We compare the results obtained with the two networks.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
...-traditional, large, non-metallic panels. In order to provide a level of safety that is equivalent to that... Airplanes; Seats With Non-Traditional, Large, Non-Metallic Panels AGENCY: Federal Aviation Administration... a novel or unusual design feature(s) associated with seats that include non-traditional, large, non...
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
Continuum modeling of large lattice structures: Status and projections
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Mikulas, Martin M., Jr.
1988-01-01
The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.
Motor Vehicle Demand Models : Assessment of the State of the Art and Directions for Future Research
DOT National Transportation Integrated Search
1981-04-01
The report provides an assessment of the current state of motor vehicle demand modeling. It includes a detailed evaluation of one leading large-scale econometric vehicle demand model, which is tested for both logical consistency and forecasting accur...
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p approximately 20 h^-1 Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r greater than or approximately equal to R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
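For reference, the hierarchical three-point amplitude used here is conventionally defined from the two- and three-point correlation functions (this is the standard convention, not a definition specific to this paper):

```latex
Q_3(r_{12}, r_{23}, r_{31}) =
  \frac{\zeta(r_{12}, r_{23}, r_{31})}
       {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})},
\qquad
S_3 = \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}} .
```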
Modelling MIZ dynamics in a global model
NASA Astrophysics Data System (ADS)
Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto
2016-04-01
Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with greater fidelity. Current models are not fully fit for the purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include a sea ice rheology that represents both the now-thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from the global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In multidecadal model integrations we examine MIZ and basin-scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However, with the current increase in summer MIZ area, which is projected to continue and may make the MIZ the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting both sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.
Bailey, Ryan T.; Morway, Eric D.; Niswonger, Richard G.; Gates, Timothy K.
2013-01-01
A numerical model was developed that is capable of simulating multispecies reactive solute transport in variably saturated porous media. This model consists of a modified version of the reactive transport model RT3D (Reactive Transport in 3 Dimensions) that is linked to the Unsaturated-Zone Flow (UZF1) package and MODFLOW. Referred to as UZF-RT3D, the model is tested against published analytical benchmarks as well as other published contaminant transport models, including HYDRUS-1D, VS2DT, and SUTRA, and the coupled flow and transport modeling system of CATHY and TRAN3D. Comparisons in one-dimensional, two-dimensional, and three-dimensional variably saturated systems are explored. While several test cases are included to verify the correct implementation of variably saturated transport in UZF-RT3D, other cases are included to demonstrate the usefulness of the code in terms of model run-time and handling the reaction kinetics of multiple interacting species in variably saturated subsurface systems. As UZF1 relies on a kinematic-wave approximation for unsaturated flow that neglects the diffusive terms in Richards equation, UZF-RT3D can be used for large-scale aquifer systems for which the UZF1 formulation is reasonable, that is, capillary-pressure gradients can be neglected and soil parameters can be treated as homogeneous. Decreased model run-time and the ability to include site-specific chemical species and chemical reactions make UZF-RT3D an attractive model for efficient simulation of multispecies reactive transport in variably saturated large-scale subsurface systems.
75 FR 72611 - Assessments, Large Bank Pricing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-24
... the worst risk ranking and are included in the statistical analysis. Appendix 1 to the NPR describes the statistical analysis in detail. \12\ The percentage approximated by factors is based on the statistical model for that particular year. Actual weights assigned to each scorecard measure are largely based...
Nonlinear finite element modeling of corrugated board
A. C. Gilchrist; J. C. Suhling; T. J. Urbanik
1999-01-01
In this research, an investigation on the mechanical behavior of corrugated board has been performed using finite element analysis. Numerical finite element models for corrugated board geometries have been created and executed. Both geometric (large deformation) and material nonlinearities were included in the models. The analyses were performed using the commercial...
Michael, Andrew J.
2012-01-01
Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
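The Gutenberg-Richter clustering number quoted above can be sketched with the Reasenberg-Jones aftershock rate, lambda(t, M >= Mc) = 10^(a + b(Mm - Mc)) * (t + c)^(-p), integrated over the forecast window and converted to a probability. The generic California parameters below are assumed stand-ins, but they land near the 0.0009 figure cited in the abstract.

```python
import numpy as np
from scipy.integrate import quad

# Generic California aftershock parameters (Reasenberg & Jones style) -- assumed.
a, b, c, p = -1.67, 0.91, 0.05, 1.08
Mm, Mc = 4.8, 7.0            # initiating-event magnitude, target magnitude

def rate(t):
    """Rate of M >= Mc events at time t (days) after the initiating event."""
    return 10 ** (a + b * (Mm - Mc)) * (t + c) ** (-p)

N, _ = quad(rate, 0.0, 3.0)  # expected number of M >= 7 events in 3 days
print(f"P(M >= 7 within 3 days) = {1 - np.exp(-N):.5f}")
```

Swapping the Gutenberg-Richter magnitude-frequency term for a characteristic-earthquake one is what moves this probability by more than an order of magnitude, which is the first-order effect the paper emphasizes.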
Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.
Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick
2017-10-01
In this paper, we investigate visual attention modeling for stereoscopic video from the following two aspects. First, we build a large-scale eye tracking database as a benchmark for visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our large-scale eye tracking database and on one other database (DML-ITRACK-3D).
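A sketch of the fusion step only: normalize the spatial and temporal saliency maps and blend them with weights. In the paper the weights come from Gestalt-based uncertainty estimation; here a fixed weight is assumed and the maps are random stand-ins.

```python
import numpy as np

def normalize(m):
    """Rescale a saliency map to [0, 1]."""
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def fuse(spatial, temporal, w_spatial=0.6):
    """Weighted fusion of per-pixel saliency maps (weight assumed fixed)."""
    return w_spatial * normalize(spatial) + (1.0 - w_spatial) * normalize(temporal)

rng = np.random.default_rng(0)
spatial = rng.random((64, 64))       # stand-in for the feature-contrast map
temporal = rng.random((64, 64))      # stand-in for the motion-contrast map
sal = fuse(spatial, temporal)
print(sal.shape, float(sal.min()), float(sal.max()))
```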
Recent advances in large-eddy simulation of spray and coal combustion
NASA Astrophysics Data System (ADS)
Zhou, L. X.
2013-07-01
Large-eddy simulation (LES) is developing rapidly and is recognized as a possible second generation of CFD methods used in engineering. Spray and coal combustion is widely used in power, transportation, chemical and metallurgical, iron and steel making, and aeronautical and astronautical engineering; hence LES of spray and coal two-phase combustion is particularly important for engineering applications. LES of two-phase combustion attracts more and more attention, since it can give detailed instantaneous flow and flame structures and more exact statistical results than those given by Reynolds-averaged (RANS) modeling. One of the key problems in LES is to develop sub-grid scale (SGS) models, including SGS stress models and combustion models. Different investigators have proposed or adopted various SGS models. In this paper the present author attempts to review the advances in studies on LES of spray and coal combustion, including the studies done by the present author and his colleagues. Different SGS models adopted by different investigators are described, some of their main results are summarized, and finally some research needs are discussed.
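The most widely used SGS stress closure in such work is the Smagorinsky eddy viscosity, nu_t = (Cs * Delta)^2 |S|; a 2D finite-difference sketch with an assumed Smagorinsky constant follows (a generic illustration, not any specific model from the review).

```python
import numpy as np

def smagorinsky_nut(u, v, dx, Cs=0.17):
    """Smagorinsky SGS viscosity nu_t = (Cs*dx)^2 * |S| on a 2D grid."""
    dudx, dudy = np.gradient(u, dx, dx)      # derivatives along x (axis 0), y (axis 1)
    dvdx, dvdy = np.gradient(v, dx, dx)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    Smag = np.sqrt(2 * (S11**2 + S22**2 + 2 * S12**2))  # |S| = sqrt(2 Sij Sij)
    return (Cs * dx) ** 2 * Smag

n, dx = 64, 0.01
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(2 * np.pi * X), -np.cos(2 * np.pi * Y)   # toy resolved velocity field
print(f"max nu_t = {smagorinsky_nut(u, v, dx).max():.3e} m^2/s")
```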
Geist; Dauble
1998-09-01
Knowledge of the three-dimensional connectivity between rivers and groundwater within the hyporheic zone can be used to improve the definition of fall chinook salmon (Oncorhynchus tshawytscha) spawning habitat. Information exists on the microhabitat characteristics that define suitable salmon spawning habitat. However, traditional spawning habitat models that use these characteristics to predict available spawning habitat are restricted because they cannot account for the heterogeneous nature of rivers. We present a conceptual spawning habitat model for fall chinook salmon that describes how geomorphic features of river channels create hydraulic processes, including hyporheic flows, that influence where salmon spawn in unconstrained reaches of large mainstem alluvial rivers. Two case studies based on empirical data from fall chinook salmon spawning areas in the Hanford Reach of the Columbia River are presented to illustrate important aspects of our conceptual model. We suggest that traditional habitat models and our conceptual model be combined to predict the limits of suitable fall chinook salmon spawning habitat. This approach can incorporate quantitative measures of river channel morphology, including general descriptors of geomorphic features at different spatial scales, in order to understand the processes influencing redd site selection and spawning habitat use. This information is needed in order to protect existing salmon spawning habitat in large rivers, as well as to recover habitat already lost. KEY WORDS: Hyporheic zone; Geomorphology; Spawning habitat; Large rivers; Fall chinook salmon; Habitat management
A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis
NASA Astrophysics Data System (ADS)
Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.
2006-12-01
Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on the large-scale circulation and tropical modes. The model was developed based on the NCAR Weather Research and Forecasting Model (WRF), configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited-area domains over which WRF is typically applied, the channel configuration alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel applied at 36 km horizontal resolution for 5 years from 1996 to 2000, with the large-scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed that include two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection in the Maritime Continent. The simulations realistically captured the large-scale circulation, including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.
Staniczenko, Phillip P A; Sivasubramaniam, Prabu; Suttle, K Blake; Pearson, Richard G
2017-06-01
Macroecological models for predicting species distributions usually only include abiotic environmental conditions as explanatory variables, despite knowledge from community ecology that all species are linked to other species through biotic interactions. This disconnect is largely due to the different spatial scales considered by the two sub-disciplines: macroecologists study patterns at large extents and coarse resolutions, while community ecologists focus on small extents and fine resolutions. A general framework for including biotic interactions in macroecological models would help bridge this divide, as it would allow for rigorous testing of the role that biotic interactions play in determining species ranges. Here, we present an approach that combines species distribution models with Bayesian networks, which enables the direct and indirect effects of biotic interactions to be modelled as propagating conditional dependencies among species' presences. We show that including biotic interactions in distribution models for species from a California grassland community results in better range predictions across the western USA. This new approach will be important for improving estimates of species distributions and their dynamics under environmental change. © 2017 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
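A minimal sketch of the core idea, propagating a biotic dependency through a distribution model: an environment-only occurrence probability is adjusted by conditioning on an interacting species' presence. The logistic form and all conditional probabilities are assumed for illustration; the paper's Bayesian networks generalize this to many species and to indirect effects.

```python
import numpy as np

def sdm_prob(env):
    """Stand-in environment-only SDM: logistic in one abiotic covariate."""
    return 1.0 / (1.0 + np.exp(-(2.0 * env - 1.0)))

def joint_prob(env, p_partner):
    """Marginalize the focal species' presence over a partner species' presence."""
    p_abiotic = sdm_prob(env)
    p_given_partner = min(1.0, 1.4 * p_abiotic)  # facilitation when partner present (assumed)
    p_given_absence = 0.6 * p_abiotic            # penalty when partner absent (assumed)
    return p_partner * p_given_partner + (1 - p_partner) * p_given_absence

for env in (0.2, 0.5, 0.8):
    print(f"env={env}: abiotic-only={sdm_prob(env):.2f}, "
          f"with biotic link={joint_prob(env, p_partner=0.7):.2f}")
```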
Visualization and modeling of smoke transport over landscape scales
Glenn P. Forney; William Mell
2007-01-01
Computational tools have been developed at the National Institute of Standards and Technology (NIST) for modeling fire spread and smoke transport. These tools have been adapted to address fire scenarios that occur in the wildland urban interface (WUI) over kilometer-scale distances. These models include the smoke plume transport model ALOFT (A Large Open Fire plume...
Morphodynamic modeling of the river pattern continuum (Invited)
NASA Astrophysics Data System (ADS)
Nicholas, A. P.
2013-12-01
Numerical models provide valuable tools for integrating understanding of fluvial processes and morphology. Moreover, they have considerable potential for use in investigating river responses to environmental change and catchment management, and for aiding the interpretation of alluvial deposits and landforms. For this potential to be realised fully, such models must be capable of representing diverse river styles and the spatial and temporal transitions between styles that are driven by changes in environmental forcing. However, while numerical modeling of rivers has advanced considerably over the past few decades, this has been accomplished largely by developing separate approaches to modeling single- and multi-thread channels. Results are presented here from numerical simulations undertaken using a new model of river and floodplain co-evolution, applied to investigate the morphodynamics of large sand-bed rivers. This model solves the two-dimensional depth-averaged shallow water equations using a Godunov-type finite volume scheme, with a two-fraction representation of sediment transport, and includes the effects of secondary circulation, bank erosion and floodplain development due to the colonization of bar surfaces by vegetation. Simulation results demonstrate the feasibility of representing a wide range of fluvial styles (including braiding, meandering and anabranching channels) using relatively simple physics-based models, and provide insight into the controls on channel pattern diversity in large sand-bed rivers. Analysis of model sensitivity illustrates the important role of upstream boundary conditions as a control on channel dynamics. Moreover, this analysis highlights key uncertainties in model process representation and their implications for modelling river evolution in response to natural and anthropogenic river disturbance.
Some aspects of control of a large-scale dynamic system
NASA Technical Reports Server (NTRS)
Aoki, M.
1975-01-01
Techniques of predicting and/or controlling the dynamic behavior of large scale systems are discussed in terms of decentralized decision making. Topics discussed include: (1) control of large scale systems by dynamic team with delayed information sharing; (2) dynamic resource allocation problems by a team (hierarchical structure with a coordinator); and (3) some problems related to the construction of a model of reduced dimension.
USDA-ARS?s Scientific Manuscript database
Large animals (both livestock and wildlife) serve as important reservoirs of zoonotic pathogens, including Brucella, Salmonella, and E. coli, as well as useful models for the study of pathogenesis and/or spread of the bacteria in non-murine hosts. With the key function of lymph nodes in the host imm...
An Illustrative Guide to the Minerva Framework
NASA Astrophysics Data System (ADS)
Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration
2017-10-01
Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.
Thick strings, the liquid crystal blue phase, and cosmological large-scale structure
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1992-01-01
A phenomenological model based on the liquid crystal blue phase is proposed as a model for a late-time cosmological phase transition. Topological defects, in particular thick strings and/or domain walls, are presented as seeds for structure formation. It is shown that the observed large-scale structure, including quasi-periodic wall structure, can be well fitted by the model without violating the microwave background isotropy bound or the limits from induced gravitational waves and the millisecond pulsar timing. Furthermore, such late-time transitions can produce objects such as quasars at high redshifts. The model appears to work with either cold or hot dark matter.
Techno-economic assessment of novel vanadium redox flow batteries with large-area cells
NASA Astrophysics Data System (ADS)
Minke, Christine; Kunz, Ulrich; Turek, Thomas
2017-09-01
The vanadium redox flow battery (VRFB) is a promising electrochemical storage system for stationary megawatt-class applications. The currently limited cell area, determined by the bipolar plate (BPP), could be enlarged significantly with a novel extruded large-area plate. For the first time, a techno-economic assessment of VRFBs in a power range of 1 MW-20 MW and with energy capacities of up to 160 MWh is presented on the basis of a production cost model for large-area BPPs. The economic model is based on the configuration of a 250 kW stack and the overall system including stacks, power electronics, electrolyte and auxiliaries. Final results include a simple function for the calculation of system costs within the scope described above. In addition, the impact of cost reduction potentials for key components (membrane, electrode, BPP, vanadium electrolyte) on stack and system costs is quantified and validated.
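The "simple function for the calculation of system costs" can be sketched as the usual power/energy decomposition for flow batteries, where stack and power-electronics costs scale with rated power and electrolyte cost scales with stored energy; the coefficients below are placeholders, not the paper's fitted values.

```python
def vrfb_system_cost(power_mw, energy_mwh,
                     c_power=300.0,    # EUR per kW, stacks + power electronics (assumed)
                     c_energy=150.0,   # EUR per kWh, vanadium electrolyte (assumed)
                     c_fixed=2.0e5):   # balance-of-plant overhead, EUR (assumed)
    """Illustrative VRFB system cost split into power- and energy-driven parts."""
    return c_power * power_mw * 1e3 + c_energy * energy_mwh * 1e3 + c_fixed

for P, E in [(1, 8), (20, 160)]:
    print(f"{P:>2} MW / {E:>3} MWh  ->  {vrfb_system_cost(P, E)/1e6:6.2f} MEUR")
```

The decoupling of the power term from the energy term is the defining economic feature of flow batteries, which is why a cost function of this form is natural.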
Engineering Large Animal Species to Model Human Diseases.
Rogers, Christopher S
2016-07-01
Animal models are an important resource for studying human diseases. Genetically engineered mice are the most commonly used species and have made significant contributions to our understanding of basic biology, disease mechanisms, and drug development. However, they often fail to recreate important aspects of human diseases and thus can have limited utility as translational research tools. Developing disease models in species more similar to humans may provide a better setting in which to study disease pathogenesis and test new treatments. This unit provides an overview of the history of genetically engineered large animals and the techniques that have made their development possible. Factors to consider when planning a large animal model, including choice of species, type of modification and methodology, characterization, production methods, and regulatory compliance, are also covered. © 2016 by John Wiley & Sons, Inc.
Movement ecology: size-specific behavioral response of an invasive snail to food availability.
Snider, Sunny B; Gilliam, James F
2008-07-01
Immigration, emigration, migration, and redistribution describe processes that involve movement of individuals. These movements are an essential part of contemporary ecological models, and understanding how movement is affected by biotic and abiotic factors is important for effectively modeling ecological processes that depend on movement. We asked how phenotypic heterogeneity (body size) and environmental heterogeneity (food resource level) affect the movement behavior of an aquatic snail (Tarebia granifera), and whether including these phenotypic and environmental effects improves advection-diffusion models of movement. We postulated various elaborations of the basic advection diffusion model as a priori working hypotheses. To test our hypotheses we measured individual snail movements in experimental streams at high- and low-food resource treatments. Using these experimental movement data, we examined the dependency of model selection on resource level and body size using Akaike's Information Criterion (AIC). At low resources, large individuals moved faster than small individuals, producing a platykurtic movement distribution; including size dependency in the model improved model performance. In stark contrast, at high resources, individuals moved upstream together as a wave, and body size differences largely disappeared. The model selection exercise indicated that population heterogeneity is best described by the advection component of movement for this species, because the top-ranked model included size dependency in advection, but not diffusion. Also, all probable models included resource dependency. Thus population and environmental heterogeneities both influence individual movement behaviors and the population-level distribution kernels, and their interaction may drive variation in movement behaviors in terms of both advection rates and diffusion rates. A behaviorally informed modeling framework will integrate the sentient response of individuals in terms of movement and enhance our ability to accurately model ecological processes that depend on animal movement.
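A sketch of the model-selection exercise: maximum-likelihood fits of a drift-diffusion displacement model, with and without size-dependent advection, compared by AIC on simulated data. The Gaussian displacement kernel x ~ N(v t, 2 D t) is the standard solution of the advection-diffusion equation; all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = 1.0
size = rng.uniform(5, 25, 200)                     # body size covariate (simulated)
v_true = 0.5 + 0.08 * size                         # size-dependent advection (assumed)
x = rng.normal(v_true * t, np.sqrt(2 * 1.5 * t))   # displacements, with D = 1.5

def negloglik(params, size_dep):
    """Negative log-likelihood of displacements under x ~ N(v*t, 2*D*t)."""
    if size_dep:
        v0, v1, D = params
        mu = (v0 + v1 * size) * t
    else:
        v0, D = params
        mu = v0 * t
    if D <= 0:
        return np.inf
    var = 2 * D * t
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

for size_dep, x0 in [(False, [1.0, 1.0]), (True, [0.0, 0.0, 1.0])]:
    fit = minimize(negloglik, x0, args=(size_dep,), method="Nelder-Mead")
    aic = 2 * len(x0) + 2 * fit.fun                # AIC = 2k - 2 ln L
    print(f"size-dependent advection={size_dep}:  AIC={aic:.1f}")
```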
New Models and Methods for the Electroweak Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Linda
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of Dark Matter models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac gaugino models.
DocCube: Multi-Dimensional Visualization and Exploration of Large Document Sets.
ERIC Educational Resources Information Center
Mothe, Josiane; Chrisment, Claude; Dousset, Bernard; Alaux, Joel
2003-01-01
Describes a user interface that provides global visualizations of large document sets to help users formulate the query that corresponds to their information needs. Highlights include concept hierarchies that users can browse to specify and refine information needs; knowledge discovery in databases and texts; and multidimensional modeling.…
NASA Technical Reports Server (NTRS)
Rodriguez, G. (Editor)
1983-01-01
Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.
Marketing Library and Information Services: Comparing Experiences at Large Institutions.
ERIC Educational Resources Information Center
Noel, Robert; Waugh, Timothy
This paper explores some of the similarities and differences between publicizing information services within the academic and corporate environments, comparing the marketing experiences of Abbot Laboratories (Illinois) and Indiana University. It shows some innovative online marketing tools, including an animated gif model of a large, integrated…
Dean, Christopher; Kirkpatrick, Jamie B; Osborn, Jon; Doyle, Richard B; Fitzgerald, Nicholas B; Roxburgh, Stephen H
2018-03-01
There is high uncertainty in the contribution of land-use change to anthropogenic climate change, especially pertaining to below-ground carbon loss resulting from conversion of primary-to-secondary forest. Soil organic carbon (SOC) and coarse roots are concentrated close to tree trunks, a region usually unmeasured during soil carbon sampling. Soil carbon estimates and their variation with land-use change have not been correspondingly adjusted. Our aim was to deduce allometric equations that will allow improvement of SOC estimates and tree trunk carbon estimates, for primary forest stands that include large trees in rugged terrain. Terrestrial digital photography, photogrammetry and GIS software were used to produce 3D models of the buttresses, roots and humus mounds of large trees in primary forests dominated by Eucalyptus regnans in Tasmania. Models of 29 in situ eucalypts were made and analysed. 3D models of example eucalypt roots, logging debris, rainforest tree species, fallen trees, branches, root and trunk slices, and soil profiles were also derived. Measurements in 2D, from earlier work, of three buttress 'logs' were added to the data set. The 3D models had high spatial resolution. The modelling allowed checking and correction of field measurements. Tree anatomical detail was formulated, such as buttress shape, humus volume, root volume in the under-sampled zone and trunk hollow area. The allometric relationships developed link diameter at breast height and ground slope to SOC and tree trunk carbon, the latter including a correction for senescence. These formulae can be applied to stand-level carbon accounting. The formulae allow the typically measured, inter-tree SOC to be corrected for not sampling near large trees. The 3D models developed are irreplaceable, being for increasingly rare, large trees, and they could be useful to other scientific endeavours.
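The kind of allometric relationship described, linking diameter at breast height and ground slope to a near-tree carbon pool, can be sketched as a log-log regression; the data and coefficients below are simulated placeholders, not the paper's fitted equations.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 29                                   # matching the number of modelled trees
dbh = rng.uniform(1.0, 6.0, n)           # diameter at breast height, m
slope = rng.uniform(0.0, 30.0, n)        # ground slope, degrees

# Simulated near-tree SOC pool (kg C) with an assumed allometry plus noise.
soc = np.exp(4.0 + 1.8 * np.log(dbh) + 0.01 * slope + rng.normal(0, 0.1, n))

# Fit ln(SOC) = b0 + b1*ln(DBH) + b2*slope by least squares.
Xd = np.column_stack([np.ones(n), np.log(dbh), slope])
coef, *_ = np.linalg.lstsq(Xd, np.log(soc), rcond=None)
print("b0, b1, b2 =", np.round(coef, 3))
```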
ERIC Educational Resources Information Center
Rijmen, Frank; Jeon, Minjeong; von Davier, Matthias; Rabe-Hesketh, Sophia
2014-01-01
Second-order item response theory models have been used for assessments consisting of several domains, such as content areas. We extend the second-order model to a third-order model for assessments that include subdomains nested in domains. Using a graphical model framework, it is shown how the model does not suffer from the curse of…
Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.
Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi
2015-02-01
We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
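As a concrete illustration of the cdf-inverse cdf construction above, here is a minimal sketch assuming an AR(1) latent Gaussian series and a gamma marginal, both chosen purely for illustration (the paper's marginal is nonparametric Bayesian):

```python
import numpy as np
from scipy import stats

# Sketch of a (Gaussian) copula transformed autoregressive construction:
# normal-theory AR(1) dynamics, pushed through a cdf / inverse-cdf pair to
# obtain a non-Gaussian marginal while keeping the internal dynamics.
rng = np.random.default_rng(0)

phi, n = 0.8, 1000                 # AR(1) coefficient, series length
z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):              # stationary latent Gaussian dynamics
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

u = stats.norm.cdf(z)              # cdf transform: uniforms, dependence kept
x = stats.gamma.ppf(u, a=2.0)      # inverse cdf: gamma marginal, AR-like memory
```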
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to deficiencies in the physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
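To make the variational constraint idea concrete, a toy sketch follows: the smallest weighted adjustment to a background state that exactly satisfies one linear budget constraint, solved with a Lagrange multiplier. The real analysis constrains column mass, moisture, and energy budgets jointly; everything below is illustrative.

```python
import numpy as np

# Toy variationally constrained analysis: find the smallest weighted
# adjustment to a background state x_b such that a linear budget constraint
# a @ x = b holds exactly. Minimizing (x - x_b)' W (x - x_b) subject to the
# constraint gives the closed-form Lagrange-multiplier solution below.
def constrained_analysis(x_b, W, a, b):
    Winv = np.linalg.inv(W)
    lam = (a @ x_b - b) / (a @ Winv @ a)   # Lagrange multiplier (scaled)
    return x_b - lam * (Winv @ a)

x_b = np.array([1.0, 2.0, 3.0])            # background values (illustrative)
W = np.diag([1.0, 0.5, 2.0])               # weights (inverse error variances)
a = np.array([1.0, 1.0, 1.0])              # budget operator, e.g. a column sum
x_a = constrained_analysis(x_b, W, a, b=5.0)
print(x_a, a @ x_a)                        # constraint satisfied exactly: 5.0
```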
Integrative analysis of the Caenorhabditis elegans genome by the modENCODE project.
Gerstein, Mark B; Lu, Zhi John; Van Nostrand, Eric L; Cheng, Chao; Arshinoff, Bradley I; Liu, Tao; Yip, Kevin Y; Robilotto, Rebecca; Rechtsteiner, Andreas; Ikegami, Kohta; Alves, Pedro; Chateigner, Aurelien; Perry, Marc; Morris, Mitzi; Auerbach, Raymond K; Feng, Xin; Leng, Jing; Vielle, Anne; Niu, Wei; Rhrissorrakrai, Kahn; Agarwal, Ashish; Alexander, Roger P; Barber, Galt; Brdlik, Cathleen M; Brennan, Jennifer; Brouillet, Jeremy Jean; Carr, Adrian; Cheung, Ming-Sin; Clawson, Hiram; Contrino, Sergio; Dannenberg, Luke O; Dernburg, Abby F; Desai, Arshad; Dick, Lindsay; Dosé, Andréa C; Du, Jiang; Egelhofer, Thea; Ercan, Sevinc; Euskirchen, Ghia; Ewing, Brent; Feingold, Elise A; Gassmann, Reto; Good, Peter J; Green, Phil; Gullier, Francois; Gutwein, Michelle; Guyer, Mark S; Habegger, Lukas; Han, Ting; Henikoff, Jorja G; Henz, Stefan R; Hinrichs, Angie; Holster, Heather; Hyman, Tony; Iniguez, A Leo; Janette, Judith; Jensen, Morten; Kato, Masaomi; Kent, W James; Kephart, Ellen; Khivansara, Vishal; Khurana, Ekta; Kim, John K; Kolasinska-Zwierz, Paulina; Lai, Eric C; Latorre, Isabel; Leahey, Amber; Lewis, Suzanna; Lloyd, Paul; Lochovsky, Lucas; Lowdon, Rebecca F; Lubling, Yaniv; Lyne, Rachel; MacCoss, Michael; Mackowiak, Sebastian D; Mangone, Marco; McKay, Sheldon; Mecenas, Desirea; Merrihew, Gennifer; Miller, David M; Muroyama, Andrew; Murray, John I; Ooi, Siew-Loon; Pham, Hoang; Phippen, Taryn; Preston, Elicia A; Rajewsky, Nikolaus; Rätsch, Gunnar; Rosenbaum, Heidi; Rozowsky, Joel; Rutherford, Kim; Ruzanov, Peter; Sarov, Mihail; Sasidharan, Rajkumar; Sboner, Andrea; Scheid, Paul; Segal, Eran; Shin, Hyunjin; Shou, Chong; Slack, Frank J; Slightam, Cindie; Smith, Richard; Spencer, William C; Stinson, E O; Taing, Scott; Takasaki, Teruaki; Vafeados, Dionne; Voronina, Ksenia; Wang, Guilin; Washington, Nicole L; Whittle, Christina M; Wu, Beijing; Yan, Koon-Kiu; Zeller, Georg; Zha, Zheng; Zhong, Mei; Zhou, Xingliang; Ahringer, Julie; Strome, Susan; Gunsalus, Kristin C; Micklem, Gos; Liu, X Shirley; Reinke, Valerie; Kim, Stuart K; Hillier, LaDeana W; Henikoff, Steven; Piano, Fabio; Snyder, Michael; Stein, Lincoln; Lieb, Jason D; Waterston, Robert H
2010-12-24
We systematically generated large-scale data sets to improve genome annotation for the nematode Caenorhabditis elegans, a key model organism. These data sets include transcriptome profiling across a developmental time course, genome-wide identification of transcription factor-binding sites, and maps of chromatin organization. From this, we created more complete and accurate gene models, including alternative splice forms and candidate noncoding RNAs. We constructed hierarchical networks of transcription factor-binding and microRNA interactions and discovered chromosomal locations bound by an unusually large number of transcription factors. Different patterns of chromatin composition and histone modification were revealed between chromosome arms and centers, with similarly prominent differences between autosomes and the X chromosome. Integrating data types, we built statistical models relating chromatin, transcription factor binding, and gene expression. Overall, our analyses ascribed putative functions to most of the conserved genome.
Earthquake Hazard and Risk in New Zealand
NASA Astrophysics Data System (ADS)
Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.
2014-12-01
To quantify risk in New Zealand we examine the impact of updating the seismic hazard model. The previous RMS New Zealand hazard model is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS model, based on Stirling et al. (2012), will update several key source parameters. These updates include: implementation of a new set of crustal faults including multi-segment ruptures, updating the subduction zone geometry and recurrence rate, and implementing new background rates and a robust methodology for modeling background earthquake sources. The number of crustal faults has increased by over 200 from the 2002 model; the 2012 model now includes over 500 individual fault sources, with the addition of many offshore faults in northern, east-central, and southwest regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated model with those from the previous version. Changes between the two maps are discussed, as well as the drivers for these changes. We examine the impact the hazard model changes have on New Zealand earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the risk exposure in the country (Auckland) lies in the region of lowest hazard, where information about the location of faults is limited and distributed seismicity is modeled by averaged Mw-frequency relationships on area sources. Thus small changes to the background rates can have a large impact on the risk profile for the area. Wellington, another area of high exposure, is particularly sensitive to how the Hikurangi subduction zone and the Wellington fault are modeled. Minor changes to these sources have substantial impacts on the risk profile of the city and the country at large.
NASA Astrophysics Data System (ADS)
Mulcahy, J. P.; Walters, D. N.; Bellouin, N.; Milton, S. F.
2014-05-01
The inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is being increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as large dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing long-wave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high-latitude clean-air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented, with the large dust loading in the aerosol climatology enhancing the heat low over West Africa and weakening the AEJ. This study highlights the importance of including a more realistic treatment of aerosol-cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
Impacts of increasing the aerosol complexity in the Met Office global NWP model
NASA Astrophysics Data System (ADS)
Mulcahy, J. P.; Walters, D. N.; Bellouin, N.; Milton, S. F.
2013-11-01
Inclusion of the direct and indirect radiative effects of aerosols in high resolution global numerical weather prediction (NWP) models is being increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated including three dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as large dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing longwave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high latitude clean air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short range forecasts. However, the indirect aerosol effect leads to a strengthening of the low level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented, with the large dust loading in the aerosol climatology enhancing the heat low over West Africa and weakening the AEJ. This study highlights the importance of including a more realistic treatment of aerosol-cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
A new algorithm for construction of coarse-grained sites of large biomolecules.
Li, Min; Zhang, John Z H; Xia, Fei
2016-04-05
The development of coarse-grained (CG) models for large biomolecules remains a challenge in multiscale simulations, including a rigorous definition of CG representations for them. In this work, we proposed a new stepwise optimization imposed with the boundary-constraint (SOBC) algorithm to construct the CG sites of large biomolecules, based on the scheme of essential dynamics CG. By means of SOBC, we can rigorously derive the CG representations of biomolecules with less computational cost. The SOBC is particularly efficient for the CG definition of large systems with thousands of residues. The resulting CG sites can be parameterized as a CG model using the normal mode analysis based fluctuation matching method. Through normal mode analysis, the obtained modes of the CG model can accurately reflect the functionally related slow motions of biomolecules. The SOBC algorithm can be used for the construction of CG sites of large biomolecules such as F-actin and for the study of mechanical properties of biomaterials. © 2015 Wiley Periodicals, Inc.
Soliton configurations in generalized Mie electrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rybakov, Yu. P., E-mail: soliton4@mail.ru
2011-07-15
The generalization of the Mie electrodynamics within the scope of the effective 8-spinor field model is suggested, with the Lagrangian including a Higgs-like potential and higher degrees of the invariant A_μA^μ. Using the special Brioschi 8-spinor identity, we show that the model includes the Skyrme and the Faddeev models as particular cases. We investigate the large-distance asymptotic behavior of static solutions and estimate the electromagnetic contribution to the energy of the localized charged configuration.
NASA Astrophysics Data System (ADS)
Zhu, Hong; Huang, Mai; Sadagopan, Sriram; Yao, Hong
2017-09-01
With increasing vehicle fuel economy standards, automotive OEMs are widely using various AHSS grades, including DP, TRIP, CP and 3rd Gen AHSS, to reduce vehicle weight because of their good combination of strength and formability. As one of the enabling technologies for AHSS application, the requirement for accurate prediction of springback in cold stamped AHSS parts has stimulated a large number of investigations over the past decade into reversed loading paths at large strains and the constitutive modeling that follows. With the spectrum of complex loading histories occurring in production stamping processes, many challenges remain in this field, including issues of test data reliability, loading path representability, constitutive model robustness and non-unique constitutive parameter identification. In this paper, various testing approaches and constitutive models are reviewed briefly, and a systematic methodology spanning stress-strain characterization and constitutive model parameter identification for material card generation is presented in order to support automotive OEMs' needs in virtual stamping. This systematic methodology features a tension-compression test at large strain with a robust anti-buckling device and concurrent friction force correction, properly selected loading paths to represent material behavior during different springback modes, and the 10-parameter Yoshida model with knowledge-based parameter identification through nonlinear optimization. Validation cases for lab AHSS parts are also discussed to check the applicability of this methodology.
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features including the appearance context features and interaction context features to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts including scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.
Attitude tracking control of flexible spacecraft with large amplitude slosh
NASA Astrophysics Data System (ADS)
Deng, Mingle; Yue, Baozeng
2017-12-01
This paper is focused on attitude tracking control of a spacecraft that is equipped with a flexible appendage and a partially filled liquid propellant tank. The large amplitude liquid slosh is included by using a moving pulsating ball model that is further improved to estimate the settling location of liquid in microgravity or a zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modal method is employed. A hybrid controller that combines sliding mode control with an adaptive algorithm is designed for the spacecraft to perform attitude tracking. The proposed controller is proved to be asymptotically stable. A nonlinear model for the overall coupled system including spacecraft attitude dynamics, liquid slosh, structural vibration and control action is established. Numerical simulation results are presented to show the dynamic behaviors of the coupled system and to verify the effectiveness of the control approach when the spacecraft undergoes the disturbance produced by large amplitude slosh and appendage vibration. Lastly, the designed adaptive algorithm is found to be effective in improving the precision of attitude tracking.
Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors
NASA Technical Reports Server (NTRS)
Probst, David K.
1993-01-01
A scalable solution to the memory-latency problem is necessary to keep the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from degrading performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include: processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include: vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.
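The tolerance-by-overlap idea can be sketched in software terms: start the next high-latency fetch before computing on the current block, so communication hides behind computation. The fetch/compute stand-ins below are hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Toy latency tolerance: overlap high-latency "remote" fetches with local
# computation by prefetching the next block while using the current one.
def fetch(block_id):                        # simulated high-latency read
    time.sleep(0.01)
    return [block_id] * 1000

def compute(block):                         # simulated useful local work
    return sum(block)

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch, 0)          # start the first fetch
    total = 0
    for nxt in range(1, 10):
        block = future.result()             # waits only if fetch unfinished
        future = pool.submit(fetch, nxt)    # prefetch the next block...
        total += compute(block)             # ...while computing on this one
    total += compute(future.result())
print(total)
```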
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
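A minimal sketch of one predict-weight-resample cycle of such Sequential Monte Carlo assimilation, assuming a hypothetical linear corridor of zones with binary zone sensors rather than the paper's graph-based model:

```python
import numpy as np

# One particle-filter cycle for occupancy estimation: predict occupant
# motion, weight particles by sensor likelihood, then resample.
rng = np.random.default_rng(1)
n_zones, n_particles = 5, 200
particles = rng.integers(0, n_zones, n_particles)  # particle = occupant zone

def step(particles, sensor_zone, p_detect=0.9):
    moves = rng.integers(-1, 2, particles.size)     # move left / stay / right
    particles = np.clip(particles + moves, 0, n_zones - 1)
    w = np.where(particles == sensor_zone, p_detect, 1.0 - p_detect)
    w /= w.sum()                                    # sensor likelihood weights
    return particles[rng.choice(particles.size, particles.size, p=w)]

particles = step(particles, sensor_zone=2)
print(np.bincount(particles, minlength=n_zones) / n_particles)  # zone posterior
```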
Dark energy, α-attractors, and large-scale structure surveys
NASA Astrophysics Data System (ADS)
Akrami, Yashar; Kallosh, Renata; Linde, Andrei; Vardanyan, Valeri
2018-06-01
Over the last few years, a large family of cosmological attractor models has been discovered, which can successfully match the latest inflation-related observational data. Many of these models can also describe a small cosmological constant Λ, which provides the most natural description of the present stage of the cosmological acceleration. In this paper, we study α-attractor models with dynamical dark energy, including the cosmological constant Λ as a free parameter. Predominantly, the models with Λ > 0 converge to the asymptotic regime with the equation of state w = −1. However, there are some models with w ≠ −1 which are compatible with the current observations. In the simplest models with Λ = 0, one has the tensor-to-scalar ratio r = 12α/N² and the asymptotic equation of state w = −1 + 2/(9α) (which in general differs from its present value). For example, in the seven disk M-theory related model with α = 7/3 one finds r ~ 10⁻² and the asymptotic equation of state is w ~ −0.9. Future observations, including large-scale structure surveys as well as B-mode detectors, will test these, as well as more general models presented here. We also discuss gravitational reheating in models of quintessential inflation and argue that its investigation may be interesting from the point of view of inflationary cosmology. Such models require a much greater number of e-folds, and therefore predict a spectral index n_s that can exceed the value in more conventional models by about 0.006. This suggests a way to distinguish the conventional inflationary models from the models of quintessential inflation, even if they predict w = −1.
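As a quick check of the quoted numbers (assuming N ≈ 55 e-folds, a typical value not stated in the abstract):

```latex
r = \frac{12\alpha}{N^2} = \frac{12\cdot 7/3}{55^2} = \frac{28}{3025}
  \approx 9.3\times 10^{-3} \sim 10^{-2},
\qquad
w = -1 + \frac{2}{9\alpha} = -1 + \frac{2}{21} \approx -0.90 .
```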
TABULATED EQUIVALENT SDR FLAMELET (TESF) MODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
KUNDU, PRITHWISH; AMEEN, MUHSIN MOHAMMED; UNNIKRISHNAN, UMESH
The code consists of an implementation of a novel tabulated combustion model for non-premixed flames in CFD solvers. This novel technique/model is used to implement an unsteady flamelet tabulation without using progress variables for non-premixed flames. It also has the capability to include history effects, which is unique within tabulated flamelet models. The flamelet table generation code can be run in parallel to generate tables with large chemistry mechanisms in relatively short wall clock times. The combustion model/code reads these tables. This framework can be coupled with any CFD solver with RANS as well as LES turbulence models. This framework enables CFD solvers to run large chemistry mechanisms with large numbers of grid cells at relatively low computational cost. Currently it has been coupled with the Converge CFD code and validated against available experimental data. This model can be used to simulate non-premixed combustion in a variety of applications like reciprocating engines, gas turbines and industrial burners operating over a wide range of fuels.
Large memory capacity in chaotic artificial neural networks: a view of the anti-integrable limit.
Lin, Wei; Chen, Guanrong
2009-08-01
In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models.
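A small numerical sketch of the sensitive dependence underlying such chaotic dynamics: two nearby states of a sinusoidal-activation network are iterated and their separation tracked. The random weights and gain are illustrative choices, not taken from the paper.

```python
import numpy as np

# Two trajectories starting 1e-8 apart are iterated through x -> sin(W x);
# roughly exponential growth of their separation signals chaotic dynamics.
rng = np.random.default_rng(2)
n = 8
W = rng.standard_normal((n, n)) * 3.0        # strong coupling (illustrative)

x = rng.standard_normal(n)
y = x + 1e-8                                 # tiny perturbation
for t in range(30):
    x, y = np.sin(W @ x), np.sin(W @ y)
    if (t + 1) % 10 == 0:
        print(t + 1, np.linalg.norm(x - y))  # separation grows toward O(1)
```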
A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kim, Hwa-Soo
1991-01-01
A large-grain mapping method is presented for numerically oriented applications on multiprocessor systems. The method is based on the large-grain data flow representation of the input application and assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented very effectively in large-grain data flow format. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best matching' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
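Step (3) can be sketched as a greedy earliest-finish-time placement, assuming known per-block compute times and a uniform interprocessor communication cost (the thesis's actual priority scheme and cost model are more general):

```python
# Greedy earliest-finish-time placement of large-grain data-flow blocks
# onto processors, charging a communication cost when a block's producer
# sits on a different CPU.
comp = {"A": 4, "B": 3, "C": 2, "D": 5}          # block compute times
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
comm = 1                                          # cost if producer is remote
n_cpus = 2

ready_at = [0] * n_cpus                           # when each CPU frees up
placed, finish = {}, {}
for blk in ["A", "B", "C", "D"]:                  # topological order
    best = None
    for cpu in range(n_cpus):
        # inputs available once predecessors finish (+ comm if remote)
        data = max([finish[d] + (comm if placed[d] != cpu else 0)
                    for d in deps[blk]], default=0)
        done = max(ready_at[cpu], data) + comp[blk]
        if best is None or done < best[0]:
            best = (done, cpu)
    finish[blk], placed[blk] = best
    ready_at[best[1]] = best[0]
print(placed, finish)
```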
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shaocheng; Tang, Shuaiqi; Zhang, Yunyan
2016-07-01
Single-Column Model (SCM) Forcing Data are derived from the ARM facility observational data using the constrained variational analysis approach (Zhang and Lin, 1997; Zhang et al., 2001). The resulting products include both the large-scale forcing terms and the evaluation fields, which can be used for driving the SCMs and Cloud Resolving Models (CRMs) and validating model simulations.
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and of future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing the variability among multi-model outputs of terrestrial biosphere models in Japan. Using nine terrestrial biosphere models (Support Vector Machine based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations of model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet site history, analysis of model structure changes, and a more objective model calibration procedure should be included in further analyses.
Retinal Remodeling in the Tg P347L Rabbit, a Large-Eye Model of Retinal Degeneration
Jones, Bryan William; Kondo, Mineo; Terasaki, Hiroko; Watt, Carl Brock; Rapp, Kevin; Anderson, James; Lin, Yanhua; Shaw, Marguerite Victoria; Yang, Jia-Hui; Marc, Robert Edward
2013-01-01
Retinitis pigmentosa (RP) is an inherited blinding disease characterized by progressive loss of retinal photoreceptors. There are numerous rodent models of retinal degeneration, but most are poor platforms for interventions that will translate into clinical practice. The rabbit possesses a number of desirable qualities for a model of retinal disease including a large eye and an existing and substantial knowledge base in retinal circuitry, anatomy, and ophthalmology. We have analyzed degeneration, remodeling, and reprogramming in a rabbit model of retinal degeneration, expressing a rhodopsin proline 347 to leucine transgene in a TgP347L rabbit as a powerful model to study the pathophysiology and treatment of retinal degeneration. We show that disease progression in the TgP347L rabbit closely tracks human cone-sparing RP, including the cone-associated preservation of bipolar cell signaling and triggering of reprogramming. The relatively fast disease progression makes the TgP347L rabbit an excellent model for gene therapy, cell biological intervention, progenitor cell transplantation, surgical interventions, and bionic prosthetic studies. PMID:21681749
NASA Astrophysics Data System (ADS)
Tohidi, Ali; Gollner, Michael J.; Xiao, Huahua
2018-01-01
Fire whirls present a powerful intensification of combustion, long studied in the fire research community because of the dangers they present during large urban and wildland fires. However, their destructive power has hidden many features of their formation, growth, and propagation. Therefore, most of what is known about fire whirls comes from scale modeling experiments in the laboratory. Both the methods of formation, which are dominated by wind and geometry, and the inner structure of the whirl, including velocity and temperature fields, have been studied at this scale. Quasi-steady fire whirls directly over a fuel source form the bulk of current experimental knowledge, although many other cases exist in nature. The structure of fire whirls has yet to be reliably measured at large scales; however, scaling laws have been relatively successful in modeling the conditions for formation from small to large scales. This review surveys the state of knowledge concerning the fluid dynamics of fire whirls, including the conditions for their formation, their structure, and the mechanisms that control their unique state. We highlight recent discoveries and survey potential avenues for future research, including using the properties of fire whirls for efficient remediation and energy generation.
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ≈ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
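For reference, the hierarchical amplitudes referred to above follow the standard definitions (not spelled out in the abstract), with ζ the three-point and ξ the two-point correlation function:

```latex
Q_3 \equiv \frac{\zeta_{123}}{\xi_{12}\xi_{23} + \xi_{23}\xi_{31} + \xi_{31}\xi_{12}},
\qquad
S_3 \equiv \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}} .
```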
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is very important in the application of distributed hydrological models, and it poses several challenges, including the effect of the model's spatial resolution on model performance and accuracy. To assess this resolution effect, the Liuxihe distributed hydrological model was built at five resolutions: 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m and 200 m × 200 m. The purpose is to find the best resolution for the Liuxihe model in large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. The terrain data, i.e. the digital elevation model (DEM), soil type and land use type, are freely downloaded from the web. The model parameters are optimized by using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the parameter uncertainty that exists when model parameters are derived physically. Model resolutions from 200 m × 200 m to 1000 m × 1000 m are used for modeling floods of the Liujiang River basin with the Liuxihe model in this study. The best spatial resolution for flood simulation and forecasting is 200 m × 200 m, and as the spatial resolution coarsens, model performance and accuracy deteriorate. When the model resolution is 1000 m × 1000 m, the flood simulation and forecasting results are the worst, and the river channel network derived at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold spatial resolution for modeling floods of the Liujiang River basin is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
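A bare-bones sketch of particle swarm optimization as used for such parameter calibration; the objective below is a placeholder for the model's simulated-versus-observed flood error, and all coefficients are generic textbook values:

```python
import numpy as np

# Minimal PSO: a swarm of candidate parameter sets is pulled toward each
# particle's personal best and the global best until the error is minimized.
rng = np.random.default_rng(3)

def objective(p):                      # stand-in for simulation error
    return np.sum((p - 0.7) ** 2, axis=1)

n_particles, n_dims, iters = 30, 4, 100
x = rng.uniform(0, 1, (n_particles, n_dims))     # normalized parameter sets
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, n_dims))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)                      # keep parameters in bounds
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
print(gbest)                                      # ≈ 0.7 in each dimension
```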
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problems but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS are illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Handling Qualities of Large Flexible Aircraft. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Poopaka, S.
1980-01-01
The effects on handling qualities of elastic mode interactions with the rigid body dynamics of a large flexible aircraft are studied by mathematical computer simulation. An analytical method is developed to predict pilot ratings when there is severe mode interaction. This is done by extending the optimal control model of the human pilot response to include the mode decomposition mechanism. The handling qualities are determined for a longitudinal tracking task using a large flexible aircraft, with parametric variations in the undamped natural frequencies of the two lowest-frequency symmetric elastic modes made to induce varying amounts of mode interaction.
High Throughput Exposure Estimation Using NHANES Data (SOT)
In the ExpoCast project, high throughput (HT) exposure models enable rapid screening of large numbers of chemicals for exposure potential. Evaluation of these models requires empirical exposure data and due to the paucity of human metabolism/exposure data such evaluations includ...
Asymptotically Safe Standard Model via Vectorlike Fermions.
Mann, R B; Meffe, J R; Sannino, F; Steele, T G; Wang, Z W; Zhang, C
2017-12-29
We construct asymptotically safe extensions of the standard model by adding gauged vectorlike fermions. Using large number-of-flavor techniques we argue that all gauge couplings, including the hypercharge and, under certain conditions, the Higgs coupling, can achieve an interacting ultraviolet fixed point.
Asymptotically Safe Standard Model via Vectorlike Fermions
NASA Astrophysics Data System (ADS)
Mann, R. B.; Meffe, J. R.; Sannino, F.; Steele, T. G.; Wang, Z. W.; Zhang, C.
2017-12-01
We construct asymptotically safe extensions of the standard model by adding gauged vectorlike fermions. Using large number-of-flavor techniques we argue that all gauge couplings, including the hypercharge and, under certain conditions, the Higgs coupling, can achieve an interacting ultraviolet fixed point.
NASA Astrophysics Data System (ADS)
Gad-El-Hak, Mohamed
"Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.
NASA Astrophysics Data System (ADS)
Prasad, K.
2017-12-01
Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of various natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft and tower based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains. Simulated integrated methane enhancements will be presented and compared with results obtained from spectrometer data to estimate the temporally evolving methane flux during the Aliso Canyon blowout.
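The nudging step at the heart of such one-way coupling can be sketched in a few lines; the relaxation timescale and fields below are illustrative, and the actual FDS-WRF coupling operates on full 3-D states:

```python
import numpy as np

# One-way nudging: relax the fine-scale (LES) wind field toward the
# mesoscale (WRF-like) driver field on a timescale tau.
def nudge(u_les, u_meso, dt, tau):
    return u_les + (dt / tau) * (u_meso - u_les)

u_les = np.array([1.0, 2.0, 3.0])    # LES winds at some grid points (m/s)
u_meso = np.array([2.0, 2.0, 2.0])   # interpolated WRF winds (m/s)
for _ in range(100):
    u_les = nudge(u_les, u_meso, dt=1.0, tau=50.0)
print(u_les)                          # drifts toward the WRF values
```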
Measurement and prediction of broadband noise from large horizontal axis wind turbine generators
NASA Technical Reports Server (NTRS)
Grosveld, F. W.; Shepherd, K. P.; Hubbard, H. H.
1995-01-01
A method is presented for predicting the broadband noise spectra of large wind turbine generators. It includes contributions from such noise sources as the inflow turbulence to the rotor, the interactions between the turbulent boundary layers on the blade surfaces with their trailing edges and the wake due to a blunt trailing edge. The method is partly empirical and is based on acoustic measurements of large wind turbines and airfoil models. Spectra are predicted for several large machines including the proposed MOD-5B. Measured data are presented for the MOD-2, the WTS-4, the MOD-OA, and the U.S. Windpower Inc. machines. Good agreement is shown between the predicted and measured far field noise spectra.
Brito, Thiago V.; Morley, Steven K.
2017-10-25
A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
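A sketch of a cost function with explicit magnitude and angular terms, in the spirit of the τ described above; the weights and exact functional form are assumptions, since the abstract does not spell them out:

```python
import numpy as np

# Combined magnitude + angle misfit between a modeled and an observed
# magnetic field vector; weights w_mag, w_ang are illustrative.
def tau(B_model, B_obs, w_mag=1.0, w_ang=1.0):
    m_mod, m_obs = np.linalg.norm(B_model), np.linalg.norm(B_obs)
    mag_err = abs(m_mod - m_obs) / m_obs                 # relative |B| error
    cosang = np.dot(B_model, B_obs) / (m_mod * m_obs)
    ang_err = np.arccos(np.clip(cosang, -1.0, 1.0))      # angle between vectors
    return w_mag * mag_err + w_ang * ang_err

print(tau(np.array([95.0, 5.0, 0.0]), np.array([100.0, 0.0, 0.0])))
```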
Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?
NASA Technical Reports Server (NTRS)
Lum, Karen; Hihn, Jairus; Menzies, Tim
2006-01-01
While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.
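One way to make the pruning recommendation concrete: calibrate a log-linear effort model and drop any effort multiplier that does not improve leave-one-out error. The data, model form, and pruning rule below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

# Calibrate log(effort) ~ a + b*log(KSLOC) + sum_j log(EM_j) and greedily
# prune multipliers that do not help held-out (leave-one-out) error.
rng = np.random.default_rng(4)
n = 40
ksloc = rng.uniform(10, 500, n)
ems = rng.uniform(0.8, 1.3, (n, 5))                # candidate multipliers
y = 2.5 * ksloc**1.05 * ems[:, 0] * rng.lognormal(0, 0.3, n)  # only EM 0 real

def cv_error(cols):
    X = np.column_stack([np.ones(n), np.log(ksloc)] +
                        [np.log(ems[:, j]) for j in cols])
    err = 0.0
    for i in range(n):                             # leave-one-out CV
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[keep], np.log(y[keep]), rcond=None)
        err += (X[i] @ beta - np.log(y[i])) ** 2
    return err / n

cols = list(range(5))
for j in range(4, -1, -1):                         # greedy backward pruning
    trial = [c for c in cols if c != j]
    if cv_error(trial) <= cv_error(cols):
        cols = trial
print("kept multipliers:", cols)                   # typically keeps column 0
```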
A holistic approach for large-scale derived flood frequency analysis
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno
2017-04-01
Spatial consistency, which has usually been disregarded because of reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequency at large scales. A large-scale two-component model has been established for simulating very long-term multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km², including Germany and parts of nearby countries. The model performance has been examined multi-objectively, with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
Multicomponent phase-field model for extremely large partition coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welland, Michael J.; Wolf, Dieter; Guyer, Jonathan E.
2014-01-01
We develop a multicomponent phase-field model specially formulated to robustly simulate concentration variations from molar to atomic magnitudes across an interface, i.e., partition coefficients in excess of 10^±23, such as may be the case with species which are predominant in one phase and insoluble in the other. Substitutional interdiffusion on a normal lattice and concurrent interstitial diffusion are included. The composition in the interface follows the approach of Kim, Kim, and Suzuki [Phys. Rev. E 60, 7186 (1999)] and is compared to that of Wheeler, Boettinger, and McFadden [Phys. Rev. A 45, 7424 (1992)] in the context of large partitioning. The model successfully reproduces analytical solutions for binary diffusion couples and solute trapping for the demonstrated cases of extremely large partitioning.
Living Design Memory: Framework, Implementation, Lessons Learned.
ERIC Educational Resources Information Center
Terveen, Loren G.; And Others
1995-01-01
Discusses large-scale software development and describes the development of the Designer Assistant to improve software development effectiveness. Highlights include the knowledge management problem; related work, including artificial intelligence and expert systems, software process modeling research, and other approaches to organizational memory;…
Interactive Computing and Processing of NASA Land Surface Observations Using Google Earth Engine
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Burks, Jason; Bell, Jordan
2016-01-01
Google's Earth Engine offers a "big data" approach to processing large volumes of NASA and other remote sensing products. https://earthengine.google.com/ Interfaces include a JavaScript or Python-based API, useful for accessing and processing long periods of record of Landsat and MODIS observations. Other data sets are frequently added, including weather and climate model data sets, etc. Demonstrations here focus on exploratory efforts to perform land surface change detection related to severe weather and other disaster events.
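A minimal Earth Engine Python sketch in the spirit of the change-detection demonstrations described; the dataset ID, dates, and region are placeholders:

```python
import ee

# Compare mean MODIS NDVI composites before and after an event date over a
# small region, then summarize the change. Requires prior EE authentication.
ee.Initialize()

region = ee.Geometry.Rectangle([-87.0, 34.5, -86.5, 35.0])
ndvi = ee.ImageCollection('MODIS/006/MOD13Q1').select('NDVI')

before = ndvi.filterDate('2016-03-01', '2016-04-01').mean()
after = ndvi.filterDate('2016-05-01', '2016-06-01').mean()
change = after.subtract(before)

stats = change.reduceRegion(reducer=ee.Reducer.mean(),
                            geometry=region, scale=250)
print(stats.getInfo())     # mean NDVI change over the region
```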
Large-Scale Machine Learning for Classification and Search
ERIC Educational Resources Information Center
Liu, Wei
2012-01-01
With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…
Structure analysis for hole-nuclei close to 132Sn by a large-scale shell-model calculation
NASA Astrophysics Data System (ADS)
Wang, Han-Kui; Sun, Yang; Jin, Hua; Kaneko, Kazunari; Tazaki, Shigeru
2013-11-01
The structure of neutron-rich nuclei with a few holes with respect to the doubly magic nucleus 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including orbitals that allow both neutron and proton core excitations, an effective interaction for the extended pairing-plus-quadrupole model with monopole corrections is tested through detailed comparison between the calculations and experimental data. By using the experimental energy of the core-excited 21/2+ level in 131In as a benchmark, monopole corrections are determined that describe the size of the neutron N=82 shell gap. The level spectra, up to 5 MeV of excitation, in 131In, 131Sn, 130In, 130Cd, and 130Sn are well described and clearly explained by couplings of single-hole orbitals and by core excitations.
NASA Technical Reports Server (NTRS)
Scott, Carl D.
2004-01-01
Chemical kinetic models for the nucleation and growth of clusters and single-walled carbon nanotube (SWNT) growth are developed for numerical simulations of the production of SWNTs. Two models that involve evaporation and condensation of carbon and metal catalysts are discussed: a full model involving all carbon clusters up to C80, and a reduced model. The full model is based on a fullerene model, but nickel and carbon/nickel cluster reactions are added to form SWNTs from soot and fullerenes. The full model has a large number of species--so large that incorporating them into a flow-field computation for simulating laser ablation and arc processes requires that they be simplified. The model is reduced by defining large clusters that represent many clusters of various sizes. Comparisons are given between these models for cases that may be applicable to arc and laser ablation production. Solutions to the system of chemical rate equations of these models for a ramped temperature profile show that production of various species, including SWNTs, agrees to within about 50% for a fast ramp, and within 10% for a slower temperature decay time.
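A minimal sketch of integrating cluster rate equations under a decaying temperature profile, in the spirit of the comparison above. This is a toy two-species condensation/evaporation system, not Scott's actual mechanism; all rates and the temperature law are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def temperature(t, T0=3500.0, tau=1.0e-3):
    """Exponentially decaying temperature ramp [K]; tau is assumed."""
    return 300.0 + (T0 - 300.0) * np.exp(-t / tau)

def rates(t, y):
    """y = [C1, C2]: atomic carbon and a lumped 'large cluster' species."""
    C1, C2 = y
    T = temperature(t)
    k_cond = 1.0e3 * np.exp(-2000.0 / T)    # condensation (assumed Arrhenius form)
    k_evap = 1.0e6 * np.exp(-30000.0 / T)   # evaporation (assumed)
    r = k_cond * C1**2 - k_evap * C2
    return [-2.0 * r, r]

# Stiff-capable integrator, since rate constants span many decades.
sol = solve_ivp(rates, (0.0, 5.0e-3), [1.0, 0.0], method='LSODA')
print('final C1, C2:', sol.y[:, -1])
```

Comparing full and reduced mechanisms then amounts to running the same ramp through both systems and differencing the final species fractions.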
Earth observing system instrument pointing control modeling for polar orbiting platforms
NASA Technical Reports Server (NTRS)
Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.
1987-01-01
An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced order platform models with core and instrument pointing control loops added are then described. Time history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.
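The model-order-reduction step described above (balanced realization, then truncation of weakly controllable/observable modes) can be sketched as follows. The python-control package and the toy two-mode structural model are assumptions for illustration, not the paper's NASTRAN-derived models.

```python
import numpy as np
import control

# Toy modal state-space model: two lightly damped modes (assumed values).
w = np.array([2.0, 15.0])          # modal frequencies [rad/s]
z = 0.005                          # damping ratio
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag(w**2), -2.0 * z * np.diag(w)]])
B = np.array([[0.0], [0.0], [1.0], [0.2]])
C = np.array([[1.0, 0.5, 0.0, 0.0]])
sys_full = control.ss(A, B, C, 0)

# Balanced truncation: keep the states that contribute most to the
# input-output behavior, discarding the rest.
sys_red = control.balred(sys_full, orders=2)
print(sys_red)
```

With hundreds of NASTRAN modes, the same procedure keeps simulation of slew-induced pointing jitter tractable while preserving the dominant dynamics.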
The Neutral Islands during the Late Epoch of Reionization
NASA Astrophysics Data System (ADS)
Xu, Yidong; Yue, Bin; Chen, Xuelei
2018-05-01
The large-scale structure of the ionization field during the epoch of reionization (EoR) can be modeled by the excursion set theory. While the growth of ionized regions during the early stage is described by the "bubble model", the shrinking of neutral regions after the percolation of the ionized regions calls for an "island model". An excursion set based analytical model and a semi-numerical code (islandFAST) have been developed. The ionizing background and the bubbles inside the islands are also included in the treatment. With two kinds of absorbers of ionizing photons, i.e. the large-scale under-dense neutral islands and the small-scale over-dense clumps, the ionizing background is self-consistently evolved in the model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Gang
Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed to both the lack of spatial resolution in the models, and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), the aqua-planet configuration of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach and on recently developed measures, the finite amplitude Wave Activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.
NASA Astrophysics Data System (ADS)
Smith, R. A.; Alexander, R. B.; Schwarz, G. E.
2003-12-01
Determining the effects of land use change (e.g. urbanization, deforestation) on water quality at large spatial scales has been difficult because water quality measurements in large rivers with heterogeneous basins show the integrated effects of multiple factors. Moreover, the observed effects of land use changes on water quality in small homogeneous stream basins may not be indicative of downstream effects (including effects on such ecologically relevant characteristics as nutrient levels and elemental ratios) because of loss processes occurring during downstream transport in river channels. In this study we used the USGS SPARROW (Spatially-Referenced Regression on Watersheds) models of total nitrogen (TN) and total phosphorus (TP) in streams and rivers of the conterminous US to examine the effects of various aspects of land use change on nutrient concentrations and flux from the pre-development era to the present. The models were calibrated with data from 370 long-term monitoring stations representing a wide range of basin sizes, land use/cover classes, climates, and physiographies. The non-linear formulation for each model includes 20+ statistically estimated parameters relating to land use/cover characteristics and other environmental variables such as temperature, soil conditions, hill slope, and the hydraulic characteristics of 2200 large lakes and reservoirs. Model predictions are available for 62,000 river/stream channel nodes. Model predictions of pre-development water quality compare favorably with nutrient data from 63 undeveloped (reference) sites. Error statistics are available for predictions at all nodes. Model simulations were chosen to compare the effects of selected aspects of land use change on nutrient levels at large and small basin scales, lacustrine and coastal receiving waters, and among the major US geographic regions.
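The SPARROW structure (source loads attenuated by first-order in-stream decay, with statistically estimated coefficients) can be sketched as a small nonlinear regression. This is a schematic with synthetic data, not the USGS code; the functional form and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
n = 370                                    # stations, as in the abstract
sources = rng.uniform(0.1, 10.0, (n, 2))   # two land-use source terms
ttime = rng.uniform(0.0, 5.0, n)           # in-stream travel time [days]

def sparrow_like(inputs, b1, b2, k):
    src, tt = inputs[:, :2], inputs[:, 2]
    # Source contributions attenuated by first-order in-stream decay.
    return (b1 * src[:, 0] + b2 * src[:, 1]) * np.exp(-k * tt)

inputs = np.column_stack([sources, ttime])
flux = sparrow_like(inputs, 2.0, 0.5, 0.3) * rng.lognormal(0.0, 0.1, n)

popt, pcov = curve_fit(sparrow_like, inputs, flux, p0=(1.0, 1.0, 0.1))
print('estimated (b1, b2, k):', popt)
```

The real model adds many more source, land-to-water delivery, and reservoir-attenuation terms, but the estimation principle is the same.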
New generation of elastic network models.
López-Blanco, José Ramón; Chacón, Pablo
2016-04-01
The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
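A minimal Gaussian Network Model sketch of the ENM idea: particles connected by springs within a cutoff, with slow modes obtained from the eigendecomposition of the Kirchhoff (connectivity) matrix. The toy chain coordinates, cutoff, and spring constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=2.0, size=(100, 3)), axis=0)  # toy chain
cutoff, gamma = 7.0, 1.0

# Contact map: spring between any pair closer than the cutoff.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
contact = (d < cutoff) & ~np.eye(len(coords), dtype=bool)

# Kirchhoff matrix: -gamma off-diagonal for contacts, node degree on diagonal.
kirchhoff = -gamma * contact.astype(float)
kirchhoff[np.diag_indices_from(kirchhoff)] = -kirchhoff.sum(axis=1)

evals, evecs = np.linalg.eigh(kirchhoff)
# The zero mode is rigid-body; the next few modes dominate large-scale motion.
print('slowest nonzero modes:', evals[1:4])
# Mean-square fluctuation profile from the pseudo-inverse diagonal.
msf = (evecs[:, 1:] ** 2 / evals[1:]).sum(axis=1)
print('MSF range:', msf.min(), msf.max())
```

The low computational cost mentioned in the review comes from this being a single sparse eigenproblem rather than a molecular dynamics run.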
Biomolecular Modeling in a Process Dynamics and Control Course
ERIC Educational Resources Information Center
Gray, Jeffrey J.
2006-01-01
I present modifications to the traditional course entitled, "Process dynamics and control," which I renamed "Modeling, dynamics, and control of chemical and biological processes." Additions include the central dogma of biology, pharmacokinetic systems, population balances, control of gene transcription, and large-scale…
A High-Resolution Model of the Beaufort Sea Circulation
NASA Astrophysics Data System (ADS)
Hedstrom, K.; Danielson, S. L.; Curchitser, E. N.; Lemieux, J. F.; Kasper, J.
2016-12-01
Configuration of and results from a coupled sea-ice ocean model of the Beaufort Sea shelf at 500 m resolution will be shown. Challenging features of the domain include large fresh water flux from the MacKenzie River, seasonal land-fast ice, and ice-covered open boundary conditions. A pan-Arctic domain provides boundary fields to an intermediate resolution (4 km) domain which in turn provides boundary fields to the Beaufort Shelf domain. These are all coupled ocean and sea-ice models (Regional Ocean Modeling System - myroms.org) and all are forced with river inputs from the ARDAT climatology (Whitefield et al., 2015), which includes heat content as well as flow rate. Coastal discharges are prescribed as lateral inflows distributed over the depth of the ocean-land interface. New in the Beaufort domain is the use of a landfast ice parameterization (Lemieux, 2015), which adds a large bottom stress to the ice when the estimated keel depth approaches that of the ocean.
Sotiropoulos, Stamatios N.; Brookes, Matthew J.; Woolrich, Mark W.
2018-01-01
Over long timescales, neuronal dynamics can be robust to quite large perturbations, such as changes in white matter connectivity and grey matter structure through processes including learning, aging, development and certain disease processes. One possible explanation is that robust dynamics are facilitated by homeostatic mechanisms that can dynamically rebalance brain networks. In this study, we simulate a cortical brain network using the Wilson-Cowan neural mass model with conduction delays and noise, and use inhibitory synaptic plasticity (ISP) to dynamically achieve a spatially local balance between excitation and inhibition. Using MEG data from 55 subjects we find that ISP enables us to simultaneously achieve high correlation with multiple measures of functional connectivity, including amplitude envelope correlation and phase locking. Further, we find that ISP successfully achieves local E/I balance, and can consistently predict the functional connectivity computed from real MEG data, for a much wider range of model parameters than is possible with a model without ISP. PMID:29474352
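A toy sketch of the homeostatic mechanism described above: a Vogels-style inhibitory plasticity rule steering a single Wilson-Cowan E-I node toward a target excitatory rate. All parameter values are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dt, steps, tau = 1e-4, 200000, 0.01   # 10 ms time constants
w_ee, w_ie = 16.0, 15.0               # fixed couplings (assumed)
w_ei = 8.0                            # inhibitory weight onto E, adapted by ISP
rho0, eta = 0.2, 5.0                  # target E rate and learning rate
E, I = 0.1, 0.1

for _ in range(steps):
    E += dt / tau * (-E + sigmoid(w_ee * E - w_ei * I + 1.0))
    I += dt / tau * (-I + sigmoid(w_ie * E - 2.0))
    # ISP: strengthen inhibition when E runs above target, weaken below.
    w_ei += dt * eta * I * (E - rho0)
    w_ei = max(w_ei, 0.0)

print(f'E = {E:.3f} (target {rho0}), inhibitory weight = {w_ei:.2f}')
```

In the full network model the same local rule is applied at every node, which is what lets E/I balance hold across a wide parameter range.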
NASA Astrophysics Data System (ADS)
Shao, Yaping; Liu, Shaofeng; Schween, Jan H.; Crewell, Susanne
2013-08-01
A model is developed for the large-eddy simulation (LES) of heterogeneous atmosphere and land-surface processes. This couples an LES model with a land-surface scheme. New developments are made to the land-surface scheme to ensure the adequate representation of atmosphere-land-surface transfers on the large-eddy scale. These include: (1) a multi-layer canopy scheme; (2) a method for flux estimates consistent with the large-eddy subgrid closure; and (3) an appropriate soil-layer configuration. The model is then applied to a heterogeneous region with 60-m horizontal resolution and the results are compared with ground-based and airborne measurements. The simulated sensible and latent heat fluxes are found to agree well with the eddy-correlation measurements. Good agreement is also found in the modelled and observed net radiation, ground heat flux, soil temperature and moisture. Based on the model results, we study the patterns of the sensible and latent heat fluxes, how such patterns come into existence, and how large eddies propagate and destroy land-surface signals in the atmosphere. Near the surface, the flux and land-use patterns are found to be closely correlated. In the lower boundary layer, small eddies bearing land-surface signals organize and develop into larger eddies, which carry the signals to considerably higher levels. As a result, the instantaneous flux patterns appear to be unrelated to the land-use patterns, but on average, the correlation between them is significant and persistent up to about 650 m. For a given land-surface type, the scatter of the fluxes amounts to several hundred W m^{-2}, due to (1) large-eddy randomness; (2) rapid large-eddy and surface feedback; and (3) local advection related to surface heterogeneity.
Running SW4 On New Commodity Technology Systems (CTS-1) Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Petersson, N. Anders; Pitarka, Arben
We have recently been running earthquake ground motion simulations with SW4 on the new capacity computing systems, called the Commodity Technology Systems - 1 (CTS-1) at Lawrence Livermore National Laboratory (LLNL). SW4 is a fourth order time domain finite difference code developed by LLNL and distributed by the Computational Infrastructure for Geodynamics (CIG). SW4 simulates seismic wave propagation in complex three-dimensional Earth models including anelasticity and surface topography. We are modeling near-fault earthquake strong ground motions for the purposes of evaluating the response of engineered structures, such as nuclear power plants and other critical infrastructure. Engineering analysis of structures requires the inclusion of high frequencies which can cause damage, but are often difficult to include in simulations because of the need for large memory to model fine grid spacing on large domains.
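A back-of-the-envelope estimate shows why high frequencies demand large memory. The points-per-wavelength rule of thumb is standard for finite-difference wave codes; the domain size, velocities, and per-point variable count below are assumptions, not SW4's actual internals.

```python
# Grid spacing set by the slowest wave and highest frequency to resolve.
vmin = 500.0   # slowest S-wave speed [m/s] (assumed near-surface value)
fmax = 10.0    # highest frequency of engineering interest [Hz]
ppw = 8        # points per wavelength, typical for 4th-order FD

h = vmin / (ppw * fmax)                       # required grid spacing [m]
nx, ny, nz = 100e3 / h, 100e3 / h, 40e3 / h   # 100 x 100 x 40 km domain
npts = nx * ny * nz
nvars = 30                                    # fields per grid point (assumed)
bytes_total = npts * nvars * 8                # double precision

print(f'h = {h:.2f} m, {npts:.2e} grid points, {bytes_total / 1e12:.0f} TB')
```

Doubling fmax halves h and multiplies the point count by eight, which is why even capacity systems fill up quickly at engineering frequencies.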
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro
2017-08-01
Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, first, fault parameters were estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw7.7), the 2001 El Salvador earthquake (Mw7.7), the 2004 El Astillero earthquake (Mw7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw7.3), which occurred off El Salvador and Nicaragua in Central America. Tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning purposes, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
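An illustrative sketch of turning an inverted magnitude into a simple rectangular fault via scaling relations and a depth-dependent rigidity. The coefficients follow commonly published scaling laws (Wells-Coppersmith-style area scaling, a standard Mw-to-moment conversion), not necessarily those used in the paper, and the rigidity profile is an assumption.

```python
import math

def fault_from_mw(mw, depth_km):
    m0 = 10.0 ** (1.5 * mw + 9.1)             # seismic moment [N m]
    # Rupture area from a magnitude scaling law (assumed coefficients).
    area_km2 = 10.0 ** (-3.49 + 0.91 * mw)
    length_km = math.sqrt(2.0 * area_km2)     # assume L = 2W aspect ratio
    width_km = length_km / 2.0
    # Crude depth-dependent rigidity [Pa]: stiffer with depth (assumed).
    mu = 3.0e10 + 1.0e9 * depth_km
    slip_m = m0 / (mu * area_km2 * 1.0e6)     # average slip from M0 = mu*A*D
    return length_km, width_km, slip_m

L, W, D = fault_from_mw(7.7, 20.0)            # a 1992 Nicaragua-sized event
print(f'L = {L:.0f} km, W = {W:.0f} km, slip = {D:.1f} m')
```

Lowering the rigidity for shallow events increases the inferred slip for the same moment, which is the mechanism that lets such schemes flag slow "tsunami earthquakes" like the 1992 event.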
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was studied. Required characteristics including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height were met by the new atmospheric density model.
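A toy exponential density model with multiplicative correction factors illustrates the structure of a "simple model fit to Jacchia" rather than the actual onboard formulation; all coefficients below are assumptions.

```python
import math

def density(h_km, local_solar_hour, f107=150.0, ap=4.0, doy=80):
    rho0, h0, H = 3.6e-10, 120.0, 60.0       # base profile [kg/m^3, km, km]
    rho = rho0 * math.exp(-(h_km - h0) / H)  # variation with height
    # Diurnal bulge peaking in the local afternoon (assumed amplitude).
    rho *= 1.0 + 0.3 * math.cos(2.0 * math.pi * (local_solar_hour - 14) / 24)
    rho *= 1.0 + 0.002 * (f107 - 150.0)      # solar activity (F10.7 proxy)
    rho *= 1.0 + 0.01 * ap                   # geomagnetic activity
    rho *= 1.0 + 0.05 * math.cos(4.0 * math.pi * doy / 365.25)  # semiannual
    return rho

print(f'{density(400.0, 14.0):.2e} kg/m^3 at 400 km, local afternoon')
```

The appeal for onboard use is that each physical driver enters as a cheap closed-form factor instead of the table lookups and integrals of the full Jacchia formulation.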
Videogrammetric Model Deformation Measurement Technique
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tian-Shu
2001-01-01
The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Chwalowski, Pawel; Wieseman, Carol D.; Eller, David; Ringertz, Ulf
2017-01-01
A status report is provided on the collaboration between the Royal Institute of Technology (KTH) in Sweden and the NASA Langley Research Center regarding the aeroelastic analyses of a full-span fighter configuration wind-tunnel model. This wind-tunnel model was tested in the Transonic Dynamics Tunnel (TDT) in the summer of 2016. Large amounts of data were acquired including steady/unsteady pressures, accelerations, strains, and measured dynamic deformations. The aeroelastic analyses presented include linear aeroelastic analyses, CFD steady analyses, and analyses using CFD-based reduced-order models (ROMs).
Landscape scale mapping of forest inventory data by nearest neighbor classification
Andrew Lister
2009-01-01
One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2011-01-01
The Global Modeling and Assimilation Office at NASA's Goddard Space Flight Center is developing a number of experimental prediction and analysis products suitable for research and applications. The prediction products include a large suite of subseasonal and seasonal hindcasts and forecasts (as a contribution to the US National MME), a suite of decadal (10-year) hindcasts (as a contribution to the IPCC decadal prediction project), and a series of large ensemble and high resolution simulations of selected extreme events, including the 2010 Russian and 2011 US heat waves. The analysis products include an experimental atlas of climate (in particular drought) and weather extremes. This talk will provide an update on those activities, and discuss recent efforts by WCRP to leverage off these and similar efforts at other institutions throughout the world to develop an experimental global drought early warning system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. H. Titus, S. Avasaralla, A. Brooks, R. Hatcher
2010-09-22
The National Spherical Torus Experiment (NSTX) project is planning upgrades to the toroidal field, plasma current and pulse length. This involves the replacement of the center-stack, including the inner legs of the TF, OH, and inner PF coils. A second neutral beam will also be added. The increased performance of the upgrade requires qualification of the remaining components including the vessel, passive plates, and divertor for higher disruption loads. The hardware needing qualification is more complex than is typically accessible by large scale electromagnetic (EM) simulations of the plasma disruptions. The usual method is to include simplified representations of components in the large EM models and attempt to extract forces to apply to more detailed models. This paper describes a more efficient approach of combining comprehensive modeling of the plasma and tokamak conducting structures, using the 2D OPERA code, with much more detailed treatment of individual components using ANSYS electromagnetic (EM) and mechanical analysis. This captures local eddy currents and resulting loads in complex details, and allows efficient non-linear and dynamic structural analyses.
Full-scale flammability test data for validation of aircraft fire mathematical models
NASA Technical Reports Server (NTRS)
Kuminecz, J. F.; Bricker, R. W.
1982-01-01
Twenty-five large scale aircraft flammability tests were conducted in a Boeing 737 fuselage at the NASA Johnson Space Center (JSC). The objective of this test program was to provide a data base on the propagation of large scale aircraft fires to support the validation of aircraft fire mathematical models. Variables in the test program included cabin volume, amount of fuel, fuel pan area, fire location, airflow rate, and cabin materials. A number of tests were conducted with jet A-1 fuel only, while others were conducted with various Boeing 747 type cabin materials. These included urethane foam seats, passenger service units, stowage bins, and wall and ceiling panels. Two tests were also included using special urethane foam and polyimide foam seats. Tests were conducted with each cabin material individually, with various combinations of these materials, and finally, with all materials in the cabin. The data include information obtained from approximately 160 locations inside the fuselage.
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Michael
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
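A minimal Gaussian-process regression sketch (squared-exponential covariance, Cholesky-based prediction) illustrating the model class the project studies; the kernel choice, hyperparameters, and synthetic data are assumed for illustration.

```python
import numpy as np

def sqexp(x1, x2, ell=0.5, sigma=1.0):
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 5.0, 40))
y = np.sin(2.0 * x) + 0.1 * rng.normal(size=x.size)

# Condition on the data: K + noise, Cholesky factor, then alpha = K^{-1} y.
K = sqexp(x, x) + 1e-2 * np.eye(x.size)       # noise variance assumed 0.01
Lc = np.linalg.cholesky(K)
alpha = np.linalg.solve(Lc.T, np.linalg.solve(Lc, y))

xs = np.linspace(0.0, 5.0, 200)
Ks = sqexp(xs, x)
mean = Ks @ alpha                              # posterior predictive mean
v = np.linalg.solve(Lc, Ks.T)
var = np.diag(sqexp(xs, xs)) - (v**2).sum(axis=0)  # predictive variance
print(mean[:3], var[:3])
```

The O(n^3) Cholesky step is exactly what makes the "large spatial datasets" problem mentioned above hard, and why gridded or structured observation patterns are exploited computationally.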
The global gridded crop model intercomparison: Data and modeling protocols for Phase 1 (v1.0)
Elliott, J.; Müller, C.; Deryng, D.; ...
2015-02-11
We present protocols and input data for Phase 1 of the Global Gridded Crop Model Intercomparison, a project of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The project consists of global simulations of yields, phenologies, and many land-surface fluxes using 12-15 modeling groups for many crops, climate forcing data sets, and scenarios over the historical period from 1948 to 2012. The primary outcomes of the project include (1) a detailed comparison of the major differences and similarities among global models commonly used for large-scale climate impact assessment, (2) an evaluation of model and ensemble hindcasting skill, (3) quantification of key uncertainties from climate input data, model choice, and other sources, and (4) a multi-model analysis of the agricultural impacts of large-scale climate extremes from the historical record.
NASA Astrophysics Data System (ADS)
Kirdyashkin, A. A.; Kirdyashkin, A. G.; Gurov, V. V.
2017-07-01
Based on laboratory and theoretical modeling results, we present the thermal and hydrodynamical structure of the plume conduit during plume ascent and eruption on the Earth's surface. The modeling results show that a mushroom-shaped plume head forms after melt eruption on the surface for 1.9 < Ka < 10. Such plumes can be responsible for the formation of large intrusive bodies, including batholiths. The results of laboratory modeling of plumes with mushroom-shaped heads are presented for Ka = 8.7 for a constant viscosity and uniform melt composition. Images of flow patterns are obtained, as well as flow velocity profiles in the melt of the conduit and the head of the model plume. Based on the laboratory modeling data, we present a scheme of a thermochemical plume with a mushroom-shaped head responsible for the formation of a large intrusive body (batholith). After plume eruption to the surface, melting occurs along the base of the massif above the plume head, resulting in a mushroom-shaped plume head. A possible mechanism for the formation of localized surface manifestations of batholiths is presented. The parameters of some plumes with mushroom-shaped heads (plumes of the Altay-Sayan and Barguzin-Vitim large igneous provinces, and the Khangai and Khentei plumes) are estimated using geological data, including age intervals and volumes of magma melts.
Seven challenges for metapopulation models of epidemics, including households models.
Ball, Frank; Britton, Tom; House, Thomas; Isham, Valerie; Mollison, Denis; Pellis, Lorenzo; Scalia Tomba, Gianpaolo
2015-03-01
This paper considers metapopulation models in the general sense, i.e. where the population is partitioned into sub-populations (groups, patches, ...), irrespective of the biological interpretation they have, e.g. spatially segregated large sub-populations, small households or hosts themselves modelled as populations of pathogens. This framework has traditionally provided an attractive approach to incorporating more realistic contact structure into epidemic models, since it often preserves analytic tractability (in stochastic as well as deterministic models) but also captures the most salient structural inhomogeneity in contact patterns in many applied contexts. Despite the progress that has been made in both the theory and application of such metapopulation models, we present here several major challenges that remain for future work, focusing on models that, in contrast to agent-based ones, are amenable to mathematical analysis. The challenges range from clarifying the usefulness of systems of weakly coupled large sub-populations in modelling the spread of specific diseases to developing a theory for endemic models with household structure. They also include developing inferential methods for data on the emerging phase of epidemics, extending metapopulation models to more complex forms of human social structure, developing metapopulation models to reflect spatial population structure, developing computationally efficient methods for calculating key epidemiological model quantities, and integrating within- and between-host dynamics in models. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
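A minimal two-patch deterministic SIR sketch of the weakly coupled sub-population structure discussed above; the coupling strength and epidemiological parameters are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, eps = 0.3, 0.1, 0.01   # transmission, recovery, patch coupling

def rhs(t, y):
    S1, I1, R1, S2, I2, R2 = y
    # Force of infection mixes mostly within a patch, weakly between patches.
    lam1 = beta * ((1 - eps) * I1 + eps * I2)
    lam2 = beta * ((1 - eps) * I2 + eps * I1)
    return [-lam1 * S1, lam1 * S1 - gamma * I1, gamma * I1,
            -lam2 * S2, lam2 * S2 - gamma * I2, gamma * I2]

y0 = [0.999, 0.001, 0.0, 1.0, 0.0, 0.0]   # epidemic seeded in patch 1 only
sol = solve_ivp(rhs, (0.0, 400.0), y0, max_step=1.0)
print('final sizes:', 1 - sol.y[0, -1], 1 - sol.y[3, -1])
```

Even tiny values of eps eventually ignite the second patch, which is the kind of qualitative behavior the weak-coupling challenge above asks to pin down quantitatively.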
NASA Astrophysics Data System (ADS)
Regueiro Sanfiz, Sabela; Gómez, Breo; Miguez Macho, Gonzalo
2017-04-01
Because of its continental position, summertime rainfall in Central Europe is largely dependent on local or regional dynamics, with precipitation water possibly also depending significantly on local sources. We investigate here land-atmosphere feedbacks over inland Europe, focusing in particular on evapotranspiration-soil moisture connections and precipitation recycling ratios. For this purpose, a set of simulations was performed with the Weather Research and Forecasting (WRF) model coupled to the LEAFHYDRO soil-vegetation-hydrology model. The LEAFHYDRO Land Surface Model includes a groundwater parameterization with a dynamic water table, fully coupling groundwater to the soil-vegetation and surface waters via two-way fluxes. A water tagging capability in the WRF model is used to quantify the evapotranspiration contribution to precipitation over the region. Several years are considered, including summertime 2002, during which severe flooding occurred. Preliminary results from our simulations highlight the link between large areas with a shallow water table and high air moisture values throughout the summer season, and the importance of the contribution of evapotranspiration to summertime precipitation. Consequently, the results show the advantages of using a fully coupled hydrology-atmosphere modeling system.
Improvements in mode-based waveform modeling and application to Eurasian velocity structure
NASA Astrophysics Data System (ADS)
Panning, M. P.; Marone, F.; Kim, A.; Capdeville, Y.; Cupillard, P.; Gung, Y.; Romanowicz, B.
2006-12-01
We introduce several recent improvements to mode-based 3D and asymptotic waveform modeling and examine how to integrate them with numerical approaches for an improved model of upper-mantle structure under eastern Eurasia. The first step in our approach is to create a large-scale starting model including shear anisotropy using Nonlinear Asymptotic Coupling Theory (NACT; Li and Romanowicz, 1995), which models the 2D sensitivity of the waveform to the great-circle path between source and receiver. We have recently improved this approach by implementing new crustal corrections which include a non-linear correction for the difference between the average structure of several large regions from the global model with further linear corrections to account for the local structure along the path between source and receiver (Marone and Romanowicz, 2006; Panning and Romanowicz, 2006). This model is further refined using a 3D implementation of Born scattering (Capdeville, 2005). We have made several recent improvements to this method, in particular introducing the ability to represent perturbations to discontinuities. While the approach treats all sensitivity as linear perturbations to the waveform, we have also experimented with a non-linear modification analogous to that used in the development of NACT. This allows us to treat large accumulated phase delays determined from a path-average approximation non-linearly, while still using the full 3D sensitivity of the Born approximation. Further refinement of shallow regions of the model is obtained using broadband forward finite-difference waveform modeling. We are also integrating a regional Spectral Element Method code into our tomographic modeling, allowing us to move beyond many assumptions inherent in the analytic mode-based approaches, while still taking advantage of their computational efficiency. Illustrations of the effects of these increasingly sophisticated steps will be presented.
Java Web Simulation (JWS); a web based database of kinetic models.
Snoep, J L; Olivier, B G
2002-01-01
Software to make a database of kinetic models accessible via the internet has been developed and a core database has been set up at http://jjj.biochem.sun.ac.za/. This repository of models, available to everyone with internet access, opens a whole new way in which we can make our models public. Via the database, a user can change enzyme parameters and run time simulations or steady state analyses. The interface is user friendly and no additional software is necessary. The database currently contains 10 models, but since the generation of the program code to include new models has largely been automated the addition of new models is straightforward and people are invited to submit their models to be included in the database.
Comparisons for ESTA-Task3: ASTEC, CESAM and CLÉS
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.
The ESTA activity under the CoRoT project aims at testing the tools for computing stellar models and oscillation frequencies that will be used in the analysis of asteroseismic data from CoRoT and other large-scale upcoming asteroseismic projects. Here I report results of comparisons between calculations using the Aarhus code (ASTEC) and two other codes, for models that include diffusion and settling. It is found that there are likely deficiencies, requiring further study, in the ASTEC computation of models including convective cores.
Large-deflection statics analysis of active cardiac catheters through co-rotational modelling.
Peng Qi; Chen Qiu; Mehndiratta, Aadarsh; I-Ming Chen; Haoyong Yu
2016-08-01
This paper presents a co-rotational concept for large-deflection formulation of cardiac catheters. Using this approach, the catheter is first discretized into a number of equal-length beam elements and nodes, and the rigid body motions of an individual beam element are separated from its deformations. It is therefore adequate for modelling arbitrarily large deflections of a catheter with linear elastic analysis at the local element level. A novel design of an active cardiac catheter, 9 Fr in diameter, based on contra-rotating double helix patterns and improved from previous prototypes, is proposed at the beginning of the paper. The modelling section is followed by MATLAB simulations of various deflections as different types of loads are exerted on the catheter. This proves the feasibility of the presented modelling approach. To the best knowledge of the authors, this is the first use of this methodology for large-deflection static analysis of a catheter, which will enable more accurate control of robot-assisted cardiac catheterization procedures. Future work would include further experimental validations.
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
Large Angle Transient Dynamics (LATDYN) user's manual
NASA Technical Reports Server (NTRS)
Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.
1991-01-01
A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.
The use of remotely sensed soil moisture data in large-scale models of the hydrological cycle
NASA Technical Reports Server (NTRS)
Salomonson, V. V.; Gurney, R. J.; Schmugge, T. J.
1985-01-01
Manabe (1982) has reviewed numerical simulations of the atmosphere which provided a framework within which an examination of the dynamics of the hydrological cycle could be conducted. It was found that the climate is sensitive to soil moisture variability in space and time. The challenge now arises to improve observations of soil moisture so as to provide updated boundary-condition inputs to large-scale models including the hydrological cycle. Attention is given to the significance of understanding soil moisture variations, soil moisture estimation using remote sensing, and energy and moisture balance modeling.
CSI related dynamics and control issues in space robotics
NASA Technical Reports Server (NTRS)
Schmitz, Eric; Ramey, Madison
1993-01-01
The research addressed includes: (1) CSI issues in space robotics; (2) control of elastic payloads, which includes 1-DOF example, and 3-DOF harmonic drive arm with elastic beam; and (3) control of large space arms with elastic links, which includes testbed description, modeling, and experimental implementation of colocated PD and end-point tip position controllers.
An antenna pointing mechanism for large reflector antennas
NASA Technical Reports Server (NTRS)
Heimerdinger, H.
1981-01-01
An antenna pointing mechanism for large reflector antennas on direct broadcasting communication satellites was built and tested. After listing the requirements and constraints for this equipment, the model is described and performance figures are given. Furthermore, results of the qualification level tests, including functional, vibrational, thermovacuum, and accelerated life tests, are reported. These tests were completed successfully.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-19
... Series Airplanes; Seats With Non-Traditional, Large, Non-Metallic Panels AGENCY: Federal Aviation... airplanes will have a novel or unusual design feature associated with seats that include non-traditional, large, non-metallic panels that would affect survivability during a post-crash fire event. The...
ERIC Educational Resources Information Center
Resta, Paul E.; Rost, Paul
The Albuquerque (New Mexico) Public Schools conducted a three-year study of integrated computer-based learning systems, including WICAT, Dolphin, PLATO, CCC, and DEGEM. Through cooperation with the Education Consolidation Improvement Act Chapter 1 program, four large integrated learning systems (ILS) were purchased and studied. They were installed…
Controllable Bidirectional dc Power Sources For Large Loads
NASA Technical Reports Server (NTRS)
Tripp, John S.; Daniels, Taumi S.
1995-01-01
System redesigned for greater efficiency, durability, and controllability. Modern electronically controlled dc power sources proposed to supply currents to six electromagnets used to position aerodynamic test model in wind tunnel. Six-phase bridge rectifier supplies load with large current at voltage of commanded magnitude and polarity. Current-feedback circuit includes current-limiting feature giving some protection against overload.
What Explains Patterns of Diversification and Richness among Animal Phyla?
Jezkova, Tereza; Wiens, John J
2017-03-01
Animal phyla vary dramatically in species richness (from one species to >1.2 million), but the causes of this variation remain largely unknown. Animals have also evolved striking variation in morphology and ecology, including sessile marine taxa lacking heads, eyes, limbs, and complex organs (e.g., sponges), parasitic worms (e.g., nematodes, platyhelminths), and taxa with eyes, skeletons, limbs, and complex organs that dominate terrestrial ecosystems (arthropods, chordates). Relating this remarkable variation in traits to the diversification and richness of animal phyla is a fundamental yet unresolved problem in biology. Here, we test the impacts of 18 traits (including morphology, ecology, reproduction, and development) on diversification and richness of extant animal phyla. Using phylogenetic multiple regression, the best-fitting model includes five traits that explain ∼74% of the variation in diversification rates (dioecy, parasitism, eyes/photoreceptors, a skeleton, nonmarine habitat). However, a model including just three (skeleton, parasitism, habitat) explains nearly as much variation (∼67%). Diversification rates then largely explain richness patterns. Our results also identify many striking traits that have surprisingly little impact on diversification (e.g., head, limbs, and complex circulatory and digestive systems). Overall, our results reveal the key factors that shape large-scale patterns of diversification and richness across >80% of all extant, described species.
Baryon magnetic moments: Symmetries and relations
NASA Astrophysics Data System (ADS)
Parreño, Assumpta; Savage, Martin J.; Tiburzi, Brian C.; Wilhelm, Jonas; Chang, Emmanuel; Detmold, William; Orginos, Kostas
2018-03-01
Magnetic moments of the octet baryons are computed using lattice QCD in background magnetic fields, including the first treatment of the magnetically coupled Σ0-Λ system. Although the computations are performed for relatively large values of the up and down quark masses, we gain new insight into the symmetries and relations between magnetic moments by working at a three-flavor mass-symmetric point. While the spin-flavor symmetry in the large Nc limit of QCD is shared by the naïve constituent quark model, we find instances where quark model predictions are considerably favored over those emerging in the large Nc limit. We suggest further calculations that would shed light on the curious patterns of baryon magnetic moments.
Damage Tolerance of Large Shell Structures
NASA Technical Reports Server (NTRS)
Minnetyan, L.; Chamis, C. C.
1999-01-01
Progressive damage and fracture of large shell structures is investigated. A computer model is used for the assessment of structural response, progressive fracture resistance, and defect/damage tolerance characteristics. Critical locations of a stiffened conical shell segment are identified. Defective and defect-free computer models are simulated to evaluate structural damage/defect tolerance. Safe pressurization levels are assessed for the retention of structural integrity in the presence of damage/defects. Damage initiation, growth, accumulation, and propagation to fracture are included in the simulations. Damage propagation and burst pressures for defective and defect-free shells are compared to evaluate damage tolerance. Design implications with regard to defect and damage tolerance of a large steel pressure vessel are examined.
An Approach to a Digital Library of Newspapers.
ERIC Educational Resources Information Center
Arambura Cabo, Maria Jose; Berlanga Llavori, Rafael
1997-01-01
Presents a new application for retrieving news from a large electronic bank of newspapers that is intended to manage past issues of newspapers. Highlights include a data model for newspapers, including metadata and metaclasses; document definition language; document retrieval language; and memory organization and indexes. (Author/LRW)
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
NASA Astrophysics Data System (ADS)
Stunder, B.
2009-12-01
Atmospheric transport and dispersion (ATD) models are used in real-time at Volcanic Ash Advisory Centers to predict the location of airborne volcanic ash at a future time because of the hazardous nature of volcanic ash. Transport and dispersion models usually do not include eruption column physics, but start with an idealized eruption column. Eruption source parameters (ESP) input to the models typically include column top, eruption start time and duration, volcano latitude and longitude, ash particle size distribution, and total mass emission. An example based on the Okmok, Alaska, eruption of July 12-14, 2008, was used to qualitatively estimate the effect of various model inputs on transport and dispersion simulations using the NOAA HYSPLIT model. Variations included changing the ash column top and bottom, eruption start time and duration, particle size specifications, simulations with and without gravitational settling, and the effect of different meteorological model data. Graphical ATD model output of ash concentration from the various runs was qualitatively compared. Some parameters such as eruption duration and ash column depth had a large effect, while simulations using only small particles or changing the particle shape factor had much less of an effect. Some other variations such as using only large particles had a small effect for the first day or so after the eruption, then a larger effect on subsequent days. Example probabilistic output will be shown for an ensemble of dispersion model runs with various model inputs. Model output such as this may be useful as a means to account for some of the uncertainties in the model input. To improve volcanic ash ATD models, a reference database for volcanic eruptions is needed, covering many volcanoes. The database should include three major components: (1) eruption source, (2) ash observations, and (3) analyses meteorology. In addition, information on aggregation or other ash particle transformation processes would be useful.
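The probabilistic ensemble idea described above can be sketched with a generic parameter-perturbation loop (this is not HYSPLIT itself): perturb uncertain source parameters of a toy 1-D Gaussian puff and compute the probability that ash concentration exceeds a threshold downwind. All distributions and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 500.0, 501)           # downwind distance [km]
threshold = 2.0e-3                         # concentration threshold (assumed)
exceed = np.zeros_like(x)
n_members = 200

for _ in range(n_members):
    q = rng.lognormal(np.log(1.0), 0.5)    # emitted mass (uncertain ESP)
    u = rng.normal(10.0, 2.0)              # wind speed [m/s] (uncertain)
    t = rng.uniform(6.0, 24.0)             # hours since eruption start
    center = u * t * 3.6                   # puff center [km]
    sigma = 20.0 + 3.0 * t                 # growing puff width [km]
    conc = q / (sigma * np.sqrt(2 * np.pi)) \
           * np.exp(-(x - center)**2 / (2 * sigma**2))
    exceed += conc > threshold             # count exceedances per location

print('max exceedance probability: %.2f' % (exceed.max() / n_members))
```

The output maps input uncertainty (column height, duration, winds) into an exceedance probability along the plume axis, the same form of product suggested for operational ash advisories.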
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
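A condensed sketch of the rank-based pipeline described above: estimate a latent correlation matrix from Kendall's tau via the nonparanormal "sine" transform, then sparsify it with the graphical lasso. The regularization level and synthetic data are assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 8))          # stand-in for daily return data
X[:, 1] += 0.8 * X[:, 0]                   # one strongly dependent pair

p = X.shape[1]
S = np.eye(p)
for i in range(p):
    for j in range(i + 1, p):
        tau, _ = kendalltau(X[:, i], X[:, j])
        S[i, j] = S[j, i] = np.sin(0.5 * np.pi * tau)  # rank -> correlation

cov, prec = graphical_lasso(S, alpha=0.1)
# Zeros in the precision matrix mark conditionally independent stocks,
# i.e., candidates for a well-diversified rebalancing portfolio.
print((np.abs(prec) < 1e-6).sum(), 'zero partial correlations')
```

Because the correlation estimate uses ranks rather than raw returns, it is robust to the heavy tails typical of equity data, which is the point of the elliptical-copula formulation.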
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies
NASA Astrophysics Data System (ADS)
Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine
2007-03-01
We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility is based on available Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).
Design of a V/STOL propulsion system for a large-scale fighter model
NASA Technical Reports Server (NTRS)
Willis, W. S.
1981-01-01
Modifications were made to the existing Large-Scale STOL fighter model to simulate a V/STOL configuration. Modifications include the substitution of two-dimensional lift/cruise exhaust nozzles in the nacelles, and the addition of a third J97 engine in the fuselage to supply a remote exhaust nozzle simulating a Remote Augmented Lift System. A preliminary design of the inlet and exhaust ducting for the third engine was developed, and a detailed design was completed of the hot exhaust ducting and remote nozzle.
User's guide for a large signal computer model of the helical traveling wave tube
NASA Technical Reports Server (NTRS)
Palmer, Raymond W.
1992-01-01
We describe the use of a successful large-signal, two-dimensional (axisymmetric), deformable-disk computer model of the helical traveling wave tube amplifier, in an extensively revised and operationally simplified version. We also discuss program input and output and the auxiliary files necessary for operation. Included is a sample problem with its input data and output results. Interested parties may now obtain from the author the FORTRAN source code, auxiliary files, and sample input data on a standard floppy diskette, the contents of which are described herein.
Neurobehavioral studies pose unique challenges for dose-response modeling, including small sample size and relatively large intra-subject variation, repeated measurements over time, multiple endpoints with both continuous and ordinal scales, and time dependence of risk characteri...
Main Factors of Teachers' Professional Well-Being
ERIC Educational Resources Information Center
Yildirim, Kamil
2014-01-01
The purpose of the study was to reveal the main factors of teachers' professional well-being. A theoretically constructed model was tested on large-scale data belonging to 72,190 teachers working at the lower secondary level. The theoretical model included teachers' individual, professional, and organizational characteristics. Professional well-being…
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
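The model class described above has a standard mathematical skeleton. As a hedged illustration (notation assumed, not copied from the paper), a one-dimensional reaction-diffusion-advection equation with coefficients varying on a fast spatial scale, and the constant-coefficient equation homogenization yields at the large scale, can be written as:

```latex
% Generic 1-D reaction-diffusion-advection model with fast-scale
% heterogeneity, and its homogenized (large-scale) counterpart.
\begin{align}
  \frac{\partial u}{\partial t}
    &= \frac{\partial}{\partial x}\!\left(
         D\!\left(\tfrac{x}{\varepsilon}\right)\frac{\partial u}{\partial x}
         - a\!\left(\tfrac{x}{\varepsilon}\right) u \right)
       + r\!\left(\tfrac{x}{\varepsilon}\right) u, \\
  \frac{\partial \bar{u}}{\partial t}
    &\approx \bar{D}\,\frac{\partial^2 \bar{u}}{\partial x^2}
       - \bar{a}\,\frac{\partial \bar{u}}{\partial x}
       + \bar{r}\,\bar{u}.
\end{align}
% In the simplest purely diffusive case, the effective coefficient is the
% harmonic mean over the fast scale: \bar{D} = \langle 1/D \rangle^{-1}.
```

Persistence conditions and invasion speeds then follow from the effective coefficients of the second equation, which is what makes the residence-index formulas simple.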
Large animal models and new therapies for glycogen storage disease.
Brooks, Elizabeth D; Koeberl, Dwight D
2015-05-01
Glycogen storage diseases (GSD), a unique category of inherited metabolic disorders, were first described early in the twentieth century. Since then, the biochemical and genetic bases of these disorders have been determined, and an increasing number of animal models for GSD have become available. At least seven large mammalian models have been developed for laboratory research on GSDs. These models have facilitated the development of new therapies, including gene therapy, which are undergoing clinical translation. For example, gene therapy prolonged survival and prevented hypoglycemia during fasting for greater than one year in dogs with GSD type Ia, and the need for periodic re-administration to maintain efficacy was demonstrated in that dog model. The further development of gene therapy could provide curative therapy for patients with GSD and other inherited metabolic disorders.
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2016-12-01
Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, PC surrogate construction suffers strongly from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate with very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
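The adaptive sparse-recovery idea can be sketched as follows. This is a schematic stand-in only: it uses ordinary l1-regularized regression (Lasso) for the compressive-sensing step and a naive neighbor-based basis-growth rule, not the Bayesian weighting of the actual WIBCS algorithm, and the toy model output is hypothetical.

    import numpy as np
    from numpy.polynomial.legendre import legval
    from sklearn.linear_model import LassoCV  # l1 solver standing in for Bayesian CS

    def pc_basis_matrix(X, multi_indices):
        # Evaluate a tensor-product Legendre PC basis at samples X in [-1, 1]^d.
        n, d = X.shape
        Phi = np.ones((n, len(multi_indices)))
        for j, alpha in enumerate(multi_indices):
            for k, deg in enumerate(alpha):
                if deg > 0:
                    c = np.zeros(deg + 1); c[deg] = 1.0
                    Phi[:, j] *= legval(X[:, k], c)
        return Phi

    def grow(retained, d, max_deg):
        # Add "neighbor" terms: each retained index incremented in one dimension.
        out = set(retained)
        for alpha in retained:
            for k in range(d):
                cand = list(alpha); cand[k] += 1
                if sum(cand) <= max_deg:
                    out.add(tuple(cand))
        return sorted(out)

    d = 5                                                # toy dimension (ACME case: 65)
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(200, d))            # few model evaluations
    y = np.exp(0.3 * X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]  # hypothetical model output

    basis = [(0,) * d] + [tuple(row) for row in np.eye(d, dtype=int)]
    for _ in range(4):                                   # iterative basis growth
        Phi = pc_basis_matrix(X, basis)
        fit = LassoCV(fit_intercept=False, cv=5).fit(Phi, y)
        retained = [b for b, c in zip(basis, fit.coef_) if abs(c) > 1e-8]
        basis = grow(retained, d, max_deg=4)
    # The surviving sparse coefficients would then feed variance-based
    # sensitivity indices (after normalizing the Legendre basis).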
Simulating the Effects of Semivolatile Compounds on Cloud Processing of Aerosol
NASA Astrophysics Data System (ADS)
Kokkola, H.; Kudzotsa, I.; Tonttila, J.; Raatikainen, T.; Romakkaniemi, S.
2017-12-01
Aerosol removal processes largely dictate how far aerosol is transported in the atmosphere, and thus the aerosol load over remote regions depends on how effectively aerosol is removed during its transport from the source regions. This means that in order to model the global distribution of aerosol, both vertical and horizontal, wet deposition processes have to be properly modelled. However, in large-scale models, the description of wet removal and the vertical redistribution of aerosol by cloud processes is often extremely simplified. Here we present a novel aerosol-cloud model, SALSA, in which the aerosol properties are tracked through different cloud processes. These processes include cloud droplet activation, precipitation formation, ice nucleation, melting, and evaporation. It is a sectional model that includes separate size sections for non-activated aerosol, cloud droplets, precipitation droplets, and ice crystals. The aerosol-cloud model was coupled to the large eddy model UCLALES, which simulates boundary-layer dynamics. In this study, the model has been applied to studying wet removal as well as interactions between aerosol, clouds, and the semi-volatile compounds ammonia and nitric acid. These semi-volatile compounds are special in the sense that they co-condense together with water during cloud activation and have been suggested to form droplets that can be considered cloud-droplet-like already in subsaturated conditions. In our model, we calculate the kinetic partitioning of ammonia and sulfate, thus explicitly taking into account the effect of ammonia and nitric acid on cloud formation. Our simulations indicate that, especially in polluted conditions, these compounds significantly affect the properties of cloud droplets and thus the life cycle of different aerosol compounds.
NASA Technical Reports Server (NTRS)
Sanders, Bobby W.; Weir, Lois J.
2008-01-01
A new hypersonic inlet for a turbine-based combined-cycle (TBCC) engine has been designed. This split-flow inlet is designed to provide flow to an over-under propulsion system with turbofan and dual-mode scramjet engines for flight from takeoff to Mach 7. It utilizes a variable-geometry ramp, high-speed cowl lip rotation, and a rotating low-speed cowl that serves as a splitter to divide the flow between the low-speed turbofan and the high-speed scramjet and to isolate the turbofan at high Mach numbers. The low-speed inlet was designed for Mach 4, the maximum mode transition Mach number. Integration of the Mach 4 inlet into the Mach 7 inlet imposed significant constraints on the low-speed inlet design, including a large amount of internal compression. The inlet design was used to develop mechanical designs for two inlet mode transition test models: small-scale (IMX) and large-scale (LIMX) research models. The large-scale model is designed to facilitate multi-phase testing including inlet mode transition and inlet performance assessment, controls development, and integrated systems testing with turbofan and scramjet engines.
The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris
2015-08-01
Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≈ 1.5 × 10^8 M_⊙. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
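The quoted particle mass follows from the box volume, particle count, and mean matter density; a quick plausibility check (the cosmological parameters below are assumed WMAP-7-like values, not stated in the excerpt):

    # Particle mass = (mean matter density) * (box volume) / (particle count)
    Omega_m, h = 0.265, 0.71              # assumed cosmology, not from the abstract
    rho_crit = 2.775e11 * h**2            # critical density in Msun / Mpc^3
    V = 1300.0**3                         # box volume in Mpc^3
    N = 0.55e12                           # "more than half a trillion" particles
    m_p = Omega_m * rho_crit * V / N
    print(f"m_p ~ {m_p:.2e} Msun")        # ~1.5e8 Msun, matching the quoted value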
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
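The contrast between a deterministic run and a probabilistic one can be illustrated with a generic Monte Carlo sketch; the power model and input distributions below are hypothetical stand-ins (the Glenn techniques are fast probabilistic methods, not brute-force sampling):

    import numpy as np

    def power_capability(flux, eff, degradation):
        # Hypothetical surrogate for a power-system model such as SPACE.
        return flux * eff * (1.0 - degradation) * 1.0e3  # watts

    rng = np.random.default_rng(42)
    n = 100_000
    flux = rng.normal(1.361, 0.005, n)   # solar constant, kW/m^2 (assumed spread)
    eff = rng.normal(0.14, 0.01, n)      # assumed cell-efficiency uncertainty
    deg = rng.uniform(0.0, 0.10, n)      # assumed degradation range
    p = power_capability(flux, eff, deg)
    print(f"mean {p.mean():.0f} W, 5th-95th percentile "
          f"[{np.percentile(p, 5):.0f}, {np.percentile(p, 95):.0f}] W")

A deterministic analysis would return only the single value at nominal inputs; the sampled version yields the spread that input uncertainty induces in power capability.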
Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands
NASA Astrophysics Data System (ADS)
Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir
2018-06-01
Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large-scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to simulate in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate the benefits of this coupling in a validation case, the model was applied to the Upper Niger River basin, encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel. The simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) > 0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km3/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channel hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into the soil column; evapotranspiration from soil and vegetation; evaporation of open water) is necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are therefore needed to better understand processes in large semi-arid wetlands. Finally, such coupled hydrologic and hydrodynamic modelling proves to be an important tool for integrated evaluation of hydrological processes in poorly gauged, large-scale basins. We hope that this model application provides new ways forward for large-scale model development in such systems, involving semi-arid regions and complex floodplains.
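The skill scores quoted are Nash-Sutcliffe Efficiencies, defined in the standard way:

    \[ \mathrm{NSE} = 1 - \frac{\sum_{t}\left(Q_{\mathrm{obs}}^{t} - Q_{\mathrm{sim}}^{t}\right)^{2}}{\sum_{t}\left(Q_{\mathrm{obs}}^{t} - \overline{Q}_{\mathrm{obs}}\right)^{2}} \]

NSE = 1 indicates a perfect fit, while NSE > 0 indicates more skill than simply predicting the observed mean; the same formula applied to flooded-area time series gives the NSE = 0.6 quoted for the Delta.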
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
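The convergence checks described, based on the autocorrelation of the resolved signal, can be sketched generically (this is not the authors' code; it illustrates the standard estimate):

    import numpy as np

    def autocorrelation(x):
        # Biased autocorrelation estimate for a statistically stationary signal.
        x = np.asarray(x, dtype=float) - np.mean(x)
        r = np.correlate(x, x, mode="full")[x.size - 1:]
        return r / r[0]

    def integral_time_scale(x, dt):
        # Integrate the autocorrelation up to its first zero crossing.
        rho = autocorrelation(x)
        zeros = np.where(rho <= 0.0)[0]
        cut = zeros[0] if zeros.size else rho.size
        return dt * rho[:cut].sum()

    # Time-averaged statistics are trustworthy only if the averaging window
    # spans many integral time scales; the Fourier transform of rho (the
    # spectrum) additionally shows whether the grid resolves an inertial range.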
The Saskatchewan River Basin - a large scale observatory for water security research (Invited)
NASA Astrophysics Data System (ADS)
Wheater, H. S.
2013-12-01
The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of Water Security faced world-wide. It poses globally-important science challenges due to the diversity in its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of flood and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and multiple jurisdictions. The SaskRB has therefore been developed as a large scale observatory, now a Regional Hydroclimate Project of the World Climate Research Programme's GEWEX project, and is available to contribute to the emerging North American Water Program. State-of-the-art hydro-ecological experimental sites have been developed for the key biomes, and a river and lake biogeochemical research facility, focussed on impacts of nutrients and exotic chemicals. Data are integrated at SaskRB scale to support the development of improved large scale climate and hydrological modelling products, the development of DSS systems for local, provincial and basin-scale management, and the development of related social science research, engaging stakeholders in the research and exploring their values and priorities for water security. The observatory provides multiple scales of observation and modelling required to develop: a) new climate, hydrological and ecological science and modelling tools to address environmental change in key environments, and their integrated effects and feedbacks at large catchment scale, b) new tools needed to support river basin management under uncertainty, including anthropogenic controls on land and water management and c) the place-based focus for the development of new transdisciplinary science.
Biogeochemical Trends and Their Ecosystem Impacts in Atlantic Canada
NASA Astrophysics Data System (ADS)
Fennel, Katja; Rutherford, Krysten; Kuhn, Angela; Zhang, Wenxia; Brennan, Katie; Zhang, Rui
2017-04-01
The representation of coastal oceans in global biogeochemical models is a challenge, yet the ecosystems in these regions are most vulnerable to the combined stressors of ocean warming, deoxygenation, acidification, eutrophication and fishing. Coastal regions also have large air-sea fluxes of CO2, making them an important but poorly quantified component of the global carbon cycle, and are the most relevant for human activities. Regional model applications that are nested within large-scale or global models are necessary for detailed studies of coastal regions. We present results from such a regional biogeochemical model for the northwestern North Atlantic shelves and adjacent deep ocean of Atlantic Canada. The model is an implementation of the Regional Ocean Modeling System (ROMS) and includes an NPZD-type nitrogen cycle model with explicit representation of dissolved oxygen and inorganic carbon. The region is at the confluence of the Gulf Stream and Labrador Current making it highly dynamic, a challenge for analysis and prediction, and prone to large changes. Historically a rich fishing ground, coastal ecosystems in Atlantic Canada have undergone dramatic changes including the collapse of several economically important fish stocks and the listing of many species as threatened or endangered. Furthermore it is unclear whether the region is a net source or sink of atmospheric CO2 with estimates of the size and direction of the net air-sea CO2 flux remaining controversial. We will discuss simulated patterns of primary production, inorganic carbon fluxes and oxygen trends in the context of circulation features and shelf residence times for the present ocean state and present future projections.
Keshvari, J; Kivento, M; Christ, A; Bit-Babik, G
2016-04-21
This paper presents the results of two computational large scale studies using highly realistic exposure scenarios, MRI based human head and hand models, and two mobile phone models. The objectives are (i) to study the relevance of age when people are exposed to RF by comparing adult and child heads and (ii) to analyze and discuss the conservativeness of the SAM phantom for all age groups. Representative use conditions were simulated using detailed CAD models of two mobile phones operating between 900 MHz and 1950 MHz including configurations with the hand holding the phone, which were not considered in most previous studies. The peak spatial-average specific absorption rate (psSAR) in the head and the pinna tissues is assessed using anatomically accurate head and hand models. The first of the two mentioned studies involved nine head-, four hand- and two phone-models, the second study included six head-, four hand- and three simplified phone-models (over 400 configurations in total). In addition, both studies also evaluated the exposure using the SAM phantom. Results show no systematic differences between psSAR induced in the adult and child heads. The exposure level and its variation for different age groups may be different for particular phones, but no correlation between psSAR and model age was found. The psSAR from all exposure conditions was compared to the corresponding configurations using SAM, which was found to be conservative in the large majority of cases.
Introducing improved structural properties and salt dependence into a coarse-grained model of DNA
NASA Astrophysics Data System (ADS)
Snodin, Benedict E. K.; Randisi, Ferdinando; Mosayebi, Majid; Šulc, Petr; Schreck, John S.; Romano, Flavio; Ouldridge, Thomas E.; Tsukanov, Roman; Nir, Eyal; Louis, Ard A.; Doye, Jonathan P. K.
2015-06-01
We introduce an extended version of oxDNA, a coarse-grained model of deoxyribonucleic acid (DNA) designed to capture the thermodynamic, structural, and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures, such as DNA origami, which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na+] = 0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.
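The salt dependence in oxDNA2 enters through a Debye-Hückel-type screened electrostatic interaction between backbone sites; schematically (parameterization details omitted here):

    \[ V_{\mathrm{DH}}(r) \propto \frac{e^{-r/\lambda_{D}}}{r}, \qquad \lambda_{D} = \sqrt{\frac{\varepsilon\, k_{B} T}{2 N_{A} e^{2} I}} \]

Raising the ionic strength I shortens the Debye length λ_D and weakens backbone-backbone repulsion, which is how the model covers the range of salt concentrations mentioned, including physiological conditions.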
Development of an Intelligent Videogrammetric Wind Tunnel Measurement System
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Burner, Alpheus W.
2004-01-01
A videogrammetric technique developed at NASA Langley Research Center has been used at five NASA facilities at the Langley and Ames Research Centers for deformation measurements on a number of sting mounted and semispan models. These include high-speed research and transport models tested over a wide range of aerodynamic conditions including subsonic, transonic, and supersonic regimes. The technique, based on digital photogrammetry, has been used to measure model attitude, deformation, and sting bending. In addition, the technique has been used to study model injection rate effects and to calibrate and validate methods for predicting static aeroelastic deformations of wind tunnel models. An effort is currently underway to develop an intelligent videogrammetric measurement system that will be both useful and usable in large production wind tunnels while providing accurate data in a robust and timely manner. Designed to encode a higher degree of knowledge through computer vision, the system features advanced pattern recognition techniques to improve automated location and identification of targets placed on the wind tunnel model to be used for aerodynamic measurements such as attitude and deformation. This paper will describe the development and strategy of the new intelligent system that was used in a recent test at a large transonic wind tunnel.
Statistical Models of Fracture Relevant to Nuclear-Grade Graphite: Review and Recommendations
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bratton, Robert L.
2011-01-01
The nuclear-grade (low-impurity) graphite needed for the fuel element and moderator material for next-generation (Gen IV) reactors displays large scatter in strength and a nonlinear stress-strain response from damage accumulation. This response can be characterized as quasi-brittle. In this expanded review, relevant statistical failure models for various brittle and quasi-brittle material systems are discussed with regard to strength distribution, size effect, multiaxial strength, and damage accumulation. This includes descriptions of the Weibull, Batdorf, and Burchell models as well as models that describe the strength response of composite materials, which involves distributed damage. Results from lattice simulations are included for a physics-based description of material breakdown. Consideration is given to the predicted transition between brittle and quasi-brittle damage behavior versus the density of damage (level of disorder) within the material system. The literature indicates that weakest-link-based failure modeling approaches appear to be reasonably robust in that they can be applied to materials that display distributed damage, provided that the level of disorder in the material is not too large. The Weibull distribution is argued to be the most appropriate statistical distribution to model the stochastic-strength response of graphite.
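For reference, the weakest-link (Weibull) form that underlies several of the models discussed is:

    \[ P_{f}(\sigma) = 1 - \exp\!\left[-\frac{V}{V_{0}}\left(\frac{\sigma}{\sigma_{0}}\right)^{m}\right] \]

where P_f is the failure probability at stress σ, m is the Weibull modulus (small m corresponds to large strength scatter), σ_0 is a scale parameter, and the V/V_0 factor expresses the size effect: larger volumes are more likely to sample a critical flaw.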
Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe
2017-01-01
Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in laboratory tank environments, where the wave conditions differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed on these laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, a suite of advanced remote-control and telemetry experimental systems was developed in-house to allow the implementation of large-scale model seakeeping measurements at sea. The experimental system includes a series of sensors, e.g., the Global Positioning System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicates that the proposed large-scale model testing scheme is capable and feasible. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign. PMID:29109379
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien
2017-03-01
We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.
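The population-dynamics ("cloning") idea behind the numerics can be illustrated on a trivial two-state chain; the sketch below estimates the scaled cumulant generating function of the activity and is only a schematic stand-in (the Fredrickson-Andersen model has kinetically constrained rates, and the paper's multicanonical feedback that improves efficiency near the transition is not implemented here):

    import numpy as np

    rng = np.random.default_rng(1)

    def cloning_scgf(s, n_clones=2000, n_steps=400, p_flip=0.3):
        # Estimate theta(s) = lim (1/T) log E[exp(-s K)], K = number of flips,
        # by evolving a population of clones and resampling by their weights.
        state = rng.integers(0, 2, n_clones)
        log_mean_w = 0.0
        for _ in range(n_steps):
            flips = rng.random(n_clones) < p_flip
            state = np.where(flips, 1 - state, state)
            w = np.exp(-s * flips)                 # bias on the activity increment
            log_mean_w += np.log(w.mean())
            idx = rng.choice(n_clones, size=n_clones, p=w / w.sum())
            state = state[idx]                     # clone/prune the population
        return log_mean_w / n_steps

    print(cloning_scgf(0.5))  # singularities of theta(s) signal a dynamical transition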
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Two topics are emphasized with respect to large space structures: uncertainty of frequency response, treated with the fuzzy set method, and on-orbit response prediction using laboratory test data to refine an analytical model. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on…
Animal Models of Atherosclerosis
Getz, Godfrey S.; Reardon, Catherine A.
2012-01-01
Atherosclerosis is a chronic inflammatory disorder that is the underlying cause of most cardiovascular disease. Both cells of the vessel wall and cells of the immune system participate in atherogenesis. This process is heavily influenced by plasma lipoproteins, genetics and the hemodynamics of the blood flow in the artery. A variety of small and large animal models have been used to study the atherogenic process. No model is ideal as each has its own advantages and limitations with respect to manipulation of the atherogenic process and modeling human atherosclerosis or lipoprotein profile. Useful large animal models include pigs, rabbits and non-human primates. Due in large part to the relative ease of genetic manipulation and the relatively short time frame for the development of atherosclerosis, murine models are currently the most extensively used. While not all aspects of murine atherosclerosis are identical to humans, studies using murine models have suggested potential biological processes and interactions that underlie this process. As it becomes clear that different factors may influence different stages of lesion development, the use of mouse models with the ability to turn on or delete proteins or cells in tissue specific and temporal manner will be very valuable. PMID:22383700
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westbrook, C K; Mizobuchi, Y; Poinsot, T J
2004-08-26
Progress in the field of computational combustion over the past 50 years is reviewed. Particular attention is given to those classes of models that are common to most system modeling efforts, including fluid dynamics, chemical kinetics, liquid sprays, and turbulent flame models. The developments in combustion modeling are placed into the time-dependent context of the accompanying exponential growth in computer capabilities and Moore's Law. Superimposed on this steady growth, the occasional sudden advances in modeling capabilities are identified and their impacts are discussed. Integration of submodels into system models for spark ignition, diesel, and homogeneous charge compression ignition engines, surface and catalytic combustion, pulse combustion, and detonations is described. Finally, the current state of combustion modeling is illustrated by descriptions of a very large jet lifted 3D turbulent hydrogen flame with direct numerical simulation and 3D large eddy simulations of practical gas burner combustion devices.
NASA Astrophysics Data System (ADS)
van der Molen, Johan
2015-04-01
Tidal power generation through submerged turbine-type devices is in an advanced stage of testing, and large-scale applications are being planned in areas with high tidal current speeds. The potential impact of such large-scale applications on the hydrography can be investigated using hydrodynamical models. In addition, aspects of the potential impact on the marine ecosystem can be studied using biogeochemical models. In this study, the coupled hydrodynamics-biogeochemistry model GETM-ERSEM is used in a shelf-wide application to investigate the potential impact of large-scale tidal power generation in the Pentland Firth. A scenario representing the currently licensed power extraction suggested i) an average reduction in M2 tidal current velocities of several cm/s within the Pentland Firth, ii) changes in the residual circulation of several mm/s in the vicinity of the Pentland Firth, iii) an increase in M2 tidal amplitude of up to 1 cm to the west of the Pentland Firth, and iv) a reduction of several mm in M2 tidal amplitude along the east coast of the UK. A second scenario representing 10 times the currently licensed power extraction resulted in changes that were approximately 10 times as large. Simulations including the biogeochemistry model for these scenarios are currently in preparation, and first results will be presented at the conference, focusing on impacts on primary and benthic production.
A model of forest floor carbon mass for United States forest types
James E. Smith; Linda S. Heath
2002-01-01
This study includes a large set of published values of forest floor mass and develops large-scale estimates of carbon mass according to region and forest type. Estimates of average forest floor carbon mass per hectare of forest, applied to a 1997 summary forest inventory, sum to 4.5 Gt of carbon stored in forests of the 48 contiguous United States.
Viger, Roland J.; Hay, Lauren E.; Jones, John W.; Buell, Gary R.
2010-01-01
This report documents an extension of the Precipitation Runoff Modeling System that accounts for the effect of a large number of water-holding depressions in the land surface on the hydrologic response of a basin. Several techniques for developing the inputs needed by this extension also are presented. These techniques include the delineation of the surface depressions, the generation of volume estimates for the surface depressions, and the derivation of model parameters required to describe these surface depressions. This extension is valuable for applications in basins where surface depressions are too small or numerous to conveniently model as discrete spatial units, but where the aggregated storage capacity of these units is large enough to have a substantial effect on streamflow. In addition, this report documents several new model concepts that were evaluated in conjunction with the depression storage functionality, including "hydrologically effective" imperviousness, rates of hydraulic conductivity, and daily streamflow routing. All of these techniques are demonstrated as part of an application in the Upper Flint River Basin, Georgia. Simulated solar radiation, potential evapotranspiration, and water balances match observations well, with small errors in the first two quantities in June and August because of temperature differences between the calibration and evaluation periods for those months. Daily runoff simulations show increasing accuracy with streamflow and a good fit overall. Including surface depression storage in the model has the effect of decreasing daily streamflow for all but the lowest flow values. The report discusses the choices and resultant effects involved in delineating and parameterizing these features. The remaining enhancements to the model and its application provide a more realistic description of basin geography and hydrology that serve to constrain the calibration process to more physically realistic parameter values.
Competing opinions and stubbornness: Connecting models to data.
Burghardt, Keith; Rand, William; Girvan, Michelle
2016-03-01
We introduce a general contagionlike model for competing opinions that includes dynamic resistance to alternative opinions. We show that this model can describe candidate vote distributions, spatial vote correlations, and a slow approach to opinion consensus with sensible parameter values. These empirical properties of large group dynamics, previously understood using distinct models, may be different aspects of human behavior that can be captured by a more unified model, such as the one introduced in this paper.
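A minimal agent-based sketch of contagion-like opinion spread with dynamic resistance (an illustrative toy under assumed rules, not the authors' specification):

    import numpy as np

    rng = np.random.default_rng(0)
    n, steps = 500, 20_000
    opinion = rng.integers(0, 2, n)   # two competing opinions
    resist = np.ones(n)               # dynamic resistance ("stubbornness")

    for _ in range(steps):
        i, j = rng.integers(0, n, 2)  # speaker i, listener j (well-mixed population)
        if opinion[i] != opinion[j]:
            if rng.random() < 1.0 / resist[j]:
                opinion[j] = opinion[i]   # persuaded: adopt and reset resistance
                resist[j] = 1.0
            else:
                resist[j] += 1.0          # resisted: entrench further
        else:
            resist[j] += 1.0              # agreement also reinforces the opinion

    print("final share holding opinion 1:", opinion.mean())

Accumulating resistance slows the approach to consensus, which is the qualitative effect the model needs in order to reproduce empirical vote distributions.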
Quantifying properties of hot and dense QCD matter through systematic model-to-data comparison
Bernhard, Jonah E.; Marcy, Peter W.; Coleman-Smith, Christopher E.; ...
2015-05-22
We systematically compare an event-by-event heavy-ion collision model to data from the CERN Large Hadron Collider. Using a general Bayesian method, we probe multiple model parameters including fundamental quark-gluon plasma properties such as the specific shear viscosity η/s, calibrate the model to optimally reproduce experimental data, and extract quantitative constraints for all parameters simultaneously. Furthermore, the method is universal and easily extensible to other data and collision models.
Beauregard, Frieda; de Blois, Sylvie
2014-01-01
Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth form, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions. Our study identifies the potential for non-climate aspects of the environment to pose a constraint to range expansion under climate change. PMID:24658097
Coupled SWAT-MODFLOW Model Development for Large Basins
NASA Astrophysics Data System (ADS)
Aliyari, F.; Bailey, R. T.; Tasdighi, A.
2017-12-01
Water management in semi-arid river basins requires allocating water resources between urban, industrial, energy, and agricultural sectors, with the latter competing for necessary irrigation water to sustain crop yield. Competition between these sectors will intensify due to changes in climate and population growth. In this study, the recently developed SWAT-MODFLOW coupled hydrologic model is modified for application in a large managed river basin that provides both surface water and groundwater resources for urban and agricultural areas. Specific modifications include the linkage of groundwater pumping and irrigation practices and code changes to allow for the large number of SWAT hydrologic response units (HRU) required for a large river basin. The model is applied to the South Platte River Basin (SPRB), a 56,980 km2 basin in northeastern Colorado dominated by large urban areas along the front range of the Rocky Mountains and agriculture regions to the east. Irregular seasonal and annual precipitation and 150 years of urban and agricultural water management history in the basin provide an ideal test case for the SWAT-MODFLOW model. SWAT handles land surface and soil zone processes whereas MODFLOW handles groundwater flow and all sources and sinks (pumping, injection, bedrock inflow, canal seepage, recharge areas, groundwater/surface water interaction), with recharge and stream stage provided by SWAT. The model is tested against groundwater levels, deep percolation estimates, and stream discharge. The model will be used to quantify spatial groundwater vulnerability in the basin under scenarios of climate change and population growth.
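The essence of the coupling, a land-surface store handing recharge to a groundwater store that returns baseflow, can be shown with a self-contained toy (a bucket model and a linear reservoir standing in for SWAT and MODFLOW; all parameter values are arbitrary assumptions):

    import numpy as np

    # Toy two-way exchange mimicking the SWAT-MODFLOW data flow, not the codes.
    rng = np.random.default_rng(3)
    rain = np.maximum(rng.normal(2.0, 3.0, 365), 0.0)  # mm/day forcing
    soil, gw = 50.0, 200.0                             # storages in mm
    fc, k_perc, k_base = 100.0, 0.05, 0.01             # assumed parameters
    baseflow = []
    for p in rain:
        soil += p
        perc = k_perc * max(soil - fc, 0.0)  # percolation above field capacity
        soil -= perc
        gw += perc                           # "recharge" passed to the aquifer
        q = k_base * gw                      # groundwater discharge to the stream
        gw -= q
        baseflow.append(q)
    print(f"mean baseflow {np.mean(baseflow):.2f} mm/day")

In the real coupling, recharge is mapped from SWAT HRUs onto MODFLOW grid cells each day, and the computed groundwater-surface water exchange (including pumping for irrigation) is passed back to the SWAT stream network.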
A hydrodynamic model for granular material flows including segregation effects
NASA Astrophysics Data System (ADS)
Gilberg, Dominik; Klar, Axel; Steiner, Konrad
2017-06-01
The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times, a macroscopic model is a natural choice. Therefore, we couple a mixture-theory-based segregation model to a hydrodynamic model of Navier-Stokes type describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil-mechanical approach to cover the regime of fast dilute flow as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model was formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model, the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ, e.g., in packing density. The modeling focuses on dry granular flows of two particle types differing only in size, but it can be easily extended to arbitrary granular mixtures of different particle sizes and densities. To solve the coupled system, a finite volume approach is used. To test the model, the rotational mixing of small and large particles in a tumbler is simulated.
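The Thornton-Gray-type segregation equation being coupled has the general form (written schematically here; the paper's adapted version differs in detail):

    \[ \frac{\partial \phi}{\partial t} + \nabla \cdot (\phi\, \mathbf{u}) - \frac{\partial}{\partial z}\left[ S_{r}\, \phi\, (1 - \phi) \right] = \nabla \cdot \left( D_{r} \nabla \phi \right) \]

where φ is the local small-particle concentration, u the bulk velocity from the hydrodynamic model, S_r a segregation rate acting along gravity (z), and D_r a diffusive-remixing coefficient. Coupling means S_r and D_r depend on the local flow state, while φ feeds back into the mixture's packing density.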
Volcanism-Climate Interactions
NASA Technical Reports Server (NTRS)
Walter, Louis S. (Editor); Desilva, Shanaka (Editor)
1991-01-01
The range of disciplines in the study of volcanism-climate interactions includes paleoclimate, volcanology, petrology, tectonics, cloud physics and chemistry, and climate and radiation modeling. Questions encountered in understanding the interactions include: the source and evolution of sulfur and sulfur-gaseous species in magmas; their entrainment in volcanic plumes and injection into the stratosphere; their dissipation rates; and their radiative effects. Other issues include modeling and measuring regional and global effects of such large, dense clouds. A broad-range plan of research designed to answer these questions was defined. The plan includes observations of volcanoes, rocks, trees, and ice cores, as well as satellite and aircraft observations of erupting volcanoes and the resulting plumes and clouds.
Radiation protection for manned space activities
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1983-01-01
The Earth's natural radiation environment poses a hazard to manned space activities, directly through biological effects and indirectly through effects on materials and electronics. The following standard practices are indicated, addressing: (1) environment models for all radiation species, including uncertainties and temporal variations; (2) upper-bound and nominal quality factors for biological radiation effects that include dose, dose rate, critical organ, and linear energy transfer variations; (3) particle transport and shielding methodology, including system and man modeling and uncertainty analysis; and (4) mission planning that includes active dosimetry, minimizes exposure during extravehicular activities, subjects every mission to a radiation review, and specifies operational procedures for forecasting, recognizing, and dealing with large solar flares.
A hierarchy for modeling high speed propulsion systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Deabreu, Alex
1991-01-01
General research efforts on reduced order propulsion models for control systems design are reviewed. Methods for modeling high speed propulsion systems are discussed, including internal flow propulsion systems that do not contain rotating machinery, such as inlets, ramjets, and scramjets. The discussion is separated into four areas: (1) computational fluid dynamics models for the entire nonlinear system or high order nonlinear models; (2) high order linearized models derived from fundamental physics; (3) low order linear models obtained from the other high order models; and (4) low order nonlinear models (order here refers to the number of dynamic states). Included in the discussion are any special considerations based on the relevant control system designs. The methods discussed are for the quasi-one-dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, including moving normal shocks, hammershocks, simple subsonic combustion via heat addition, temperature dependent gases, detonations, and thermal choking. The report also contains a comprehensive list of papers and theses generated by this grant.
A simulation study demonstrating the importance of large-scale trailing vortices in wake steering
Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...
2018-05-14
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
Large-Eddy Simulation of Internal Flow through Human Vocal Folds
NASA Astrophysics Data System (ADS)
Lasota, Martin; Šidlof, Petr
2018-06-01
The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are generated. For accuracy, it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using a second-order finite-volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. In these cases, only subgrid-scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.
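The turbulent-viscosity closures referred to all share the Boussinesq form; the classic Smagorinsky model, one of the standard OpenFOAM options (the excerpt does not list the specific models tested), illustrates it:

    \[ \tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij} = -2\,\nu_{t}\,\bar{S}_{ij}, \qquad \nu_{t} = (C_{s}\,\Delta)^{2}\,|\bar{S}|, \quad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}} \]

where S̄_ij is the resolved strain rate, Δ the filter width tied to the local cell size, and C_s a model constant; other subgrid-scale models differ mainly in how ν_t is computed.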
NASA Astrophysics Data System (ADS)
Civerolo, Kevin; Hogrefe, Christian; Zalewsky, Eric; Hao, Winston; Sistla, Gopal; Lynn, Barry; Rosenzweig, Cynthia; Kinney, Patrick L.
2010-10-01
This paper compares spatial and seasonal variations and temporal trends in modeled and measured concentrations of sulfur and nitrogen compounds in wet and dry deposition over an 18-year period (1988-2005) over a portion of the northeastern United States. Substantial emissions reduction programs occurred over this time period, including Title IV of the Clean Air Act Amendments of 1990, which primarily resulted in large decreases in sulfur dioxide (SO2) emissions by 1995, and nitrogen oxide (NOx) trading programs, which resulted in large decreases in warm season NOx emissions by 2004. Additionally, NOx emissions from mobile sources declined more gradually over this period. The results presented here illustrate the use of both operational and dynamic model evaluation and suggest that the modeling system largely captures the seasonal and long-term changes in sulfur compounds. The modeling system generally captures the long-term trends in nitrogen compounds, but does not reproduce the average seasonal variation or spatial patterns in nitrate.
Paleoclimate diagnostics: consistent large-scale temperature responses in warm and cold climates
NASA Astrophysics Data System (ADS)
Izumi, Kenji; Bartlein, Patrick; Harrison, Sandy
2015-04-01
The CMIP5 model simulations of the large-scale temperature responses to increased radiative forcing include enhanced land-ocean contrast, stronger response at higher latitudes than in the tropics, and differential responses in warm and cool season climates to uniform forcing. Here we show that these patterns are also characteristic of CMIP5 model simulations of past climates. The differences in the responses over land as opposed to over the ocean, between high and low latitudes, and between summer and winter are remarkably consistent (proportional and nearly linear) across simulations of both cold and warm climates. Similar patterns also appear in historical observations and paleoclimatic reconstructions, implying that such responses are characteristic features of the climate system and not simple model artifacts, thereby increasing our confidence in the ability of climate models to correctly simulate different climatic states. We also show that a small set of common mechanisms may control these large-scale responses of the climate system across multiple states.
Nonperturbative quantization of the electroweak model's electrodynamic sector
NASA Astrophysics Data System (ADS)
Fry, M. P.
2015-04-01
Consider the Euclidean functional integral representation of any physical process in the electroweak model. Integrating out the fermion degrees of freedom introduces 24 fermion determinants. These multiply the Gaussian functional measures of the Maxwell, Z, W, and Higgs fields to give an effective functional measure. Suppose the functional integral over the Maxwell field is attempted first. This paper is concerned with the large amplitude behavior of the Maxwell effective measure. It is assumed that the large amplitude variation of this measure is insensitive to the presence of the Z, W, and H fields; they are assumed to be a subdominant perturbation of the large amplitude Maxwell sector. Accordingly, we need only examine the large amplitude variation of a single QED fermion determinant. To facilitate this the Schwinger proper time representation of this determinant is decomposed into a sum of three terms. The advantage of this is that the separate terms can be nonperturbatively estimated for a measurable class of large amplitude random fields in four dimensions. It is found that the QED fermion determinant grows faster than exp[c e^2 ∫ d^4x F_{μν}^2], c > 0, in the absence of zero mode supporting random background potentials. This raises doubt on whether the QED fermion determinant is integrable with any Gaussian measure whose support does not include zero mode supporting potentials. Including zero mode supporting background potentials can result in a decaying exponential growth of the fermion determinant. This is prima facie evidence that Maxwellian zero modes are necessary for the nonperturbative quantization of QED and, by implication, for the nonperturbative quantization of the electroweak model.
A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks
Wang, Ping; Zhang, Lin; Li, Victor O. K.
2013-01-01
Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated.
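To illustrate the core idea of summing geometrical rays with per-ray phase shifts, here is a minimal coherent eigenray sum; the three rays, their amplitudes, and the boundary phase shifts are made-up placeholders, not output of the SAM ray tracer.

```python
import numpy as np

def transmission_loss(amplitudes, path_lengths, phase_shifts, freq, c=1500.0):
    """Coherent sum of ray arrivals -> transmission loss in dB.

    Each ray contributes A_i * exp(j*(k*L_i + phi_i)), where phi_i collects
    phase shifts acquired at boundary reflections and caustics.
    """
    k = 2 * np.pi * freq / c                       # acoustic wavenumber
    field = np.sum(amplitudes * np.exp(1j * (k * path_lengths + phase_shifts)))
    return -20 * np.log10(np.abs(field) + 1e-30)   # TL relative to a unit source

# Three hypothetical eigenrays: direct, surface-reflected, bottom-reflected
A   = np.array([1/1000.0, 0.9/1010.0, 0.7/1025.0])   # spherical spreading ~ 1/L
L   = np.array([1000.0, 1010.0, 1025.0])             # path lengths (m)
phi = np.array([0.0, np.pi, 0.6])                    # e.g., pi flip at the sea surface
print(f"TL = {transmission_loss(A, L, phi, freq=1000.0):.1f} dB")
```

Ignoring the phases (an incoherent sum) loses the interference pattern, which is why accounting for each ray's phase shift matters for accuracy.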
Sondak, D.; Shadid, J. N.; Oberai, A. A.; ...
2015-04-29
New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Thus a numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.
Implementing Capsule Representation in a Total Hip Dislocation Finite Element Model
Stewart, Kristofer J; Pedersen, Douglas R; Callaghan, John J; Brown, Thomas D
2004-01-01
Previously validated hardware-only finite element models of THA dislocation have clarified how various component design and surgical placement variables contribute to resisting the propensity for implant dislocation. This body of work has now been enhanced with the incorporation of experimentally based capsule representation, and with anatomic bone structures. The current form of this finite element model provides for large deformation multi-body contact (including capsule wrap-around on bone and/or implant), large displacement interfacial sliding, and large deformation (hyperelastic) capsule representation. In addition, the modular nature of this model now allows for rapid incorporation of current or future total hip implant designs, accepts complex multi-axial physiologic motion inputs, and outputs case-specific component/bone/soft-tissue impingement events. This soft-tissue-augmented finite element model is being used to investigate the performance of various implant designs for a range of clinically-representative soft tissue integrities and surgical techniques. Preliminary results show that capsule enhancement makes a substantial difference in stability, compared to an otherwise identical hardware-only model. This model is intended to help put implant design and surgical technique decisions on a firmer scientific basis, in terms of reducing the likelihood of dislocation.
Future changes in large-scale transport and stratosphere-troposphere exchange
NASA Astrophysics Data System (ADS)
Abalos, M.; Randel, W. J.; Kinnison, D. E.; Garcia, R. R.
2017-12-01
Future changes in large-scale transport are investigated in long-term (1955-2099) simulations of the Community Earth System Model - Whole Atmosphere Community Climate Model (CESM-WACCM) under an RCP6.0 climate change scenario. We examine artificial passive tracers in order to isolate transport changes from future changes in emissions and chemical processes. The model suggests enhanced stratosphere-troposphere exchange (STE) in both directions, with concentrations of the tropospheric tracer decreasing and those of the stratospheric tracer increasing in the troposphere. Changes in the different transport processes are evaluated using the Transformed Eulerian Mean continuity equation, including parameterized convective transport. Dynamical changes associated with the rise of the tropopause height are shown to play a crucial role in future transport trends.
Tank Investigation of the EDO Model 142 Hydro-Ski Research Airplane
NASA Technical Reports Server (NTRS)
Ramsen, John A.; Wadlin, Kenneth L.; Gray, George R.
1951-01-01
A tank investigation has been conducted of a 1/10-size powered-dynamic model of the EDO Model 142 hydro-ski research airplane. The results of tests of two configurations are presented: one included a large ski and a ski well; the other, a small ski without a well. Water take-offs would be possible with the available thrust for either configuration; however, the configuration with the large ski emerged sooner and had less resistance from ski emergence until take-off. Longitudinal stability and landing behavior in smooth water were satisfactory for both configurations. Some alteration to the design of the tail would be desirable in order to reduce the spray loads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghorbanpour, Saeede; Zecevic, Milovan; Kumar, Anil
An elasto-plastic polycrystal plasticity model is developed and applied to an Inconel 718 (IN718) superalloy that was produced by additive manufacturing (AM). The model takes into account the contributions of solid solution, precipitate shearing, and grain size and shape effects to the initial slip resistance. Non-Schmid effects and backstress are also included in the crystal plasticity model for activating slip. The hardening law for the critical resolved shear stress is based on the evolution of dislocation density. Using the same set of material and physical parameters, the model is compared against a suite of compression, tension, and large-strain cyclic mechanical test data applied in different AM build directions. We demonstrate that the model is capable of predicting the particularities of both monotonic and cyclic deformation to large strains of the alloy, including the decreasing hardening rate during monotonic loading, the non-linear unloading upon load reversal, the Bauschinger effect, and the change in hardening rate during loading in the reverse direction, as well as plastic anisotropy and the concomitant microstructure evolution. It is anticipated that the general model developed here can be applied to other multiphase alloys containing precipitates.
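The abstract states only that the hardening law follows the evolution of dislocation density. A generic Kocks-Mecking-style sketch of such a law is shown below; every constant is hypothetical and is not taken from the paper.

```python
import numpy as np

# Kocks-Mecking-type evolution: d(rho)/d(gamma) = k1*sqrt(rho) - k2*rho
# Taylor relation for slip resistance: tau_c = tau0 + alpha*mu*b*sqrt(rho)
tau0, alpha, mu, b = 300e6, 0.3, 80e9, 2.5e-10   # Pa, -, Pa, m (hypothetical)
k1, k2 = 5e8, 10.0                               # 1/m, - (hypothetical)

rho, d_gamma = 1e12, 1e-4                        # initial density (1/m^2), slip increment
history = []
for _ in range(5000):                            # integrate to 0.5 accumulated slip
    rho += (k1 * np.sqrt(rho) - k2 * rho) * d_gamma
    history.append(tau0 + alpha * mu * b * np.sqrt(rho))
print(f"tau_c after 0.5 slip: {history[-1]/1e6:.0f} MPa")
```

The storage term k1*sqrt(rho) gives the decreasing hardening rate under monotonic loading that the paper reports, while the recovery term k2*rho produces saturation; cyclic effects such as the Bauschinger effect require the backstress terms the abstract mentions, which this sketch omits.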
Tracheo-bronchial soft tissue and cartilage resonances in the subglottal acoustic input impedance.
Lulich, Steven M; Arsikere, Harish
2015-06-01
This paper offers a re-evaluation of the mechanical properties of the tracheo-bronchial soft tissues and cartilage and uses a model to examine their effects on the subglottal acoustic input impedance. It is shown that the values for soft tissue elastance and cartilage viscosity typically used in models of subglottal acoustics during phonation are not accurate, and corrected values are proposed. The calculated subglottal acoustic input impedance using these corrected values reveals clusters of weak resonances due to soft tissues (SgT) and cartilage (SgC) lining the walls of the trachea and large bronchi, which can be observed empirically in subglottal acoustic spectra. The model predicts that individuals may exhibit SgT and SgC resonances to variable degrees, depending on a number of factors including tissue mechanical properties and the dimensions of the trachea and large bronchi. Potential implications for voice production and large pulmonary airway tissue diseases are also discussed.
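Airway-acoustics models of this kind are commonly built as chains of lossy transmission-line (uniform tube) segments. The sketch below computes the input impedance of such a chain; the segment geometry, loss factors, and termination are hypothetical stand-ins, not the paper's corrected tissue and cartilage values.

```python
import numpy as np

def input_impedance(freqs, segments, Z_term, rho=1.14, c=350.0):
    """Acoustic input impedance of concatenated uniform tube segments.

    segments: list of (area_m2, length_m, loss_Np_per_m). Each segment is a
    lossy line: Zin = Z0*(ZL + Z0*tanh(g*l)) / (Z0 + ZL*tanh(g*l)).
    """
    Z = np.full_like(freqs, Z_term, dtype=complex)
    for area, length, loss in reversed(segments):   # load first, source last
        Z0 = rho * c / area                          # characteristic impedance
        g = loss + 1j * 2 * np.pi * freqs / c        # propagation constant
        t = np.tanh(g * length)
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
    return Z

freqs = np.linspace(50, 3000, 500)
# Hypothetical trachea + main-bronchus stand-ins (area m^2, length m, loss Np/m)
segs = [(2.5e-4, 0.12, 1.0), (1.3e-4, 0.05, 2.0)]
Z_in = input_impedance(freqs, segs, Z_term=1e5)      # crude lung-termination stand-in
print(f"|Z_in| at 500 Hz = {np.abs(Z_in[np.argmin(np.abs(freqs - 500))]):.3g} Pa*s/m^3")
```

The SgT and SgC wall resonances discussed in the paper would enter through frequency-dependent wall impedances added in shunt at each segment, which this rigid-walled sketch leaves out.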
NASA Technical Reports Server (NTRS)
Knight, Montgomery; Wenzinger, Carl J
1930-01-01
This investigation covers force tests through a large range of angle of attack on a series of monoplane and biplane wing models. The tests were conducted in the atmospheric wind tunnel of the National Advisory Committee for Aeronautics. The models were arranged in such a manner as to make possible a determination of the effects of variations in tip shape, aspect ratio, flap setting, stagger, gap, decalage, sweepback, and airfoil profile. The arrangements represented most of the types of wing systems in use on modern airplanes. The effect of each variable is illustrated by means of groups of curves. In addition, there are included approximate autorotational characteristics in the form of calculated ranges of "rotary instability." A correction for blocking in this tunnel, which applies to monoplanes at large angles of attack, has been developed and is given in an appendix.
An experimental and theoretical investigation on torrefaction of a large wet wood particle.
Basu, Prabir; Sadhukhan, Anup Kumar; Gupta, Parthapratim; Rao, Shailendra; Dhungana, Alok; Acharya, Bishnu
2014-05-01
A competitive kinetic scheme representing primary and secondary reactions is proposed for torrefaction of large wet wood particles. Drying and the diffusive, convective, and radiative modes of heat transfer are considered, including particle shrinkage during torrefaction. The model prediction compares well with the experimental results for both mass-fraction residue and temperature profiles for biomass particles. The effects of temperature, residence time and particle size on torrefaction of cylindrical wood particles are investigated through model simulations. For large biomass particles, heat transfer is identified as one of the controlling factors for torrefaction. The optimum torrefaction temperature, residence time and particle size are identified. The model may thus be integrated with CFD analysis to estimate the performance of an existing torrefier for a given feedstock. The performance analysis may also provide useful insight for the design and development of an efficient torrefier. Copyright © 2014 Elsevier Ltd. All rights reserved.
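As a minimal sketch of a competitive kinetic scheme of the kind proposed (here reduced to two parallel primary pathways, without the drying, heat-transfer, and shrinkage physics of the full model), with hypothetical Arrhenius parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314

def k(A, E, T):
    """Arrhenius rate constant k = A * exp(-E / (R*T))."""
    return A * np.exp(-E / (R * T))

def rhs(t, y, T):
    wood, volatiles, char = y
    k_v = k(2.0e7, 1.10e5, T)         # wood -> volatiles (hypothetical A, E)
    k_c = k(1.0e5, 0.85e5, T)         # wood -> char      (hypothetical A, E)
    dwood = -(k_v + k_c) * wood       # the two pathways compete for the same wood
    return [dwood, k_v * wood, k_c * wood]

T = 548.0                             # 275 C torrefaction temperature
sol = solve_ivp(rhs, (0.0, 3600.0), [1.0, 0.0, 0.0], args=(T,))
wood, vol, char = sol.y[:, -1]
print(f"solid residue after 1 h: {wood + char:.2f} (mass fraction)")
```

Because the two pathways have different activation energies, the volatile/char split shifts with temperature, which is the basic mechanism behind the optimum torrefaction temperature and residence time the paper identifies.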
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
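A minimal sketch of the standard rank-based pipeline for elliptical-copula graphical models: estimate the latent correlation from Kendall's tau via the sine transform, then obtain a sparse precision matrix with a graphical lasso. The data, the tuning parameter alpha, and the use of scikit-learn's solver are illustrative assumptions; the paper's regularized estimators and stability-based selection differ in detail.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

def latent_correlation(X):
    """Rank-based latent correlation estimate: R_jk = sin(pi/2 * tau_jk)."""
    p = X.shape[1]
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau, _ = kendalltau(X[:, j], X[:, k])
            R[j, k] = R[k, j] = np.sin(0.5 * np.pi * tau)
    return R

rng = np.random.default_rng(0)
X = rng.standard_normal((250, 8))           # placeholder for stock returns
R = latent_correlation(X)
cov, prec = graphical_lasso(R, alpha=0.1)   # sparse precision = conditional-independence graph
# Zero entries of the precision matrix flag candidate near-independent stock pairs
independent_pairs = np.argwhere(np.triu(np.abs(prec) < 1e-8, k=1))
```

In practice the sine-transformed matrix is not guaranteed positive semi-definite and may need projecting onto the PSD cone before the lasso step.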
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosovic, Branko
This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosovic, Branko
This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
The Iowa Model for Pediatric Low Vision Services.
ERIC Educational Resources Information Center
Wilkinson, Mark E.; Stewart, Ian; Trantham, Carole S.
2000-01-01
This article describes the evolution of Iowa's model of low vision care for students with visual impairments. It reviews the benefits of a transdisciplinary team approach to providing low vision services for children with visual impairments, including a decrease in the number of students requiring large-print materials and related costs.
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures.
Demonstrating the Relationship between School Nurse Workload and Student Outcomes
ERIC Educational Resources Information Center
Daughtry, Donna; Engelke, Martha Keehner
2018-01-01
This article describes how one very large, diverse school district developed a Student Acuity Tool for School Nurse Assignment and used a logic model to successfully advocate for additional school nurse positions. The logic model included three student outcomes that were evaluated: provide medications and procedures safely and accurately, increase…
Recursive renormalization group theory based subgrid modeling
NASA Technical Reports Server (NTRS)
Zhou, YE
1991-01-01
Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed will include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.
Baryon magnetic moments: Symmetries and relations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parreno, Assumpta; Savage, Martin; Tiburzi, Brian
Magnetic moments of the octet baryons are computed using lattice QCD in background magnetic fields, including the first treatment of the magnetically coupled Σ0-Λ system. Although the computations are performed for relatively large values of the up and down quark masses, we gain new insight into the symmetries and relations between magnetic moments by working at a three-flavor mass-symmetric point. While the spin-flavor symmetry in the large Nc limit of QCD is shared by the naïve constituent quark model, we find instances where quark model predictions are considerably favored over those emerging in the large Nc limit. We suggest further calculations that would shed light on the curious patterns of baryon magnetic moments.
Videometric Applications in Wind Tunnels
NASA Technical Reports Server (NTRS)
Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu
1997-01-01
Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.
Valuing water resources in Switzerland using a hedonic price model
NASA Astrophysics Data System (ADS)
van Dijk, Diana; Siber, Rosi; Brouwer, Roy; Logar, Ivana; Sanadgol, Dorsa
2016-05-01
In this paper, linear and spatial hedonic price models are applied to the housing market in Switzerland, covering all 26 cantons in the country over the period 2005-2010. Besides structural house, neighborhood and socioeconomic characteristics, we include a wide variety of new environmental characteristics related to water to examine their role in explaining variation in sales prices. These include water abundance, different types of water bodies, the recreational function of water, and water disamenity. Significant spatial autocorrelation is found in the estimated models, as well as nonlinear effects for distances to the nearest lake and large river. Significant effects are furthermore found for water abundance and the distance to large rivers, but not to small rivers. Although in both linear and spatial models water-related variables explain less than 1% of the price variation, the distance to the nearest bathing site has a larger marginal contribution than many neighborhood-related distance variables. The housing market is shown to differentiate between water-related resources in terms of their relative contribution to house prices, which could help the housing development industry undertake more geographically targeted planning.
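The basic (non-spatial) specification can be sketched as a log-linear regression of price on structural and water-related covariates; the variables and coefficients below are synthetic. The paper's spatial models additionally include a spatial lag or error structure, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
rooms     = rng.integers(2, 8, n).astype(float)
dist_lake = rng.uniform(0.1, 30.0, n)          # km to nearest lake
dist_bath = rng.uniform(0.1, 20.0, n)          # km to nearest bathing site
log_price = (12.0 + 0.10 * rooms
             - 0.02 * np.log(dist_lake)
             - 0.03 * np.log(dist_bath)
             + rng.normal(0, 0.2, n))          # synthetic "truth" with noise

# Hedonic specification: log(price) ~ structure + log(distances to water amenities)
X = np.column_stack([np.ones(n), rooms, np.log(dist_lake), np.log(dist_bath)])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(dict(zip(["const", "rooms", "ln_dist_lake", "ln_dist_bath"], beta.round(3))))
```

Log distances are one common way to accommodate the nonlinear distance effects the paper finds; a semi-log coefficient on ln(distance) reads directly as the percentage price change per percentage change in distance.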
Analysis for Non-Traditional Security Challenges: Methods and Tools
2006-11-20
[Extraction fragments from a Military Operations Research Society (MORS) workshop report, "Analysis for Non-Traditional Security Challenges: Methods and Tools."] The recoverable fragments concern PMESII modeling challenges: the domain is large, nebulous, and complex; data are often not available to support the models that would aid decision makers; and addressing non-traditional challenges includes enlisting the aid of the inter-agency and alliance/coalition communities.
ERIC Educational Resources Information Center
Li, Yanmei; Li, Shuhong; Wang, Lin
2010-01-01
Many standardized educational tests include groups of items based on a common stimulus, known as "testlets". Standard unidimensional item response theory (IRT) models are commonly used to model examinees' responses to testlet items. However, it is known that local dependence among testlet items can lead to biased item parameter estimates…
Self-consistent semi-analytic models of the first stars
NASA Astrophysics Data System (ADS)
Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.
2018-04-01
We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.
1986-01-01
An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
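As a sketch of the LQR-plus-observer machinery involved (for a single generic flexible mode, with hypothetical frequency, damping, and mode-slope values rather than the antenna's), computed with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One flexible mode: x = [eta, eta_dot], eta_ddot = -w^2*eta - 2*z*w*eta_dot + b*u
w, z, b = 1.2, 0.005, 0.8                      # rad/s, damping ratio, mode slope (hypothetical)
A = np.array([[0.0, 1.0], [-w**2, -2*z*w]])
B = np.array([[0.0], [b]])
C = np.array([[1.0, 0.0]])                     # attitude sensor sees modal amplitude

# LQR gain: u = -K x, with K = R^-1 B^T P from the control Riccati equation
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Observer gain by duality: solve the ARE for (A^T, C^T)
Ro = np.array([[0.01]])
Po = solve_continuous_are(A.T, C.T, np.diag([1.0, 1.0]), Ro)
L = Po @ C.T / Ro[0, 0]
print("K =", K.round(3), " L =", L.ravel().round(3))
```

Model error of the kind studied in the paper can then be emulated by perturbing w (modal frequency), z (damping), or b (mode slope) in the simulated plant while holding K and L fixed, and observing the degradation in time to null and control requirements.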
A global multiscale mathematical model for the human circulation with emphasis on the venous system.
Müller, Lucas O; Toro, Eleuterio F
2014-07-01
We present a global, closed-loop, multiscale mathematical model for the human circulation including the arterial system, the venous system, the heart, the pulmonary circulation and the microcirculation. A distinctive feature of our model is the detailed description of the venous system, particularly for intracranial and extracranial veins. Medium to large vessels are described by one-dimensional hyperbolic systems while the rest of the components are described by zero-dimensional models represented by differential-algebraic equations. Robust, high-order accurate numerical methodology is implemented for solving the hyperbolic equations, which are adopted from a recent reformulation that includes variable material properties. Because of the large intersubject variability of the venous system, we perform a patient-specific characterization of major veins of the head and neck using MRI data. Computational results are carefully validated using published data for the arterial system and most regions of the venous system. For head and neck veins, validation is carried out through a detailed comparison of simulation results against patient-specific phase-contrast MRI flow quantification data. A merit of our model is its global, closed-loop character; the imposition of highly artificial boundary conditions is avoided. Applications in mind include a vast range of medical conditions. Of particular interest is the study of some neurodegenerative diseases, whose venous haemodynamic connection has recently been identified by medical researchers. Copyright © 2014 John Wiley & Sons, Ltd.
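As a sketch, the one-dimensional hyperbolic description commonly used for medium to large vessels takes the form

$$\frac{\partial A}{\partial t}+\frac{\partial q}{\partial x}=0,\qquad \frac{\partial q}{\partial t}+\frac{\partial}{\partial x}\!\left(\alpha\,\frac{q^{2}}{A}\right)+\frac{A}{\rho}\,\frac{\partial p}{\partial x}=-K_{R}\,\frac{q}{A},$$

with cross-sectional area A(x, t), flow rate q(x, t), blood density ρ, momentum-correction coefficient α, friction parameter K_R, and a tube law p = p(A; wall properties) closing the system. This is the generic form only; the reformulation adopted in the paper additionally carries variable material properties along the vessel, which modifies the pressure term.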
Determination of Realistic Fire Scenarios in Spacecraft
NASA Technical Reports Server (NTRS)
Dietrich, Daniel L.; Ruff, Gary A.; Urban, David
2013-01-01
This paper expands on previous work that examined how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), carbon dioxide build-up or accumulation of other combustion products (e.g. carbon monoxide). The previous work introduced a simplified model that treated the fire primarily as a source of heat and combustion products and sink for oxygen prescribed (input to the model) based on terrestrial standards. The model further treated the spacecraft as a closed system with no capability to vent to the vacuum of space. The model in the present work extends this analysis to more realistically treat the pressure relief system(s) of the spacecraft, include more combustion products (e.g. HF) in the analysis and attempt to predict the fire spread and limiting fire size (based on knowledge of terrestrial fires and the known characteristics of microgravity fires) rather than prescribe them in the analysis. Including the characteristics of vehicle pressure relief systems has a dramatic mitigating effect by eliminating vehicle overpressure for all but very large fires and reducing average gas-phase temperatures.
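A minimal sketch of the closed-volume energy balance with a relief vent, treating the fire as a prescribed heat source as in the simpler model this work extends; the cabin volume, heat release rate, relief setting, and vent law are illustrative assumptions.

```python
import numpy as np

# Well-mixed cabin, ideal gas: U = p*V/(gamma-1); vent opens above p_relief.
V, gamma, Rg = 100.0, 1.4, 287.0               # m^3, -, J/(kg K)
p, T = 101325.0, 295.0
m = p * V / (Rg * T)                           # initial gas mass (kg)
Q_fire, p_relief, C_vent = 5e3, 1.05e5, 2e-4   # W, Pa, kg/(s Pa^0.5) -- all illustrative
cp = gamma * Rg / (gamma - 1)

dt = 0.1
for step in range(6000):                       # 10 minutes of fire
    mdot = C_vent * np.sqrt(max(p - p_relief, 0.0))   # crude relief-valve law
    # Energy balance: dU/dt = Q_fire - mdot*cp*T  =>  dp/dt = (gamma-1)/V * (...)
    p += (gamma - 1) / V * (Q_fire - mdot * cp * T) * dt
    m -= mdot * dt
    T = p * V / (m * Rg)
print(f"pressure after 10 min: {p/1000:.1f} kPa")
```

With the vent included, the pressure plateaus slightly above the relief setting instead of rising without bound, which is the mitigating effect on overpressure described in the abstract.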
Damage identification using inverse methods.
Friswell, Michael I
2007-02-15
This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.
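The core computation in sensitivity-based model updating is a regularized Gauss-Newton step; the sketch below applies it to a toy three-parameter stiffness identification problem (the "modal model" and all numbers are fabricated for illustration).

```python
import numpy as np

def identify(residual_fn, jacobian_fn, theta0, lam=1e-2, iters=20):
    """Regularized Gauss-Newton: dtheta = (J^T J + lam I)^-1 J^T r.

    lam is the Tikhonov parameter that keeps the update stable when a large
    number of candidate damage parameters makes the problem ill-posed.
    """
    theta = theta0.copy()
    for _ in range(iters):
        r, J = residual_fn(theta), jacobian_fn(theta)
        theta += np.linalg.solve(J.T @ J + lam * np.eye(theta.size), J.T @ r)
    return theta

# Toy example: recover stiffness-loss factors from natural-frequency shifts.
k_true = np.array([1.0, 0.7, 1.0])            # element 2 has 30% stiffness loss
freqs = lambda k: np.sqrt(np.array([k[0] + k[1], k[1] + k[2], k[0] + k[2]]))
y = freqs(k_true)                             # "measured" frequencies
res = lambda k: y - freqs(k)
jac = lambda k, h=1e-6: np.array(
    [(res(k) - res(k + h * np.eye(3)[i])) / h for i in range(3)]).T
print(identify(res, jac, np.ones(3)).round(3))
```

The regularization term is exactly the remedy the paper discusses: without it, the many candidate parameters needed to cover an unknown damage location make the inverse problem numerically unstable.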
Controlled Ecological Life Support System (CELSS) modeling
NASA Technical Reports Server (NTRS)
Drysdale, Alan; Thomas, Mark; Fresa, Mark; Wheeler, Ray
1992-01-01
Attention is given to CELSS, a critical technology for the Space Exploration Initiative. OCAM (object-oriented CELSS analysis and modeling) models carbon, hydrogen, and oxygen recycling. Multiple crops and plant types can be simulated. Resource recovery options from inedible biomass include leaching, enzyme treatment, aerobic digestion, and mushroom and fish growth. The benefit of using many small crops overlapping in time, instead of a single large crop, is demonstrated. Unanticipated results include startup transients which reduce the benefit of multiple small crops. The relative contributions of mass, energy, and manpower to system cost are analyzed in order to determine appropriate research directions.
NASA Astrophysics Data System (ADS)
Fisher, J. A.; Atlas, E. L.; Blake, D. R.; Barletta, B.; Thompson, C. R.; Peischl, J.; Tzompa Sosa, Z. A.; Ryerson, T. B.; Murray, L. T.
2017-12-01
Nitrogen oxides (NO + NO2 = NOx) are precursors in the formation of tropospheric ozone, contribute to the formation of aerosols, and enhance nitrogen deposition to ecosystems. While direct emissions tend to be localised over continental source regions, a significant source of NOx to the remote troposphere comes from degradation of other forms of reactive nitrogen. Long-lived, small chain alkyl nitrates (RONO2) including methyl, ethyl and propyl nitrates may be particularly significant forms of reactive nitrogen in the remote atmosphere as they are emitted directly by the ocean in regions where reactive nitrogen is otherwise very low. They also act as NOx reservoir species, sequestering NOx in source regions and releasing it far downwind—and through this process may become increasingly important reservoirs as methane, ethane, and propane emissions grow. However, small RONO2 are not consistently included in global atmospheric chemistry models, and their distributions and impacts remain poorly constrained. In this presentation, we will describe a new RONO2 simulation in the GEOS-Chem chemical transport model evaluated using a large ensemble of aircraft observations collected over a 20-year period. The observations are largely concentrated over the Pacific Ocean, beginning with PEM-Tropics in the late 1990s and continuing through the recent HIPPO and ATom campaigns. Both observations and model show enhanced RONO2 in the tropical Pacific boundary layer that is consistent with a photochemical source in seawater. The model reproduces a similarly large enhancement over the southern ocean by assuming a large pool of oceanic RONO2 here, but the source of the seawater enhancement in this environment remains uncertain. We find that including marine RONO2 in the simulation is necessary to correct a large underestimate in simulated reactive nitrogen throughout the Pacific marine boundary layer. We also find that the impacts on NOx export from continental source regions are limited as RONO2 formation competes with other NOx reservoirs such as PAN, leading to re-partitioning of reactive nitrogen rather than a net reactive nitrogen source. Further implications for NOx and ozone, as well as the impacts of recent changes in the global distribution of methane, ethane, propane, and NOx emissions, will also be discussed.
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides important ecological-only variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .
Ren, Jiaping; Wang, Xinjie; Manocha, Dinesh
2016-01-01
We present a biologically plausible dynamics model to simulate swarms of flying insects. Our formulation, which is based on biological conclusions and experimental observations, is designed to simulate large insect swarms of varying densities. We use a force-based model that captures different interactions between the insects and the environment and computes collision-free trajectories for each individual insect. Furthermore, we model the noise as a constructive force at the collective level and present a technique to generate noise-induced insect movements in a large swarm that are similar to those observed in real-world trajectories. We use a data-driven formulation that is based on pre-recorded insect trajectories. We also present a novel evaluation metric and a statistical validation approach that takes into account various characteristics of insect motions. In practice, the combination of the Curl noise function with our dynamics model is used to generate realistic swarm simulations and emergent behaviors. We highlight its performance for simulating large flying swarms of midges, fruit flies, locusts and moths and demonstrate many collective behaviors, including aggregation, migration, phase transition, and escape responses.
Consideration of VT5 etch-based OPC modeling
NASA Astrophysics Data System (ADS)
Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin
2008-03-01
Including etch-based empirical data during OPC model calibration is a desired yet controversial decision for OPC modeling, especially for processes with a large litho-to-etch bias. While many OPC software tools are capable of providing this functionality nowadays, few etch models have been implemented in manufacturing, due to risk considerations such as compromises in resist and optical effect prediction, etch model accuracy, or even runtime concerns. The conventional method of applying rule-based etch correction alongside a resist model is popular but requires lengthy code generation to provide a leaner OPC input. This work discusses the risk factors and their considerations, together with an introduction of the techniques used within Mentor Calibre VT5 etch-based modeling at the sub-90nm technology node. Various strategies are discussed with the aim of better handling large etch-bias offsets without adding complexity to the final OPC package. Finally, results are presented to assess the advantages and limitations of the final method chosen.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions
NASA Technical Reports Server (NTRS)
Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William
2005-01-01
As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
A hierarchy for modeling high speed propulsion systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Deabreu, Alex
1991-01-01
General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery such as inlets, ramjets, and scramjets. The discussion is separated into four sections: (1) computational fluid dynamics model for the entire nonlinear system or high order nonlinear models; (2) high order linearized model derived from fundamental physics; (3) low order linear models obtained from other high order models; and (4) low order nonlinear models. Included are special considerations on any relevant control system designs. The methods discussed are for the quasi-one dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, moving normal shocks, hammershocks, subsonic combustion via heat addition, temperature dependent gases, detonation, and thermal choking.
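For reference, the quasi-one-dimensional Euler equations underlying all four modeling levels take the standard form (a sketch, with area A(x) and a heat-addition rate q̇ used to represent subsonic combustion):

$$\frac{\partial}{\partial t}\begin{pmatrix}\rho A\\ \rho u A\\ \rho E A\end{pmatrix}+\frac{\partial}{\partial x}\begin{pmatrix}\rho u A\\ (\rho u^{2}+p)A\\ (\rho E+p)\,u A\end{pmatrix}=\begin{pmatrix}0\\ p\,\dfrac{dA}{dx}\\ \rho A\,\dot q\end{pmatrix},$$

with total specific energy E = e + u²/2 and an equation of state p = (γ − 1)ρe, where γ becomes temperature-dependent in the temperature-dependent-gas case mentioned above.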
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5<u_i'^2>, where u_i' represents the three Cartesian components of a mesoscale circulation (the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value). A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
Inclusion of angular momentum in FREYA
Randrup, Jørgen; Vogt, Ramona
2015-05-18
The event-by-event fission model FREYA generates large samples of complete fission events from which any observable can be extracted, including fluctuations of the observables and the correlations between them. We describe here how FREYA was recently refined to include angular momentum throughout. Subsequently we present some recent results for both neutron and photon observables.
Comprehensive model for predicting elemental composition of coal pyrolysis products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richards, Andrew P.; Shutt, Tim; Fletcher, Thomas H.
Large-scale coal combustion simulations depend highly on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including that the volatiles that are released during pyrolysis are of the same elemental composition as the char particle. Another common assumption is that the char particle can be approximated by pure carbon. These assumptions will lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.
NASA Technical Reports Server (NTRS)
Maglieri, Domenic J.; Sothcott, Victor E.; Keefer, Thomas N., Jr.
1993-01-01
A study was performed to determine the feasibility of establishing whether a 'shaped' sonic boom signature, experimentally shown in wind tunnel models out to about 10 body lengths, will persist out to representative flight conditions of 200 to 300 body lengths. The study focuses on the use of a relatively large supersonic remotely-piloted and recoverable vehicle. Other simulation methods that may accomplish the objective are also addressed and include the use of nonrecoverable target drones, missiles, full-scale drones, very large wind tunnels, ballistic facilities, whirling-arm techniques, rocket sled tracks, and airplane nose probes. In addition, this report presents background on the origin of the feasibility study, including a brief review of the equivalent body concept, a listing of the basic sonic boom signature characteristics and requirements, identification of candidate vehicles in terms of desirable features and availability, and vehicle characteristics including geometries, area distributions, and resulting sonic boom signatures. A program is developed that includes wind tunnel sonic boom and force models and tests for both the basic and modified vehicles, and full-scale flight tests.
A model for pion-pion scattering in large-N QCD
NASA Astrophysics Data System (ADS)
Veneziano, G.; Yankielowicz, S.; Onofri, E.
2017-04-01
Following up on recent work by Caron-Huot et al. we consider a generalization of the old Lovelace-Shapiro model as a toy model for ππ scattering satisfying (most of) the properties expected to hold in ('t Hooft's) large-N limit of massless QCD. In particular, the model has asymptotically linear and parallel Regge trajectories at positive t, a positive leading Regge intercept α0 < 1, and an effective bending of the trajectories in the negative-t region producing a fixed branch point at J = 0 for t < t0 < 0. Fixed (physical) angle scattering can be tuned to match the power-like behavior (including logarithmic corrections) predicted by perturbative QCD: A(s, t) ~ s^{-β} log(s)^{-γ} F(θ). Tree-level unitarity (i.e. positivity of residues for all values of s and J) imposes strong constraints on the allowed region in the α0-β-γ parameter space, which nicely includes a physically interesting region around α0 = 0.5, β = 2 and γ = 3. The full consistency of the model would require an extension to multi-pion processes, a program we do not undertake in this paper.
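For orientation, the Lovelace-Shapiro amplitude that is being generalized can be written in its standard form (a sketch; the overall normalization β and the Adler-zero condition quoted here are the textbook choices, not details taken from this paper):

$$A(s,t)=\beta\,\frac{\Gamma\!\left(1-\alpha(s)\right)\Gamma\!\left(1-\alpha(t)\right)}{\Gamma\!\left(1-\alpha(s)-\alpha(t)\right)},\qquad \alpha(x)=\alpha_{0}+\alpha' x,$$

with linear, parallel trajectories and the condition α(m_π²) = 1/2 enforcing the Adler zero. The generalization described above deforms the trajectories at negative t, producing the fixed J = 0 branch point instead of trajectories that remain linear for all t.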
NASA Technical Reports Server (NTRS)
Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)
2002-01-01
One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.
Modeling Jupiter's Great Red Spot with an Active Hydrological Cycle
NASA Astrophysics Data System (ADS)
Palotai, C. J.; Dowling, T. E.; Morales-Juberías, R.
2003-05-01
We are studying the interaction of Jupiter's hydrological cycle with the formation and maintenance of its long-lived vortices and jet streams using numerical simulations. We are particularly interested in establishing the importance of the large convective storm system to the northwest of Jupiter's Great Red Spot (GRS). We have adapted into the EPIC model the cloud microphysics scheme used at Colorado State University (Fowler et al. 1996, J. Cli. 9, 489), which contains prognostic equations for vapor, liquid cloud, ice cloud, rain and snow. We are focusing on the role of water, but the EPIC model can also handle multiple species (water, ammonia, etc.). Processes that are currently working in the microphysics model include large-scale condensation/deposition, cloud evaporation, melting/freezing, and Bergeron-Findeisen diffusional growth of ice from supercooled liquid. The form of precipitation on gas giants is a major unknown. We are currently using a simple scheme for precipitation, but are studying the effect that processes known to be important in terrestrial models have on our results, including formation and accretion of rain and snow, precipitation evaporation, detrainment and cloud-top entrainment. We will present comparisons of "dry" and "wet" runs of a channel Jupiter EPIC simulation covering 40S to the equator that includes various initial water-vapor profiles and a GRS model. The effects of latent heating on the energy budget and vertical transport will be discussed. This research is funded by NASA's Planetary Atmospheres and EPSCoR Programs.
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
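As a toy illustration of the calibration task these benchmarks pose, the sketch below fits two rate constants of a hypothetical A -> B -> C kinetic model to synthetic data by least squares. The model, data, and bounds are all invented for illustration and are not part of BioPreDyn-bench.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def simulate(params, t_obs):
        k1, k2 = params
        # Toy kinetic model: A -> B -> C with rate constants k1, k2
        def rhs(t, y):
            a, b, c = y
            return [-k1 * a, k1 * a - k2 * b, k2 * b]
        sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [1.0, 0.0, 0.0], t_eval=t_obs)
        return sol.y[1]  # observe species B only

    t_obs = np.linspace(0.0, 10.0, 20)
    data = simulate([0.8, 0.3], t_obs) + np.random.normal(0, 0.01, t_obs.size)

    # Residuals between model output and data drive the fit
    res = least_squares(lambda p: simulate(p, t_obs) - data,
                        x0=[0.1, 0.1], bounds=([0, 0], [10, 10]))
    print(res.x)  # estimated (k1, k2)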
Modeling near-wall turbulent flows
NASA Astrophysics Data System (ADS)
Marusic, Ivan; Mathis, Romain; Hutchins, Nicholas
2010-11-01
The near-wall region of turbulent boundary layers is a crucial region for turbulence production, but it is also a region that becomes increasingly difficult to access and make measurements in as the Reynolds number becomes very high. Consequently, it is desirable to model the turbulence in this region. Recent studies have shown that the classical description, with inner (wall) scaling alone, is insufficient to explain the behaviour of the streamwise turbulence intensities with increasing Reynolds number. Here we will review our recent near-wall model (Marusic et al., Science 329, 2010), where the near-wall turbulence is predicted given information from only the large-scale signature at a single measurement point in the logarithmic layer, considerably far from the wall. The model is consistent with the Townsend attached eddy hypothesis in that the large-scale structures associated with the log-region are felt all the way down to the wall, but also includes a non-linear amplitude modulation effect of the large structures on the near-wall turbulence. Detailed predicted spectra across the entire near-wall region will be presented, together with other higher order statistics over a large range of Reynolds numbers varying from laboratory to atmospheric flows.
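The following is a schematic sketch of the inner-outer prediction idea: a stored small-scale signal is superposed with, and amplitude-modulated by, the measured large-scale log-region signal. The signals and the constants alpha and beta below are placeholders, not the published calibration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    u_star = rng.standard_normal(n)             # stand-in "universal" near-wall signal
    u_large = np.convolve(rng.standard_normal(n),
                          np.ones(200) / 200, mode="same")  # large-scale log-region signal

    alpha, beta = 0.9, 0.3                      # superposition / modulation constants (hypothetical)
    # Predicted near-wall signal: superposition plus nonlinear amplitude modulation
    u_predicted = u_star * (1.0 + beta * u_large) + alpha * u_large

    print(u_predicted.std())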
Multiresolution comparison of precipitation datasets for large-scale models
NASA Astrophysics Data System (ADS)
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
Efficient design of clinical trials and epidemiological research: is it possible?
Lauer, Michael S; Gordon, David; Wei, Gina; Pearson, Gail
2017-08-01
Randomized clinical trials and large-scale cohort studies continue to have a critical role in generating evidence in cardiovascular medicine; however, there is increasing concern that ballooning costs threaten the clinical trial enterprise. In this Perspectives article, we discuss the changing landscape of clinical research, and clinical trials in particular, focusing on the reasons for increasing costs and inefficiencies. These reasons include excessively complex design, overly restrictive inclusion and exclusion criteria, burdensome regulations, excessive source-data verification, and concerns about the effect of clinical research conduct on workflow. Thought leaders have called on the clinical research community to consider alternative, transformative business models, including models that focus on simplicity and the leveraging of digital resources. We present examples of innovative approaches by which some investigators have successfully conducted large-scale clinical trials at relatively low cost. These examples include randomized registry trials, cluster-randomized trials, adaptive trials, and trials that are fully embedded within digital clinical care or administrative platforms.
A note on the microeconomics of migration.
Stahl, K
1983-11-01
"The purpose of this note is to demonstrate in a simple model that an individual's migration from a small town to a large city may be rationalized purely by a consumption motive, rather than the motive of obtaining a higher income. More specifically, it is shown that in a large city an individual may derive a higher utility from spending a given amount of income than in a small town." A formal model is first developed that includes the principal forces at work and is then illustrated using a graphic example. The theoretical and empirical issues raised are considered in the concluding section. excerpt
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional approach of a linear model with intercept may not intercept at zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We carried out simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and the Mean Square Residual. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results show that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate them and data to test for a non-linear relationship between track indices and true density at low densities.
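A minimal sketch of the comparison described above, using synthetic track-index data: fit both regressions by least squares and compare a Gaussian-likelihood AIC. The data and the parameter counts (slope, optional intercept, plus error variance) are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.array([0.3, 0.8, 1.1, 1.9, 2.5, 3.4])    # carnivore density (synthetic)
    y = 3.26 * x + rng.normal(0, 0.2, x.size)       # track density (synthetic)
    n = x.size

    A1 = np.column_stack([x, np.ones(n)])           # y = a*x + b (with intercept)
    coef1, rss1 = np.linalg.lstsq(A1, y, rcond=None)[:2]

    A2 = x[:, None]                                 # y = a*x (through the origin)
    coef2, rss2 = np.linalg.lstsq(A2, y, rcond=None)[:2]

    def aic(rss, k):                                # Gaussian-likelihood AIC
        return n * np.log(rss / n) + 2 * k

    print("with intercept:", coef1, aic(rss1[0], 3))
    print("through origin:", coef2, aic(rss2[0], 2))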
NASA Astrophysics Data System (ADS)
Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen
2017-03-01
Long lead time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling these products with a distributed hydrological model can produce long lead time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, including 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloaded terrain properties; the model parameters were previously optimized with rain gauge observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased relative to the rain gauge precipitation, and a post-processing method is proposed to correct the WRF QPF products, which improves the flood forecasting capability. Model performance also improves with parameter re-optimization, suggesting that the model parameters should be optimized with the QPF rather than with the rain gauge precipitation. As lead time increases, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning due to their long lead time and reasonable results.
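The paper's post-processing method is not detailed in the abstract; as a stand-in, the sketch below shows one common QPF bias-correction approach, linear scaling against gauge observations, with invented numbers.

    import numpy as np

    def linear_scaling_factor(qpf_train, gauge_train):
        """Multiplicative correction fitted on a training period."""
        return gauge_train.mean() / max(qpf_train.mean(), 1e-9)

    qpf_train = np.array([12.0, 0.0, 5.5, 30.2, 8.1])     # WRF QPF (mm), synthetic
    gauge_train = np.array([15.0, 0.5, 7.0, 24.0, 10.0])  # gauge obs (mm), synthetic

    factor = linear_scaling_factor(qpf_train, gauge_train)
    qpf_new = np.array([20.0, 3.2])   # new forecasts to correct
    print(factor * qpf_new)           # post-processed QPF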
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houweling, Antonetta C., E-mail: A.Houweling@umcutrecht.n; Philippens, Marielle E.P.; Dijkema, Tim
2010-03-15
Purpose: The dose-response relationship of the parotid gland has been described most frequently using the Lyman-Kutcher-Burman model. However, various other normal tissue complication probability (NTCP) models exist. We evaluated in a large group of patients the value of six NTCP models that describe the parotid gland dose response 1 year after radiotherapy. Methods and Materials: A total of 347 patients with head-and-neck tumors were included in this prospective parotid gland dose-response study. The patients were treated with either conventional radiotherapy or intensity-modulated radiotherapy. Dose-volume histograms for the parotid glands were derived from three-dimensional dose calculations using computed tomography scans. Stimulated salivary flow rates were measured before and 1 year after radiotherapy. A threshold of 25% of the pretreatment flow rate was used to define a complication. The evaluated models included the Lyman-Kutcher-Burman model, the mean dose model, the relative seriality model, the critical volume model, the parallel functional subunit model, and the dose-threshold model. The goodness of fit (GOF) was determined by the deviance and a Monte Carlo hypothesis test. Ranking of the models was based on Akaike's information criterion (AIC). Results: None of the models was rejected based on the evaluation of the GOF. The mean dose model was ranked as the best model based on the AIC. The TD50 in these models was approximately 39 Gy. Conclusions: The mean dose model was preferred for describing the dose-response relationship of the parotid gland.
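For reference, a hedged sketch of the Lyman-Kutcher-Burman calculation evaluated in the study: a generalized EUD computed from a differential DVH, followed by a probit dose response. The parameter values (n = 1, for which the gEUD reduces to the mean dose, consistent with the preferred mean dose model, and TD50 = 39 Gy) are illustrative; the synthetic DVH is not patient data.

    import numpy as np
    from scipy.stats import norm

    def lkb_ntcp(dose_bins, vol_fracs, n=1.0, m=0.4, td50=39.0):
        # gEUD reduces the DVH to a single effective dose; n = 1 gives the mean dose
        geud = np.sum(vol_fracs * dose_bins ** (1.0 / n)) ** n
        t = (geud - td50) / (m * td50)
        return norm.cdf(t)   # probit dose-response

    dose = np.array([10.0, 25.0, 40.0, 55.0])  # Gy, synthetic DVH bin doses
    vol = np.array([0.4, 0.3, 0.2, 0.1])       # fractional volumes (sum to 1)
    print(lkb_ntcp(dose, vol))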
Numerical Simulations of Vortical Mode Stirring: Effects of Large Scale Shear and Strain
2015-09-30
M.-Pascale Lelong, NorthWest Research Associates. The project develops parameterizations of vortical-mode stirring that can be implemented in larger-scale ocean models; these parameterizations will incorporate the effects of local ambient conditions, including latitude. Results were presented in a talk at the Nonlinear Effects in Internal Waves Conference. (Approved for public release; distribution is unlimited.)
Statistical Analysis of Large Simulated Yield Datasets for Studying Climate Effects
NASA Technical Reports Server (NTRS)
Makowski, David; Asseng, Senthold; Ewert, Frank; Bassu, Simona; Durand, Jean-Louis; Martre, Pierre; Adam, Myriam; Aggarwal, Pramod K.; Angulo, Carlos; Baron, Chritian;
2015-01-01
Many studies have been carried out during the last decade to study the effect of climate change on crop yields and other key crop characteristics. In these studies, one or several crop models were used to simulate crop growth and development for different climate scenarios that correspond to different projections of atmospheric CO2 concentration, temperature, and rainfall changes (Semenov et al., 1996; Tubiello and Ewert, 2002; White et al., 2011). The Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013) builds on these studies with the goal of using an ensemble of multiple crop models in order to assess effects of climate change scenarios for several crops in contrasting environments. These studies generate large datasets, including thousands of simulated crop yield values. They include series of yield values obtained by combining several crop models with different climate scenarios that are defined by several climatic variables (temperature, CO2, rainfall, etc.). Such datasets potentially provide useful information on the possible effects of different climate change scenarios on crop yields. However, it is sometimes difficult to analyze these datasets and to summarize them in a useful way due to their structural complexity; simulated yield data can differ among contrasting climate scenarios, sites, and crop models. Another issue is that it is not straightforward to extrapolate the results obtained for the scenarios to alternative climate change scenarios not initially included in the simulation protocols. Additional dynamic crop model simulations for new climate change scenarios are an option, but this approach is costly, especially when a large number of crop models are used to generate the simulated data, as in AgMIP. Statistical models have been used to analyze responses of measured yield data to climate variables in past studies (Lobell et al., 2011), but the use of a statistical model to analyze yields simulated by complex process-based crop models is a rather new idea. We demonstrate here that statistical methods can play an important role in analyzing simulated yield datasets obtained from ensembles of process-based crop models. Formal statistical analysis is helpful to estimate the effects of different climatic variables on yield, and to describe the between-model variability of these effects.
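As a toy illustration of the statistical-emulator idea, the sketch below fits a linear response surface to a handful of synthetic "simulated" yields and interpolates to a scenario outside the original protocol. All numbers are invented and the linear form is an assumption, not AgMIP's analysis.

    import numpy as np

    # Columns: temperature change (°C), CO2 (ppm); rows: simulated scenarios
    X = np.array([[0, 360], [2, 450], [2, 550], [4, 450], [4, 650]], float)
    y = np.array([8.0, 7.1, 7.8, 5.9, 6.8])   # simulated yield (t/ha), synthetic

    A = np.column_stack([np.ones(len(X)), X])  # intercept + linear climate terms
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    new_scenario = np.array([1.0, 3.0, 500.0])  # intercept term, +3 °C, 500 ppm
    print(new_scenario @ beta)                  # emulated yield for an unsimulated scenario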
NASA Astrophysics Data System (ADS)
Sandbach, S. D.; Lane, S. N.; Hardy, R. J.; Amsler, M. L.; Ashworth, P. J.; Best, J. L.; Nicholas, A. P.; Orfeo, O.; Parsons, D. R.; Reesink, A. J. H.; Szupiany, R. N.
2012-12-01
Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers.
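For orientation, unresolved bed topography typically enters CFD through the roughness length z0 in the law of the wall. The sketch below shows how the near-bed velocity profile shifts with z0; the values are illustrative, not from the study.

    import numpy as np

    KAPPA = 0.41  # von Karman constant

    def log_law_velocity(z, u_star, z0):
        """u(z) = (u*/kappa) * ln(z / z0), valid for z > z0."""
        return (u_star / KAPPA) * np.log(z / z0)

    z = np.array([0.5, 1.0, 2.0, 4.0])     # height above bed (m)
    for z0 in (0.001, 0.01, 0.05):         # roughness length (m), illustrative
        print(z0, log_law_velocity(z, u_star=0.08, z0=z0))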
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a mobility model with scale-free features and two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
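A hedged sketch of the exploration/preferential-return mechanism such models build on: with probability rho * S**(-gamma) the walker explores a new location, otherwise it returns to a visited one in proportion to visit frequency. Here the exploration tendency is fixed rather than Gaussian-distributed across individuals, and all parameter values are illustrative.

    import random

    def epr_visits(steps, rho=0.6, gamma=0.2, seed=1):
        rng = random.Random(seed)
        visits = {0: 1}                    # location id -> visit count
        next_id = 1
        for _ in range(steps):
            s = len(visits)                # number of distinct locations so far
            if rng.random() < rho * s ** (-gamma):
                visits[next_id] = 1        # explore a brand-new location
                next_id += 1
            else:                          # preferential return
                locs = list(visits)
                here = rng.choices(locs, weights=[visits[l] for l in locs])[0]
                visits[here] += 1
        return visits

    # A few locations dominate, a signature of preferential return
    print(sorted(epr_visits(1000).values(), reverse=True)[:5])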
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
Modelling radicalization: how small violent fringe sects develop into large indoctrinated societies
NASA Astrophysics Data System (ADS)
Short, Martin B.; McCalla, Scott G.; D'Orsogna, Maria R.
2017-08-01
We model radicalization in a society consisting of two competing religious, ethnic or political groups. Each of the `sects' is divided into moderate and radical factions, with intra-group transitions occurring either spontaneously or through indoctrination. We also include the possibility of one group violently attacking the other. The intra-group transition rates of one group are modelled to explicitly depend on the actions and characteristics of the other, including violent episodes, effectively coupling the dynamics of the two sects. We use a game theoretic framework and assume that radical factions may tune `strategic' parameters to optimize given utility functions aimed at maximizing their ranks while minimizing the damage inflicted by their rivals. Constraints include limited overall resources that must be optimally allocated between indoctrination and external attacks on the other group. Various scenarios are considered, from symmetric sects whose behaviours mirror each other, to totally asymmetric ones where one sect may have a larger population or a superior resource availability. We discuss under what conditions sects preferentially employ indoctrination or violence, and how allowing sects to readjust their strategies allows for small, violent sects to grow into large, indoctrinated communities.
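A minimal sketch, not the paper's exact equations: two sects, each split into moderates (M) and radicals (R), with spontaneous radicalization, indoctrination proportional to the radical share, and attack-driven radicalization that couples the two sects. All rate constants are hypothetical.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, a=0.01, b=0.05, c=0.02):
        m1, r1, m2, r2 = y
        # a: spontaneous, b: indoctrination, c: radicalization driven by rival attacks
        d_r1 = a * m1 + b * m1 * r1 / (m1 + r1) + c * m1 * r2 / (m2 + r2)
        d_r2 = a * m2 + b * m2 * r2 / (m2 + r2) + c * m2 * r1 / (m1 + r1)
        return [-d_r1, d_r1, -d_r2, d_r2]   # each sect's population is conserved

    sol = solve_ivp(rhs, (0, 200), [0.9, 0.1, 0.7, 0.3], t_eval=[0, 50, 100, 200])
    print(sol.y[:, -1])   # final faction sizes (m1, r1, m2, r2)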
Anthropic prediction for a large multi-jump landscape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz-Perlov, Delia, E-mail: delia@perlov.com
2008-10-15
The assumption of a flat prior distribution plays a critical role in the anthropic prediction of the cosmological constant. In a previous paper we analytically calculated the distribution for the cosmological constant, including the prior and anthropic selection effects, in a large toy 'single-jump' landscape model. We showed that it is possible for the fractal prior distribution that we found to behave as an effectively flat distribution in a wide class of landscapes, but only if the single-jump size is large enough. We extend this work here by investigating a large (N ≈ 10^500) toy 'multi-jump' landscape model. The jump sizes range over three orders of magnitude and an overall free parameter c determines the absolute size of the jumps. We will show that for 'large' c the distribution of probabilities of vacua in the anthropic range is effectively flat, and thus the successful anthropic prediction is validated. However, we argue that for small c, the distribution may not be smooth.
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
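To see why direct sampling struggles, the sketch below estimates a scaled cumulant generating function psi(lam) = (1/T) ln <exp(lam * A_T)> for a biased random walker by brute-force Monte Carlo and reports the effective sample size, which collapses as the bias parameter grows. This illustrates the rare-event problem the paper addresses; it is not one of the paper's trajectory-sampling methods.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_traj, drift = 200, 5000, 0.1

    steps = drift + rng.standard_normal((n_traj, T))
    A = steps.sum(axis=1)                      # time-integrated observable per trajectory

    for lam in (0.01, 0.05, 0.1):
        w = np.exp(lam * A - lam * A.max())    # shifted weights for numerical stability
        psi = (np.log(w.mean()) + lam * A.max()) / T
        ess = w.sum() ** 2 / (w ** 2).sum()    # effective sample size of the estimate
        print(f"lam={lam:.2f}  psi={psi:.4f}  ESS={ess:.0f} of {n_traj}")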
On the accuracy of modelling the dynamics of large space structures
NASA Technical Reports Server (NTRS)
Diarra, C. M.; Bainum, P. M.
1985-01-01
Proposed space missions will require large-scale, lightweight, space-based structural systems. Large space structure technology (LSST) systems will have to accommodate (among others): ocean data systems; electronic mail systems; large multibeam antenna systems; and space-based solar power systems. The structures are to be delivered into orbit by the space shuttle. Because of their inherent size, modeling techniques and scaling algorithms must be developed so that system performance can be predicted accurately prior to launch and assembly. When the size and weight-to-area ratio of proposed LSST systems dictate that the entire system be considered flexible, there are two basic modeling methods which can be used. The first is a continuum approach, a mathematical formulation for predicting the motion of a general orbiting flexible body, in which elastic deformations are considered small compared with characteristic body dimensions. This approach is based on an a priori knowledge of the frequencies and shape functions of all modes included within the system model. Alternatively, finite element techniques can be used to model the entire structure as a system of lumped masses connected by a series of (restoring) springs and possibly dampers. In addition, a computational algorithm was developed to evaluate the coefficients of the various coupling terms in the equations of motion as applied to the finite element model of the Hoop/Column antenna.
Sustainability Indicators for Coupled Human-Earth Systems
NASA Astrophysics Data System (ADS)
Motesharrei, S.; Rivas, J. R.; Kalnay, E.
2014-12-01
Over the last two centuries, the Human System went from having a small impact on the Earth System (including the Climate System) to becoming dominant, because both population and per capita consumption have grown extremely fast, especially since about 1950. We therefore argue that Human System Models must be included into Earth System Models through bidirectional couplings with feedbacks. In particular, population should be modeled endogenously, rather than exogenously as done currently in most Integrated Assessment Models. The growth of the Human System threatens to overwhelm the Carrying Capacity of the Earth System, and may be leading to catastrophic climate change and collapse. We propose a set of Ecological and Economic "Sustainability Indicators" that can employ large data-sets for developing and assessing effective mitigation and adaptation policies. Using the Human and Nature Dynamical Model (HANDY) and Coupled Human-Climate-Water Model (COWA), we carry out experiments with this set of Sustainability Indicators and show that they are applicable to various coupled systems including Population, Climate, Water, Energy, Agriculture, and Economy. Impact of nonrenewable resources and fossil fuels could also be understood using these indicators. We demonstrate interconnections of Ecological and Economic Indicators. Coupled systems often include feedbacks and can thus display counterintuitive dynamics. This makes it difficult for even experts to see coming catastrophes from just the raw data for different variables. Sustainability Indicators boil down the raw data into a set of simple numbers that cross their sustainability thresholds with a large time-lag before variables enter their catastrophic regimes. Therefore, we argue that Sustainability Indicators constitute a powerful but simple set of tools that could be directly used for making policies for sustainability.
Radiation risk estimation: Modelling approaches for “targeted” and “non-targeted” effects
NASA Astrophysics Data System (ADS)
Ballarini, Francesca; Alloni, Daniele; Facoetti, Angelica; Mairani, Andrea; Nano, Rosanna; Ottolenghi, Andrea
The estimation of the risks from low doses of ionizing radiation - including heavy ions - is still a debated question. In particular, the action of heavy ions on biological targets needs further investigation. In this framework, we present a mechanistic model and a Monte Carlo simulation code for the induction of different types of chromosome aberrations. The model, previously validated for gamma rays and light ions, has recently started to be extended to heavy ions such as Iron and Carbon, which are of interest both for space radiation protection and for hadrontherapy. Preliminary results were found to be in agreement with experimental dose-response curves for aberration yields observed following heavy-ion irradiation of human lymphocytes treated with the Premature Chromosome Condensation technique. During the last 10 years, the "Linear No Threshold" hypothesis has been challenged by a large number of observations on the so-called "non-targeted effects" including bystander effect, which consists of the induction of cytogenetic damage in cells not directly traversed by radiation, most likely as a response to molecular messengers released by directly irradiated cells. Although it is now clear that cellular communication plays a fundamental role, our knowledge on the mechanisms underlying bystander effects is still poor, and would largely benefit from further investigations including theoretical models and simulation codes. In the present paper we will review different modelling approaches, including one that is being developed at the University of Pavia, focusing on the assumptions adopted by the various authors and on their implications in terms of low-dose radiation risk, as well as on the identification of "critical" parameters that can modulate the model outcomes.
NASA Astrophysics Data System (ADS)
Afshar, Ali
An evaluation of Lagrangian-based, discrete-phase models for multi-component liquid sprays encountered in the combustors of gas turbine engines is presented. In particular, the spray modeling capabilities of the commercial software ANSYS Fluent were evaluated. Spray modeling was performed for various cold flow validation cases. These validation cases include a liquid jet in a cross-flow, an airblast atomizer, and a high shear fuel nozzle. Droplet properties including velocity and diameter were investigated and compared with previous experimental and numerical results. Different primary and secondary breakup models were evaluated in this thesis. The secondary breakup models investigated include the Taylor analogy breakup (TAB) model, the wave model, the Kelvin-Helmholtz Rayleigh-Taylor (KHRT) model, and the stochastic secondary droplet (SSD) approach. The modeling of fuel sprays requires a proper treatment of the turbulence. Reynolds-averaged Navier-Stokes (RANS), large eddy simulation (LES), hybrid RANS/LES, and dynamic LES (DLES) approaches were also considered for the turbulent flows involving sprays. The spray and turbulence models were evaluated using the available benchmark experimental data.
Formazin, Maren; Burr, Hermann; Aagestad, Cecilie; Tynes, Tore; Thorsen, Sannie Vester; Perkio-Makela, Merja; Díaz Aramburu, Clara Isabel; Pinilla García, Francisco Javier; Galiana Blanco, Luz; Vermeylen, Greet; Parent-Thirion, Agnes; Hooftman, Wendela; Houtman, Irene
2014-12-09
In most countries in the EU, national surveys are used to monitor working conditions and health. Since the development processes behind the various surveys are not necessarily theoretical, but certainly practical and political, the extent of similarity among the dimensions covered in these surveys has been unclear. Another interesting question is whether prominent models from scientific research on work and health are present in the surveys--bearing in mind that the primary focus of these surveys is on monitoring status and trends, not on mapping scientific models. Moreover, it is relevant to know which other scales and concepts not stemming from these models have been included in the surveys. The purpose of this paper is to determine (1) the similarity of dimensions covered in the surveys included and (2) the congruence between dimensions of scientific research and dimensions present in the monitoring systems. Items from surveys representing six European countries and one Europe-wide survey were classified into the dimensions they cover, using a taxonomy agreed upon among all involved partners from the six countries. The classification reveals that there is a large overlap in the dimensions covered in the seven surveys, albeit not in the formulation of items. Among the available items, the two prominent work-stress models--the demand-control-support (DCS) model and the effort-reward imbalance (ERI) model--are covered in most surveys even though this was not the primary aim in the compilation of these surveys. In addition, a large variety of items included in the surveillance systems are not part of these models and are--at least partly--used in nearly all surveys. These additional items reflect concepts such as "restructuring", "meaning of work", "emotional demands" and "offensive behaviour/violence & harassment". The overlap of the dimensions covered in the various questionnaires indicates that the interests of the parties deciding on the questionnaires in the different countries overlap. The large number of dimensions measured in the questionnaires that are not part of the DCS and ERI models is striking. These "new" dimensions could inspire the research community to further investigate their possible health and labour market effects.
NASA Astrophysics Data System (ADS)
Pan, Wen-hao; Liu, Shi-he; Huang, Li
2018-02-01
This study developed a three-layer velocity model for turbulent flow over large-scale roughness. Through theoretical analysis, the model couples surface and subsurface flow. Flume experiments with a flat cobble bed were conducted to examine the theoretical model. Results show that both the turbulent flow field and the total flow characteristics are quite different from those in low-gradient flow over microscale roughness. The velocity profile in a shallow stream converges to the logarithmic law away from the bed, while inflecting over the roughness layer toward the non-zero subsurface flow. The velocity fluctuations close to a cobble bed differ from those over a sand bed, with no comparably large peak velocities. The total flow energy loss deviates significantly from the 1/7 power-law equation when the relative flow depth is shallow. Both the coupled model and the experiments indicate a non-negligible subsurface flow that accounts for a considerable proportion of the total flow. By including the subsurface flow, the coupled model is able to predict a wider range of velocity profiles and total flow energy loss coefficients than existing equations.
Modeling Study of the Low-Temperature Oxidation of Large Methyl Esters from C11 to C19
Herbinet, Olivier; Biet, Joffrey; Hakka, Mohammed Hichem; Warth, Valérie; Glaude, Pierre Alexandre; Nicolle, André; Battin-Leclerc, Frédérique
2013-01-01
The modeling of the low-temperature oxidation of large saturated methyl esters truly representative of those found in biodiesel fuels has been investigated. Models have been developed for these species and detailed kinetic mechanisms have been automatically generated using a new extended version of the EXGAS software, which includes reactions specific to the chemistry of esters. A model generated for a binary mixture of n-decane and methyl palmitate was used to simulate experimental results obtained in a jet-stirred reactor for this fuel. This model predicts very well the reactivity of the fuel and the mole fraction profiles of most reaction products. This work also shows that a model for a middle-size methyl ester such as methyl decanoate predicts fairly well the reactivity and the mole fractions of most species, with a substantial decrease in computational time. Large n-alkanes such as n-hexadecane are also good surrogates for reproducing the reactivity of methyl esters, with an important gain in computational time, but they cannot account for the formation of specific products such as unsaturated esters or cyclic ethers with an ester function.
Making the most of MBSE: pragmatic model-based engineering for the SKA Telescope Manager
NASA Astrophysics Data System (ADS)
Le Roux, Gerhard; Bridger, Alan; MacIntosh, Mike; Nicol, Mark; Schnetler, Hermine; Williams, Stewart
2016-08-01
Many large projects including major astronomy projects are adopting a Model Based Systems Engineering approach. How far is it possible to get value for the effort involved in developing a model that accurately represents a significant project such as SKA? Is it possible for such a large project to ensure that high-level requirements are traceable through the various system-engineering artifacts? Is it possible to utilize the tools available to produce meaningful measures for the impact of change? This paper shares one aspect of the experience gained on the SKA project. It explores some of the recommended and pragmatic approaches developed, to get the maximum value from the modeling activity while designing the Telescope Manager for the SKA. While it is too early to provide specific measures of success, certain areas are proving to be the most helpful and offering significant potential over the lifetime of the project. The experience described here has been on the 'Cameo Systems Modeler' tool-set, supporting a SysML based System Engineering approach; however the concepts and ideas covered would potentially be of value to any large project considering a Model based approach to their Systems Engineering.
NASA Astrophysics Data System (ADS)
Jiang, Zhou; Xia, Zhenhua; Shi, Yipeng; Chen, Shiyi
2018-04-01
A fully developed spanwise-rotating turbulent channel flow has been numerically investigated using large-eddy simulation. Our focus is to assess the performance of the dynamic variants of eddy viscosity models, including the dynamic Vreman model (DVM), the dynamic wall-adapting local eddy viscosity (DWALE) model, the dynamic σ (Dσ) model, and the dynamic volumetric strain-stretching (DVSS) model, in this canonical flow. Results with the dynamic Smagorinsky model (DSM) and direct numerical simulations (DNS) are used as references. Our results show that the DVM has an incorrect asymptotic behavior in the near-wall region, while the other three models predict it correctly. In the high-rotation case, the DWALE yields a reliable mean velocity profile, but the turbulence intensities in the wall-normal and spanwise directions show clear deviations from the DNS data. The DVSS exhibits poor predictions of both the mean velocity profile and the turbulence intensities. In all three cases, the Dσ model performs the best.
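For orientation, a sketch of the static Smagorinsky closure that the dynamic variants above generalize: nu_t = (Cs * Delta)^2 * |S|, with the dynamic procedures replacing the fixed Cs by a locally computed coefficient. The 2D toy velocity field and the constant Cs = 0.17 are illustrative.

    import numpy as np

    def smagorinsky_nu_t(u, v, dx, cs=0.17):
        """Eddy viscosity on a 2D grid from the resolved strain-rate tensor."""
        dudx, dudy = np.gradient(u, dx)
        dvdx, dvdy = np.gradient(v, dx)
        s11, s22 = dudx, dvdy
        s12 = 0.5 * (dudy + dvdx)
        s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))  # |S|
        return (cs * dx) ** 2 * s_mag

    x = np.linspace(0, 2 * np.pi, 64)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)   # Taylor-Green-like field
    print(smagorinsky_nu_t(u, v, dx=x[1] - x[0]).mean())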
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were generated and analyzed through application of computer simulation packages employing 'component mode synthesis' techniques. The Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which included running experimental tests on flexible multibody test articles. From these tests, data were collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
A simple 2D biofilm model yields a variety of morphological features.
Hermanowicz, S W
2001-01-01
A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
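A minimal cellular-automaton sketch in the spirit of the model described: cells divide into empty neighbor sites at a rate tied to local nutrient availability, and exposed surface cells erode at random. The rules and constants are invented for illustration, not the paper's.

    import random

    W, H, STEPS = 40, 25, 300
    rng = random.Random(0)
    grid = [[False] * W for _ in range(H)]
    grid[0] = [True] * W                       # row 0 = substratum, fully seeded

    def neighbors(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, (j + dj) % W      # periodic sideways, walls top/bottom
            if 0 <= ni < H:
                yield ni, nj

    for _ in range(STEPS):
        snapshot = [(i, j) for i in range(H) for j in range(W) if grid[i][j]]
        for i, j in snapshot:
            nutrient = (i + 1) / H             # more nutrient toward the bulk liquid (top)
            if rng.random() < 0.5 * nutrient:  # growth into a random empty neighbor
                empties = [(a, b) for a, b in neighbors(i, j) if not grid[a][b]]
                if empties:
                    a, b = rng.choice(empties)
                    grid[a][b] = True
            exposed = any(not grid[a][b] for a, b in neighbors(i, j))
            if i > 0 and exposed and rng.random() < 0.02:
                grid[i][j] = False             # erosion (detachment) of exposed cells

    print(sum(map(sum, grid)), "occupied sites")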
Dynamic Analyses Including Joints Of Truss Structures
NASA Technical Reports Server (NTRS)
Belvin, W. Keith
1991-01-01
Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.
Fault Diagnostics and Prognostics for Large Segmented SRMs
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry; Osipov, Viatcheslav V.; Smelyanskiy, Vadim N.; Timucin, Dogan A.; Uckun, Serdar; Hayashida, Ben; Watson, Michael; McMillin, Joshua; Shook, David; Johnson, Mont;
2009-01-01
We report progress in the development of a fault diagnostic and prognostic (FD&P) system for large segmented solid rocket motors (SRMs). The model includes the following main components: (i) a 1D dynamical model of the internal ballistics of SRMs; (ii) a surface regression model for the propellant taking into account erosive burning; (iii) a model of the propellant geometry; (iv) a model of the nozzle ablation; (v) a model of a hole burning through the SRM steel case. The model is verified by comparing the spatially resolved time traces of the flow parameters obtained in simulations with the results of simulations obtained using a high-fidelity 2D FLUENT model (developed by a third party). To develop the FD&P system for a case-breach fault in a large segmented rocket, we note [1] that the stationary zero-dimensional approximation for the nozzle stagnation pressure is surprisingly accurate even when the stagnation pressure varies significantly in time during burning tail-off. This was also found to be true for the case-breach fault [2]. These results allow us to use the FD&P system developed in our earlier research [3]-[6] by substituting the head stagnation pressure with the nozzle stagnation pressure. The axial corrections to the value of the side thrust due to mass addition are taken into account by solving a system of ODEs in the spatial dimension.
NASA Technical Reports Server (NTRS)
McGhee, D. S.
2004-01-01
Launch vehicles consume large quantities of propellant quickly, causing the mass properties and structural dynamics of the vehicle to change dramatically. Currently, structural load assessments account for this change with a large collection of structural models representing various propellant fill levels. This creates a large database of models, complicating the delivery of reduced models and requiring extensive work for model changes. Presented here is a method to account for these mass changes in a more efficient manner. The method allows for the subtraction of propellant mass as the propellant is used in the simulation. This subtraction is done in the modal domain of the vehicle generalized model. The additional computation required is primarily for constructing the used-propellant mass matrix from an initial propellant model, plus further matrix multiplications and subtractions. An additional eigenvalue solution is required to uncouple the new equations of motion; however, this is a much simpler calculation starting from a system that is already substantially uncoupled. The method was successfully tested in a simulation of Saturn V loads. Results from the method are compared to results from separate structural models for several propellant levels, showing excellent agreement. Further development to encompass more complicated propellant models, including slosh dynamics, is possible.
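A minimal linear-algebra sketch of the idea, with random stand-in matrices rather than a vehicle model: project the used-propellant mass into the modal domain, subtract it from the identity modal mass matrix, and re-solve a small generalized eigenproblem to uncouple the updated equations.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    n_modes, n_dof = 6, 30

    Phi = rng.standard_normal((n_dof, n_modes)) * 0.1         # mass-normalized mode shapes
    Lam = np.diag(np.sort(rng.uniform(4.0, 400.0, n_modes)))  # modal stiffness (omega^2)

    # Physical mass of propellant consumed so far (lumped, diagonal stand-in)
    dM = np.zeros((n_dof, n_dof))
    dM[:5, :5] = np.eye(5) * 0.8

    # Updated modal mass: identity minus the projected used-propellant mass
    M_red = np.eye(n_modes) - Phi.T @ dM @ Phi

    # Small re-eigenanalysis uncouples the updated equations of motion
    w2, V = eigh(Lam, M_red)
    print(np.sqrt(w2))   # updated modal frequencies (rad/s)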
Lijun Liu; V. Missirian; Matthew S. Zinkgraf; Andrew Groover; V. Filkov
2014-01-01
Background: One of the great advantages of next generation sequencing is the ability to generate large genomic datasets for virtually all species, including non-model organisms. It should be possible, in turn, to apply advanced computational approaches to these datasets to develop models of biological processes. In a practical sense, working with non-model organisms...
Fast Algorithms for Mining Co-evolving Time Series
2011-09-01
Prior work on time series mining addresses (a) similarity search [Keogh et al., 2001, 2004] and (b) forecasting, e.g., the autoregressive integrated moving average (ARIMA) model and related methods [Box et al., 1994]. We develop models to mine time series with missing values, to extract compact representations from time sequences, to segment the sequences, and to do forecasting. For large-scale data, we propose algorithms for learning time series models, in particular including Linear Dynamical Systems.
Implementation and evaluation of a community-based interprofessional learning activity.
Luebbers, Ellen L; Dolansky, Mary A; Vehovec, Anton; Petty, Gayle
2017-01-01
Implementation of large-scale, meaningful interprofessional learning activities for pre-licensure students has significant barriers and requires novel approaches to ensure success. To accomplish this goal, faculty at Case Western Reserve University, Ohio, USA, used the Ottawa Model of Research Use (OMRU) framework to create, improve, and sustain a community-based interprofessional learning activity for large numbers of medical students (N = 177) and nursing students (N = 154). The model guided the process and included identification of context-specific barriers and facilitators, continual monitoring and improvement using data, and evaluation of student learning outcomes as well as programme outcomes. First year Case Western Reserve University medical students and undergraduate nursing students participated in team-structured prevention screening clinics in the Cleveland Metropolitan Public School District. Identification of barriers and facilitators assisted with overcoming logistic and scheduling issues, large class size, differing ages and skill levels of students and creating sustainability. Continual monitoring led to three distinct phases of improvement and resulted in the creation of an authentic team structure, role clarification, and relevance for students. Evaluation of student learning included both qualitative and quantitative methods, resulting in statistically significant findings and qualitative themes of learner outcomes. The OMRU implementation model provided a useful framework for successful implementation resulting in a sustainable interprofessional learning activity.
Supercontinent cycles, true polar wander, and very long-wavelength mantle convection
NASA Astrophysics Data System (ADS)
Zhong, Shijie; Zhang, Nan; Li, Zheng-Xiang; Roberts, James H.
2007-09-01
We show in this paper that mobile-lid mantle convection in a three-dimensional spherical shell with observationally constrained mantle viscosity structure, and realistic convective vigor and internal heating rate is characterized by either a spherical harmonic degree-1 planform with a major upwelling in one hemisphere and a major downwelling in the other hemisphere when continents are absent, or a degree-2 planform with two antipodal major upwellings when a supercontinent is present. We propose that due to modulation of continents, these two modes of mantle convection alternate within the Earth's mantle, causing the cyclic processes of assembly and breakup of supercontinents including Rodinia and Pangea in the last 1 Ga. Our model suggests that the largely degree-2 structure for the present-day mantle with the Africa and Pacific antipodal superplumes, is a natural consequence of this dynamic process of very long-wavelength mantle convection interacting with supercontinent Pangea. Our model explains the basic features of true polar wander (TPW) events for Rodinia and Pangea including their equatorial locations and large variability of TPW inferred from paleomagnetic studies. Our model also suggests that TPW is expected to be more variable and large during supercontinent assembly, but small after a supercontinent acquires its equatorial location and during its subsequent dispersal.
Tidal influences on vertical diffusion and diurnal variability of ozone in the mesosphere
NASA Technical Reports Server (NTRS)
Bjarnason, Gudmundur G.; Solomon, Susan; Garcia, Rolando R.
1987-01-01
Possible dynamical influences on the diurnal behavior of ozone are investigated. A time-dependent one-dimensional photochemical model is developed for this purpose; all model calculations are made at 70 deg N during summer. It is shown that the vertical diffusion can vary by as much as 1 order of magnitude within a day as a result of large changes in the zonal wind induced by atmospheric thermal tides. It is found that by introducing a dissipation time scale for turbulence produced by breaking gravity waves, the agreement with Poker Flat echo data is improved. Comparisons of results from photochemical model calculations, where the vertical diffusion is a function of height only, with those in which the vertical diffusion coefficient changes in time show large differences in the diurnal behavior of ozone between 70 and 90 km. By including the dynamical effect, much better agreement with the Solar Mesosphere Explorer data is obtained. The results are, however, sensitive to the background zonally averaged wind. The influence of including a time-varying vertical diffusion coefficient on the OH densities is also large, especially between 80 and 90 km. This suggests that dynamical effects are important in determining the diurnal behavior of the airglow emission from the Meinel bands.
Application service provider (ASP) financial models for off-site PACS archiving
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Liu, Brent J.; McCoy, J. Michael; Enzmann, Dieter R.
2003-05-01
For the replacement of its legacy Picture Archiving and Communication System (with an approximate annual workload of 300,000 procedures), UCLA Medical Center has evaluated and adopted an off-site data-warehousing solution based on an ASP financial model with a one-time single payment per study archived. Different financial models for long-term data archive services were compared to the traditional capital/operational costs of on-site digital archives. Total cost of ownership (TCO), including direct and indirect expenses and savings, was compared for each model. Financial parameters were considered alongside the logistic/operational advantages and disadvantages of ASP models versus traditional archiving systems. Our initial analysis demonstrated that the traditional linear ASP business model for data storage is unsuitable for large institutions: the overall cost markedly exceeds the TCO of an in-house archive infrastructure (when support and maintenance costs are included). We demonstrated, however, that non-linear ASP pricing models can be cost-effective alternatives for large-scale data storage, particularly if they are based on a scalable off-site data-warehousing service and the prices are adapted to the specific size of a given institution. The added value of ASP is that it does not require iterative data migrations from legacy media to new storage media at regular intervals.
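A toy comparison of the financing options discussed above: a linear per-study ASP fee versus a fixed on-site TCO, with the break-even annual volume. All dollar figures are invented for illustration, not UCLA's actual numbers.

    years = 5
    studies_per_year = 300_000
    asp_fee_per_study = 1.50          # one-time payment per study archived ($), hypothetical
    onsite_tco = 2_500_000            # hardware + support + migration over 5 y ($), hypothetical

    asp_total = asp_fee_per_study * studies_per_year * years
    breakeven_volume = onsite_tco / (asp_fee_per_study * years)

    print(f"ASP total over {years} y: ${asp_total:,.0f}")
    print(f"On-site TCO:             ${onsite_tco:,.0f}")
    print(f"Break-even volume: {breakeven_volume:,.0f} studies/year")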
Integrating language models into classifiers for BCI communication: a review
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C.; Pouratian, N.
2016-06-01
Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could use the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Attenuation Model Using the Large-N Array from the Source Physics Experiment
NASA Astrophysics Data System (ADS)
Atterholt, J.; Chen, T.; Snelson, C. M.; Mellors, R. J.
2017-12-01
The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. SPE seeks to better characterize the influence of subsurface heterogeneities on seismic wave propagation and energy dissipation from explosions. As a part of this experiment, SPE-5, a 5000 kg TNT equivalent chemical explosion, was detonated in 2016. During the SPE-5 experiment, a Large-N array of 996 geophones (half 3-component and half z-component) was deployed. This array covered an area that includes loosely consolidated alluvium (weak rock) and weathered granite (hard rock), and recorded the SPE-5 explosion as well as 53 weight drops. We use these Large-N recordings to develop an attenuation model of the area to better characterize how geologic structures influence source energy partitioning. We found a clear variation in seismic attenuation for different rock types: high attenuation (low Q) for alluvium and low attenuation (high Q) for granite. The attenuation structure correlates well with local geology, and will be incorporated into the large simulation effort of the SPE program to validate predictive models. (LA-UR-17-26382)
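The abstract does not state how Q was estimated; one common recipe for this kind of data is the spectral-ratio method, sketched here on synthetic spectra. The frequency band, travel time, and Q value are invented.

```python
import numpy as np

# Hedged sketch of a spectral-ratio Q estimate; the study's actual
# method may differ. Synthetic spectra stand in for geophone recordings.

f = np.linspace(5.0, 50.0, 46)          # frequency band [Hz]
t_travel = 0.8                          # travel time to the station [s]
Q_true = 30.0                           # toy Q for alluvium-like material

# Synthetic "observed" spectrum: source spectrum times attenuation.
source = 1.0 / (1.0 + (f / 10.0) ** 2)
observed = source * np.exp(-np.pi * f * t_travel / Q_true)

# Spectral ratio against a reference (here the source spectrum itself):
# ln(obs/ref) = -pi * t_travel * f / Q, so Q follows from the slope.
slope = np.polyfit(f, np.log(observed / source), 1)[0]
Q_est = -np.pi * t_travel / slope
print(f"estimated Q = {Q_est:.1f} (true {Q_true})")
```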
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture-induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
A comprehensive study on urban true orthorectification
Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao
2005-01-01
To provide some advanced technical bases (algorithms and procedures) and experience needed for national large-scale digital orthophoto generation and revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study on theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation, and their merging for true orthoimage generation, are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, the optimization of seamlines for automatic mosaicking, and the radiometric balance of neighboring images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and relative location of the street to the imaging center, is analyzed for complete true orthoimage generation. The experimental results demonstrated that our method can effectively and correctly orthorectify the displacements caused by terrain and buildings in urban large-scale aerial images. © 2005 IEEE.
Geeleher, Paul; Zhang, Zhenyu; Wang, Fan; Gruener, Robert F; Nath, Aritro; Morrison, Gladys; Bhutra, Steven; Grossman, Robert L; Huang, R Stephanie
2017-10-01
Obtaining accurate drug response data in large cohorts of cancer patients is very challenging; thus, most cancer pharmacogenomics discovery is conducted in preclinical studies, typically using cell lines and mouse models. However, these platforms suffer from serious limitations, including small sample sizes. Here, we have developed a novel computational method that allows us to impute drug response in very large clinical cancer genomics data sets, such as The Cancer Genome Atlas (TCGA). The approach works by creating statistical models relating gene expression to drug response in large panels of cancer cell lines and applying these models to tumor gene expression data in the clinical data sets (e.g., TCGA). This yields an imputed drug response for every drug in each patient. These imputed drug response data are then associated with somatic genetic variants measured in the clinical cohort, such as copy number changes or mutations in protein coding genes. These analyses recapitulated drug associations for known clinically actionable somatic genetic alterations and identified new predictive biomarkers for existing drugs. © 2017 Geeleher et al.; Published by Cold Spring Harbor Laboratory Press.
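The pipeline described — fit expression-to-response models in cell lines, impute response from tumor expression, then test the imputed response against somatic variants — can be sketched with ridge regression on synthetic data. The gene counts, effect sizes, and mutation model below are invented.

```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy import stats

# Sketch of the imputation pipeline with synthetic data: train an
# expression -> drug-response model in cell lines, impute response for
# tumors, then associate imputed response with a somatic mutation.

rng = np.random.default_rng(1)
n_genes, n_lines, n_tumors = 500, 200, 1000

# Cell-line training data: expression and measured drug response.
X_lines = rng.normal(size=(n_lines, n_genes))
w_true = rng.normal(size=n_genes) * (rng.random(n_genes) < 0.05)
y_lines = X_lines @ w_true + rng.normal(scale=0.5, size=n_lines)

model = Ridge(alpha=10.0).fit(X_lines, y_lines)

# Tumor cohort: expression plus a somatic mutation that shifts the
# response-driving genes (so mutants should look more sensitive).
mutated = rng.random(n_tumors) < 0.2
X_tumors = rng.normal(size=(n_tumors, n_genes))
X_tumors[mutated] += 0.5 * np.sign(w_true)

imputed = model.predict(X_tumors)

# Associate imputed response with mutation status, as done against TCGA.
t, p = stats.ttest_ind(imputed[mutated], imputed[~mutated])
print(f"mutant vs wild-type imputed response: t = {t:.1f}, p = {p:.2g}")
```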
The NASA/MSFC global reference atmospheric model: MOD 3 (with spherical harmonic wind model)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Fletcher, G. R.; Gramling, F. E.; Pace, W. B.
1980-01-01
Improvements to the global reference atmospheric model are described. The basic model includes monthly mean values of pressure, density, temperature, and geostrophic winds, as well as quasi-biennial and small and large scale random perturbations. A spherical harmonic wind model for the 25 to 90 km height range is included. Below 25 km and above 90 km, the GRAM program uses the geostrophic wind equations and pressure data to compute the mean wind. In the altitudes where the geostrophic wind relations are used, an interpolation scheme is employed for estimating winds at low latitudes, where the geostrophic wind relations begin to break down. Several sample wind profiles are given, as computed by the spherical harmonic model. User and programmer manuals are presented.
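The geostrophic wind computation used outside the 25-90 km range follows directly from the pressure field. Below is a minimal sketch on a lat-lon grid; the pressure field and density are synthetic stand-ins for GRAM's data tables.

```python
import numpy as np

# Minimal sketch of the geostrophic wind relations on a lat-lon grid.
# The pressure field and density are illustrative, not GRAM data.

OMEGA, R_E = 7.292e-5, 6.371e6                # Earth rotation [1/s], radius [m]
rho = 0.02                                    # illustrative density at altitude

lat = np.deg2rad(np.linspace(20, 70, 26))     # avoid the equator (f -> 0)
lon = np.deg2rad(np.linspace(0, 355, 72))
LON, LAT = np.meshgrid(lon, lat)

# Synthetic monthly-mean pressure field [Pa]
p = 2000.0 + 50.0 * np.sin(2 * LON) * np.cos(LAT)

f = 2 * OMEGA * np.sin(LAT)                   # Coriolis parameter
dp_dy = np.gradient(p, lat, axis=0) / R_E                  # meridional gradient
dp_dx = np.gradient(p, lon, axis=1) / (R_E * np.cos(LAT))  # zonal gradient

u_g = -dp_dy / (rho * f)                      # zonal geostrophic wind [m/s]
v_g = dp_dx / (rho * f)                       # meridional geostrophic wind [m/s]
print(f"max |u_g| = {np.abs(u_g).max():.1f} m/s, "
      f"max |v_g| = {np.abs(v_g).max():.1f} m/s")
```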
Finite difference and Runge-Kutta methods for solving vibration problems
NASA Astrophysics Data System (ADS)
Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi
2017-11-01
The vibration of a storey building can be modelled as a system of second order ordinary differential equations. If the number of floors of a building is large, then the result is a large scale system of second order ordinary differential equations. The large scale system is difficult to solve, and if it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
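The two families compared can be sketched side by side on a small storey-building system M x'' = -K x. The three-storey stiffness matrix, time step, and initial condition below are toy choices, not the paper's test problem.

```python
import numpy as np

# Sketch comparing a central-difference step with Heun's method on a toy
# 3-storey shear-building system  M x'' = -K x.

n, k, m = 3, 1000.0, 1.0
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K[-1, -1] = k                        # free top storey
Minv = np.eye(n) / m

x0 = np.array([0.01, 0.02, 0.03])    # initial displacements [m]
dt, steps = 1e-3, 5000

# --- central difference on x'' = -Minv K x (v0 = 0 starting step) ---
x_prev, x = x0.copy(), x0 + 0.5 * dt**2 * (-Minv @ K @ x0)
for _ in range(steps):
    x_prev, x = x, 2 * x - x_prev + dt**2 * (-Minv @ K @ x)

# --- Heun (improved Euler) on the first-order system y' = A y ---
A = np.block([[np.zeros((n, n)), np.eye(n)], [-Minv @ K, np.zeros((n, n))]])
y = np.concatenate([x0, np.zeros(n)])
for _ in range(steps):
    k1 = A @ y
    k2 = A @ (y + dt * k1)
    y = y + 0.5 * dt * (k1 + k2)

print("central difference:", x.round(4))
print("Heun:              ", y[:n].round(4))
```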
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosovic, Branko
This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
ERIC Educational Resources Information Center
George, Ann Cathrice; Robitzsch, Alexander
2018-01-01
This article presents a new perspective on measuring gender differences in the large-scale assessment study Trends in International Mathematics and Science Study (TIMSS). The suggested empirical model is directly based on the theoretical competence model of the mathematics domain and thus includes the interaction between content and cognitive sub-competencies.…
PNNL - WRF-LES - Convective - TTU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosovic, Branko
This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
A Predictive Model of Student Loan Default at a Two-Year Community College
ERIC Educational Resources Information Center
Brown, Chanda Denea
2015-01-01
This study explored whether a predictive model of student loan default could be developed with data from an institution's three-year cohort default rate report. The study used borrower data provided by a large two-year community college. Independent variables under investigation included total undergraduate Stafford student loan debt, total number…
ANL - WRF-LES - Convective - TTU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosovic, Branko
This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
LLNL - WRF-LES - Neutral - TTU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosovic, Branko
This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
LANL - WRF-LES - Neutral - TTU
Kosovic, Branko
2018-06-20
This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
LANL - WRF-LES - Convective - TTU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosovic, Branko
This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.
Louis R. Iverson; Anantha M. Prasad
2002-01-01
Global climate change could have profound effects on the Earth's biota, including large redistributions of tree species and forest types. We used DISTRIB, a deterministic regression tree analysis model, to examine environmental drivers related to current forest-species distributions and then model potential suitable habitat under five climate change scenarios...
ASHEE: a compressible, Equilibrium-Eulerian model for volcanic ash plumes
NASA Astrophysics Data System (ADS)
Cerminara, M.; Esposti Ongaro, T.; Berselli, L. C.
2015-10-01
A new fluid-dynamic model is developed to numerically simulate the non-equilibrium dynamics of polydisperse gas-particle mixtures forming volcanic plumes. Starting from the three-dimensional N-phase Eulerian transport equations (Neri et al., 2003) for a mixture of gases and solid dispersed particles, we adopt an asymptotic expansion strategy to derive a compressible version of the first-order non-equilibrium model (Ferry and Balachandar, 2001), valid for low concentration regimes (particle volume fraction less than 10-3) and particle Stokes number (St, i.e., the ratio between their relaxation time and flow characteristic time) not exceeding about 0.2. The new model, which is called ASHEE (ASH Equilibrium Eulerian), is significantly faster than the N-phase Eulerian model while retaining the capability to describe gas-particle non-equilibrium effects. Direct numerical simulations accurately reproduce the dynamics of isotropic, compressible turbulence in the subsonic regime. For gas-particle mixtures, the model describes the main features of density fluctuations and the preferential concentration and clustering of particles by turbulence, thus verifying the model reliability and suitability for the numerical simulation of high-Reynolds number and high-temperature regimes in the presence of a dispersed phase. On the other hand, Large-Eddy Numerical Simulations of forced plumes are able to reproduce their observed averaged and instantaneous flow properties. In particular, the self-similar Gaussian radial profile and the development of large-scale coherent structures are reproduced, including the rate of turbulent mixing and entrainment of atmospheric air. Application to the Large-Eddy Simulation of the injection of the eruptive mixture in a stratified atmosphere describes some important features of turbulent volcanic plumes, including air entrainment, buoyancy reversal, and maximum plume height. For very fine particles (St → 0, when non-equilibrium effects are negligible) the model reduces to the so-called dusty-gas model. However, coarse particles partially decouple from the gas phase within eddies (thus modifying the turbulent structure) and preferentially concentrate at the eddy periphery, eventually being lost from the plume margins due to the concurrent effect of gravity. By these mechanisms, gas-particle non-equilibrium processes are able to influence the large-scale behavior of volcanic plumes.
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across landscapes.
van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre
2017-09-01
Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [area under the receiver operating characteristic curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two out of three models had a lower AUC for validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohort) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
Three-dimensional circulation dynamics of along-channel flow in stratified estuaries
NASA Astrophysics Data System (ADS)
Musiak, Jeffery Daniel
Estuaries are vital because they are the major interface between humans and the oceans and provide valuable habitat for a wide range of organisms. Therefore it is important to model estuarine circulation to gain a better comprehension of the mechanics involved and how people affect estuaries. To this end, this dissertation combines analysis of data collected in the Columbia River estuary (CRE) with novel data processing and modeling techniques to further the understanding of estuaries that are strongly forced by riverflow and tides. The primary hypothesis tested in this work is that the three-dimensional (3-D) variability in along-channel currents in a strongly forced estuary can be largely accounted for by including the lateral variations in density and bathymetry but neglecting the secondary, or lateral, flow. Of course, the forcing must also include riverflow and oceanic tides. Incorporating this simplification and the modeling ideas put forth by others with new modeling techniques and new ideas on estuarine circulation will allow me to create a semi-analytical quasi 3-D profile model. This approach was chosen because it is intermediate in complexity between purely analytical models, which, if tractable, are too simple to be useful, and 3-D numerical models, which can have excellent resolution but require large amounts of time, computer memory, and computing power. Validation of the model will be accomplished using velocity and density data collected in the Columbia River Estuary and by comparison to analytical solutions. Components of the modeling developed here include: (1) development of a 1-D barotropic model for tidal wave propagation in frictionally dominated systems with strong topography. This model can have multiple tidal constituents and multiply connected channels. (2) Development and verification of a new quasi 3-D semi-analytical velocity profile model applicable to estuarine systems which are strongly forced by both oceanic tides and riverflow. This model includes diurnal and semi-diurnal tidal and non-linearly generated overtide circulation and residual circulation driven by riverflow, baroclinic forcing, surface wind stress and non-linear tidal forcing. (3) Demonstration that much of the lateral variation in along-channel currents is caused by variations in along-channel density forcing and bathymetry.
The strategic use of forward contracts: Applications in power markets
NASA Astrophysics Data System (ADS)
Lien, Jeffrey Scott
This dissertation develops three theoretical models that analyze forward trading by firms with market power. The models are discussed in the context of recently restructured power markets, but the results can be applied more generally. The first model considers the profitability of large firms in markets with limited economies of scale and free entry. When large firms apply their market power, small firms benefit from the high prices without incurring the costs of restricted output. When entry is considered, and profit opportunity is determined by the cost of entry, this asymmetry creates the "curse of market power;" the long-run profits of large firms are reduced because of their market power. I suggest ways that large power producers can cope with the curse of market power, including the sale of long-term forward contracts. Past research has shown that forward contracts can demonstrate commitment to aggressive behavior to a competing duopolist. I add explicitly modeled entry to this literature, and make the potential entrants the audience of the forward sale. The existence of a forward market decreases equilibrium entry, increases the profits of large firms, and enhances economic efficiency. In the second model, a consumer representative, such as a state government or regulated distribution utility, bargains in the forward market on behalf of end-consumers who cannot organize together in the spot market. The ability to organize in forward markets allows consumers to encourage economic efficiency. When multiple producers are considered, I find that the ability to offer contracts also increases consumer surplus by decreasing the producers' profits. In some specifications of the model, consumers are able to capture the full gains from trade. The third model of this dissertation considers the ability of a large producer to take advantage of anonymity by randomly alternating between forward sales and forward purchases. The large producer uses its market power to always obtain favorable settlement on its forward transactions. Since other participants in the market cannot anticipate the large producer's eventual spot market behavior they cannot effectively arbitrage between markets. I find that forward transaction anonymity leads to spot price destabilization and cost inefficiency.
Structural mode significance using INCA. [Interactive Controls Analysis computer program
NASA Technical Reports Server (NTRS)
Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.
1990-01-01
Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.
A mathematical simulation model of a 1985-era tilt-rotor passenger aircraft
NASA Technical Reports Server (NTRS)
Mcveigh, M. A.; Widdison, C. A.
1976-01-01
A mathematical model for use in real-time piloted simulation of a 1985-era tilt rotor passenger aircraft is presented. The model comprises the basic six degrees-of-freedom equations of motion, and a large angle of attack representation of the airframe and rotor aerodynamics, together with equations and functions used to model turbine engine performance, aircraft control system and stability augmentation system. A complete derivation of the primary equations is given together with a description of the modeling techniques used. Data for the model is included in an appendix.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
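One simple way to realize the global analysis described, without exhaustive parameter sweeps, is Latin hypercube sampling plus standardized regression coefficients; the authors' actual experimental design may differ. The quadratic toy function below stands in for an expensive ABM run, and the parameter names are invented.

```python
import numpy as np

# Sketch of a simple global sensitivity analysis: Latin hypercube
# sampling over the parameter space plus standardized regression
# coefficients. The toy model stands in for an expensive ABM run.

rng = np.random.default_rng(2)
names = ["infection_rate", "clearance_rate", "migration_rate", "noise_level"]
lo = np.zeros(4)
hi = np.ones(4)
n_params, n_samples = len(names), 500

# Latin hypercube: one stratified draw per interval, shuffled per column.
u = (rng.permuted(np.tile(np.arange(n_samples), (n_params, 1)), axis=1).T
     + rng.random((n_samples, n_params))) / n_samples
X = lo + u * (hi - lo)

def abm_stand_in(x):
    """Toy outcome: strong effect of x0, interaction x0*x1, weak x2."""
    return 3 * x[0] + 2 * x[0] * x[1] - 0.5 * x[2] + 0.1 * rng.normal()

y = np.array([abm_stand_in(x) for x in X])

# Standardized regression coefficients as a global importance measure.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, b in sorted(zip(names, beta), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: SRC = {b:+.2f}")
```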
NASA Astrophysics Data System (ADS)
Hain, C.; Mecikalski, J. R.; Schultz, L. A.
2009-12-01
The Atmosphere-Land Exchange Inverse (ALEXI) model was developed as an auxiliary means for estimating surface fluxes over large regions primarily using remote-sensing data. The model is unique in that no information regarding antecedent precipitation or moisture storage capacity is required - the surface moisture status is deduced from a radiometric temperature change signal. ALEXI uses the available water fraction (fAW) as a proxy for soil moisture conditions. Combining fAW with ALEXI's ability to provide valuable information about the partitioning of the surface energy budget, which can be dictated largely by soil moisture conditions, accommodates the retrieval of an average fAW from the surface to the rooting depth of the active vegetation. This approach has many advantages over traditional energy flux and soil moisture measurements (towers with limited range and large monetary/personnel costs) or approximation methods (parametrization of the relationship between available water and soil moisture) in that data are available both spatially and temporally over a large, non-homogeneous, sometimes densely vegetated area. Being satellite based, the model can be run anywhere thermal infrared satellite information is available. The current ALEXI climatology dates back to March 2000 and covers the continental U.S. Examples of projects underway using the ALEXI soil moisture retrieval tools include the Southern Florida Water Management Project; NASA's Project Nile, which proposes to acquire hydrological information for water management in the Nile River basin; and a USDA project to expand the ALEXI framework to include Europe and parts of northern Africa using data from the European geostationary satellites, specifically the Meteosat Second Generation (MSG) series.
Cloud computing for genomic data analysis and collaboration.
Langmead, Ben; Nellore, Abhinav
2018-04-01
Next-generation sequencing has made major strides in the past decade. Studies based on large sequencing data sets are growing in number, and public archives for raw sequencing data have been doubling in size every 18 months. Leveraging these data requires researchers to use large-scale computational resources. Cloud computing, a model whereby users rent computers and storage from large data centres, is a solution that is gaining traction in genomics research. Here, we describe how cloud computing is used in genomics for research and large-scale collaborations, and argue that its elasticity, reproducibility and privacy features make it ideally suited for the large-scale reanalysis of publicly available archived data, including privacy-protected data.
Comment on "Continuum Lowering and Fermi-Surface Rising in Stromgly Coupled and Degenerate Plasmas"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iglesias, C. A.; Sterne, P. A.
In a recent Letter, Hu [1] reported photon absorption cross sections in strongly coupled, degenerate plasmas from quantum molecular dynamics (QMD). The Letter claims that the K-edge shifts as a function of plasma density computed with simple ionization potential depression (IPD) models are in violent disagreement with the QMD results. The QMD calculations displayed an increase in K-edge shift with increasing density while the simpler models yielded a decrease. Here, this Comment shows that the claimed large errors reported by Hu for the widely used Stewart-Pyatt (SP) model [2] stem from an invalid comparison of disparate physical quantities and are largely resolved by including well-known corrections for degenerate systems.
Higher impact of female than male migration on population structure in large mammals.
Tiedemann, R; Hardy, O; Vekemans, X; Milinkovitch, M C
2000-08-01
We simulated large mammal populations using an individual-based stochastic model under various sex-specific migration schemes and life history parameters from the blue whale and the Asian elephant. Our model predicts that genetic structure at nuclear loci is significantly more influenced by female than by male migration. We identified requisite comigration of mother and offspring during gravidity and lactation as the primary cause of this phenomenon. In addition, our model predicts that the common assumption that geographical patterns of mitochondrial DNA (mtDNA) could be translated into female migration rates (Nmf) will cause biased estimates of maternal gene flow when extensive male migration occurs and male mtDNA haplotypes are included in the analysis.
Comment on "Continuum Lowering and Fermi-Surface Rising in Stromgly Coupled and Degenerate Plasmas"
Iglesias, C. A.; Sterne, P. A.
2018-03-16
In a recent Letter, Hu [1] reported photon absorption cross sections in strongly coupled, degenerate plasmas from quantum molecular dynamics (QMD). The Letter claims that the K-edge shift as a function of plasma density computed with simple ionization potential depression (IPD) models are in violent disagreement with the QMD results. The QMD calculations displayed an increase in Kedge shift with increasing density while the simpler models yielded a decrease. Here, this Comment shows that the claimed large errors reported by Hu for the widely used Stewart- Pyatt (SP) model [2] stem from an invalid comparison of disparate physical quantities andmore » is largely resolved by including well-known corrections for degenerate systems.« less
An electromagnetism-like metaheuristic for open-shop problems with no buffer
NASA Astrophysics Data System (ADS)
Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi
2012-12-01
This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, such as the plastic molding, chemical, and food processing industries. The paper mathematically formulates the problem as a mixed integer linear program, by which the problem can be solved optimally. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve large-sized problems. The paper conducts two computational experiments. The first includes small-sized instances by which the mathematical model and the general performance of the proposed metaheuristic are evaluated. The second evaluates the metaheuristic for its performance in solving some large-sized instances. The results show that the model and algorithm are effective in dealing with the problem.
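The attraction-repulsion mechanism at the core of electromagnetism-like algorithms can be sketched on a continuous toy objective; a real open-shop solver would decode each point (e.g., via random keys) into a schedule and evaluate total tardiness instead. The population size, charge formula, and objective below are illustrative assumptions.

```python
import numpy as np

# Sketch of the attraction-repulsion step of an electromagnetism-like
# metaheuristic on a toy continuous objective. A real open-shop solver
# would decode each point into a schedule and score total tardiness.

rng = np.random.default_rng(3)
dim, pop, iters = 10, 20, 200

def objective(x):                      # toy stand-in for total tardiness
    return np.sum((x - 0.3) ** 2)

X = rng.random((pop, dim))
for _ in range(iters):
    f = np.array([objective(x) for x in X])
    best, worst = f.min(), f.max()
    # charge: better points carry larger charge
    q = np.exp(-dim * (f - best) / max(worst - best, 1e-12))
    F = np.zeros_like(X)
    for i in range(pop):
        for j in range(pop):
            if i == j:
                continue
            d = X[j] - X[i]
            r2 = d @ d + 1e-12
            # attraction toward better points, repulsion from worse ones
            sign = 1.0 if f[j] < f[i] else -1.0
            F[i] += sign * q[i] * q[j] * d / r2
    step = rng.random((pop, 1))
    X = np.clip(X + step * F / (np.linalg.norm(F, axis=1, keepdims=True)
                                + 1e-12), 0.0, 1.0)

print("best objective:", min(objective(x) for x in X).round(4))
```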
Fay, J A
2006-08-21
A two-zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables.
NASA Technical Reports Server (NTRS)
Thompson, T. W.; Moore, H. J.
1990-01-01
Researchers developed a radar-echo model for Mars based on 12.6 cm continuous wave radio transmissions backscattered from the planet. The model broadly matches the variations in depolarized and polarized total radar cross sections with longitude observed by Goldstone in 1986 along 7 degrees S. and yields echo spectra that are generally similar to the observed spectra. Radar map units in the model include an extensive cratered uplands unit with weak depolarized echo cross sections, average thermal inertias, moderate normal reflectivities, and moderate rms slopes; the volcanic units of the Tharsis, Elysium, and Amazonis regions with strong depolarized echo cross sections, low thermal inertias, low normal reflectivities, and large rms slopes; and the northern plains units with moderate to strong depolarized echo cross sections, moderate to very high thermal inertias, moderate to large normal reflectivities, and moderate rms slopes. The relevance of the model to the interpretation of radar echoes from Mars is discussed.
Brouwer, Andrew F; Masters, Nina B; Eisenberg, Joseph N S
2018-04-20
Waterborne enteric pathogens remain a global health threat. Increasingly, quantitative microbial risk assessment (QMRA) and infectious disease transmission modeling (IDTM) are used to assess waterborne pathogen risks and evaluate mitigation. These modeling efforts, however, have largely been conducted independently for different purposes and in different settings. In this review, we examine the settings where each modeling strategy is employed. QMRA research has focused on food contamination and recreational water in high-income countries (HICs) and drinking water and wastewater in low- and middle-income countries (LMICs). IDTM research has focused on large outbreaks (predominately LMICs) and vaccine-preventable diseases (LMICs and HICs). Human ecology determines the niches that pathogens exploit, leading researchers to focus on different risk assessment research strategies in different settings. To enhance risk modeling, QMRA and IDTM approaches should be integrated to include dynamics of pathogens in the environment and pathogen transmission through populations.
NASA Astrophysics Data System (ADS)
Liu, J.; Allen, S. E.; Soontiens, N. K.
2016-02-01
The Fraser River is the largest river on the west coast of Canada. It empties into the Strait of Georgia, which is a large, semi-enclosed body of water between Vancouver Island and the mainland of British Columbia. We have developed a three-dimensional model of the Strait of Georgia, including the Fraser River plume, using the NEMO model in its regional configuration. This operational model produces daily nowcasts and forecasts for salinity, temperature, currents, and sea surface heights. Observational data available for evaluation of the model include daily British Columbia ferry salinity data, profile data, and surface drifter data. The salinity of the modelled Fraser River plume agrees well with ferry-based measurements of salinity. However, large discrepancies exist between the modelled and observed position of the plume. Modelled surface currents compared to drifter observations show that the model has too strong along-strait velocities and too weak cross-strait velocities. We investigated the impact of river geometry. A sensitivity experiment was performed comparing the original, short, shallow river channel to an extended and deepened river channel. With the latter bathymetry, tidal amplitudes within the Fraser River correspond well with observations. Comparisons to drifter tracks show that the surface currents have been improved with the new bathymetry. However, substantial discrepancies remain. We will discuss how reducing vertical eddy viscosity and other changes further improve the modelled position of the plume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
2017-09-03
Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles for the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.
Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows
NASA Astrophysics Data System (ADS)
Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang
2018-03-01
In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, while the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than the existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model can achieve relatively small spurious velocity in the LB community, and the obtained numerical results also show good agreement with the analytical solutions or some available results. Lastly, we use the present model to investigate droplet impact on a thin liquid film with a large density ratio of 1000 and the Reynolds number ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
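The interface-capturing half of the model solves the conservative Allen-Cahn equation. Below is a hedged 1-D finite-difference sketch of that equation (not the paper's LB discretization), showing how the counter-gradient sharpening term balances diffusion so that a tanh interface of width ~W is preserved.

```python
import numpy as np

# Hedged 1-D finite-difference sketch of the conservative Allen-Cahn
# equation (interface capturing); the paper itself uses a lattice
# Boltzmann discretization. Parameters are illustrative.

N, W, M, dt = 200, 4.0, 0.1, 0.05
x = np.arange(N, dtype=float)
phi = 0.5 * (np.tanh(2 * (x - 60) / W) - np.tanh(2 * (x - 140) / W))

for _ in range(2000):
    dphi = np.gradient(phi)
    n = np.sign(dphi)                       # 1-D interface normal
    # flux = M * (grad(phi) - 4 phi (1 - phi) n / W): diffusion balanced
    # by a counter-gradient sharpening term; the flux vanishes exactly on
    # the equilibrium tanh profile of width W.
    flux = M * (dphi - 4.0 * phi * (1.0 - phi) * n / W)
    phi += dt * np.gradient(flux)
    phi = np.clip(phi, 0.0, 1.0)

# The profile should remain a tanh interface centered near x = 60.
print("interface value at x=60:", phi[60].round(3))
```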
Empirical validation of an agent-based model of wood markets in Switzerland
Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver
2018-01-01
We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300
Seismic Imaging of the Source Physics Experiment Site with the Large-N Seismic Array
NASA Astrophysics Data System (ADS)
Chen, T.; Snelson, C. M.; Mellors, R. J.
2017-12-01
The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. The goal of SPE is to understand seismic wave generation and propagation from these explosions. To achieve this goal, we need an accurate geophysical model of the SPE site. A Large-N seismic array that was deployed at the SPE site during one of the chemical explosions (SPE-5) helps us construct high-resolution local geophysical model. The Large-N seismic array consists of 996 geophones, and covers an area of approximately 2 × 2.5 km. The array is located in the northern end of the Yucca Flat basin, at a transition from Climax Stock (granite) to Yucca Flat (alluvium). In addition to the SPE-5 explosion, the Large-N array also recorded 53 weight drops. Using the Large-N seismic array recordings, we perform body wave and surface wave velocity analysis, and obtain 3D seismic imaging of the SPE site for the top crust of approximately 1 km. The imaging results show clear variation of geophysical parameter with local geological structures, including heterogeneous weathering layer and various rock types. The results of this work are being incorporated in the larger 3D modeling effort of the SPE program to validate the predictive models developed for the site.
Dynamic effective connectivity in cortically embedded systems of recurrently coupled synfire chains.
Trengove, Chris; Diesmann, Markus; van Leeuwen, Cees
2016-02-01
As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.
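The reduced model's flavor — binary-state pools chained feedforward, with weak random cross-couplings and collateral noise — can be sketched in a few lines. All couplings, thresholds, and sizes here are invented toys, not the paper's parameters.

```python
import numpy as np

# Toy sketch in the spirit of the reduced model: binary-state pools
# linked into synfire chains; a pool fires when enough upstream pools
# fired on the previous step. All parameters are invented.

rng = np.random.default_rng(4)
n_chains, chain_len = 5, 30
n_pools = n_chains * chain_len

# Feedforward weights along each chain, plus weak random cross-couplings.
W = np.zeros((n_pools, n_pools))
for c in range(n_chains):
    for k in range(chain_len - 1):
        i = c * chain_len + k
        W[i + 1, i] = 1.0                    # link to next pool in chain
cross = rng.random((n_pools, n_pools)) < 0.01
W[cross] += 0.3 * rng.random(cross.sum())

state = np.zeros(n_pools)
state[0] = 1.0                               # ignite chain 0

for t in range(60):
    drive = W @ state + 0.05 * rng.normal(size=n_pools)  # collateral noise
    state = (drive > 0.5).astype(float)
    if t % 10 == 0:
        print(f"t={t:2d}: {int(state.sum())} active pools")
```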
HELICITY CONSERVATION IN NONLINEAR MEAN-FIELD SOLAR DYNAMO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pipin, V. V.; Sokoloff, D. D.; Zhang, H.
It is believed that magnetic helicity conservation is an important constraint on large-scale astrophysical dynamos. In this paper, we study a mean-field solar dynamo model that employs two different formulations of the magnetic helicity conservation. In the first approach, the evolution of the averaged small-scale magnetic helicity is largely determined by the local induction effects due to the large-scale magnetic field, turbulent motions, and the turbulent diffusive loss of helicity. In this case, the dynamo model shows that the typical strength of the large-scale magnetic field generated by the dynamo is much smaller than the equipartition value for the magnetic Reynolds number 10^6. This is the so-called catastrophic quenching (CQ) phenomenon. In the literature, this is considered to be typical for various kinds of solar dynamo models, including the distributed-type and the Babcock-Leighton-type dynamos. The problem can be resolved by the second formulation, which is derived from the integral conservation of the total magnetic helicity. In this case, the dynamo model shows that magnetic helicity propagates with the dynamo wave from the bottom of the convection zone to the surface. This prevents CQ because of the local balance between the large-scale and small-scale magnetic helicities. Thus, the solar dynamo can operate in a wide range of magnetic Reynolds numbers up to 10^6.
NASA Astrophysics Data System (ADS)
Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki
2018-03-01
We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3}) and the level sets of the Gaussian free field ({d≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.
Zyvoloski, G.; Kwicklis, E.; Eddebbarh, A.-A.; Arnold, B.; Faunt, C.; Robinson, B.A.
2003-01-01
This paper presents several different conceptual models of the Large Hydraulic Gradient (LHG) region north of Yucca Mountain and describes the impact of those models on groundwater flow near the potential high-level repository site. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain. This model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The numerical model is calibrated by matching available water level measurements using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (hydrologic simulation code FEHM and parameter estimation software PEST) and model setup allow for efficient calibration of multiple conceptual models. Until now, the Large Hydraulic Gradient has been simulated using a low-permeability, east-west oriented feature, even though direct evidence for this feature is lacking. In addition to this model, we investigate and calibrate three additional conceptual models of the Large Hydraulic Gradient, all of which are based on a presumed zone of hydrothermal chemical alteration north of Yucca Mountain. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the potential repository that record differences in the predicted groundwater flow regime. The results show that the Large Hydraulic Gradient can be represented with the alternate conceptual models that include the hydrothermally altered zone. The predicted pathways are mildly sensitive to the choice of the conceptual model and more sensitive to the quality of calibration in the vicinity of the repository. These differences are most likely due to different degrees of fit of model to data, and do not represent important differences in hydrologic conditions for the different conceptual models. © 2002 Elsevier Science B.V. All rights reserved.
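The calibration loop described — adjusting parameters such as zone permeabilities until simulated heads match water-level measurements — is what PEST automates. Below is a hedged sketch using scipy's least-squares optimizer with a toy 1-D Darcy forward model standing in for FEHM; the low-permeability middle zone mimics a large hydraulic gradient.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch of a PEST-style calibration loop: fit zone
# log-permeabilities so simulated heads match observations. The 1-D
# Darcy "forward model" is a toy stand-in for FEHM.

q = 1e-9                                   # specified flux [m/s]
L = np.array([2000.0, 1000.0, 2000.0])     # zone lengths [m]
k_true = np.array([1e-12, 1e-15, 1e-12])   # low-k middle zone mimics the LHG

def heads(log10_k):
    """Head profile across three zones in series under constant flux."""
    k = 10.0 ** log10_k                    # permeability [m^2]
    K = k * 9.81e6                         # hydraulic conductivity [m/s]
    drops = q * L / K                      # head loss per zone [m]
    return 1000.0 - np.concatenate([[0.0], np.cumsum(drops)])

obs = heads(np.log10(k_true)) + np.random.default_rng(5).normal(0, 0.05, 4)

res = least_squares(lambda p: heads(p) - obs, x0=np.log10([1e-13] * 3),
                    bounds=(-16, -10))
print("estimated log10 k:", res.x.round(2))
print("true      log10 k:", np.log10(k_true).round(2))
```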
NASA Astrophysics Data System (ADS)
Aili, T.; Soncini, A.; Bianchi, A.; Diolaiuti, G.; D'Agata, C.; Bocchiola, D.
2018-01-01
Assessment of future water resources in the Italian Alps under climate change is required, but the hydrological cycle of the high-altitude catchments there is poorly studied and little understood. Hydrological monitoring and modeling in the Alps are difficult, given the lack of first-hand, site-specific data. Here, we present a method to model the hydrological cycle of poorly monitored high-altitude catchments in the Alps and to project water resources availability forward under climate change. Our method builds on extensive recent experience and includes (i) gathering the sparsely available data on climate, cryospheric variables, and hydrological fluxes; (ii) robust, physically based glacio-hydrological modeling; and (iii) using glacio-hydrological projections from global climate models (GCMs). We apply the method to the Mallero River, in the central (Retiche) Alps of Italy. The Mallero catchment covers 321 km2, spans altitudes between 310 and 4015 m a.s.l., and has 27 km2 of ice cover. The glaciers in the catchment underwent large mass loss recently, so the Mallero is largely paradigmatic of the present situation of Alpine rivers. We set up a spatially explicit glacio-hydrological model describing the cryospheric evolution and the hydrology of the area during a control run (CR) from 1981 to 2007. We then gather climate projections until 2100 from three global climate models of the IPCC AR5 under RCP2.6, RCP4.5, and RCP8.5. We project forward flow statistics, flow components (rainfall, snow melt, ice melt), ice cover, and ice volume for two reference decades, namely 2045-2054 and 2090-2099. We foresee a reduction of the ice bodies of between -62% and -98% in volume (year 2100 vs year 1981), and a subsequent large reduction of the ice melt contribution to stream flows (from -61% to -88%, 2100 vs CR). Snow melt, which now provides 47% of the yearly stream flows, would also be largely reduced (from -19% to -56%, 2100 vs CR). The stream flows will decrease on average by 2100 (from +1% to -25%, -7% on average), with potential for increased flows during fall and winter and a large decrease in summer. Our results provide a tool for consistent modeling of the cryospheric and hydrologic behavior and can be used for further investigation of high-altitude catchments in the Alps.
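Glacio-hydrological models of this kind commonly treat snow and ice melt with a temperature-index (degree-day) scheme. A minimal Python sketch, with illustrative degree-day factors and snowpack depth rather than the authors' calibrated values:

    import numpy as np

    def degree_day_melt(temp_c, snow_mm, ddf_snow=4.5, ddf_ice=7.0):
        """Toy temperature-index scheme: melt the snowpack first, then bare ice.
        ddf_* are degree-day factors in mm w.e. per deg C per day (illustrative)."""
        snow_melt, ice_melt = [], []
        for t in temp_c:
            pdd = max(t, 0.0)                    # positive degree-days for one day
            ms = min(ddf_snow * pdd, snow_mm)    # snow melt limited by the pack
            snow_mm -= ms
            mi = ddf_ice * pdd if snow_mm == 0.0 else 0.0  # ice melts once snow is gone
            snow_melt.append(ms)
            ice_melt.append(mi)
        return np.array(snow_melt), np.array(ice_melt)

    temps = np.array([-3.0, 1.5, 4.0, 6.5, 8.0, 9.5, 7.0])  # one warm week at altitude
    ms, mi = degree_day_melt(temps, snow_mm=100.0)
    print("snow melt %.0f mm w.e., ice melt %.0f mm w.e." % (ms.sum(), mi.sum()))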
Validation of Satellite Retrieved Land Surface Variables
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman; Susskind, Joel
1999-01-01
The effective use of satellite observations of the land surface is limited by the lack of high-spatial-resolution ground data sets for validation of satellite products. Recent large-scale field experiments, including FIFE, HAPEX-Sahel, and BOREAS, provide data sets with large spatial coverage and long temporal coverage. The objective of this paper is to characterize the differences between satellite estimates and ground observations. This study, and others along similar lines, will help in utilizing satellite-retrieved data in large-scale modeling studies.
NASA Technical Reports Server (NTRS)
Savaglio, Clare
1989-01-01
A realistic simulation of an aircraft in flight using the AD 100 digital computer is presented. The implementation of three model features is specifically discussed: (1) a large aerodynamic database (130,000 function values) that is evaluated using function interpolation to obtain the aerodynamic coefficients; (2) an option to trim the aircraft in longitudinal flight; and (3) a flight control system that includes a digital controller. Since the model includes a digital controller, the simulation implements not only continuous-time equations but also discrete-time equations; thus the model has a mixed-data structure.
Learning the organization: a model for health system analysis for new nurse administrators.
Clark, Mary Jo
2004-01-01
Health systems are large and complex organizations in which multiple components and processes influence system outcomes. In order to effectively position themselves in such organizations, nurse administrators new to a system must gain a rapid understanding of overall system operation. Such understanding is facilitated by use of a model for system analysis. The model presented here examines the dynamic interrelationships between and among internal and external elements as they affect system performance. External elements to be analyzed include environmental factors and characteristics of system clientele. Internal elements flow from the mission and goals of the system and include system culture, services, resources, and outcomes.
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
Stellar Evolution and Modelling Stars
NASA Astrophysics Data System (ADS)
Silva Aguirre, Víctor
In this chapter I give an overall description of the structure and evolution of stars of different masses, and review the main ingredients included in state-of-the-art calculations aiming at reproducing observational features. I give particular emphasis to processes where large uncertainties still exist as they have strong impact on stellar properties derived from large compilations of tracks and isochrones, and are therefore of fundamental importance in many fields of astrophysics.
Short-Term Uplift Rates and the Mountain Building Process in Southern Alaska
NASA Technical Reports Server (NTRS)
Sauber, Jeanne; Herring, Thomas A.; Meigs, Andrew
1998-01-01
We have used GPS at 10 stations in southern Alaska with three epochs of measurements to estimate short-term uplift rates. A number of great earthquakes as well as recent large earthquakes characterize the seismicity of the region this century. To reliably estimate uplift rates from GPS data, numerical models that included both the slip distribution in recent large earthquakes and the general slab geometry were constructed.
Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach
NASA Astrophysics Data System (ADS)
Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.
2016-09-01
The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the local-correlation-based transition modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even though cross-flow instabilities are neglected.
NASA Astrophysics Data System (ADS)
Andersen, J. R.; Antipin, O.; Azuelos, G.; Del Debbio, L.; Del Nobile, E.; Di Chiara, S.; Hapola, T.; Järvinen, M.; Lowdon, P. J.; Maravin, Y.; Masina, I.; Nardecchia, M.; Pica, C.; Sannino, F.
2011-09-01
We provide a pedagogical introduction to extensions of the Standard Model in which the Higgs is composite. These extensions are known as models of dynamical electroweak symmetry breaking or, in brief, Technicolor. Material covered includes: motivations for Technicolor, the construction of underlying gauge theories leading to minimal models of Technicolor, the comparison with electroweak precision data, the low-energy effective theory, the spectrum of the states common to most of the Technicolor models, the decays of the composite particles and the experimental signals at the Large Hadron Collider. The level of the presentation is aimed at readers familiar with the Standard Model but who have little or no prior exposure to Technicolor. Several extensions of the Standard Model featuring a composite Higgs can be reduced to the effective Lagrangian introduced in the text. We establish the relevant experimental benchmarks for Vanilla, Running, Walking, and Custodial Technicolor, and a natural fourth family of leptons, by laying out the framework to discover these models at the Large Hadron Collider.
Robson, B; Boray, S
2018-04-01
Theoretical and methodological principles are presented for the construction of very large inference nets for odds calculations, composed of hundreds to many thousands of elements, generated here by structured data mining. It is argued that the usual small inference nets can sometimes represent rather simple, arbitrary estimates. Examples of applications to clinical and public health data analysis, medical claims data and detection of irregular entries, and bioinformatics data are presented. Construction of large nets benefits from the application of a theory of expected information for sparse data and from the Dirac notation and algebra. The extent to which these are important here is briefly discussed. Purposes of the study include (a) exploration of the properties of large inference nets and of perturbation and tacit-conditionality models, (b) using these to propose simpler models, including one that a physician could use routinely, analogous to a "risk score", (c) examination of the merit of describing optimal performance in a single measure that combines accuracy, specificity, and sensitivity in place of a ROC curve, and (d) the relationship to methods for detecting anomalous and potentially fraudulent data. Copyright © 2018 Elsevier Ltd. All rights reserved.
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using 'loopless constraints'. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm, Fast sparse null-space pursuit (Fast-SNP), inspired by recent results on SNP. By finding a reduced feasible 'loop-law' matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
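To make the loop-law MILP concrete, the sketch below solves a toy loopless FBA problem in Python with PuLP on a three-reaction internal cycle (A->B->C->A). The network, bounds, and big-M values are invented for illustration, and the formulation follows the standard big-M style of loopless constraints rather than the Fast-SNP pre-processing itself:

    import numpy as np
    from scipy.linalg import null_space
    import pulp

    M, K = 1000, 1000                      # big-M constants (illustrative)
    prob = pulp.LpProblem("loopless_fba_toy", pulp.LpMaximize)
    v_up = pulp.LpVariable("v_up", 0, 10)                     # uptake: A_ext -> A
    v = [pulp.LpVariable(f"v{i+1}", 0, M) for i in range(3)]  # internal: A->B, B->C, C->A
    v_ex = pulp.LpVariable("v_ex", 0, M)                      # export: C ->
    prob += v_ex                           # objective: maximize export flux
    prob += v_up - v[0] + v[2] == 0        # mass balance on A
    prob += v[0] - v[1] == 0               # mass balance on B
    prob += v[1] - v[2] - v_ex == 0        # mass balance on C
    # loop law: reaction 'energies' G must be orthogonal to the internal null space
    S_int = np.array([[-1, 0, 1], [1, -1, 0], [0, 1, -1]])    # internal stoichiometry
    N = null_space(S_int)                  # here spans the single cycle (1, 1, 1)
    a = [pulp.LpVariable(f"a{i+1}", cat="Binary") for i in range(3)]
    G = [pulp.LpVariable(f"G{i+1}", -K, K) for i in range(3)]
    for i in range(3):
        prob += v[i] <= M * a[i]                   # flux only if reaction is active
        prob += G[i] <= -1 + (K + 1) * (1 - a[i])  # active (irreversible) => G_i <= -1
    for j in range(N.shape[1]):
        prob += pulp.lpSum(float(N[i, j]) * G[i] for i in range(3)) == 0
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({x.name: x.value() for x in [v_up, *v, v_ex]})      # cycle flux v3 forced to 0

The binary indicators and the null-space constraint together rule out a nonzero circulation around A->B->C->A, which plain FBA would leave unconstrained.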
NASA Astrophysics Data System (ADS)
Tsai, Y. L.; Wu, T. R.; Lin, C. Y.; Chuang, M. H.; Lin, C. W.
2016-02-01
An ideal operational storm surge model should feature: (1) a large computational domain that covers the complete typhoon life cycle; (2) support for both parametric and atmospheric models; (3) the capability to calculate inundation areas for risk assessment; and (4) the inclusion of tides for accurate inundation simulation. A literature review shows that few operational models reach these goals with fast calculation, and most models have limited functions. In this paper, the well-developed COMCOT (Cornell Multi-grid Coupled Tsunami Model) is chosen as the kernel for a storm surge model that solves the nonlinear shallow water equations directly on both spherical and Cartesian coordinates. The complete evolution of a storm surge, including large-scale propagation and small-scale offshore run-up, can be simulated with a nested-grid scheme. The global tide model TPXO 7.2, established by Oregon State University, is coupled to provide astronomical boundary conditions. The WRF (Weather Research and Forecasting) atmospheric model is also coupled to provide meteorological fields. The high-efficiency thin-film method is adopted to evaluate storm surge inundation. Our in-house model has been optimized with OpenMP (Open Multi-Processing), achieving performance 10 times faster than the original version and making it suitable as an early-warning storm surge model. In this study, a thorough simulation of 2013 Typhoon Haiyan is performed. The detailed results will be presented at the 2016 Ocean Sciences Meeting in terms of surge propagation and high-resolution inundation areas.
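At the core of such a model are the shallow water equations. A minimal one-dimensional linearized Python sketch (forward-backward time stepping on a staggered grid; domain size, depth, and the initial surge bump are illustrative, while the real model solves the nonlinear equations with nesting, tides, and wind forcing):

    import numpy as np

    g, H = 9.81, 50.0            # gravity (m/s^2), resting depth (m)
    L, nx = 200e3, 400           # domain length (m), number of cells
    dx = L / nx
    dt = 0.5 * dx / np.sqrt(g * H)   # CFL-limited time step
    eta = np.exp(-((np.arange(nx) * dx - L / 2) / 10e3) ** 2)  # initial bump (m)
    u = np.zeros(nx + 1)         # velocities on cell faces; closed boundaries

    for _ in range(2000):
        # momentum: du/dt = -g * d(eta)/dx on interior faces
        u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
        # continuity: d(eta)/dt = -H * du/dx, using the updated velocities
        eta -= dt * H * (u[1:] - u[:-1]) / dx

    print("max surge height after integration: %.3f m" % eta.max())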
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Sonia; Yang, Christopher; Gibbs, Michael
California aims to reduce greenhouse gas (GHG) emissions to 40% below 1990 levels by 2030. We compare six energy models that have played various roles in informing the state's policymakers in setting climate policy goals and targets. These models adopt a range of modeling structures, including stock-turnover back-casting models, a least-cost optimization model, macroeconomic/macro-econometric models, and an electricity dispatch model. Results from these models provide useful insights into the required transformations of the energy system, including efficiency improvements in cars, trucks, and buildings, electrification of end-uses, low- or zero-carbon electricity and fuels, aggressive adoption of zero-emission vehicles (ZEVs), demand reduction, and large reductions of non-energy GHG emissions. Some of these studies also suggest that the direct economic costs can be fairly modest or even generate net savings, while the indirect macroeconomic benefits are large, as shifts in employment and capital investments could have higher economic returns than conventional energy expenditures. These models, however, often assume perfect markets, perfect competition, and zero transaction costs. They also do not provide specific policy guidance on how these transformative changes can be achieved. Greater emphasis on modeling uncertainty, consumer behaviors, heterogeneity of impacts, and spatial detail would further enhance policymakers' ability to design more effective and targeted policies. This paper presents an example of how policymakers, energy system modelers, and stakeholders interact and work together to develop and evaluate long-term state climate policy targets. Even though this paper focuses on California, the process of dialogue and interaction, the modeling results, and the lessons learned can be adopted across different regions and scales.
New parton distributions from large-x and low-Q^2 data
Accardi, Alberto; Christy, M. Eric; Keppel, Cynthia E.; ...
2010-02-11
We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.
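Target mass corrections of this kind are conventionally organized in terms of the Nachtmann scaling variable (standard notation, with M the nucleon mass; the fit's specific TMC prescription may differ):

    \xi = \frac{2x}{1 + \sqrt{1 + 4x^2 M^2/Q^2}},

which reduces to Bjorken x in the limit M^2/Q^2 → 0, so the corrections matter precisely in the large-x, low-Q^2 region probed here.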
Limited temperature response to the very large AD 1258 volcanic eruption
NASA Astrophysics Data System (ADS)
Timmreck, Claudia; Lorenz, Stephan J.; Crowley, Thomas J.; Kinne, Stefan; Raddatz, Thomas J.; Thomas, Manu A.; Jungclaus, Johann H.
2009-11-01
The large AD 1258 eruption had a stratospheric sulfate load approximately ten times greater than that of the 1991 Pinatubo eruption, yet surface cooling was not substantially larger than for Pinatubo (~0.4 K). We apply a comprehensive Earth System Model to demonstrate that the size of the aerosol particles needs to be included in simulations, especially to explain the climate response to large eruptions. The temperature response weakens because the increased density of particles increases the collision rate and therefore aerosol growth. Only aerosol particle sizes substantially larger than those observed after the Pinatubo eruption yield temperature changes consistent with terrestrial Northern Hemisphere summer temperature reconstructions. These results challenge an oft-held assumption about volcanic impacts, not only with respect to the immediate or longer-term temperature response, but also any ecosystem response, including extinctions.
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples, including a large linear system and two complex nonlinear systems with material and geometric nonlinearities. The framework is used to identify the robust subspaces obtained from both proper and smooth orthogonal decomposition (POD and SOD, respectively); the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
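POD reduces to a singular value decomposition of a snapshot matrix (SOD instead solves a generalized eigenproblem that also involves the time-derivative covariance). A minimal Python sketch of the POD step on synthetic snapshot data:

    import numpy as np

    # build a snapshot matrix X of shape (n_dof, n_snapshots) from a toy field
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 200)
    x = np.linspace(0, 1, 64)
    X = (np.outer(np.sin(np.pi * x), np.sin(2 * t))
         + 0.1 * np.outer(np.sin(3 * np.pi * x), np.cos(5 * t))
         + 0.01 * rng.standard_normal((64, 200)))

    # POD modes are the dominant left singular vectors of the snapshot matrix
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of the energy
    Phi = U[:, :r]                               # reduced basis
    X_rom = Phi @ (Phi.T @ X)                    # projection of the data onto the ROM
    print(f"{r} POD modes capture {energy[r-1]:.4f} of the snapshot energy")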
Large-eddy simulation of turbulent flow with a surface-mounted two-dimensional obstacle
NASA Technical Reports Server (NTRS)
Yang, Kyung-Soo; Ferziger, Joel H.
1993-01-01
In this paper, we perform a large eddy simulation (LES) of turbulent flow in a channel containing a two-dimensional obstacle on one wall using a dynamic subgrid-scale model (DSGSM) at Re = 3210, based on the bulk velocity above the obstacle and the obstacle height; the wall layers are fully resolved. The low Re enables us to perform a DNS (Case 1) against which to validate the LES results. The LES with the DSGSM is designated Case 2. In addition, an LES with the conventional fixed model constant (Case 3) is conducted to allow identification of improvements due to the DSGSM. We also include an LES at Re = 82,000 (Case 4) using the conventional Smagorinsky subgrid-scale model and a wall-layer model. The results are compared with the experiment of Dimaczek et al.
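In standard notation (not specific to this paper), the Smagorinsky model sets the eddy viscosity as

    \nu_t = (C_s \Delta)^2 \,|\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},

and the dynamic procedure replaces the fixed constant C_s with one computed from the resolved field via the Germano identity, typically in Lilly's least-squares form

    C_s^2 = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle},

which is what distinguishes Case 2 (dynamic) from Case 3 (fixed constant).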
Inner-outer predictive wall model for wall-bounded turbulence in hypersonic flow
NASA Astrophysics Data System (ADS)
Martin, M. Pino; Helm, Clara M.
2017-11-01
The inner-outer predictive wall model of Mathis et al. is modified for hypersonic turbulent boundary layers. The model is based on a modulation of the energized motions in the inner layer by large-scale momentum fluctuations in the logarithmic layer. Using direct numerical simulation (DNS) data of turbulent boundary layers with free-stream Mach numbers 3 to 10, it is shown that the variation of the fluid properties in the compressible flows leads to large Reynolds number (Re) effects in the outer layer and facilitates the modulation observed in high-Re incompressible flows. The modulation effect by the large scales increases with increasing free-stream Mach number. The model is extended to include spanwise and wall-normal velocity fluctuations and is generalized through Morkovin scaling. Temperature fluctuations are modeled using an appropriate Reynolds analogy. Density fluctuations are calculated using an equation of state and a scaling with Mach number. DNS data are used to obtain the universal signal and parameters. The model is tested by using the universal signal to reproduce the flow conditions of Mach 3 and Mach 7 turbulent boundary layer DNS data and comparing turbulence statistics between the modeled flow and the DNS data. This work is supported by the Air Force Office of Scientific Research under Grant FA9550-17-1-0104.
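For reference, the incompressible inner-outer model being adapted has the schematic form (generic Mathis-type notation; the hypersonic extension modifies the scaling of these terms):

    u^{+}_{p}(y^{+}) = u^{*}(y^{+})\,\big\{1 + \beta\, u^{+}_{OL}\big\} + \alpha\, u^{+}_{OL},

where u^{*} is the universal inner-layer signal, u^{+}_{OL} is the large-scale log-layer fluctuation, α is a superposition coefficient, and β is an amplitude-modulation coefficient.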
NASA Astrophysics Data System (ADS)
Jiang, Mingshun; Charette, Matthew A.; Measures, Christopher I.; Zhu, Yiwu; Zhou, Meng
2013-06-01
The seasonal cycle of circulation and transport in the Antarctic Peninsula shelf region is investigated using a high-resolution (~2 km) regional model based on the Regional Oceanic Modeling System (ROMS). The model also includes a naturally occurring tracer with a strong source over the shelf (the radium isotope 228Ra, t_1/2 = 5.8 years) to investigate the sediment Fe input and its transport. The model is spun up for three years using climatological boundary and surface forcing and then run for the 2004-2006 period using realistic forcing. Model results suggest a persistent and coherent circulation system throughout the year, consisting of several major components that converge water masses from various sources toward Elephant Island. These currents are largely in geostrophic balance, driven by surface winds, topographic steering, and large-scale forcing. Strong off-shelf transport of the Fe-rich shelf waters takes place over the northeastern shelf/slope of Elephant Island, driven by a combination of topographic steering, extension of shelf currents, and strong horizontal mixing between the ACC and shelf waters. These results are generally consistent with recent and historical observational studies. Both the shelf circulation and off-shelf transport show significant seasonality, mainly due to the seasonal changes of surface winds and large-scale circulation. Modeled and observed distributions of 228Ra suggest that a majority of the Fe-rich upper-layer waters exported off-shelf around Elephant Island are carried by the shelfbreak current and the Bransfield Strait Current from the shallow sills between Gerlache Strait and Livingston Island and the northern shelf of the South Shetland Islands, where strong winter mixing supplies much of the sediment-derived nutrient (including Fe) input to the surface layer.
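A decaying tracer such as 228Ra obeys a standard advection-diffusion-decay equation (generic notation, not the specific ROMS discretization):

    \frac{\partial C}{\partial t} + \mathbf{u}\cdot\nabla C = \nabla\cdot(\kappa\,\nabla C) - \lambda C,
    \qquad \lambda = \frac{\ln 2}{t_{1/2}},

so shelf-sourced waters can be traced, and effectively aged, by their 228Ra content as they are exported off-shelf.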
Grainger, Matthew James; Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce
2018-01-01
Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to the food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms for variable selection (random forests and Boruta) are integrated with linear modelling, model selection, and model averaging. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in the literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. The results, regardless of which variable set the models are run on, point toward large households as a key target for food waste reduction interventions.
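The variable-selection step can be illustrated with a random forest importance ranking in Python (the column names and the synthetic relationship below are invented stand-ins for the survey variables named in the abstract, not the study's data):

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "household_size": rng.integers(1, 6, n),
        "fussy_eaters": rng.integers(0, 2, n),
        "employed": rng.integers(0, 2, n),
        "home_owner": rng.integers(0, 2, n),
    })
    # synthetic response: waste driven mainly by household size and fussy eaters
    df["food_waste_kg"] = (0.8 * df["household_size"] + 0.5 * df["fussy_eaters"]
                           + rng.normal(0, 0.5, n))

    rf = RandomForestRegressor(n_estimators=500, random_state=1)
    rf.fit(df.drop(columns="food_waste_kg"), df["food_waste_kg"])
    for name, imp in sorted(zip(rf.feature_names_in_, rf.feature_importances_),
                            key=lambda p: -p[1]):
        print(f"{name:15s} {imp:.3f}")   # importance ranking feeds variable selection

In the study itself this ranking is complemented by Boruta and followed by linear model selection and averaging to handle structural uncertainty.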
Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometric Instruments
NASA Technical Reports Server (NTRS)
Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph
2013-01-01
Over the past decade, Earth observing missions that employ spaceborne combined radar and radiometric instruments have been developed and implemented. These instruments include large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4π steradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform, which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two-pronged approach has been employed. The first is to use modeling tools to characterize the system, and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale-model measurement and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th-scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods, but each approach had its shortcomings, as detailed in this paper.
Scale-Similar Models for Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Sarghini, F.
1999-01-01
Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale (SGS) stress tensor, and they allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and they predict SGS stresses in regions that are well correlated with the locations where large Reynolds stresses occur. In this paper, eddy-viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows: a high Reynolds number plane channel flow and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
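A representative mixed model of the Bardina type combines the two contributions as follows (generic notation, not the paper's exact closure):

    \tau_{ij} \approx \underbrace{\big(\overline{\bar{u}_i \bar{u}_j} - \bar{\bar{u}}_i\,\bar{\bar{u}}_j\big)}_{\text{scale-similar term}}
    \; - \; \underbrace{2\,(C_s\Delta)^2 |\bar{S}|\,\bar{S}_{ij}}_{\text{eddy-viscosity term}},

where the overbar denotes the grid filter applied a second time to the resolved field; the scale-similar part supplies backscatter and anisotropy while the eddy-viscosity part provides the net dissipation.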
Fractional Brownian motion and long term clinical trial recruitment
Zhang, Qiang; Lai, Dejian
2015-01-01
Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM). However, when the independent-increment structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model, with illustrative examples from different trials and simulations. PMID:26347306
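FBM paths with correlated increments can be simulated by drawing fractional Gaussian noise from its exact covariance. A minimal Python sketch (Cholesky method; the Hurst index and horizon are illustrative):

    import numpy as np

    def fgn_cov(n, H):
        """Covariance matrix of fractional Gaussian noise with Hurst index H."""
        k = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        return 0.5 * ((k + 1.0)**(2*H) - 2.0 * k**(2*H) + np.abs(k - 1.0)**(2*H))

    rng = np.random.default_rng(42)
    n, H = 250, 0.7                       # H > 0.5: positively correlated increments
    L = np.linalg.cholesky(fgn_cov(n, H))
    increments = L @ rng.standard_normal(n)
    fbm_path = np.cumsum(increments)      # cumulative deviation of recruitment counts
    print(fbm_path[:5])

Setting H = 0.5 recovers ordinary Brownian motion, which is the independent-increment special case discussed above.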
Modeling the Webgraph: How Far We Are
NASA Astrophysics Data System (ADS)
Donato, Debora; Laura, Luigi; Leonardi, Stefano; Millozzi, Stefano
The following sections are included:
* Introduction
* Preliminaries
* WebBase
* In-degree and out-degree
* PageRank
* Bipartite cliques
* Strongly connected components
* Stochastic models of the webgraph
* Models of the webgraph
* A multi-layer model
* Large scale simulation
* Algorithmic techniques for generating and measuring webgraphs
* Data representation and multifiles
* Generating webgraphs
* Traversal with two bits for each node
* Semi-external breadth first search
* Semi-external depth first search
* Computation of the SCCs
* Computation of the bow-tie regions
* Disjoint bipartite cliques
* PageRank
* Summary and outlook
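As an illustration of the stochastic webgraph models surveyed here, a minimal Python sketch of a "copying"-style generator (the parameters and structure are illustrative, not those of the chapter's multi-layer model); copied links create a rich-get-richer effect that yields heavy-tailed in-degrees and many bipartite cliques:

    import random

    def copying_model(n, out_degree=7, copy_prob=0.5, seed=0):
        """Each new page picks a random prototype page and, link by link,
        either copies one of its links or links to a uniform random page."""
        random.seed(seed)
        links = {0: []}
        for v in range(1, n):
            prototype = random.randrange(v)
            targets = []
            for _ in range(out_degree):
                if links[prototype] and random.random() < copy_prob:
                    targets.append(random.choice(links[prototype]))
                else:
                    targets.append(random.randrange(v))
            links[v] = targets
        return links

    g = copying_model(10_000)
    indeg = {}
    for targets in g.values():
        for t in targets:
            indeg[t] = indeg.get(t, 0) + 1
    print("max in-degree:", max(indeg.values()))  # heavy-tailed in-degree expected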
NASA Technical Reports Server (NTRS)
Farrell, C. E.; Krauze, L. D.
1983-01-01
NASA's IDEAS computer program is a tool for interactive preliminary design and analysis of Large Space Systems (LSS). Nine analysis modules were either modified or created. These modules include the capabilities of automatic model generation, model mass properties calculation, model area calculation, nonkinematic deployment modeling, rigid-body controls analysis, RF performance prediction, subsystem properties definition, and EOS science sensor selection. For each module, a section is provided that contains technical information, user instructions, and programmer documentation.
Expert systems and simulation models; Proceedings of the Seminar, Tucson, AZ, November 18, 19, 1985
NASA Technical Reports Server (NTRS)
1986-01-01
The seminar presents papers on modeling and simulation methodology, artificial intelligence and expert systems, environments for simulation/expert system development, and methodology for simulation/expert system development. Particular attention is given to simulation modeling concepts and their representation, modular hierarchical model specification, knowledge representation, and rule-based diagnostic expert system development. Other topics include the combination of symbolic and discrete event simulation, real time inferencing, and the management of large knowledge-based simulation projects.
Model for dynamic self-assembled magnetic surface structures
NASA Astrophysics Data System (ADS)
Belkin, M.; Glatz, A.; Snezhko, A.; Aranson, I. S.
2010-07-01
We propose a first-principles model for the dynamic self-assembly of magnetic structures at a water-air interface reported in earlier experiments. The model is based on the Navier-Stokes equation for liquids in shallow water approximation coupled to Newton equations for interacting magnetic particles suspended at a water-air interface. The model reproduces most of the observed phenomenology, including spontaneous formation of magnetic snakelike structures, generation of large-scale vortex flows, complex ferromagnetic-antiferromagnetic ordering of the snake, and self-propulsion of bead-snake hybrids.
Flight dynamics simulation modeling and control of a large flexible tiltrotor aircraft
NASA Astrophysics Data System (ADS)
Juhasz, Ondrej
A high order rotorcraft mathematical model is developed and validated against the XV-15 and a Large Civil Tiltrotor (LCTR) concept. The mathematical model is generic and allows for any rotorcraft configuration, from single main rotor helicopters to coaxial and tiltrotor aircraft. Rigid-body and inflow states, as well as flexible wing and blade states are used in the analysis. The separate modeling of each rotorcraft component allows for structural flexibility to be included, which is important when modeling large aircraft where structural modes affect the flight dynamics frequency ranges of interest, generally 1 to 20 rad/sec. Details of the formulation of the mathematical model are given, including derivations of structural, aerodynamic, and inertial loads. The linking of the components of the aircraft is developed using an approach similar to multibody analyses by exploiting a tree topology, but without equations of constraints. Assessments of the effects of wing flexibility are given. Flexibility effects are evaluated by looking at the nature of the couplings between rigid-body modes and wing structural modes and vice versa. The effects of various different forms of structural feedback on aircraft dynamics are analyzed. A proportional-integral feedback on the structural acceleration is deemed to be most effective at both improving the damping and reducing the overall excitation of a structural mode. A model following control architecture is then implemented on full order flexible LCTR models. For this aircraft, the four lowest frequency structural modes are below 20 rad/sec, and are thus needed for control law development and analysis. The impact of structural feedback on both Attitude-Command, Attitude-Hold (ACAH) and Translational Rate Command (TRC) response types are investigated. A rigid aircraft model has optimistic performance characteristics, and a control system designed for a rigid aircraft could potentially destabilize a flexible one. The various control systems are flown in a fixed-base simulator. Pilot inputs and aircraft performance are recorded and analyzed.
Sequestering the standard model vacuum energy.
Kaloper, Nemanja; Padilla, Antonio
2014-03-07
We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and early Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w(DE) ≃ -1 is a transient, and that the Universe will collapse in the future.
The Hungtsaiping landslide: A kinematic model based on morphology
NASA Astrophysics Data System (ADS)
Huang, W.-K.; Chu, H.-K.; Lo, C.-M.; Lin, M.-L.
2012-04-01
A large and deep-seated landslide at Hungtsaiping was triggered by the magnitude 7.3 1999 Chi-Chi earthquake. Extensive site investigations of the landslide were conducted, including field reconnaissance, geophysical exploration, borehole logging, and laboratory experiments. Thick colluvium was found around the landslide area, indicating the occurrence of a large ancient landslide. This study presents the catastrophic landslide event that occurred during the Chi-Chi earthquake and clarifies the mechanism of the 1999 landslide, which cannot be revealed by the underground exploration data alone. The research includes investigations of the landslide kinematic process and the deposition geometry. A 3D discrete element program, PFC3D, was used to model the kinematic process that led to the landslide. The proposed procedure enables a rational and efficient simulation of the landslide dynamic process. Keywords: Hungtsaiping catastrophic landslide, kinematic process, deposition geometry, discrete element method
A noncoherent model for microwave emissions and backscattering from the sea surface
NASA Technical Reports Server (NTRS)
Wu, S. T.; Fung, A. K.
1973-01-01
The two-scale (small irregularities superimposed upon large undulations) scattering theory proposed by Semyonov was extended and used to compute microwave apparent temperature and the backscattering cross section from ocean surfaces. The effect of the small irregularities upon the scattering characteristics of the large undulations is included by modifying the Fresnel reflection coefficients; whereas the effect of the large undulations upon those of the small irregularities is taken into account by averaging over the surface normals of the large undulations. The same set of surface parameters is employed for a given wind speed to predict both the scattering and the emission characteristics at both polarizations.
Impacts of increasing the aerosol complexity in the Met Office global NWP model
NASA Astrophysics Data System (ADS)
Mulcahy, Jane; Walters, David; Bellouin, Nicolas; Milton, Sean
2014-05-01
Inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is increasingly recognised as important for the accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated, including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme, and finally initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing longwave radiation over West Africa due to a better representation of dust. Inclusion of the indirect aerosol effects has significant impacts on the SW radiation, particularly at high latitudes, due to lower cloud amounts in high-latitude clean-air regions. This leads to improved surface radiation biases at the North Slope of Alaska ARM site. Verification of temperature and height forecasts is also improved in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. This study highlights the importance of including a more realistic treatment of aerosol-cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
NASA Astrophysics Data System (ADS)
Jackson-Blake, L.
2014-12-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
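The auto-calibration idea can be illustrated with a toy random-walk Metropolis sampler in Python. The single-parameter stand-in model and synthetic "daily" observations below are invented for illustration; INCA-P itself is highly parameterised and the study used the MCMC-DREAM algorithm:

    import numpy as np

    rng = np.random.default_rng(7)

    def model(k, t):
        # exponential recession as a stand-in for a catchment response model
        return 30.0 * np.exp(-k * t)

    t = np.arange(540)                       # roughly 18 months of daily data
    obs = model(0.01, t) + rng.normal(0, 1.0, t.size)

    def log_post(k):
        if not (0.0 < k < 1.0):              # uniform prior bounds
            return -np.inf
        return -0.5 * np.sum((obs - model(k, t))**2)

    k, lp = 0.05, log_post(0.05)
    chain = []
    for _ in range(5000):                    # random-walk Metropolis updates
        k_new = k + rng.normal(0, 0.002)
        lp_new = log_post(k_new)
        if np.log(rng.uniform()) < lp_new - lp:
            k, lp = k_new, lp_new
        chain.append(k)
    post = np.array(chain[1000:])            # discard burn-in
    lo_ci, hi_ci = np.percentile(post, [2.5, 97.5])
    print(f"posterior mean {post.mean():.4f}, 95% CI width {hi_ci - lo_ci:.4f}")

Thinning the synthetic record to fortnightly samples widens the credible interval, which mirrors the paper's finding that higher-frequency data better constrain the marginal posteriors.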
Missouri River Recovery Management Plan and Environmental Impact Statement
2014-04-11
Proficient in hydrologic and hydraulic engineering computer models, particularly ResSim and HEC-RAS; working experience with large river systems including... to help study teams determine ecosystem responses to changes in the flow regime of a river or connected wetland. HEC-EFM analyses involve: 1... Description of the Model and How It Will Be Applied in the Study; Approval Status. HEC-RAS: the function of this model is to conduct one-dimensional hydraulic
Wind Energy System Time-domain (WEST) analyzers using hybrid simulation techniques
NASA Technical Reports Server (NTRS)
Hoffman, J. A.
1979-01-01
Two stand-alone analyzers constructed for real time simulation of the complex dynamic characteristics of horizontal-axis wind energy systems are described. Mathematical models for an aeroelastic rotor, including nonlinear aerodynamic and elastic loads, are implemented with high speed digital and analog circuitry. Models for elastic supports, a power train, a control system, and a rotor gimbal system are also included. Limited correlation efforts show good comparisons between results produced by the analyzers and results produced by a large digital simulation. The digital simulation results correlate well with test data.
How far does the CO2 travel beyond a leaky point?
NASA Astrophysics Data System (ADS)
Kong, X.; Delshad, M.; Wheeler, M.
2012-12-01
Numerous research studies have been carried out to investigate the long-term feasibility of safe storage of large volumes of CO2 in subsurface saline aquifers. The injected CO2 will undergo complex petrophysical and geochemical processes. During these processes, part of the CO2 will be trapped while some will remain as a mobile phase, posing a leakage risk. Comprehensive and accurate characterization of the trapping and leakage mechanisms is critical for assessing the safety of sequestration, and is a challenge in this research area. We have studied different leakage scenarios using realistic aquifer properties, including heterogeneity, and put forward a comprehensive trapping model for CO2 in deep saline aquifers. The reservoir models include several geological layers and caprocks up to the near surface. Leakage scenarios, such as fractures, high-permeability pathways, and abandoned wells, are studied. In order to accurately model the fractures, very fine grids are needed near the fracture. Given that the aquifer usually has a large volume and the reservoir model needs a large number of grid blocks, simulation would be computationally expensive. To deal with this challenge, we carried out the simulations using our in-house parallel reservoir simulator. Our study shows the significance of capillary pressure and permeability-porosity variations for CO2 trapping and leakage. The improved understanding of trapping and leakage will provide confidence in future implementation of sequestration projects.
Small-Scale Drop-Size Variability: Empirical Models for Drop-Size-Dependent Clustering in Clouds
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Knyazikhin, Yuri; Larsen, Michael L.; Wiscombe, Warren J.
2005-01-01
By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially with larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.
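The scaling can be checked numerically with a box-counting estimate: for Poisson-distributed drops the probability of finding a drop in a box of size l falls off as l^1, while clustering gives a shallower power l^D(r) with D(r) < 1. A toy Python sketch with synthetic clustered positions (all parameters illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    # clustered 1-D "large drop" positions along a flight path of unit length
    centers = rng.uniform(0, 1, 40)
    drops = np.concatenate([c + 1e-4 * rng.standard_normal(30)
                            for c in centers]) % 1.0

    for ell in [0.1, 0.03, 0.01, 0.003, 0.001]:
        edges = np.arange(0, 1 + ell, ell)
        counts, _ = np.histogram(drops, bins=edges)
        p = np.mean(counts > 0)   # probability a scale-ell box contains a drop
        print(f"l = {ell:7.4f}  P(drop) = {p:.3f}")
    # Poisson drops give P ~ l at small l; clustering flattens the slope (D < 1).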
A spatial age-structured model for describing sea lamprey (Petromyzon marinus) population dynamics
Robinson, Jason M.; Wilberg, Michael J.; Adams, Jean V.; Jones, Michael L.
2013-01-01
The control of invasive sea lampreys (Petromyzon marinus) presents large-scale management challenges in the Laurentian Great Lakes. No modeling approach had previously been developed that describes the spatial dynamics of lamprey populations. We developed and validated a spatial, age-structured model and applied it to a sea lamprey population in a large river in the Great Lakes basin. We considered 75 discrete spatial areas and included a stock-recruitment function, spatial recruitment patterns, natural mortality, chemical treatment mortality, and larval metamorphosis. Recruitment was variable, and an upstream shift in recruitment location was observed over time. From 1993 to 2011, recruitment, larval abundance, and the abundance of metamorphosing individuals decreased by 80, 84, and 86%, respectively. The model successfully identified areas of high larval abundance and showed that areas of low larval density contribute significantly to the population. Estimated treatment mortality was less than expected but had a large population-level impact. The results and general approach of this work have applications for sea lamprey control throughout the Great Lakes and for the restoration and conservation of native lamprey species globally.
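A minimal sketch of the kind of spatial age-structured bookkeeping such a model performs (Python; the number of areas, survival rates, recruitment values, and treatment schedule are invented for illustration, not the paper's estimates):

    import numpy as np

    n_areas, n_ages = 5, 4       # river sections x larval age classes
    survival = 0.55              # annual natural survival of larvae
    metamorphosis = 0.3          # fraction of the oldest class transforming
    N = np.full((n_areas, n_ages), 100.0)

    def step(N, recruits, treatment_mort=0.0):
        N = N * (1.0 - treatment_mort)       # lampricide treatment, if applied
        out = np.zeros_like(N)
        out[:, 1:] = survival * N[:, :-1]    # surviving larvae advance one age class
        out[:, 0] = recruits                 # new year class, by area
        transformers = metamorphosis * survival * N[:, -1]
        return out, transformers

    recruits = np.array([200, 150, 100, 50, 25.0])   # upstream-shifted recruitment
    for year in range(5):
        N, transformers = step(N, recruits,
                               treatment_mort=0.9 if year == 2 else 0.0)
    print(N.round(1))
    print("metamorphosing individuals in final year:", transformers.round(1))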
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale and that the fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales using aggregation of variables methods to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that under some conditions, there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum for a fish population distributed among both small and large clusters of schools.
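The aggregated slow-time-scale dynamics can be illustrated with a generic stock-effort system in the spirit of the reduced model (Python; the functional forms and parameter values are illustrative, with theta standing in for the fraction of the stock exposed to fishing outside the MPA):

    import numpy as np
    from scipy.integrate import solve_ivp

    r, K = 0.8, 1.0            # fish growth rate and carrying capacity
    q, p, c = 0.5, 1.0, 0.25   # catchability, price, cost per unit effort
    theta = 0.6                # fraction of the stock catchable (outside the MPA)

    def rhs(t, y):
        n, E = y
        dn = r * n * (1 - n / K) - q * theta * n * E   # logistic growth minus catch
        dE = E * (p * q * theta * n - c)               # effort follows profit
        return [dn, dE]

    sol = solve_ivp(rhs, (0, 200), [0.5, 0.2])
    n_eq, E_eq = sol.y[:, -1]
    print(f"equilibrium stock {n_eq:.3f}, effort {E_eq:.3f}")

Varying theta mimics changing the effective MPA size and shifts the equilibrium between overexploitation and a sustainable fishery.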
NASA Astrophysics Data System (ADS)
Ou, G.; Nijssen, B.; Nearing, G. S.; Newman, A. J.; Mizukami, N.; Clark, M. P.
2016-12-01
The Structure for Unifying Multiple Modeling Alternatives (SUMMA) provides a unifying framework for process-based hydrologic modeling by defining a general set of conservation equations for mass and energy, with the capability to incorporate multiple choices of spatial discretization and flux parameterization. In this study, we provide a first demonstration of large-scale hydrologic simulations using SUMMA through an application to the Columbia River Basin (CRB) in the northwestern United States and Canada for a multi-decadal simulation period. The CRB is discretized into 11,723 hydrologic response units (HRUs) according to the United States Geological Survey Geospatial Fabric. The soil parameters are derived from the Natural Resources Conservation Service Soil Survey Geographic (SSURGO) Database. The land cover parameters are based on the National Land Cover Database from the year 2001 created by the Multi-Resolution Land Characteristics (MRLC) Consortium. The forcing data, including hourly air pressure, temperature, specific humidity, wind speed, precipitation, and shortwave and longwave radiation, are based on Phase 2 of the North American Land Data Assimilation System (NLDAS-2) and averaged for each HRU. The simulation results are compared to simulations with the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS). We are particularly interested in SUMMA's capability to mimic the behavior of the other two models through the selection of appropriate model parameterizations in SUMMA.
NASA Technical Reports Server (NTRS)
Bergan, Andrew C.; Leone, Frank A., Jr.
2016-01-01
A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.
NASA Astrophysics Data System (ADS)
Choi, Hyun-Jung; Lee, Hwa Woon; Sung, Kyoung-Hee; Kim, Min-Jung; Kim, Yoo-Keun; Jung, Woo-Sik
To correctly incorporate large-scale or local circulations in the model, a nudging term is introduced into the equations of motion. Nudging effects should be included properly in the model to reduce uncertainties and improve the simulated air flow field. To improve the meteorological components, the nudging coefficient should exert an appropriate influence over complex terrain in the model initialization technique, which relates to data reliability and error suppression. Several numerical experiments were undertaken to evaluate the effects on air quality modeling by comparing the meteorological performance across experiments with variable nudging coefficients. All experiments were run under both synoptic and asynoptic upper-wind conditions, so it is important to examine the model response to the nudging of wind and mass information under each. The MM5-CMAQ model was used to assess the ozone differences in each case during an episode day in Seoul, Korea, and we found large differences in the ozone concentration for each run. These results suggest that, for the appropriate simulation of large- or small-scale circulations, nudging with synoptic and asynoptic nudging coefficients has a clear advantage over dynamic initialization, so appropriate limitation of these nudging coefficient values according to the upper-wind conditions is necessary before making an assessment. The statistical verification showed that adequate nudging coefficients for both wind and temperature data throughout the model had a consistently positive impact on the atmospheric and air quality fields. In cases dominated by large-scale circulation, a large nudging coefficient shows only a minor improvement in the atmospheric and air quality fields. However, when small-scale convection is present, the large nudging coefficient produces consistent improvement in the atmospheric and air quality fields.
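Newtonian nudging adds a relaxation term of the standard form (generic four-dimensional data assimilation notation, not the specific MM5 implementation details):

    \frac{\partial \phi}{\partial t} = F(\phi) + G_{\phi}\, W(x,t)\,\big(\phi_{\mathrm{obs}} - \phi\big),

where φ is a model variable such as wind or temperature, F contains the model physics, G_φ is the nudging coefficient whose magnitude is varied among the experiments, and W is a space-time weighting function.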
NASA Astrophysics Data System (ADS)
Flint, A. L.; Flint, L. E.
2010-12-01
The characterization of hydrologic response to current and future climates is of increasing importance to many countries around the world that rely on changing and uncertain water supplies. Large-scale models that can calculate a spatially distributed water balance and elucidate groundwater recharge and surface water flows for large river basins provide a basis for estimating changes due to future climate projections. Unfortunately, many regions of the world have very sparse data for parameterization or calibration of hydrologic models. For this study, the Tigris and Euphrates River basins were used to develop a regional water balance model at 180-m spatial resolution, using the Basin Characterization Model, to estimate historical changes in groundwater recharge and surface water flows in Turkey, Syria, Iraq, Iran, and Saudi Arabia. Necessary input parameters include precipitation, air temperature, potential evapotranspiration (PET), soil properties and thickness, and estimates of bulk permeability from geologic units. Data necessary for calibration include snow cover, reservoir volumes (from satellite data and historic, pre-reservoir elevation data), and streamflow measurements. Global datasets for precipitation, air temperature, and PET were available at very large spatial scales (50 km) through world-scale databases and the finer-scale WorldClim climate data, and required downscaling to fine scales for model input. Soils data were available through world-scale soil maps but required parameterization on the basis of textural data to estimate soil hydrologic properties. Soil depth was interpreted from geomorphologic information and maps of Quaternary deposits, and geologic materials were categorized from generalized geologic maps of each country. Estimates of bedrock permeability were made on the basis of the literature and data from drillers' logs, and were adjusted during calibration of the model to streamflow measurements where available. Results of historical water balance calculations throughout the Tigris and Euphrates River basins will be shown, along with details of processing the input data to provide spatial continuity and downscaling. Basic water availability analysis for recharge and runoff is readily obtained from a deterministic solar radiation energy balance model, a global potential evapotranspiration model, and global estimates of precipitation and air temperature. Future climate estimates can be readily applied to the same water and energy balance models to evaluate future water availability for countries around the globe.
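The monthly bucket logic of a Basin Characterization Model-style water balance can be sketched as follows (Python; the storage capacity, partition coefficient, and forcing values are illustrative, not the calibrated basin parameters):

    import numpy as np

    def monthly_water_balance(precip, pet, soil_max=150.0, k_perm=0.3):
        """Toy bucket: fill soil storage, satisfy ET, spill excess water.
        soil_max: soil storage capacity (mm); k_perm: fraction of excess
        water becoming recharge (a surrogate for bedrock permeability)."""
        soil, recharge, runoff = 0.0, [], []
        for p, e in zip(precip, pet):
            soil += p
            aet = min(e, soil)                 # actual ET limited by storage
            soil -= aet
            excess = max(soil - soil_max, 0.0)
            soil -= excess
            recharge.append(k_perm * excess)
            runoff.append((1 - k_perm) * excess)
        return np.array(recharge), np.array(runoff)

    precip = np.array([90, 80, 70, 40, 20, 5, 0, 0, 10, 40, 70, 90.0])   # mm/month
    pet = np.array([20, 25, 40, 70, 110, 150, 170, 150, 110, 60, 30, 20.0])
    rch, ro = monthly_water_balance(precip, pet)
    print("annual recharge %.0f mm, runoff %.0f mm" % (rch.sum(), ro.sum()))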
LITHO1.0: An Updated Crust and Lithosphere Model of the Earth
NASA Astrophysics Data System (ADS)
Masters, G.; Ma, Z.; Laske, G.; Pasyanos, M. E.
2011-12-01
We are developing LITHO1.0: an updated crust and lithosphere model of the Earth. The overall plan is to take the popular CRUST2.0 model - a global model of crustal structure with a relatively poor representation of the uppermost mantle - and improve its nominal resolution to 1 degree and extend the model to include lithospheric structure. The new model, LITHO1.0, will be constrained by many different datasets including extremely large new datasets of relatively short period group velocity data. Other data sets include (but are not limited to) compilations of receiver function constraints and active source studies. To date, we have completed the compilation of extremely large global datasets of group velocity for Rayleigh and Love waves from 10 mHz to 40 mHz using a cluster analysis technique. We have also extended the method to measure phase velocity and are complementing the group velocity with global data sets of longer period phase data that help to constrain deep lithosphere properties. To model these data, we require a starting model for the crust at a nominal resolution of 1 degree. This has been developed by constructing a map of crustal thickness using data from receiver function and active source experiments where available, and by using CRUST2.0 where other constraints are not available. Particular care has been taken to make sure that the locations of sharp changes in crustal thickness are accurately represented. This map is then used as a template to extend CRUST2.0 to 1 degree nominal resolution and to develop starting maps of all crustal properties. We are currently modeling the data using two techniques. The first is a linearized inversion about the 3D crustal starting model. Note that it is important to use local eigenfunctions to compute Frechet derivatives due to the extreme variations in crustal structure. Another technique uses a targeted grid search method. A preliminary model for the crustal part of the model will be presented.
Lessons learned from LNG safety research.
Koopman, Ronald P; Ermak, Donald L
2007-02-20
During the period from 1977 to 1989, the Lawrence Livermore National Laboratory (LLNL) conducted a liquefied gaseous fuels spill effects program under the sponsorship of the US Department of Energy, Department of Transportation, Gas Research Institute and others. The goal of this program was to develop and validate tools that could be used to predict the effects of a large liquefied gas spill through the execution of large-scale field experiments and the development of computer models to make predictions for conditions under which tests could not be performed. Over the course of the program, three series of LNG spill experiments were performed to study cloud formation, dispersion, combustion and rapid phase transition (RPT) explosions. The purpose of this paper is to provide an overview of this program, the lessons learned from 12 years of research as well as some recommendations for the future. The general conclusion from this program is that cold, dense-gas-related phenomena can dominate the dispersion of a large-volume, high-release-rate spill of LNG, especially under low ambient wind speed and stable atmospheric conditions, and therefore, it is necessary to include a detailed and validated description of these phenomena in computer models to adequately predict the consequences of a release. Specific conclusions include: * LNG vapor clouds are lower and wider than trace gas clouds and tend to follow the downhill slope of terrain due to dampened vertical turbulence and gravity flow within the cloud. Under low wind speed, stable atmospheric conditions, a bifurcated, two-lobed structure develops. * Navier-Stokes models provide the most complete description of LNG dispersion, while more highly parameterized Lagrangian models were found to be well suited to emergency response applications. * The measured heat flux from LNG vapor cloud burns exceeded levels necessary for third-degree burns and was large enough to ignite most flammable materials. * RPTs are of two types, source-generated and enrichment-generated, and were observed to increase the burn area by a factor of two and to extend the downwind burn distance by 65%. Additional large-scale experiments and model development are recommended.
NASA Astrophysics Data System (ADS)
Heimann, M.; Prentice, I. C.; Foley, J.; Hickler, T.; Kicklighter, D. W.; McGuire, A. D.; Melillo, J. M.; Ramankutty, N.; Sitch, S.
2001-12-01
Models of biophysical and biogeochemical processes are being used, either offline or in coupled climate-carbon cycle (C4) models, to assess climate- and CO2-induced feedbacks on atmospheric CO2. Observations of atmospheric CO2 concentration, and supplementary tracers including O2 concentrations and isotopes, offer unique opportunities to evaluate the large-scale behaviour of models. Global patterns, temporal trends, and interannual variability of the atmospheric CO2 concentration and its seasonal cycle provide crucial benchmarks for simulations of regionally-integrated net ecosystem exchange; flux measurements by eddy correlation allow a far more demanding model test at the ecosystem scale than conventional indicators, such as measurements of annual net primary production; and large-scale manipulations, such as the Duke Forest Free Air Carbon Enrichment (FACE) experiment, give a standard to evaluate modelled phenomena such as ecosystem-level CO2 fertilization. Model runs including historical changes of CO2, climate and land use allow comparison with regional-scale monthly CO2 balances as inferred from atmospheric measurements. Such comparisons are providing grounds for some confidence in current models, while pointing to processes that may still be inadequately treated. Current plans focus on (1) continued benchmarking of land process models against flux measurements across ecosystems and experimental findings on the ecosystem-level effects of enhanced CO2, reactive N inputs and temperature; (2) improved representation of land use, forest management and crop metabolism in models; and (3) a strategy for the evaluation of C4 models in a historical observational context.
Large-Scale Simulation of Multi-Asset Ising Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2017-03-01
We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering and exhibits unstable periods indicated by the volatility index measured as the average of absolute returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes during high-volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change during high-volatility periods.
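The IPR diagnostics used here are standard functions of the eigenvectors of the cross-correlation matrix: for a normalized eigenvector, the IPR is the sum of the fourth powers of its components, and IPR6 the sum of the sixth powers. A minimal sketch on synthetic data standing in for the 300 absolute-return series (the data and variable names are illustrative assumptions):

    import numpy as np

    def ipr(corr, power=4):
        """Inverse participation ratio of each eigenvector of a
        cross-correlation matrix; power=4 gives the usual IPR and
        power=6 the higher-order IPR6."""
        _, vecs = np.linalg.eigh(corr)            # eigenvectors in columns
        return (np.abs(vecs) ** power).sum(axis=0)

    # Toy stand-in for 300 absolute-return series, 1000 samples each.
    rng = np.random.default_rng(0)
    abs_returns = np.abs(rng.standard_normal((1000, 300)))
    corr = np.corrcoef(abs_returns, rowvar=False)
    print(ipr(corr, power=4).max(), ipr(corr, power=6).max())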
NASA Technical Reports Server (NTRS)
Pope, Kevin O.
1994-01-01
The Chicxulub Crater in Yucatan, Mexico, is the primary candidate for the impact that caused mass extinctions at the Cretaceous/Tertiary boundary. The target rocks at Chicxulub contain 750 to 1500 m of anhydrite (CaSO4), which was vaporized upon impact, creating a large sulfuric acid aerosol cloud. In this study we apply a hydrocode model of asteroid impact to calculate the amount of sulfuric acid produced. We then apply a radiative transfer model to determine the atmospheric effects. Results include a 6- to 9-month period of darkness followed by 12 to 26 years of cooling.
Large longitudinal spin alignment generated in inelastic nuclear reactions
NASA Astrophysics Data System (ADS)
Hoff, D. E. M.; Potel, G.; Brown, K. W.; Charity, R. J.; Pruitt, C. D.; Sobotka, L. G.; Webb, T. B.; Roeder, B.; Saastamoinen, A.
2018-05-01
Large longitudinal spin alignment of E/A = 24 MeV 7Li projectiles inelastically excited by Be, C, and Al targets was observed when the latter remain in their ground state. This alignment is a consequence of an angular-momentum-excitation-energy mismatch, which is well described by a DWBA cluster model (α + t). The longitudinal alignment of several other systems is also well described by DWBA calculations, including one where a cluster model is inappropriate, demonstrating that the alignment mechanism is a more general phenomenon. Predictions are made for inelastic excitation of 12C for beam energies above and below the mismatch threshold.
A survey of decentralized control techniques for large space structures
NASA Technical Reports Server (NTRS)
Lindner, D. K.; Reichard, K.
1987-01-01
Preliminary results on the design of decentralized controllers for the COFS I Mast are reported. A nine-mode finite element model is used along with a second-order model of the actuators. It is shown that without actuator dynamics, the system is stable with collocated rate feedback and has acceptable performance. However, when actuator dynamics are included, the system is unstable.
LaWen T. Hollingsworth; Laurie L. Kurth; Bernard R. Parresol; Roger D. Ottmar; Susan J. Prichard
2012-01-01
Landscape-scale fire behavior analyses are important to inform decisions on resource management projects that meet land management objectives and protect values from adverse consequences of fire. Deterministic and probabilistic geospatial fire behavior analyses are conducted with various modeling systems including FARSITE, FlamMap, FSPro, and Large Fire Simulation...
ERIC Educational Resources Information Center
Hadfield, Mark; Jopling, Michael
2014-01-01
This paper discusses the development of a model targeted at non-specialist practitioners implementing innovations that involve information and communication technology (ICT) in education. It is based on data from a national evaluation of ICT-based projects in initial teacher education, which included a large-scale questionnaire survey and six…
Field Model: An Object-Oriented Data Model for Fields
NASA Technical Reports Server (NTRS)
Moran, Patrick J.
2001-01-01
We present an extensible, object-oriented data model designed for field data entitled Field Model (FM). FM objects can represent a wide variety of fields, including fields of arbitrary dimension and node type. FM can also handle time-series data. FM achieves generality through carefully selected topological primitives and through an implementation that leverages the potential of templated C++. FM supports fields where the node values are paired with any cell type. Thus FM can represent data where the field nodes are paired with the vertices ("vertex-centered" data), fields where the nodes are paired with the D-dimensional cells in R(sup D) (often called "cell-centered" data), as well as fields where nodes are paired with edges or other cell types. FM is designed to effectively handle very large data sets; in particular, FM employs a demand-driven evaluation strategy that works especially well with large field data. Finally, the interfaces developed for FM have the potential to effectively abstract field data based on adaptive meshes. We present initial results with a triangular adaptive grid in R(sup 2) and discuss how the same design abstractions would work equally well with other adaptive-grid variations, including meshes in R(sup 3).
Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.
Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M
2013-04-02
A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
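The melding of probing data with the thermodynamic model typically enters as a per-nucleotide pseudo-free-energy penalty on paired positions. A sketch of the commonly used log-linear form for SHAPE-directed folding; the slope and intercept below are literature-style defaults assumed for illustration, not necessarily the parameterization fit in this work:

    import math

    def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
        """Pseudo-free-energy change (kcal/mol) added for each paired
        nucleotide: dG = m*ln(reactivity + 1) + b. Highly reactive
        (flexible) nucleotides are penalized for pairing; unreactive
        ones are rewarded. m and b are assumed literature defaults."""
        if reactivity < 0:    # negative/missing reactivities: no penalty
            return 0.0
        return m * math.log(reactivity + 1.0) + b

    for r in (0.0, 0.3, 1.5):
        print(r, round(shape_pseudo_energy(r), 3))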
Schmoldt, D.L.; Peterson, D.L.; Keane, R.E.; Lenihan, J.M.; McKenzie, D.; Weise, D.R.; Sandberg, D.V.
1999-01-01
A team of fire scientists and resource managers convened 17-19 April 1996 in Seattle, Washington, to assess the effects of fire disturbance on ecosystems. Objectives of this workshop were to develop scientific recommendations for future fire research and management activities. These recommendations included a series of numerically ranked scientific and managerial questions and responses focusing on (1) links among fire effects, fuels, and climate; (2) fire as a large-scale disturbance; (3) fire-effects modeling structures; and (4) managerial concerns, applications, and decision support. At the present time, understanding of fire effects and the ability to extrapolate fire-effects knowledge to large spatial scales are limited, because most data have been collected at small spatial scales for specific applications. Although we clearly need more large-scale fire-effects data, it will be more expedient to concentrate efforts on improving and linking existing models that simulate fire effects in a georeferenced format while integrating empirical data as they become available. A significant component of this effort should be improved communication between modelers and managers to develop modeling tools to use in a planning context. Another component of this modeling effort should improve our ability to predict the interactions of fire and potential climatic change at very large spatial scales. The priority issues and approaches described here provide a template for fire science and fire management programs in the next decade and beyond.
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin’s own 10-petaflops Stampede system, ANL’s Mira system, and ORNL’s Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
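Randomized maximum likelihood, mentioned under item 1, draws approximate posterior samples by repeatedly perturbing the data and the prior mean and solving the resulting regularized optimization problem. A small linear-Gaussian sketch, where the method happens to be exact; the function and variable names are mine, not the project's, and in the PDE-constrained setting the solve below becomes a large-scale optimization:

    import numpy as np

    def rml_sample(A, d, C_noise, m_prior, C_prior, rng):
        """One randomized-maximum-likelihood sample for d = A m + e."""
        d_pert = rng.multivariate_normal(d, C_noise)       # perturb data
        m_pert = rng.multivariate_normal(m_prior, C_prior) # perturb prior
        Cn_inv = np.linalg.inv(C_noise)
        Cp_inv = np.linalg.inv(C_prior)
        H = A.T @ Cn_inv @ A + Cp_inv                      # Gauss-Newton Hessian
        g = A.T @ Cn_inv @ d_pert + Cp_inv @ m_pert
        return np.linalg.solve(H, g)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 3))
    d = A @ np.array([1.0, -2.0, 0.5])
    samples = [rml_sample(A, d, 0.1 * np.eye(5), np.zeros(3), np.eye(3), rng)
               for _ in range(200)]
    print(np.mean(samples, axis=0))    # approximates the posterior mean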
Aspects of modelling the tectonics of large volcanoes on the terrestrial planets
NASA Technical Reports Server (NTRS)
Mcgovern, Patrick J.; Solomon, Sean C.
1993-01-01
Analytic solutions for the response of planetary lithospheres to volcanic loads have been used to model faulting and infer elastic plate thicknesses. Predictions of the distribution of faulting around volcanic loads, based on the application of Anderson's criteria for faulting to the results of the models, do not agree well with observations. Such models do not give the stress state and stress history within the edifice. The effects of episodic load growth can also be treated. When these effects are included, models give much better agreement with observations.
NCI HPC Scaling and Optimisation in Climate, Weather, Earth system science and the Geosciences
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Bermous, I.; Freeman, J.; Roberts, D. S.; Ward, M. L.; Yang, R.
2016-12-01
The Australian National Computational Infrastructure (NCI) has a national focus in the Earth system sciences including climate, weather, ocean, water management, environment and geophysics. NCI leads a Program across its partners from the Australian science agencies and research communities to identify priority computational models to scale up. Typically, these cases place a large overall demand on the available computer time, need to scale to higher resolutions, make limiting use of scarce resources such as large memory or bandwidth, or in some cases need to meet requirements for transition to a separate operational forecasting system with set time windows. The model codes include the UK Met Office Unified Model atmospheric model (UM), GFDL's Modular Ocean Model (MOM), both the UK Met Office's GC3 and Australian ACCESS coupled-climate systems (including sea ice), 4D-Var data assimilation and satellite processing, the Regional Ocean Modeling System (ROMS), and WaveWatch3, as well as geophysics codes covering hazards, magnetotellurics, seismic inversions, and geodesy. Many of these codes use significant compute resources both for research applications and within the operational systems. Some of these models are particularly complex, and their behaviour had not been critically analysed for effective use of the NCI supercomputer or for how they could be improved. As part of the Program, we have established a common profiling methodology that uses a suite of open source tools for performing scaling analyses. The most challenging cases are profiling multi-model coupled systems where the component models have their own complex algorithms and performance issues. We have also found issues within the current suite of profiling tools, and no single tool fully exposes the nature of the code performance. As a result of this work, international collaborations are now in place to ensure that improvements are incorporated within the community models, and our effort can be targeted in a coordinated way. This coordination has involved user stakeholders, the model developer community, and dependent software libraries. For example, we have spent significant time characterising I/O scalability and improving the use of libraries such as NetCDF and HDF5.
Assembly and control of large microtubule complexes
NASA Astrophysics Data System (ADS)
Korolev, Kirill; Ishihara, Keisuke; Mitchison, Timothy
Motility, division, and other cellular processes require rapid assembly and disassembly of microtubule structures. We report a new mechanism for the formation of asters, radial microtubule complexes found in very large cells. The standard model of aster growth assumes elongation of a fixed number of microtubules originating from the centrosomes. However, aster morphology in this model does not scale with cell size, and we found evidence for microtubule nucleation away from centrosomes. By combining polymerization dynamics and auto-catalytic nucleation of microtubules, we developed a new biophysical model of aster growth. The model predicts an explosive transition from an aster with a steady-state radius to one that expands as a travelling wave. At the transition, microtubule density increases continuously, but aster growth rate discontinuously jumps to a nonzero value. We tested our model with biochemical perturbations in egg extract and confirmed main theoretical predictions including the jump in the growth rate. Our results show that asters can grow even though individual microtubules are short and unstable. The dynamic balance between microtubule collapse and nucleation could be a general framework for the assembly and control of large microtubule complexes. NIH GM39565; Simons Foundation 409704; Honjo International Scholarship Foundation.
Impact of heavy sterile neutrinos on the triple Higgs coupling
NASA Astrophysics Data System (ADS)
Baglio, J.; Weiland, C.
2017-07-01
New physics beyond the Standard Model is required to give mass to the light neutrinos. One of the simplest ideas is to introduce new heavy, gauge singlet fermions that play the role of right-handed neutrinos in a seesaw mechanism. They could have large Yukawa couplings to the Higgs boson, affecting the Higgs couplings and in particular the triple Higgs coupling $\lambda_{HHH}$, the measurement of which is one of the major goals of the LHC and of future colliders. We present a study of the impact of these heavy neutrinos on $\lambda_{HHH}$ at the one-loop level, first in a simplified 3+1 model with one heavy Dirac neutrino and then in the inverse seesaw model. Taking into account all possible experimental constraints, we find that sizeable deviations of the order of 35% are possible, large enough to be detected at future colliders, making the triple Higgs coupling a new, viable observable to constrain neutrino mass models. The effects are generic and are expected in any new physics model including TeV-scale fermions with large Yukawa couplings to the Higgs boson, such as those using the neutrino portal.
An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements
NASA Astrophysics Data System (ADS)
Zhai, Zhongxu; Blanton, Michael; Slosar, Anže; Tinker, Jeremy
2017-12-01
We compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.
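For Gaussian errors, the p-value test described here reduces to a chi-square tail probability of the weighted residuals. A hedged one-dataset skeleton (the paper combines several datasets and likelihoods; names below are mine):

    import numpy as np
    from scipy import stats

    def model_p_value(residuals, covariance, n_params):
        """p-value of a fit: probability, under the model, of data as
        bad or worse than observed, from the chi-square of residuals."""
        r = np.asarray(residuals)
        chi2 = r @ np.linalg.solve(covariance, r)
        dof = r.size - n_params
        return stats.chi2.sf(chi2, dof)    # survival function = 1 - CDF

    # Toy example: 10 measurements, 2 fitted parameters.
    rng = np.random.default_rng(2)
    print(model_p_value(rng.standard_normal(10), np.eye(10), n_params=2))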
NASA Astrophysics Data System (ADS)
Garnero, E.; McNamara, A. K.; Shim, S. H. D.
2014-12-01
The term large low shear velocity province (LLSVP) represents large lowermost mantle regions of reduced shear velocities (Vs) relative to 1D reference models. There are two LLSVPs: one beneath the central Pacific Ocean, and one beneath the southern Atlantic Ocean and Africa. While LLSVP existence has been well known for several decades, more recently evidence from forward modeling has brought to light relatively sharp margins of the LLSVPs, i.e., the transition from low-to-"normal" Vs occurs over a short lateral distance (probably < ~100 km). This finding is further supported by the strongest lateral dVs gradients in tomography coinciding with locations of sharp LLSVP sides in high-resolution studies. Surface hotspot and large igneous province origination locations mostly map above the present-day LLSVP edges. Combined with geochemical arguments that a deep mantle long-lived (possibly primordial) reservoir exists, and geodynamics experiments that demonstrate a dense basal reservoir would be swept by convection to reside beneath upwellings and plumes, a strong argument can be made for dense, chemically distinct material explaining LLSVPs. This presentation will lay out additional seismic information that needs to be considered for a self-consistent geodynamic and mineralogical framework. For example, there does not appear to be consistency between Vp and Vs reductions defining LLSVPs; however, this comparison is complicated by lowermost mantle Vp models exhibiting greater divergence from each other than Vs models. LLSVP forward modeling usually involves a trade-off between dVs within the LLSVP and LLSVP height/shape; thus continued mapping of heterogeneity within LLSVPs is critical. ULVZs (ultralow-velocity zones) might relate to LLSVP chemistry, temperature, and evolution, and thus will be discussed. The chemistry that can explain large and old thermochemical piles is as of yet unconstrained; other mineralogical considerations include understanding the possible role of the post-perovskite phase transition within and outside LLSVPs (which may affect Vs differently from Vp), and the evolution of pile chemistry over time, since geodynamics work demonstrates how mantle material (including deeply subducted MORB) can become downward entrained into piles.
Romero-Durán, Francisco J; Alonso, Nerea; Yañez, Matilde; Caamaño, Olga; García-Mera, Xerardo; González-Díaz, Humberto
2016-04-01
The use of Cheminformatics tools is gaining importance in the field of translational research from Medicinal Chemistry to Neuropharmacology. In particular, we need such tools for the analysis of chemical information on large datasets of bioactive compounds. These compounds form large multi-target complex networks (drug-target interactome network) resulting in a very challenging data analysis problem. Artificial Neural Network (ANN) algorithms may help us predict the interactions of drugs and targets in the CNS interactome. In this work, we trained different ANN models able to predict a large number of drug-target interactions. These models predict a dataset of thousands of interactions of central nervous system (CNS) drugs characterized by >30 different experimental measures in >400 different experimental protocols for >150 molecular and cellular targets present in 11 different organisms (including human). The model was able to classify cases of non-interacting vs. interacting drug-target pairs with satisfactory performance. A second aim focused on two main directions: the synthesis and assay of new derivatives of TVP1022 (S-analogues of rasagiline) and the comparison with other rasagiline derivatives recently reported. Finally, we used the best of our models to predict drug-target interactions for the best newly synthesized compound against a large number of CNS protein targets. Copyright © 2015 Elsevier Ltd. All rights reserved.
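The classification task described (interacting vs. non-interacting drug-target pairs from numeric descriptors) maps onto a standard feed-forward network. A generic sketch with synthetic features standing in for the paper's descriptors; the feature table, layer sizes, and labels are all illustrative assumptions:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical feature table: each row describes one (drug, target,
    # assay) case by numeric descriptors; y = 1 if the pair interacts.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((5000, 20))
    y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(5000) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                        random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))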
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, M.; Aad, G.; Abbott, B.
Results of a search for new phenomena in events with an energetic photon and large missing transverse momentum with the ATLAS experiment at the Large Hadron Collider are reported. The data were collected in proton-proton collisions at a centre-of-mass energy of 13 TeV and correspond to an integrated luminosity of 3.2 fb^-1. The observed data are in agreement with the Standard Model expectations. Exclusion limits are presented in models of new phenomena including pair production of dark matter candidates or large extra spatial dimensions. In a simplified model of dark matter and an axial-vector mediator, the search excludes mediator masses below 710 GeV for dark matter candidate masses below 150 GeV. In an effective theory of dark matter production, values of the suppression scale M* up to 570 GeV are excluded and the effect of truncation for various coupling values is reported. Finally, for the ADD large extra spatial dimension model the search places more stringent limits than earlier searches in the same event topology, excluding MD up to about 2.3 (2.8) TeV for two (six) additional spatial dimensions; the limits are reduced by 20-40% depending on the number of additional spatial dimensions when applying a truncation procedure.
NASA Astrophysics Data System (ADS)
Huang, Z.; Jia, X.; Rubin, M.; Fougere, N.; Gombosi, T. I.; Tenishev, V.; Combi, M. R.; Bieler, A. M.; Toth, G.; Hansen, K. C.; Shou, Y.
2014-12-01
We study the plasma environment of the comet Churyumov-Gerasimenko, which is the target of the Rosetta mission, by performing large scale numerical simulations. Our model is based on BATS-R-US within the Space Weather Modeling Framework that solves the governing multifluid MHD equations, which describe the behavior of the cometary heavy ions, the solar wind protons, and electrons. The model includes various mass loading processes, including ionization, charge exchange, dissociative ion-electron recombination, as well as collisional interactions between different fluids. The neutral background used in our MHD simulations is provided by a kinetic Direct Simulation Monte Carlo (DSMC) model. We will simulate how the cometary plasma environment changes at different heliocentric distances.
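The mass-loading source terms are what couple the cometary ion fluid to the neutral coma. As an illustration of the kind of source term involved, here is a sketch using a spherically symmetric Haser-type neutral background in place of the DSMC output; all symbols, rates, and values are generic assumptions, not the mission model:

    import numpy as np

    def haser_density(r, Q=1e26, v=0.5e3, tau=1e6):
        """Spherically symmetric neutral coma density [m^-3].

        r : cometocentric distance [m], Q : production rate [1/s],
        v : outflow speed [m/s], tau : photodestruction lifetime [s].
        """
        return Q / (4.0 * np.pi * r**2 * v) * np.exp(-r / (v * tau))

    def ion_mass_loading(r, nu_ion=1e-6, m_ion=3e-26, **kw):
        """Ion mass-production rate [kg m^-3 s^-1] from photoionization
        of the neutrals, a dominant source term for the heavy-ion fluid."""
        return m_ion * nu_ion * haser_density(r, **kw)

    print(ion_mass_loading(np.array([1e4, 1e5, 1e6])))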
Nucleosynthesis of Short-lived Radioactivities in Massive Stars
NASA Technical Reports Server (NTRS)
Meyer, B. S.
2004-01-01
A leading model for the source of many of the short-lived radioactivities in the early solar nebula is direct incorporation from a massive star [1]. A recent and promising incarnation of this model includes an injection mass cut, which is a boundary between the stellar ejecta that become incorporated into the solar cloud and those ejecta that do not [2-4]. This model also includes a delay time between ejection from the star and incorporation into early solar system solid bodies. While largely successful, this model requires further validation and comparison against data. Such evaluation becomes easier if we have a better sense of the nature of the synthesis of the various radioactivities in the star. That is the goal of this brief abstract.
Tank Investigation of a Powered Dynamic Model of a Large Long-Range Flying Boat
NASA Technical Reports Server (NTRS)
Parkinson, John B; Olson, Roland E; Harr, Marvin I
1947-01-01
Principles for designing the optimum hull for a large long-range flying boat to meet the requirements of seaworthiness, minimum drag, and ability to take off and land at all operational gross loads were incorporated in a 1/12-size powered dynamic model of a four-engine transport flying boat having a design gross load of 165,000 pounds. These design principles included the selection of a moderate beam loading, ample forebody length, sufficient depth of step, and close adherence to the form of a streamline body. The aerodynamic and hydrodynamic characteristics of the model were investigated in Langley tank no. 1. Tests were made to determine the minimum allowable depth of step for adequate landing stability, the suitability of the fore-and-aft location of the step, the take-off performance, the spray characteristics, and the effects of simple spray-control devices. The application of the design criterions used and test results should be useful in the preliminary design of similar large flying boats.
Analysis and Ground Testing for Validation of the Inflatable Sunshield in Space (ISIS) Experiment
NASA Technical Reports Server (NTRS)
Lienard, Sebastien; Johnston, John; Adams, Mike; Stanley, Diane; Alfano, Jean-Pierre; Romanacci, Paolo
2000-01-01
The Next Generation Space Telescope (NGST) design requires a large sunshield to protect the large aperture mirror and instrument module from constant solar exposure at its L2 orbit. The structural dynamics of the sunshield must be modeled in order to predict disturbances to the observatory attitude control system and gauge effects on the line-of-sight jitter. Models of large, non-linear membrane systems are not well understood and have not been successfully demonstrated. To answer questions about sunshield dynamic behavior and demonstrate controlled deployment, the NGST project is flying a Pathfinder experiment, the Inflatable Sunshield in Space (ISIS). This paper discusses in detail the modeling and ground-testing efforts performed at the Goddard Space Flight Center to: validate analytical tools for characterizing the dynamic behavior of the deployed sunshield, qualify the experiment for the Space Shuttle, and verify the functionality of the system. Included in the discussion will be test parameters, test setups, problems encountered, and test results.
Structural Similitude and Scaling Laws for Plates and Shells: A Review
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Starnes, J. H., Jr.; Rezaeepazhand, J.
2000-01-01
This paper deals with the development and use of scaled-down models in order to predict the structural behavior of large prototypes. The concept is fully described and examples are presented which demonstrate its applicability to beam-plates, plates and cylindrical shells of laminated construction. The concept is based on the use of field equations, which govern the response behavior of both the small model as well as the large prototype. The conditions under which the experimental data of a small model can be used to predict the behavior of a large prototype are called scaling laws or similarity conditions and the term that best describes the process is structural similitude. Moreover, since the term scaling is used to describe the effect of size on strength characteristics of materials, a discussion is included which should clarify the difference between "scaling law" and "size effect". Finally, a historical review of all published work in the broad area of structural similitude is presented for completeness.
Large historical growth in global terrestrial gross primary production
Campbell, J. E.; Berry, J. A.; Seibt, U.; ...
2017-04-05
Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle-climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. We interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including a large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth, it provides a global-scale benchmark for historical carbon cycle simulations.
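The logic of the COS constraint can be caricatured with a one-box budget in which the plant sink scales with GPP, so that a COS concentration history discriminates between assumed GPP trajectories. A toy sketch; every value and name below is an illustrative assumption, not the paper's calibrated model:

    import numpy as np

    def cos_box_model(years, sources, gpp, k_cos, c0=500.0):
        """Minimal one-box history of atmospheric COS (ppt): sources in,
        a plant sink proportional to GPP (coefficient k_cos) out."""
        c = np.empty(len(years))
        c[0] = c0
        for i in range(1, len(years)):
            c[i] = c[i - 1] + sources[i] - k_cos * gpp[i]
        return c

    years = np.arange(1900, 2001)
    gpp = np.linspace(1.0, 1.31, years.size)   # ~31% growth, as inferred
    sources = np.full(years.size, 13.0)        # arbitrary units
    print(cos_box_model(years, sources, gpp, k_cos=10.0)[-1])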
Exploratory studies into seasonal flow forecasting potential for large lakes
NASA Astrophysics Data System (ADS)
Sene, Kevin; Tych, Wlodek; Beven, Keith
2018-01-01
In seasonal flow forecasting applications, one factor which can help predictability is a significant hydrological response time between rainfall and flows. On account of storage influences, large lakes therefore provide a useful test case although, due to the spatial scales involved, there are a number of modelling challenges related to data availability and understanding the individual components in the water balance. Here some possible model structures are investigated using a range of stochastic regression and transfer function techniques with additional insights gained from simple analytical approximations. The methods were evaluated using records for two of the largest lakes in the world - Lake Malawi and Lake Victoria - with forecast skill demonstrated several months ahead using water balance models formulated in terms of net inflows. In both cases slight improvements were obtained for lead times up to 4-5 months from including climate indices in the data assimilation component. The paper concludes with a discussion of the relevance of the results to operational flow forecasting systems for other large lakes.
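A net-inflow formulation keeps the water balance simple: storage changes with net inflow minus regulated outflow, and level follows via the surface area. A deterministic toy monthly sketch as a stand-in for the stochastic transfer function models used in the paper (the numbers are illustrative, roughly Lake Malawi scale, not the calibrated model):

    import numpy as np

    def forecast_levels(level0, net_inflow, outflow, area_m2,
                        dt=86400 * 30):
        """Propagate a large-lake water balance over monthly steps.

        level0     : current lake level [m]
        net_inflow : forecast net inflows (rainfall + tributaries -
                     evaporation), one value per month [m^3/s]
        outflow    : forecast regulated outflows [m^3/s]
        area_m2    : lake surface area, assumed constant [m^2]
        """
        levels = [level0]
        for q_in, q_out in zip(net_inflow, outflow):
            levels.append(levels[-1] + (q_in - q_out) * dt / area_m2)
        return np.array(levels)

    print(forecast_levels(474.0, [900, 1100, 800, 600, 500],
                          [400] * 5, area_m2=2.9e10))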
Ayus, J C; Bellido, T; Negri, A L
2017-05-01
The Fracture Risk Assessment Tool (FRAX®) was developed by the WHO Collaborating Centre for metabolic bone diseases to evaluate the fracture risk of patients. It is based on patient models that integrate the risk associated with clinical variables and bone mineral density (BMD) at the femoral neck. The clinical risk factors included in FRAX were chosen to include only well-established and independent variables related to skeletal fracture risk. The FRAX tool has acquired worldwide acceptance despite having several limitations. FRAX models have not included biochemical derangements in the estimation of fracture risk due to the lack of validation in large prospective studies. Recently, there has been an increasing number of studies showing a relationship between hyponatremia and the occurrence of fractures. Hyponatremia is the most frequent electrolyte abnormality measured in the clinic, and serum sodium concentration is a very reproducible, affordable, and readily obtainable measurement. Thus, we think that hyponatremia should be further studied as a biochemical risk factor for predicting skeletal fractures, particularly those at the hip, which carry the greatest morbidity and mortality. Achieving this will require the collection of large patient cohorts from diverse geographical locations that include a measure of serum sodium in addition to the other FRAX variables, in both sexes and over a wide age range. It would also require the inclusion of data on the duration and severity of hyponatremia. Information will be required on the fracture risk associated with the occurrence of and length of exposure to hyponatremia, on its relationship with the other risk variables included in FRAX, and on the independent effect of hyponatremia on mortality.
A holographic model of the Kondo effect
NASA Astrophysics Data System (ADS)
Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Wu, Jackson
2013-12-01
We propose a model of the Kondo effect based on the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence, also known as holography. The Kondo effect is the screening of a magnetic impurity coupled anti-ferromagnetically to a bath of conduction electrons at low temperatures. In a (1+1)-dimensional CFT description, the Kondo effect is a renormalization group flow triggered by a marginally relevant (0+1)-dimensional operator between two fixed points with the same Kac-Moody current algebra. In the large-N limit, with spin SU(N) and charge U(1) symmetries, the Kondo effect appears as a (0+1)-dimensional second-order mean-field transition in which the U(1) charge symmetry is spontaneously broken. Our holographic model, which combines the CFT and large-N descriptions, is a Chern-Simons gauge field in (2+1)-dimensional AdS space, AdS3, dual to the Kac-Moody current, coupled to a holographic superconductor along an AdS2 subspace. Our model exhibits several characteristic features of the Kondo effect, including a dynamically generated scale, a resistivity with power-law behavior in temperature at low temperatures, and a spectral flow producing a phase shift. Our holographic Kondo model may be useful for studying many open problems involving impurities, including for example the Kondo lattice problem.
Galaxy Evolution Across The Redshift Desert
NASA Astrophysics Data System (ADS)
Kotulla, Ralf
2010-01-01
GALEV evolutionary synthesis models are an ideal tool to study the formation and evolution of galaxies. I present a large model grid that contains undisturbed E and Sa-Sd type galaxies as well as a wide range of models undergoing starbursts of various strengths and at different times, and that also includes the subsequent post-starburst phases for these galaxies. This model grid not only allows us to describe and refine currently used color selection criteria for Lyman Break Galaxies, BzK galaxies, Extremely Red Objects (ERO) and both Distant and Luminous Red Galaxies (DRG, LRG), but also gives accurate stellar masses, gas fractions, star formation rates, metallicities and burst strengths for an unprecedentedly large sample of galaxies with multi-band photometry. We find, amongst other things, that LBGs are most likely progenitors of local early-type spiral galaxies and low-mass ellipticals. We are for the first time able to reproduce E+A features in EROs by post-starbursts as an alternative to dusty star-forming galaxies and predict how to discriminate between these scenarios. Our results from photometric analyses perfectly agree with all available spectroscopic information and open up a much wider perspective, including the bulk of the less luminous and more typical galaxy population, in the redshift desert and beyond. All model data are available online at http://www.galev.org.
Zagallo, Patricia; Meddleton, Shanice; Bolger, Molly S.
2016-01-01
We present our design for a cell biology course to integrate content with scientific practices, specifically data interpretation and model-based reasoning. A 2-yr research project within this course allowed us to understand how students interpret authentic biological data in this setting. Through analysis of written work, we measured the extent to which students’ data interpretations were valid and/or generative. By analyzing small-group audio recordings during in-class activities, we demonstrated how students used instructor-provided models to build and refine data interpretations. Often, students used models to broaden the scope of data interpretations, tying conclusions to a biological significance. Coding analysis revealed several strategies and challenges that were common among students in this collaborative setting. Spontaneous argumentation was present in 82% of transcripts, suggesting that data interpretation using models may be a way to elicit this important disciplinary practice. Argumentation dialogue included frequent co-construction of claims backed by evidence from data. Other common strategies included collaborative decoding of data representations and noticing data patterns before making interpretive claims. Focusing on irrelevant data patterns was the most common challenge. Our findings provide evidence to support the feasibility of supporting students’ data-interpretation skills within a large lecture course. PMID:27193288
A Cavity of Large Grains in the Disk around the Group II Herbig Ae/Be Star HD 142666
NASA Astrophysics Data System (ADS)
Rubinstein, A. E.; Macías, E.; Espaillat, C. C.; Zhang, K.; Calvet, N.; Robinson, C.
2018-06-01
Herbig Ae/Be (HAeBe) stars have been classified into Group I or Group II, and were initially thought to be flared and flat disks, respectively. Several Group I sources have been shown to have large gaps, suggesting ongoing planet formation, while no large gaps have been found in the disks of Group II sources. We analyzed the disk around the Group II source, HD 142666, using irradiated accretion disk modeling of the broadband spectral energy distribution along with the 1.3 mm spatial brightness distribution traced by Atacama Large Millimeter and Submillimeter Array (ALMA) observations. Our model reproduces the available data, predicting a high degree of dust settling in the disk, which is consistent with the Group II classification of HD 142666. In addition, the observed visibilities and synthesized image could only be reproduced when including a depletion of large grains out to ∼ 16 au in our disk model, although the ALMA observations did not have enough angular resolution to fully resolve the inner parts of the disk. These results may suggest that some disks around Group II HAeBe stars have cavities of large grains as well. Further ALMA observations of Group II sources are needed to discern how commonly cavities occur in this class of objects, as well as to reveal their possible origins.
Park, Haesuk; Rascati, Karen L; Keith, Michael S
2015-06-01
From January 2016, payment for oral-only renal medications (including phosphate binders and cinacalcet) was expected to be included in the new Medicare bundled end-stage renal disease (ESRD) prospective payment system (PPS). The implementation of the ESRD PPS has generated concern within the nephrology community because of the potential for inadequate funding and the impact on patient quality of care. To estimate the potential economic impact of the new Medicare bundled ESRD PPS reimbursement from the perspective of a large dialysis organization in the United States. We developed an interactive budget impact model to evaluate the potential economic implications of Medicare payment changes to large dialysis organizations treating patients with ESRD who are receiving phosphate binders. In this analysis, we focused on the budget impact of the intended 2016 integration of oral renal drugs, specifically oral phosphate binders, into the PPS. We also utilized the model to explore the budgetary impact of a variety of potential shifts in phosphate binder market shares under the bundled PPS from 2013 to 2016. The base model predicts that phosphate binder costs will increase to $34.48 per dialysis session in 2016, with estimated U.S. total costs for phosphate binders of over $682 million. Based on these estimates, a projected Medicare PPS $33.44 reimbursement rate for coverage of all oral-only renal medications (i.e., phosphate binders and cinacalcet) would be insufficient to cover these costs. A potential renal drugs and services budget shortfall for large dialysis organizations of almost $346 million was projected. Our findings suggest that large dialysis organizations will be challenged to manage phosphate binder expenditures within the planned Medicare bundled rate structure. As a result, large dialysis organizations may have to make treatment choices in light of potential inadequate funding, which could have important implications for the quality of care for patients with ESRD.
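The arithmetic behind such a budget impact model is a comparison of per-session drug cost against the per-session bundled payment, scaled by treatment volume. A sketch in which only the two per-session dollar figures come from the abstract; the patient count and session schedule are assumptions for illustration, and the model here covers the phosphate binder component alone:

    def binder_budget_gap(cost_per_session, bundled_rate,
                          sessions_per_week=3, weeks=52,
                          patients=250_000):
        """Annual gap between drug cost and bundled reimbursement when
        oral-only renal drugs move into the bundle. Patient count and
        schedule are illustrative placeholders, not study figures."""
        sessions = sessions_per_week * weeks * patients
        return (cost_per_session - bundled_rate) * sessions

    gap = binder_budget_gap(cost_per_session=34.48, bundled_rate=33.44)
    print(f"projected annual binder shortfall: ${gap / 1e6:.0f}M")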
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
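The closure advantage described above comes from evaluating the chemical source term exactly on each notional particle, leaving only mixing to be modeled. A minimal particle-method sketch using the IEM mixing model, a standard choice assumed here for illustration (the scalar, rates, and source function are all generic):

    import numpy as np

    def pdf_particle_step(phi, dt, omega, source, c_phi=2.0):
        """One step of a transported-PDF Monte Carlo method in a cell:
        IEM mixing toward the local mean, then exact integration of the
        chemical source term at each particle's own composition."""
        phi = phi - 0.5 * c_phi * omega * (phi - phi.mean()) * dt  # mixing
        phi = phi + source(phi) * dt                               # reaction
        return phi

    rng = np.random.default_rng(4)
    phi = rng.uniform(0.0, 1.0, 1000)   # progress variable on 1000 particles
    for _ in range(100):
        phi = pdf_particle_step(phi, dt=1e-3, omega=50.0,
                                source=lambda c: 10.0 * c * (1.0 - c))
    print(phi.mean(), phi.var())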
Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...
2018-02-06
Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts that introduce complex nonlinear factors and hamper the potential impact of machine learning classifiers. This article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based on multiple modules to adapt a pretrained CNN and address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial-extent areas we introduce several submodules: first, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source examples on the target domain. The workflow presents a novel and practical approach to achieving large-scale domain adaptation with binary classifiers that are based on CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensor, multitemporal, and multiangular conditions. Domain adaptation is assessed on source-target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
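One plausible reading of the hashing module is locality-sensitive signature hashing of CNN features, so that near-duplicate candidate samples collide and only one representative is kept. The sketch below is an assumption-level illustration of that idea, not the authors' implementation:

    import hashlib
    import numpy as np

    def dedupe_by_hash(features, n_bits=64):
        """Drop near-duplicate examples by hashing sign-quantized random
        projections of their feature vectors (LSH-style signatures);
        only the first example per signature is retained."""
        rng = np.random.default_rng(0)
        planes = rng.standard_normal((features.shape[1], n_bits))
        signs = (features @ planes > 0).astype(np.uint8)
        keep, seen = [], set()
        for i, bits in enumerate(signs):
            key = hashlib.sha1(bits.tobytes()).hexdigest()
            if key not in seen:
                seen.add(key)
                keep.append(i)
        return keep

    feats = np.random.default_rng(1).standard_normal((500, 128))
    feats = np.vstack([feats, feats[:50] + 1e-6])   # inject near-duplicates
    print(len(dedupe_by_hash(feats)), "of", len(feats), "kept")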
Exact model reduction of combinatorial reaction networks
Conzelmann, Holger; Fey, Dirk; Gilles, Ernst D
2008-01-01
Background Receptors and scaffold proteins usually possess a high number of distinct binding domains inducing the formation of large multiprotein signaling complexes. For combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even when only a limited number of components and binding domains is included, the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. Results We introduce methods that extend and complete the already introduced approach. For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs. Conclusion The methods, presented in this contribution, significantly enhance the available methods to exactly reduce models of combinatorial reaction networks. PMID:18755034
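The essence of exact reduction of combinatorial networks is that, when binding domains do not interact, whole families of microstates can be lumped into closed ODEs for site occupancies. A toy two-site example of my own (far smaller than the EGF/insulin model in the paper) verifying that the lumped model reproduces the microstate model exactly:

    import numpy as np
    from scipy.integrate import odeint

    # Receptor with two independent ligand-binding sites: four microstates
    # (00, 10, 01, 11), but each site's occupancy obeys a closed ODE,
    # an exact lumping from 4 states to 2.
    kon, koff, L = 1.0, 0.5, 2.0

    def micro(x, t):
        s00, s10, s01, s11 = x
        return [
            -2 * kon * L * s00 + koff * (s10 + s01),
            kon * L * s00 - (koff + kon * L) * s10 + koff * s11,
            kon * L * s00 - (koff + kon * L) * s01 + koff * s11,
            kon * L * (s10 + s01) - 2 * koff * s11,
        ]

    def lumped(y, t):
        o1, o2 = y    # occupancy probability of each site
        return [kon * L * (1 - o1) - koff * o1,
                kon * L * (1 - o2) - koff * o2]

    t = np.linspace(0.0, 10.0, 50)
    xm = odeint(micro, [1.0, 0.0, 0.0, 0.0], t)
    yl = odeint(lumped, [0.0, 0.0], t)
    # Site-1 occupancy from microstates (s10 + s11) matches the lumped ODE.
    print(np.allclose(xm[:, 1] + xm[:, 3], yl[:, 0], atol=1e-6))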
NASA Astrophysics Data System (ADS)
Liu, Jiechao; Jayakumar, Paramsothy; Stein, Jeffrey L.; Ersal, Tulga
2016-11-01
This paper investigates the level of model fidelity needed in order for a model predictive control (MPC)-based obstacle avoidance algorithm to be able to safely and quickly avoid obstacles even when the vehicle is close to its dynamic limits. The context of this work is large autonomous ground vehicles that manoeuvre at high speed within unknown, unstructured, flat environments and have significant vehicle dynamics-related constraints. Five different representations of vehicle dynamics models are considered: four variations of the two degrees-of-freedom (DoF) representation as lower fidelity models and a fourteen DoF representation with combined-slip Magic Formula tyre model as a higher fidelity model. It is concluded that the two DoF representation that accounts for tyre nonlinearities and longitudinal load transfer is necessary for the MPC-based obstacle avoidance algorithm in order to operate the vehicle at its limits within an environment that includes large obstacles. For less challenging environments, however, the two DoF representation with linear tyre model and constant axle loads is sufficient.
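A minimal sketch of such a two-DoF model is given below, with lateral tyre forces from a simplified pure-slip Magic Formula and axle loads shifted by longitudinal acceleration; all parameter values are illustrative, not those of the vehicle studied.

```python
import numpy as np

m, Iz, a, b, h, g = 1500.0, 2500.0, 1.2, 1.6, 0.5, 9.81   # illustrative
B, C, D_mu = 10.0, 1.9, 0.9          # Magic Formula shape/peak factors

def tyre_fy(alpha, fz):
    """Lateral force from a pure-slip Magic Formula, scaled by axle load."""
    return D_mu * fz * np.sin(C * np.arctan(B * alpha))

def derivs(state, u, ax, delta):
    """state = [v, r] (lateral velocity, yaw rate); u: forward speed;
    ax: longitudinal acceleration; delta: steering angle."""
    v, r = state
    # Static axle loads shifted by longitudinal load transfer.
    fzf = m * (g * b - ax * h) / (a + b)
    fzr = m * (g * a + ax * h) / (a + b)
    alpha_f = delta - np.arctan2(v + a * r, u)
    alpha_r = -np.arctan2(v - b * r, u)
    fyf, fyr = tyre_fy(alpha_f, fzf), tyre_fy(alpha_r, fzr)
    v_dot = (fyf * np.cos(delta) + fyr) / m - u * r
    r_dot = (a * fyf * np.cos(delta) - b * fyr) / Iz
    return np.array([v_dot, r_dot])
```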
NASA Astrophysics Data System (ADS)
Cich, Matthew J.; Guillaume, Alexandre; Drouin, Brian; Benner, D. Chris
2017-06-01
Multispectrum analysis can be challenging for a variety of reasons. It can be computationally intensive to fit a proper line shape model, especially for high-resolution experimental data. Band-wide analyses including many transitions and their interactions, across many pressures and temperatures, are essential to accurately model, for example, atmospherically relevant systems. Labfit is a fast multispectrum analysis program originally developed by D. Chris Benner with a text-based interface. More recently, at JPL, a graphical user interface was developed with the goal of increasing both the ease of use and the number of potential users. The HTP lineshape model has been added to Labfit, keeping it up-to-date with community standards. Recent analyses using Labfit will be shown to demonstrate its ability to competently handle large experimental datasets, including high-order lineshape effects, that are otherwise unmanageable.
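The core idea of multispectrum fitting, a single parameter set constrained by several spectra simultaneously, can be sketched as follows. This is a schematic with a simple Lorentzian profile and a pressure-proportional width, not Labfit's HTP implementation; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def lorentz(nu, nu0, s, gamma):
    """Area-normalised Lorentzian line of strength s at centre nu0."""
    return s * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def residuals(params, spectra):
    """params = [nu0, S, gamma_air]; spectra = [(nu, y, pressure), ...]."""
    nu0, s, gamma_air = params
    return np.concatenate(
        [y - lorentz(nu, nu0, s, gamma_air * p) for nu, y, p in spectra])

# Synthetic example: three pressures sharing the same nu0 and S.
nu = np.linspace(-1, 1, 400)
truth = (0.02, 1.0, 0.08)
spectra = [(nu,
            lorentz(nu, truth[0], truth[1], truth[2] * p)
            + 0.002 * np.random.default_rng(i).standard_normal(nu.size),
            p)
           for i, p in enumerate((0.25, 0.5, 1.0))]
fit = least_squares(residuals, [0.0, 0.5, 0.05], args=(spectra,))
print(fit.x)   # recovers (nu0, S, gamma_air) from all spectra at once
```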
Computer-Aided Air-Traffic Control In The Terminal Area
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
1995-01-01
Developmental computer-aided system for automated management and control of arrival traffic at large airport includes three integrated subsystems: one called Traffic Management Advisor, another called Descent Advisor, and third called Final Approach Spacing Tool. Database that includes current wind measurements and mathematical models of performances of types of aircraft contributes to effective operation of system.
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
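As a flavour of the SVD-based family, the sketch below builds a proper-orthogonal-decomposition basis from snapshots of a linear system and Galerkin-projects the dynamics onto it. This is a hypothetical toy network, not a specific method from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n = 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable toy network
x0 = rng.random(n)

# Snapshot matrix from a full simulation, then SVD for the dominant modes.
sol = solve_ivp(lambda t, x: A @ x, (0, 5), x0, t_eval=np.linspace(0, 5, 100))
U, svals, _ = np.linalg.svd(sol.y, full_matrices=False)
k = np.searchsorted(np.cumsum(svals**2) / np.sum(svals**2), 0.999) + 1
V = U[:, :k]                       # modes capturing 99.9% of the energy

A_r = V.T @ A @ V                  # Galerkin-projected reduced dynamics
red = solve_ivp(lambda t, z: A_r @ z, (0, 5), V.T @ x0, t_eval=sol.t)
err = np.linalg.norm(V @ red.y - sol.y) / np.linalg.norm(sol.y)
print(k, err)                      # few modes, small reconstruction error
```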
NASA Astrophysics Data System (ADS)
Pennington, D. D.; Vincent, S.
2017-12-01
The NSF-funded project "Employing Model-Based Reasoning in Socio-Environmental Synthesis (EMBeRS)" has developed a generic model for exchanging knowledge across disciplines that is based on findings from the cognitive, learning, social, and organizational sciences addressing teamwork in complex problem solving situations. Two ten-day summer workshops for PhD students from large, NSF-funded interdisciplinary projects working on a variety of water issues were conducted in 2016 and 2017, testing the model by collecting a variety of data, including surveys, interviews, audio/video recordings, material artifacts and documents, and photographs. This presentation will introduce the EMBeRS model, the design of workshop activities based on the model, and results from surveys and interviews with the participating students. Findings suggest that this approach is very effective for developing a shared, integrated research vision across disciplines, compared with activities typically provided by most large research projects, and that students believe the skills developed in the EMBeRS workshops are unique and highly desirable.
Computational Modeling in Structural Materials Processing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in aerospace, automotive, machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.
Hybrid LES–RANS technique based on a one-equation near-wall model
NASA Astrophysics Data System (ADS)
Breuer, M.; Jaffrézic, B.; Arora, K.
2008-05-01
In order to reduce the high computational effort of wall-resolved large-eddy simulations (LES), the present paper suggests a hybrid LES–RANS approach which splits up the simulation into a near-wall RANS part and an outer LES part. Generally, RANS is adequate for attached boundary layers requiring reasonable CPU-time and memory, where LES can also be applied but demands extremely large resources. Contrarily, RANS often fails in flows with massive separation or large-scale vortical structures. Here, LES is without a doubt the best choice. The basic concept of hybrid methods is to combine the advantages of both approaches yielding a prediction method, which, on the one hand, assures reliable results for complex turbulent flows, including large-scale flow phenomena and massive separation, but, on the other hand, consumes much fewer resources than LES, especially for high Reynolds number flows encountered in technical applications. In the present study, a non-zonal hybrid technique is considered (according to the signification retained by the authors concerning the terms zonal and non-zonal), which leads to an approach where the suitable simulation technique is chosen more or less automatically. For this purpose the hybrid approach proposed relies on a unique modeling concept. In the LES mode a subgrid-scale model based on a one-equation model for the subgrid-scale turbulent kinetic energy is applied, where the length scale is defined by the filter width. For the viscosity-affected near-wall RANS mode the one-equation model proposed by Rodi et al. (J Fluids Eng 115:196–205, 1993) is used, which is based on the wall-normal velocity fluctuations as the velocity scale and algebraic relations for the length scales. Although the idea of combined LES–RANS methods is not new, a variety of open questions still have to be answered. This includes, in particular, the demand for appropriate coupling techniques between LES and RANS, adaptive control mechanisms, and proper subgrid-scale and RANS models. Here, in addition to the study on the behavior of the suggested hybrid LES–RANS approach, special emphasis is put on the investigation of suitable interface criteria and the adjustment of the RANS model. To investigate these issues, two different test cases are considered. Besides the standard plane channel flow test case, the flow over a periodic arrangement of hills is studied in detail. This test case includes a pressure-induced flow separation and subsequent reattachment. In comparison with a wall-resolved LES prediction, encouraging results are achieved.
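The unified one-equation concept can be caricatured as follows: a single eddy-viscosity expression whose length scale switches between a wall-distance-based RANS scale and the filter width in the LES region. The constants and interface criterion below are placeholders, not the paper's calibrated model.

```python
import numpy as np

C_K, KAPPA = 0.07, 0.41   # illustrative model constants

def eddy_viscosity(k, y_wall, delta, y_switch):
    """nu_t = C_K * sqrt(k) * l, given modelled turbulent kinetic energy k,
    wall distance y_wall, filter width delta, and an interface position
    y_switch separating the near-wall RANS layer from the LES region."""
    l_rans = KAPPA * y_wall                    # algebraic near-wall scale
    l_les = delta                              # filter width away from wall
    length = np.where(y_wall < y_switch, np.minimum(l_rans, l_les), l_les)
    return C_K * np.sqrt(k) * length

# Example: 64 wall-normal cells in a unit half-height channel, interface
# placed (arbitrarily here) at y = 0.05.
y = np.linspace(1e-4, 1.0, 64)
nu_t = eddy_viscosity(k=0.01 * np.ones_like(y), y_wall=y,
                      delta=np.full_like(y, 0.03), y_switch=0.05)
```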
A program for the investigation of the Multibody Modeling, Verification, and Control Laboratory
NASA Technical Reports Server (NTRS)
Tobbe, Patrick A.; Christian, Paul M.; Rakoczy, John M.; Bulter, Marlon L.
1993-01-01
The Multibody Modeling, Verification, and Control (MMVC) Laboratory is under development at NASA MSFC in Huntsville, Alabama. The laboratory will provide a facility in which dynamic tests and analyses of multibody flexible structures representative of future space systems can be conducted. The purpose of the tests is to acquire dynamic measurements of the flexible structures undergoing large angle motions and use the data to validate the multibody modeling code, TREETOPS, developed under sponsorship of NASA. Advanced control systems design and system identification methodologies will also be implemented in the MMVC laboratory. This paper describes the ground test facility, the real-time control system, and the experiments. A top-level description of the TREETOPS code is also included along with the validation plan for the MMVC program. Dynamic test results from component testing are also presented and discussed. A detailed discussion of the test articles, which manifest the properties of large flexible space structures, is included along with a discussion of the various candidate control methodologies to be applied in the laboratory.
EXPLORING THE ROLE OF SUB-MICRON-SIZED DUST GRAINS IN THE ATMOSPHERES OF RED L0–L6 DWARFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiranaka, Kay; Cruz, Kelle L.; Baldassare, Vivienne F.
We examine the hypothesis that the red near-infrared colors of some L dwarfs could be explained by a “dust haze” of small particles in their upper atmospheres. This dust haze would exist in conjunction with the clouds found in dwarfs with more typical colors. We developed a model that uses Mie theory and the Hansen particle size distributions to reproduce the extinction due to the proposed dust haze. We apply our method to 23 young L dwarfs and 23 red field L dwarfs. We constrain the properties of the dust haze including particle size distribution and column density using Markov Chain Monte Carlo methods. We find that sub-micron-range silicate grains reproduce the observed reddening. Current brown dwarf atmosphere models include large-grain (1–100 μm) dust clouds but not sub-micron dust grains. Our results provide a strong proof of concept and motivate a combination of large and small dust grains in brown dwarf atmosphere models.
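The forward model has a compact structure: weight Mie extinction efficiencies by a Hansen size distribution and integrate over grain radius. The sketch below assumes the third-party miepython package for the Mie efficiencies; the effective radius, variance, and complex refractive index shown are illustrative, not the fitted values.

```python
import numpy as np
import miepython   # assumed available; miepython.mie(m, x) returns Qext etc.

def hansen_weights(r, a_eff, b_eff):
    """Normalised Hansen distribution n(r) ~ r^((1-3b)/b) exp(-r/(a b))."""
    w = r ** ((1 - 3 * b_eff) / b_eff) * np.exp(-r / (a_eff * b_eff))
    return w / np.trapz(w, r)

def extinction_cross_section(wavelength_um, a_eff, b_eff, m=1.7 + 0.03j):
    """Distribution-averaged extinction cross-section (um^2 per grain)."""
    r = np.linspace(0.01, 2.0, 300)               # grain radii in microns
    x = 2 * np.pi * r / wavelength_um             # Mie size parameters
    qext, _, _, _ = miepython.mie(m, x)
    return np.trapz(hansen_weights(r, a_eff, b_eff) * qext * np.pi * r**2, r)

# Reddening shape: extinction falling with wavelength for sub-micron grains.
for lam in (1.2, 1.6, 2.2):                       # J, H, K bands (microns)
    print(lam, extinction_cross_section(lam, a_eff=0.3, b_eff=0.2))
```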
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
The graph-based clustering algorithms proposed improve time efficiency significantly for large-scale datasets. In the last chapter, an incremental reseeding scheme is also proposed, with applications including plume detection in hyperspectral video data.
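As background to this setting, a compact sketch of plain spectral clustering on a similarity graph is shown below; the report's large-data accelerations, such as the incremental reseeding scheme, are not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clusters(X, k, sigma=1.0):
    """Cluster rows of X into k groups via the normalised graph Laplacian."""
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma**2))  # similarity
    d = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))   # normalised Laplacian
    _, vecs = eigh(L, subset_by_index=[0, k - 1])      # k smallest eigenpairs
    emb = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(emb)
```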
NASA Astrophysics Data System (ADS)
Volo, T. J.; Vivoni, E. R.; Martin, C. A.; Wang, Z.; Ruddell, B.
2012-12-01
Over the past several decades, rapid population growth in the arid American Southwest has dramatically changed patterns of plant-available water through municipal and residential irrigation systems that provide supplemental water to designed and managed urban landscape vegetation. Urban irrigation, including diversion of rainwater and addition of imported water, has thereby enabled the transformation of areas once covered by bare soil and low water-use, native desert plant species to large tracts of exotic, high water-use turf grass and shade trees. Despite the large percentage of residential water appropriated to irrigation purposes, models of urban hydrology often fail to include the impact that this anthropogenic input has on water, energy, and biomass conditions. This study utilizes two one-dimensional soil moisture models to examine the importance of representing different processes in a quantitative urban ecohydrology model under irrigation scenarios. Such processes include sub-daily energy fluxes, vertical redistribution of soil moisture, saturation- and infiltration-excess runoff mechanisms, seasonally variable irrigation scheduling, and soil moisture control on evapotranspiration rates. The analysis is informed by soil moisture observations from an experimental sensor network in the Phoenix, Arizona metropolitan area. The network includes data from several different landscape and irrigation treatments representative of pre- and post-development conditions in the region. By interpreting soil moisture levels in terms of plant water stress, this study analyzes the effectiveness of urban irrigation practices in arid climates. Furthermore, by identifying the necessary hydrologic processes to represent in an urban ecohydrology model, our results inform future work in adapting a distributed hydrologic model to desert urban settings where irrigation plays a significant role in minimizing plant water stress. An appropriate model of water and energy balances, calibrated using local meteorological forcing, can facilitate discussions with water managers and homeowners regarding optimal irrigation frequency, volume, duration, and seasonality for individual landscapes, while also aiding in water-efficient landscape design for growing cities in desert regions.
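A single-bucket caricature of such a model, with irrigation as an additional input and soil-moisture control on evapotranspiration, is sketched below; parameters are illustrative, not calibrated to the Phoenix sites, and the study's models resolve far more process detail.

```python
import numpy as np

def bucket_model(rain, irrigation, pet, smax=150.0, s0=75.0):
    """rain, irrigation, pet: daily inputs (mm); smax: storage capacity (mm).
    ET is PET scaled linearly by relative storage; storage above smax
    becomes saturation-excess runoff."""
    s, storage, runoff = s0, [], []
    for p, irr, e in zip(rain, irrigation, pet):
        s += p + irr
        q = max(0.0, s - smax)            # saturation-excess runoff
        s -= q
        s = max(0.0, s - e * s / smax)    # soil-moisture-limited ET
        storage.append(s)
        runoff.append(q)
    return np.array(storage), np.array(runoff)

# Dry-season scenario: no rain, 5 mm/day irrigation against 8 mm/day PET.
storage, runoff = bucket_model(np.zeros(90), np.full(90, 5.0), np.full(90, 8.0))
print(storage[-1] / 150.0)   # equilibrium relative storage for this schedule
```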
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmons, N. A.; Myers, S. C.; Johannesson, G.
We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ∼2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical-tessellation-based framework, allowing for explicit representation of undulating and discontinuous layers including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes as well as numerous complex details in the upper mantle including within the transition zone. Particularly, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0–97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.
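Underlying any such tomography is a linearized travel-time inversion of the form d = Gm. A toy damped least-squares sketch with synthetic ray-path and residual data is given below; Bayesloc and the multiscale tessellation inversion are far beyond this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.random((200, 50))                    # ray lengths through 50 cells
m_true = 0.01 * rng.standard_normal(50)      # slowness perturbations (s/km)
d = G @ m_true + 0.01 * rng.standard_normal(200)   # noisy residuals (s)

lam = 1.0                                    # damping weight
m_est = np.linalg.solve(G.T @ G + lam * np.eye(50), G.T @ d)
print(np.corrcoef(m_true, m_est)[0, 1])      # recovery quality of the toy
```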
NASA Astrophysics Data System (ADS)
Seibert, S. P.; Skublics, D.; Ehret, U.
2014-09-01
The coordinated operation of reservoirs in large-scale river basins has great potential to improve flood mitigation. However, this requires large scale hydrological models to translate the effect of reservoir operation to downstream points of interest, in a quality sufficient for the iterative development of optimized operation strategies. And, of course, it requires reservoirs large enough to make a noticeable impact. In this paper, we present and discuss several methods dealing with these prerequisites for reservoir operation using the example of three major floods in the Bavarian Danube basin (45,000 km2) and nine reservoirs therein: We start by presenting an approach for multi-criteria evaluation of model performance during floods, including aspects of local sensitivity to simulation quality. Then we investigate the potential of joint hydrologic-2d-hydrodynamic modeling to improve model performance. Based on this, we evaluate upper limits of reservoir impact under idealized conditions (perfect knowledge of future rainfall) with two methods: Detailed simulations and statistical analysis of the reservoirs' specific retention volume. Finally, we investigate to what degree reservoir operation strategies optimized for local (downstream vicinity to the reservoir) and regional (at the Danube) points of interest are compatible. With respect to model evaluation, we found that the consideration of local sensitivities to simulation quality added valuable information not included in the other evaluation criteria (Nash-Sutcliffe efficiency and Peak timing). With respect to the second question, adding hydrodynamic models to the model chain did, contrary to our expectations, not improve simulations, despite the fact that under idealized conditions (using observed instead of simulated lateral inflow) the hydrodynamic models clearly outperformed the routing schemes of the hydrological models. Apparently, the advantages of hydrodynamic models could not be fully exploited when fed by output from hydrological models afflicted with systematic errors in volume and timing. This effect could potentially be reduced by joint calibration of the hydrological-hydrodynamic model chain. Finally, based on the combination of the simulation-based and statistical impact assessment, we identified one reservoir potentially useful for coordinated, regional flood mitigation for the Danube. While this finding is specific to our test basin, the more interesting and generally valid finding is that operation strategies optimized for local and regional flood mitigation are not necessarily mutually exclusive, sometimes they are identical, sometimes they can, due to temporal offsets, be pursued simultaneously.
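The two standard criteria named above are easily computed; a minimal sketch follows (the paper's local-sensitivity weighting is not reproduced).

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def peak_timing_error(obs, sim, dt_hours=1.0):
    """Offset (hours) between simulated and observed flood peaks."""
    return (np.argmax(sim) - np.argmax(obs)) * dt_hours

t = np.arange(200.)
obs = np.exp(-0.5 * ((t - 100) / 12) ** 2)        # synthetic flood wave
sim = 0.9 * np.exp(-0.5 * ((t - 106) / 14) ** 2)  # damped, delayed model run
print(nash_sutcliffe(obs, sim), peak_timing_error(obs, sim))
```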
Hydro-economic modelling in mining catchments
NASA Astrophysics Data System (ADS)
Ossa Moreno, J. S.; McIntyre, N.; Rivera, D.; Smart, J. C. R.
2017-12-01
Hydro-economic models are gaining momentum because of their capacity to model both the physical processes related to water supply and the socio-economic factors determining water demand. This is particularly valuable amid the large uncertainty about future climate conditions and social trends. Agriculture, urban uses and environmental flows have received much attention from researchers, as these tend to be the main consumers of water in most catchments. Mine water demand, although very important in several small and medium-sized catchments worldwide, has received less attention, and few models have attempted to reproduce its dynamics alongside other users. This paper describes an on-going project that addresses this gap by developing a hydro-economic model of the upper Aconcagua River in Chile. This is a mountain catchment with large-scale mining and hydro-power users at high altitudes, and irrigation areas in a downstream valley. Relevant obstacles to the model included the lack of input climate data, which is a common feature of many mining areas, the complex hydrological processes in the area, and the difficulty of quantifying the value of water used by mines. A semi-distributed model developed within the Water Evaluation and Planning System (WEAP) was calibrated to reproduce water supply, and this was complemented with an analysis of the value of water for mining based on two methods: water markets and an analysis of mining production processes. Agriculture and other users were included through methods commonly used in similar models. The outputs help in understanding the value of water in the catchment and its sensitivity to changes in climate variables, market prices, environmental regulations, and the production of minerals, crops and energy. The results of the project highlight the importance of merging hydrological and socio-economic calculations in mining regions, in order to better understand the trade-offs and opportunity cost of using water for an economic activity with high revenues, an aversion to water risks, and potentially large catchment impacts.
Rotation and magnetism in intermediate-mass stars
NASA Astrophysics Data System (ADS)
Quentin, Léo G.; Tout, Christopher A.
2018-06-01
Rotation and magnetism are increasingly recognized as important phenomena in stellar evolution. Surface magnetic fields from a few to 20 000 G have been observed, and models have suggested that magnetohydrodynamic transport of angular momentum and chemical composition could explain the peculiar composition of some stars. Stellar remnants such as white dwarfs have been observed with fields from a few to more than 10⁹ G. We investigate the origin and evolution, on thermal and nuclear rather than dynamical time-scales, of an averaged large-scale magnetic field throughout a star's life, and its coupling to stellar rotation. Large-scale magnetic fields sustained until late stages of stellar evolution with conservation of magnetic flux could explain the very high fields observed in white dwarfs. We include these effects in the Cambridge stellar evolution code using three time-dependent advection-diffusion equations, coupled to the structural and composition equations of stars, to model the evolution of angular momentum and the two components of the magnetic field. We present the evolution in various cases for a 3 M⊙ star from the beginning to the late stages of its life. Our particular model assumes that turbulent motions, including convection, favour small-scale field at the expense of large-scale field. As a result, the large-scale field concentrates in radiative zones of the star and so is exchanged between the core and the envelope of the star as it evolves. The field is sustained until the end of the asymptotic giant branch, when it concentrates in the degenerate core.
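The transport equations referred to are of advection-diffusion type; a schematic explicit update for one field component on a radial grid is sketched below, purely to show the form of the terms. The stellar-structure coupling, source terms, and the code's actual time-stepping are omitted, and all names are illustrative.

```python
import numpy as np

def advect_diffuse_step(b, u, d, dr, dt):
    """One explicit step of db/dt = -d(u b)/dr + d/dr(d db/dr) on a uniform
    radial grid, with crude zero-gradient boundaries."""
    flux = u * b - d * np.gradient(b, dr)    # advective plus diffusive flux
    b_new = b - dt * np.gradient(flux, dr)
    b_new[0], b_new[-1] = b_new[1], b_new[-2]
    return b_new

# Example: a field pulse advected inward while diffusing.
r = np.linspace(0.1, 1.0, 200)
b = np.exp(-((r - 0.6) / 0.05) ** 2)
for _ in range(500):
    b = advect_diffuse_step(b, u=-0.05, d=1e-4, dr=r[1] - r[0], dt=1e-3)
```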