Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
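As a rough sketch of the projection step described above, a POD basis can be extracted from snapshots of the full-scale state and used to compress a linear operator. The snapshot data, dimensions, and truncation rank below are invented placeholders, not values from the paper:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Return the r leading left singular vectors of a snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduce_operator(A, V):
    """Galerkin-project a full-order operator A onto the POD subspace V."""
    return V.T @ A @ V  # r x r reduced operator

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))   # 100 states, 20 snapshots (stand-in data)
V = pod_basis(X, 5)                  # orthonormal 100 x 5 basis
A = rng.standard_normal((100, 100))  # stand-in full-scale operator
A_r = reduce_operator(A, V)          # 5 x 5 reduced operator
```

In the paper's setting the nonlinear radiative terms additionally require DEIM or TPWL; the sketch above covers only the linear projection idea.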
Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
A Reduced Order Model for Whole-Chip Thermal Analysis of Microfluidic Lab-on-a-Chip Systems
Wang, Yi; Song, Hongjun; Pant, Kapil
2013-01-01
This paper presents a Krylov subspace projection-based Reduced Order Model (ROM) for whole microfluidic chip thermal analysis, including conjugate heat transfer. Two key steps in the reduced order modeling procedure are described in detail, including (1) the acquisition of a 3D full-scale computational model in the state-space form to capture the dynamic thermal behavior of the entire microfluidic chip; and (2) the model order reduction using the Block Arnoldi algorithm to markedly lower the dimension of the full-scale model. Case studies using a practically relevant thermal microfluidic chip are undertaken to establish the capability and to evaluate the computational performance of the reduced order modeling technique. The ROM is compared against the full-scale model and exhibits good agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) and over three orders-of-magnitude acceleration in computational speed. The salient model reusability and real-time simulation capability render it amenable to operational optimization and in-line thermal control and management of microfluidic systems and devices.
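A minimal sketch of the Block Arnoldi step named above, assuming a generic state matrix and input matrix (the sizes and random data are placeholders, not the chip model):

```python
import numpy as np

def block_arnoldi(A, B, m):
    """Orthonormal basis for the block Krylov space span{B, AB, ..., A^(m-1)B}."""
    Q, _ = np.linalg.qr(B)
    blocks = [Q]
    for _ in range(m - 1):
        W = A @ blocks[-1]
        for _ in range(2):               # two Gram-Schmidt passes for stability
            for Qi in blocks:
                W = W - Qi @ (Qi.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))        # stand-in state matrix
B = rng.standard_normal((50, 2))         # stand-in input matrix
V = block_arnoldi(A, B, 4)               # 50 x 8 projection basis
A_r = V.T @ A @ V                        # 8 x 8 reduced operator
```

The basis matches moments of the transfer function around an expansion point; here the expansion shift is omitted for brevity.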
Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G
2010-09-14
Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect of diatomic differential overlap approximation, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
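As a usage sketch, a scale factor is applied multiplicatively, and a universal ratio converts a factor optimized for one property into one for another. The frequencies and the ratio below are invented placeholders, not the optimized values tabulated in the paper:

```python
def scale_frequencies(freqs, factor):
    """Apply a multiplicative frequency scale factor (values in cm^-1)."""
    return [factor * f for f in freqs]

def convert_factor(factor, ratio):
    """Interconvert scale factors via a universal scale factor ratio."""
    return factor * ratio

harmonic = [3100.0, 1650.0, 1595.0]            # cm^-1, invented harmonic values
scaled = scale_frequencies(harmonic, 0.986)    # e.g. a B3LYP-like ZPE factor
fundamental_factor = convert_factor(0.986, 0.974)  # ratio is an assumption
```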
NASA Astrophysics Data System (ADS)
Herath, Narmada; Del Vecchio, Domitilla
2018-03-01
Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
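A toy deterministic caricature of the time-scale separation being exploited (not the LNA itself): when the fast species equilibrates quickly, it can be slaved to the slow variable through its quasi-steady state. The rate constants are invented for illustration:

```python
def qssa_fast(slow, k_on=10.0, k_off=100.0):
    """Quasi-steady state of a fast species: production k_on*slow balances decay k_off*fast."""
    return k_on * slow / k_off

# For slow = 2.0, the fast variable tracks its quasi-steady value 10*2/100
fast = qssa_fast(2.0)
```

The paper's contribution is the stochastic analogue: proving that the first and second moments of such a reduced description converge to those of the full LNA as the separation grows.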
Sun, Jianyu; Liang, Peng; Yan, Xiaoxu; Zuo, Kuichang; Xiao, Kang; Xia, Junlin; Qiu, Yong; Wu, Qing; Wu, Shijia; Huang, Xia; Qi, Meng; Wen, Xianghua
2016-04-15
Reducing the energy consumption of membrane bioreactors (MBRs) is highly important for their wider application in wastewater treatment engineering. Of particular significance is reducing aeration in aerobic tanks to reduce the overall energy consumption. This study proposed an in situ ammonia-N-based feedback control strategy for aeration in aerobic tanks; this was tested via model simulation and through a large-scale (50,000 m³/d) engineering application. A full-scale MBR model was developed based on the activated sludge model (ASM) and was calibrated to the actual MBR. The aeration control strategy took the form of a two-step cascaded proportional-integral (PI) feedback algorithm. Algorithmic parameters were optimized via model simulation. The strategy achieved real-time adjustment of aeration amounts based on feedback from effluent quality (i.e., ammonia-N). The effectiveness of the strategy was evaluated through both the model platform and the full-scale engineering application. In the former, the aeration flow rate was reduced by 15-20%. In the engineering application, the aeration flow rate was reduced by 20%, and the overall specific energy consumption was correspondingly reduced by 4% to 0.45 kWh/m³ effluent, using the present practice of regulating the angle of the guide vanes of fixed-frequency blowers. Potential energy savings are expected to be higher for MBRs with variable-frequency blowers. This study indicated that the ammonia-N-based aeration control strategy holds promise for application in full-scale MBRs.
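A minimal sketch of the feedback idea, assuming invented gains and a toy first-order ammonia plant (the actual controller is a two-step cascade tuned against the calibrated ASM model, not this simplification):

```python
class PIController:
    """Discrete PI feedback: aeration increases when ammonia-N exceeds its setpoint."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, measurement, setpoint):
        error = measurement - setpoint          # positive error -> more air
        self.integral += error * self.dt
        return max(0.0, self.kp * error + self.ki * self.integral)

pi = PIController(kp=2.0, ki=0.5, dt=0.1)
nh4, setpoint, inflow, k = 8.0, 2.0, 1.0, 0.5   # mg/L and toy rate constants
for _ in range(2000):
    air = pi.step(nh4, setpoint)
    nh4 += 0.1 * (inflow - k * air * nh4)       # toy plant: air strips ammonia
# nh4 settles near the 2.0 mg/L setpoint; air settles near its steady demand
```

The integral term drives the steady-state ammonia error to zero, which is what allows the aeration rate to float down to the minimum needed for the current load.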
Austin, Åsa N.; Hansen, Joakim P.; Donadi, Serena; Eklöf, Johan S.
2017-01-01
Field surveys often show that high water turbidity limits cover of aquatic vegetation, while many small-scale experiments show that vegetation can reduce turbidity by decreasing water flow, stabilizing sediments, and competing with phytoplankton for nutrients. Here we bridged these two views by exploring the direction and strength of causal relationships between aquatic vegetation and turbidity across seasons (spring and late summer) and spatial scales (local and regional), using causal modeling based on data from a field survey along the central Swedish Baltic Sea coast. The two best-fitting regional-scale models both suggested that in spring, high cover of vegetation reduces water turbidity. In summer, the relationships differed between the two models; in the first model high vegetation cover reduced turbidity; while in the second model reduction of summer turbidity by high vegetation cover in spring had a positive effect on summer vegetation which suggests a positive feedback of vegetation on itself. Nitrogen load had a positive effect on turbidity in both seasons, which was comparable in strength to the effect of vegetation on turbidity. To assess whether the effect of vegetation was primarily caused by sediment stabilization or a reduction of phytoplankton, we also tested models where turbidity was replaced by phytoplankton fluorescence or sediment-driven turbidity. The best-fitting regional-scale models suggested that high sediment-driven turbidity in spring reduces vegetation cover in summer, which in turn has a negative effect on sediment-driven turbidity in summer, indicating a potential positive feedback of sediment-driven turbidity on itself. Using data at the local scale, few relationships were significant, likely due to the influence of unmeasured variables and/or spatial heterogeneity. 
In summary, causal modeling based on data from a large-scale field survey suggested that aquatic vegetation can reduce turbidity at regional scales, and that high vegetation cover vs. high sediment-driven turbidity may represent two self-enhancing, alternative states of shallow bay ecosystems.
Experimental and analytical studies of advanced air cushion landing systems
NASA Technical Reports Server (NTRS)
Lee, E. G. S.; Boghani, A. B.; Captain, K. M.; Rutishauser, H. J.; Farley, H. L.; Fish, R. B.; Jeffcoat, R. L.
1981-01-01
Several concepts are developed for air cushion landing systems (ACLS) which have the potential for improving performance characteristics (roll stiffness, heave damping, and trunk flutter) and reducing fabrication cost and complexity. After an initial screening, the following five concepts were evaluated in detail: damped trunk, filled trunk, compartmented trunk, segmented trunk, and roll feedback control. The evaluation was based on tests performed on scale models. An ACLS dynamic simulation developed earlier was updated so that it can be used to predict the performance of full-scale ACLS incorporating these refinements. The simulation was validated through scale-model tests. A full-scale ACLS based on the segmented trunk concept was fabricated and installed on the NASA ACLS test vehicle, where it is used to support advanced system development. A geometrically scaled model (one-third full scale) of the NASA test vehicle was fabricated and tested. This model, evaluated by means of a series of static and dynamic tests, is used to investigate scaling relationships between reduced- and full-scale models. The analytical model developed earlier is applied to simulate both the one-third-scale and the full-scale responses.
Land-Atmosphere Coupling in the Multi-Scale Modelling Framework
NASA Astrophysics Data System (ADS)
Kraus, P. M.; Denning, S.
2015-12-01
The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land-surface model instances, rather than passing averaged atmospheric variables to a single instance of a land-surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface-temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM (~4 km), compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be treated as an immediate flow to the ocean. 
Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much-reduced conceptual gap between model resolution and parameterized processes.
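The direct/diffuse apportionment described above can be sketched as a simple function of cloud fraction; the linear form and its coefficients are invented placeholders, not the parameterization actually used in the model:

```python
def split_radiation(total_sw, cloud_fraction):
    """Partition surface shortwave (W/m^2) into direct-beam and diffuse parts.

    The diffuse fraction grows with cloud cover; the 0.2/0.7 coefficients
    are assumptions for illustration only.
    """
    diffuse_frac = 0.2 + 0.7 * cloud_fraction
    diffuse = total_sw * diffuse_frac
    return total_sw - diffuse, diffuse

direct, diffuse = split_radiation(800.0, 0.5)   # half-cloudy example
```

In the MMF context, the diffuse part would then be spread across the land-surface instances in the GCM cell rather than left concentrated under individual CRM columns.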
The regionalization of national-scale SPARROW models for stream nutrients
Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
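The cross-region constraint can be caricatured as shrinkage of each regional coefficient toward the national estimate; the coefficients and the pooling weight below are invented, and the actual hybrid model imposes its constraints within the SPARROW estimation rather than as a post-hoc average:

```python
import numpy as np

def pooled_estimate(regional, national, weight):
    """Shrink regional coefficients toward the national value.

    weight = 0 returns the independent regional estimates;
    weight = 1 collapses to the national model.
    """
    regional = np.asarray(regional, dtype=float)
    return (1 - weight) * regional + weight * national

coefs = pooled_estimate([1.4, 0.6, 1.1], national=1.0, weight=0.5)
```

Pooling of this kind trades a little regional fit for much smaller coefficient variance, which is the effect the abstract reports.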
Demonstration of reduced-order urban scale building energy models
Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...
2017-09-08
The aim of this study is to demonstrate a developed framework to rapidly create urban-scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exteriors and thermal zones. These urban-scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. The OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess the complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify the complexity of building energy models. Consequently, this study empowers building energy modelers to quantify their building energy models systematically in order to report model complexity alongside model accuracy. An exhaustive analysis of four university campuses suggests that urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this developed framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified data exchange. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.
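In the reduced-order spirit of the study, a single thermal zone can be caricatured as a lumped 1R1C model; the resistance, capacitance, weather, and internal-gain values below are invented, not drawn from the paper's campus models:

```python
def step_zone_temp(t_in, t_out, q_internal, r, c, dt):
    """One explicit Euler step of C dT/dt = (T_out - T_in)/R + Q.

    t_in, t_out in degC; q_internal in W; r in K/W; c in J/K; dt in s.
    """
    return t_in + dt * ((t_out - t_in) / r + q_internal) / c

t = 20.0                                 # initial zone temperature, degC
for _ in range(10):                      # ten one-minute steps
    t = step_zone_temp(t, t_out=30.0, q_internal=100.0,
                       r=0.05, c=1.0e4, dt=60.0)
# t relaxes toward the equilibrium t_out + Q*R = 35 degC
```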
Martin, Natasha K.; Skaathun, Britt; Vickerman, Peter; Stuart, David
2017-01-01
Background: People who inject drugs (PWID) and HIV-infected men who have sex with men (MSM) are key risk groups for hepatitis C virus (HCV) transmission. Mathematical modeling studies can help elucidate what level and combination of prevention intervention scale-up is required to control or eliminate epidemics among these key populations. Methods: We discuss the evidence surrounding HCV prevention interventions and provide an overview of the mathematical modeling literature projecting the impact of scaled-up HCV prevention among PWID and HIV-infected MSM. Results: Harm reduction interventions such as opiate substitution therapy and needle and syringe programs are effective in reducing HCV incidence among PWID. Modeling and limited empirical data indicate HCV treatment could additionally be used for prevention. No studies have evaluated the effectiveness of behavior change interventions to reduce HCV incidence among MSM, but existing interventions to reduce HIV risk could be effective. Mathematical modeling and empirical data indicate that scale-up of harm reduction could reduce HCV transmission, but in isolation is unlikely to eliminate HCV among PWID. By contrast, elimination is possibly achievable through combination scale-up of harm reduction and HCV treatment. Similarly, among HIV-infected MSM, eliminating the emerging epidemics will likely require HCV treatment scale-up in combination with additional interventions to reduce HCV-related risk behaviors. Conclusions: Elimination of HCV will likely require combination prevention efforts among both PWID and HIV-infected MSM populations. Further empirical research is required to validate HCV treatment as prevention among these populations, and to identify effective behavioral interventions to reduce HCV incidence among MSM.
Polarizable molecular interactions in condensed phase and their equivalent nonpolarizable models.
Leontyev, Igor V; Stuchebrukhov, Alexei A
2014-07-07
Earlier, using a phenomenological approach, we showed that in some cases polarizable models of condensed-phase systems can be reduced to nonpolarizable equivalent models with scaled charges. Examples of such systems include ionic liquids, TIPnP-type models of water, protein force fields, and others, where interactions and dynamics of inherently polarizable species can be accurately described by nonpolarizable models. To describe electrostatic interactions, the effective charges of simple ionic liquids are obtained by scaling the actual charges of ions by a factor of 1/√(ε_el), which is due to the electronic polarization screening effect; the scaling factor for neutral species is more complicated. Here, using several theoretical models, we examine how exactly the scaling factors appear in theory, and how, and under what conditions, polarizable Hamiltonians are reduced to nonpolarizable ones. These models allow one to trace the origin of the scaling factors, determine their values, and obtain important insights into the nature of polarizable interactions in condensed-matter systems.
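The ionic scaling rule can be sketched directly; ε_el ≈ 2 is a typical electronic (high-frequency) dielectric value, used here purely as an illustration:

```python
import math

def scaled_charge(q, eps_el=2.0):
    """Effective charge for a nonpolarizable model: q / sqrt(eps_el)."""
    return q / math.sqrt(eps_el)

q_eff = scaled_charge(1.0)        # a +1 ion
# Coulomb energies built from scaled charges pick up the 1/eps_el screening:
screening = q_eff ** 2            # = 1/eps_el = 0.5 for a pair of unit charges
```

Because pairwise Coulomb energies are quadratic in charge, scaling each charge by 1/√(ε_el) is equivalent to screening every ion-ion interaction by ε_el, which is the electronic-polarization effect the abstract refers to.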
Influence of a large-scale field on energy dissipation in magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Zhdankin, Vladimir; Boldyrev, Stanislav; Mason, Joanne
2017-07-01
In magnetohydrodynamic (MHD) turbulence, the large-scale magnetic field sets a preferred local direction for the small-scale dynamics, altering the statistics of turbulence from the isotropic case. This happens even in the absence of a total magnetic flux, since MHD turbulence forms randomly oriented large-scale domains of strong magnetic field. It is therefore customary to study small-scale magnetic plasma turbulence by assuming a strong background magnetic field relative to the turbulent fluctuations. This is done, for example, in reduced models of plasmas, such as reduced MHD, reduced-dimension kinetic models, gyrokinetics, etc., which make theoretical calculations easier and numerical computations cheaper. Recently, however, it has become clear that the turbulent energy dissipation is concentrated in the regions of strong magnetic field variations. A significant fraction of the energy dissipation may be localized in very small volumes corresponding to the boundaries between strongly magnetized domains. In these regions, the reduced models are not applicable. This has important implications for studies of particle heating and acceleration in magnetic plasma turbulence. The goal of this work is to systematically investigate the relationship between local magnetic field variations and magnetic energy dissipation, and to understand its implications for modelling energy dissipation in realistic turbulent plasmas.
Active Learning of Classification Models with Likert-Scale Feedback.
Xue, Yanbing; Hauskrecht, Milos
2017-01-01
Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
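A hypothetical sketch of picking the next example to annotate: combine model uncertainty (here, closeness of a decision score to the boundary) with a Likert-derived confidence weight from earlier feedback. The scoring rule and weights are invented, not the paper's Bayesian expected-change criterion:

```python
def next_query(scores, likert_weights):
    """Return the index with the highest weighted uncertainty.

    scores: |decision value| per unlabeled example (smaller = less certain)
    likert_weights: confidence weights in [0, 1] from prior Likert feedback
    """
    uncertainties = [(1.0 - s) * w for s, w in zip(scores, likert_weights)]
    return max(range(len(scores)), key=lambda i: uncertainties[i])

# Example 1 is both near the boundary and backed by confident feedback
idx = next_query([0.9, 0.2, 0.5], [1.0, 1.0, 0.5])
```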
Impact of reduced scale free network on wireless sensor network
NASA Astrophysics Data System (ADS)
Keshri, Neha; Gupta, Anurag; Mishra, Bimal Kumar
2016-12-01
In a heterogeneous wireless sensor network (WSN), each data packet traverses multiple hops over a restricted communication range before it reaches the sink. The amount of energy required to transmit a data packet is directly proportional to the number of hops. Balancing the energy costs across the entire network and enhancing its robustness in order to improve the lifetime of the WSN has become a key issue for researchers. Due to the high dimensionality of an epidemic model of a WSN over a general scale-free network, it is quite difficult to study the network dynamics closely. To overcome this complexity, we simplify a general scale-free network by partitioning all of its motes into two classes, higher-degree motes and lower-degree motes, and equating the degrees of all motes within each class, yielding a reduced scale-free network. We develop an epidemic model of a WSN based on this reduced scale-free network. The existence of a unique positive equilibrium is determined under some restrictions, and the stability of the system is proved. Furthermore, simulation results show that the improvements made in this paper give the entire network better robustness to network failure and more balanced energy costs. This reduced model based on scale-free network theory proves more applicable to WSN research.
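The two-class reduction can be sketched as a degree partition in which each class is represented by a single effective degree; the degree list and threshold are invented for illustration:

```python
def partition_degrees(degrees, threshold):
    """Split mote degrees into high/low classes and return each class's mean degree."""
    high = [d for d in degrees if d >= threshold]
    low = [d for d in degrees if d < threshold]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(high), mean(low)

# A few hub motes and many low-degree motes, as in a scale-free topology
k_high, k_low = partition_degrees([1, 2, 2, 3, 8, 12], threshold=5)
```

The epidemic dynamics are then written over just these two effective degrees instead of the full degree distribution, which is what collapses the model's dimensionality.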
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in a small computing environment, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
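Capturing the dominant input/output relationships of a code can be sketched with randomized probing followed by an SVD of the responses; the "code" here is a stand-in linear map, an assumption purely for illustration:

```python
import numpy as np

def dominant_subspace(apply_model, n_inputs, n_samples, rank, rng):
    """Estimate the dominant output subspace of a black-box model by random probing."""
    probes = rng.standard_normal((n_inputs, n_samples))
    responses = np.column_stack([apply_model(p) for p in probes.T])
    U, _, _ = np.linalg.svd(responses, full_matrices=False)
    return U[:, :rank]

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 40))            # stand-in for an expensive code
U = dominant_subspace(lambda x: A @ x, n_inputs=40, n_samples=20, rank=5, rng=rng)
```

For a coupled code system, each component code would be probed through the composition, so the reduced subspaces respect the couplings rather than each physics in isolation.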
The Regionalization of National-Scale SPARROW Models for Stream Nutrients
Schwarz, G.E.; Alexander, R.B.; Smith, R.A.; Preston, S.D.
2011-01-01
This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.
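The cross-region constraint can be illustrated with a toy penalized least-squares fit that shrinks regional coefficients toward a national estimate. This is a minimal sketch of the pooling idea only; SPARROW's actual nonlinear specification and estimator are more involved, and every number below is invented.

```python
# Hedged sketch of a "hybrid" fit: regional regression coefficients shrunk
# toward a national estimate by a ridge-style cross-region penalty.
import numpy as np

rng = np.random.default_rng(1)
b_national = np.array([1.0, -0.5, 2.0])       # invented national coefficients

def hybrid_fit(X, y, b_nat, lam):
    """Minimize ||y - X b||^2 + lam * ||b - b_nat||^2 in closed form."""
    A = X.T @ X + lam * np.eye(len(b_nat))
    return np.linalg.solve(A, X.T @ y + lam * b_nat)

# Synthetic regional data whose true coefficients deviate from the national ones.
X = rng.standard_normal((40, 3))
b_region_true = b_national + np.array([0.3, 0.0, -0.2])
y = X @ b_region_true + 0.1 * rng.standard_normal(40)

b_free = hybrid_fit(X, y, b_national, lam=0.0)     # independent regional fit
b_hybrid = hybrid_fit(X, y, b_national, lam=50.0)  # constrained hybrid fit
print(b_free, b_hybrid)   # the hybrid sits between the free fit and the national one
```

The penalty weight plays the role of the cross-region constraint: as `lam` grows, regional coefficients are pulled toward the national values, trading regional flexibility for precision.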
NASA Astrophysics Data System (ADS)
Sandu, Mihnea; Nastase, Ilinca; Bode, Florin; Croitoru, Cristiana Verona; Tacutu, Laurentiu
2018-02-01
The paper focuses on the air quality inside the Crew Quarters (CQ) on board the International Space Station. Several issues to improve were recorded by NASA and ESA, the most important of which are the following: noise level reduction, CO2 accumulation reduction and dust accumulation reduction. The study in this paper is centred on a reduced-scale model used to provide simulations of the air diffusion inside the CQ. It is obvious that a new ventilation system is required to address the three issues mentioned above, and the solutions obtained by means of numerical simulation need to be validated by an experimental approach. First of all, we built a reduced-scale physical model to simulate the flow pattern inside the CQ, and the equipment inside the CQ was reproduced using a geometrical scale ratio. The flow was considered isothermal and incompressible. The similarity criterion used to characterize the flow pattern was the Reynolds number, and the length scale was set to 1/4. Water was used inside the model to simulate air. Velocity magnitude vectors were obtained using PIV measurement techniques.
Fire Detection Organizing Questions
NASA Technical Reports Server (NTRS)
2004-01-01
Verified models of fire precursor transport in low and partial gravity: a. Development of models for large-scale transport in reduced gravity. b. Validated CFD simulations of transport of fire precursors. c. Evaluation of the effect of scale on transport and reduced gravity fires. Advanced fire detection system for gaseous and particulate pre-fire and fire signatures: a. Quantification of pre-fire pyrolysis products in microgravity. b. Suite of gas and particulate sensors. c. Reduced gravity evaluation of candidate detector technologies. d. Reduced gravity verification of advanced fire detection system. e. Validated database of fire and pre-fire signatures in low and partial gravity.
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which is aimed at reducing the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single-particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given, and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
Rees, Erin E; Petukhova, Tatiana; Mascarenhas, Mariola; Pelcat, Yann; Ogden, Nicholas H
2018-05-08
Zika virus (ZIKV) spread rapidly in the Americas in 2015. Targeting effective public health interventions for inhabitants of, and travellers to and from, affected countries depends on understanding the risk of ZIKV emergence (and re-emergence) at the local scale. We explore the extent to which environmental, social and neighbourhood disease intensity variables influenced emergence dynamics. Our objective was to characterise population vulnerability given the potential for sustained autochthonous ZIKV transmission and the timing of emergence. Logistic regression models estimated the probability of reporting at least one case of ZIKV in a given municipality over the course of the study period as an indicator of sustained transmission, while accelerated failure time (AFT) survival models estimated the time to a first reported case of ZIKV in week t for a given municipality as an indicator of the timing of emergence. Sustained autochthonous ZIKV transmission was best described at the temporal scale of the study period (almost one year), such that high levels of study-period precipitation and low mean study-period temperature reduced the probability. Timing of ZIKV emergence was best described at the weekly scale for precipitation, in that high precipitation in the current week delayed reporting. Both modelling approaches detected an effect of high poverty in reducing/slowing case detection, especially when inter-municipal road connectivity was low. We also found that proximity to municipalities reporting ZIKV reduced the time to emergence when they were located, on average, less than 100 km away. The different modelling approaches help distinguish between large temporal scale factors driving vector habitat suitability and short temporal scale factors affecting the speed of spread. We find evidence for inter-municipal movements of infected people as a local-scale driver of spatial spread.
The negative association with poverty suggests reduced case reporting in poorer areas. Overall, relatively simplistic models may be able to predict the vulnerability of populations to autochthonous ZIKV transmission at the local scale.
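A minimal sketch of the two model forms named above: a logistic model for whether a municipality ever reports ZIKV, and a log-linear (AFT-style) model for the week of first report. Every covariate and coefficient here is an invented placeholder, chosen only so the qualitative signs match the abstract.

```python
# Hedged sketch of the two model forms; coefficients are invented for illustration.
import numpy as np

def logistic_prob(x, beta):
    """P(at least one case over the study period | covariates x)."""
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

def aft_median_time(x, gamma, t0=10.0):
    """Median weeks to first case in a log-linear AFT form:
    log T = log t0 + x @ gamma, so positive coefficients delay emergence."""
    return t0 * np.exp(x @ gamma)

# Invented covariates: [precipitation anomaly, temperature anomaly, poverty].
beta = np.array([-1.2, 0.8, -0.6])    # high precip / low temp reduce probability
gamma = np.array([0.5, -0.3, 0.4])    # high weekly precip / poverty delay reporting

x_wet_poor = np.array([1.0, 0.0, 1.0])
x_dry_warm = np.array([0.0, 1.0, 0.0])
p_wet, t_wet = logistic_prob(x_wet_poor, beta), aft_median_time(x_wet_poor, gamma)
p_dry, t_dry = logistic_prob(x_dry_warm, beta), aft_median_time(x_dry_warm, gamma)
print(p_wet, t_wet)   # lower probability, later emergence
print(p_dry, t_dry)   # higher probability, earlier emergence
```

The two forms answer complementary questions, as in the study: the logistic model says *whether* sustained transmission is likely, the AFT model says *when* a first case would be reported.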
Mark Morrison; Thomas C. Brown
2009-01-01
Stated preference methods such as contingent valuation and choice modeling are subject to various biases that may lead to differences between actual and hypothetical willingness to pay. Cheap talk, follow-up certainty scales, and dissonance minimization are three techniques for reducing this hypothetical bias. Cheap talk and certainty scales have received considerable...
Modal resonant dynamics of cables with a flexible support: A modulated diffraction problem
NASA Astrophysics Data System (ADS)
Guo, Tieding; Kang, Houjun; Wang, Lianhua; Liu, Qijian; Zhao, Yueyu
2018-06-01
Modal resonant dynamics of cables with a flexible support is defined as a modulated (wave) diffraction problem, and investigated by asymptotic expansions of the cable-support coupled system. The support-cable mass ratio, which is usually very large, turns out to be the key parameter for characterizing cable-support dynamic interactions. By treating the mass ratio's inverse as a small perturbation parameter and scaling the cable tension properly, both cable's modal resonant dynamics and the flexible support dynamics are asymptotically reduced by using multiple scale expansions, leading finally to a reduced cable-support coupled model (i.e., on a slow time scale). After numerical validations of the reduced coupled model, cable-support coupled responses and the flexible support induced coupling effects on the cable, are both fully investigated, based upon the reduced model. More explicitly, the dynamic effects on the cable's nonlinear frequency and force responses, caused by the support-cable mass ratio, the resonant detuning parameter and the support damping, are carefully evaluated.
Petri Net controller synthesis based on decomposed manufacturing models.
Dideban, Abbas; Zeraatkar, Hashem
2018-06-01
Applying supervisory control theory to real systems in modeling tools such as Petri nets (PN) has become challenging in recent years due to the large number of states in the automata models and the presence of uncontrollable events. The uncontrollable events give rise to forbidden states, which may be removed by enforcing linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system is difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is decomposed into smaller models, which reduces the computational cost significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. The results of applying our proposed method to PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other.
Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
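"Reduce then sample" can be sketched in one dimension: a Metropolis chain evaluates only a cheap surrogate of the forward model inside the sampling loop. This is a toy stand-in under stated assumptions, not the SAGUARO codes; the forward models and priors are invented.

```python
# Hedged sketch of "reduce then sample": Metropolis sampling where the expensive
# forward model is replaced in the loop by a cheap reduced surrogate.
import numpy as np

rng = np.random.default_rng(2)

def full_forward(theta):
    """Stand-in for an expensive forward simulation."""
    return np.sin(theta) + 0.1 * theta**2

def reduced_forward(theta):
    """Cheap surrogate, faithful where the posterior concentrates
    (here: a truncated Taylor series of the 'full' model)."""
    return theta - theta**3 / 6.0 + 0.1 * theta**2

y_obs, sigma = 0.5, 0.1

def log_post(theta, forward):
    """Gaussian likelihood around y_obs plus a standard-normal prior."""
    return -0.5 * ((forward(theta) - y_obs) / sigma) ** 2 - 0.5 * theta**2

theta, chain = 0.0, []
lp = log_post(theta, reduced_forward)
for _ in range(5000):
    prop = theta + 0.5 * rng.standard_normal()
    lp_prop = log_post(prop, reduced_forward)   # only the cheap model in the loop
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post_mean = np.mean(chain[1000:])
print(post_mean)   # near the parameter value whose forward output matches y_obs
```

"Sample then reduce" would instead keep `full_forward` and exploit gradient/Hessian structure to shrink the number of samples needed; the sketch only illustrates the first strategy.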
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Cytoskeletal dynamics in fission yeast: a review of models for polarization and division
Drake, Tyler; Vavylonis, Dimitrios
2010-01-01
We review modeling studies concerning cytoskeletal activity of fission yeast. Recent models vary in length and time scales, describing a range of phenomena from cellular morphogenesis to polymer assembly. The components of cytoskeleton act in concert to mediate cell-scale events and interactions such as polarization. The mathematical models reduce these events and interactions to their essential ingredients, describing the cytoskeleton by its bulk properties. On a smaller scale, models describe cytoskeletal subcomponents and how bulk properties emerge. PMID:21119765
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-09-14
Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure; volume fraction, size and morphology of phase constituents; and stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy was therefore chosen in this project.
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large-scale tests to multi-scale experiments to provide material models with validation at different length scales. In the subsequent years, industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multi-scale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
Revealing modified gravity signals in matter and halo hierarchical clustering
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Koyama, Kazuya; Bose, Benjamin; Zhao, Gong-Bo
2017-07-01
We use a set of N-body simulations employing a modified gravity (MG) model with Vainshtein screening to study matter and halo hierarchical clustering. As test-case scenarios we consider two normal branch Dvali-Gabadadze-Porrati (nDGP) gravity models with mild and strong growth rate enhancement. We study higher-order correlation functions ξ_n(R) up to n = 9 and associated reduced cumulants S_n(R) ≡ ξ_n(R)/σ(R)^{2n-2}. We find that the matter probability distribution functions are strongly affected by the fifth force on scales up to 50 h^{-1} Mpc, and the deviations from general relativity (GR) are maximized at z = 0. For reduced cumulants S_n, we find that at small scales R ≤ 6 h^{-1} Mpc the MG is characterized by lower values, with the deviation growing from 7% in the reduced skewness up to even 40% in S_5. To study the halo clustering we use a simple abundance matching and divide haloes into three fixed number density samples. The halo two-point functions are weakly affected, with a relative boost of the order of a few percent appearing only at the smallest pair separations (r ≤ 5 h^{-1} Mpc). In contrast, we find a strong MG signal in the S_n(R)'s, which are enhanced compared to GR. The strong model exhibits a >3σ level signal at various scales for all halo samples and in all cumulants. In this context, we find the reduced kurtosis to be an especially promising cosmological probe of MG. Even the mild nDGP model leaves a 3σ imprint at small scales R ≤ 3 h^{-1} Mpc, while the stronger model deviates from a GR signature at nearly all scales with a significance of >5σ. Since the signal is persistent in all halo samples and over a range of scales, we advocate that the reduced kurtosis estimated from galaxy catalogs can potentially constitute a strong MG-model discriminator as well as a GR self-consistency test.
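The reduced cumulants S_n can be illustrated on a toy one-point distribution: S_n is the n-th connected moment of the density contrast divided by σ^{2n-2}. The lognormal field below is a hypothetical stand-in for a smoothed N-body density field, used only to show the estimator.

```python
# Hedged sketch of reduced-cumulant estimation on a toy lognormal density
# contrast; illustrative of the statistic, not an N-body measurement.
import numpy as np

rng = np.random.default_rng(3)
sig_g = 0.4
g = sig_g * rng.standard_normal(100_000)
delta = np.exp(g - 0.5 * sig_g**2) - 1.0      # lognormal density contrast, mean ~ 0

def reduced_cumulant(delta, n):
    """S_n = (n-th connected moment of delta) / sigma^(2n-2), for n = 3 or 4."""
    d = delta - delta.mean()
    sigma2 = np.mean(d**2)
    if n == 3:
        k = np.mean(d**3)                     # third moment is already connected
    else:
        k = np.mean(d**4) - 3.0 * sigma2**2   # subtract the Gaussian part
    return k / sigma2 ** (n - 1)

S3 = reduced_cumulant(delta, 3)               # reduced skewness
S4 = reduced_cumulant(delta, 4)               # reduced kurtosis
print(S3, S4)   # both positive for a positively skewed, lognormal-like field
```

Dividing by σ^{2n-2} is what makes S_n nearly scale-independent for gravitational clustering, which is why relative shifts in S_3...S_5 between MG and GR are meaningful signals.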
Dynamic Behavior of Sand: Annual Report FY 11
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antoun, T; Herbold, E; Johnson, S
2012-03-15
Currently, design of earth-penetrating munitions relies heavily on empirical relationships to estimate behavior, making it difficult to design novel munitions or address novel target situations without expensive and time-consuming full-scale testing with relevant system and target characteristics. Enhancing design through numerical studies and modeling could help reduce the extent and duration of full-scale testing if the models have enough fidelity to capture all of the relevant parameters. This can be separated into three distinct problems: that of the penetrator structural and component response, that of the target response, and that of the coupling between the two. This project focuses on enhancing understanding of the target response, specifically granular geomaterials, where the temporal and spatial multi-scale nature of the material controls its response. As part of the overarching goal of developing computational capabilities to predict the performance of conventional earth-penetrating weapons, this project focuses specifically on developing new models and numerical capabilities for modeling sand response in ALE3D. There is general recognition that granular materials behave in a manner that defies conventional continuum approaches which rely on response locality and which degrade in the presence of strong response nonlinearities, localization, and phase gradients. There are many numerical tools available to address parts of the problem. However, to enhance modeling capability, this project is pursuing a bottom-up approach of building constitutive models from higher fidelity, smaller spatial scale simulations (rather than from macro-scale observations of physical behavior as is traditionally employed) that are being augmented to address the unique challenges of mesoscale modeling of dynamically loaded granular materials.
Through understanding response and sensitivity at the grain-scale, it is expected that better reduced order representations of response can be formulated at the continuum scale as illustrated in Figure 1 and Figure 2. The final result of this project is to implement such reduced order models in the ALE3D material library for general use.
Simulation of pump-turbine prototype fast mode transition for grid stability support
NASA Astrophysics Data System (ADS)
Nicolet, C.; Braun, O.; Ruchonnet, N.; Hell, J.; Béguin, A.; Avellan, F.
2017-04-01
The paper explores the additional services that a Full Size Frequency Converter (FSFC) solution can provide for the case of an existing pumped storage power plant of 2x210 MW, for which conversion from fixed speed to variable speed is investigated with a focus on fast mode transition. First, reduced-scale model experiments on the fast transition of a Francis pump-turbine, performed at the ANDRITZ HYDRO Hydraulic Laboratory in Linz, Austria, are presented. The tests consist of linear speed transitions from pump to turbine and vice versa, performed with constant guide vane opening. Then the existing pumped storage power plant, with a pump-turbine quasi-homologous to the reduced-scale model, is modelled using the simulation software SIMSEN, considering the reservoirs, penstocks, the two Francis pump-turbines, the two downstream surge tanks, and the tailrace tunnel. For the electrical part, an FSFC configuration is considered with a detailed electrical model. The transitions from turbine to pump and vice versa are simulated, and similarities between prototype simulation results and reduced-scale model experiments are highlighted.
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
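The core idea, that half precision may suffice for small-scale variables, can be sketched with a one-tier Lorenz '96 integration carried out in float64 versus float16. This is a sketch of the precision comparison only; the paper's three-tier system is not reproduced here.

```python
# Hedged sketch: the same short Lorenz '96 integration in double (64-bit) and
# half (16-bit) precision, to show how rounding error enters a chaotic model.
import numpy as np

def l96_step(x, dt=0.005, F=8.0):
    """One forward-Euler step of the one-tier Lorenz '96 system."""
    d = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return (x + dt * d).astype(x.dtype)

x0 = np.full(40, 8.0)
x0[0] = 8.01                      # small perturbation to start the dynamics
x64 = x0.astype(np.float64)       # double precision
x16 = x0.astype(np.float16)       # half precision
for _ in range(100):
    x64 = l96_step(x64)
    x16 = l96_step(x16)
err = np.max(np.abs(x64 - x16.astype(np.float64)))
print(err)                        # divergence attributable to reduced precision
```

In a scale-selective scheme, only the small-scale tier would be held at 16 bits, on the argument that its values are in any case the least well constrained by observations.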
A Reduced Form Model (RFM) is a mathematical relationship between the inputs and outputs of an air quality model, permitting estimation of additional modeling without costly new regional-scale simulations. A 21-year Community Multiscale Air Quality (CMAQ) simulation for the con...
Phosphorus transfer in surface runoff from intensive pasture systems at various scales: a review.
Dougherty, Warwick J; Fleming, Nigel K; Cox, Jim W; Chittleborough, David J
2004-01-01
Phosphorus transfer in runoff from intensive pasture systems has been extensively researched at a range of scales. However, integration of data from the range of scales has been limited. This paper presents a conceptual model of P transfer that incorporates landscape effects and reviews the research relating to P transfer at a range of scales in light of this model. The contribution of inorganic P sources to P transfer is relatively well understood, but the contribution of organic P to P transfer is still relatively poorly defined. Phosphorus transfer has been studied at laboratory, profile, plot, field, and watershed scales. The majority of research investigating the processes of P transfer (as distinct from merely quantifying P transfer) has been undertaken at the plot scale. However, there is a growing need to integrate data gathered at a range of scales so that more effective strategies to reduce P transfer can be identified. This has been hindered by the lack of a clear conceptual framework to describe differences in the processes of P transfer at the various scales. The interaction of hydrological (transport) factors with P source factors, and their relationship to scale, require further examination. Runoff-generating areas are highly variable, both temporally and spatially. Improvement in the understanding and identification of these areas will contribute to increased effectiveness of strategies aimed at reducing P transfers in runoff. A thorough consideration of scale effects using the conceptual model of P transfer outlined in this paper will facilitate the development of improved strategies for reducing P losses in runoff.
Reduced Complexity Modelling of Urban Floodplain Inundation
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Brasington, J.; Mihir, M.
2004-12-01
Significant recent advances in floodplain inundation modelling have been achieved by directly coupling 1d channel hydraulic models with a raster storage cell approximation for floodplain flows. The strengths of this reduced-complexity model structure derive from its explicit dependence on a digital elevation model (DEM) to parameterize flows through riparian areas, providing a computationally efficient algorithm to model heterogeneous floodplains. Previous applications of this framework have generally used mid-range grid scales (10^1-10^2 m), showing the capacity of the models to simulate long reaches (10^3-10^4 m). However, the increasing availability of precision DEMs derived from airborne laser altimetry (LIDAR) enables their use at very high spatial resolutions (10^0-10^1 m). This spatial scale offers the opportunity to incorporate the complexity of the built environment directly within the floodplain DEM and simulate urban flooding. This poster describes a series of experiments designed to explore model functionality at these reduced scales. Important questions are considered, raised by this new approach, about the reliability and representation of the floodplain topography and built environment, and the resultant sensitivity of inundation forecasts. The experiments apply a raster floodplain model to reconstruct a 1:100 year flood event on the River Granta in eastern England, which flooded 72 properties in the town of Linton in October 2001. The simulations use a nested-scale model to maintain efficiency. A 2km by 4km urban zone is represented by a high-resolution DEM derived from single-pulse LIDAR data supplied by the UK Environment Agency, together with surveyed data and aerial photography. Novel methods of processing the raw data to provide the individual structure detail required are investigated and compared. This is then embedded within a lower-resolution model application at the reach scale which provides boundary conditions based on recorded flood stage.
The high resolution predictions on a scale commensurate with urban structures make possible a multi-criteria validation which combines verification of reach-scale characteristics such as downstream flow and inundation extent with internal validation of flood depth at individual sites.
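A minimal sketch of the raster storage-cell model class discussed above: water depths on a DEM grid exchange volume between neighbouring cells through a Manning-type relation. This is an illustrative toy with invented parameters and a crude stability limiter, not the authors' model.

```python
# Hedged sketch of a raster storage-cell floodplain update (x-direction only).
import numpy as np

def storage_cell_step(dem, depth, dt=0.2, dx=10.0, n_manning=0.05):
    """One explicit exchange of water between adjacent storage cells."""
    h = dem + depth                                   # free-surface elevation
    dh = h[:, :-1] - h[:, 1:]                         # head difference per cell pair
    # Effective flow depth between neighbours (zero when water sits below the sill).
    d_flow = np.clip(np.maximum(h[:, :-1], h[:, 1:])
                     - np.maximum(dem[:, :-1], dem[:, 1:]), 0.0, None)
    # Manning-type unit-width discharge, signed by the head gradient.
    q = np.sign(dh) * d_flow ** (5.0 / 3.0) * np.sqrt(np.abs(dh) / dx) / n_manning
    # Depth transferred this step, limited so no cell can be over-drained.
    t = np.clip(q * dt / dx, -0.25 * depth[:, 1:], 0.25 * depth[:, :-1])
    new = depth.copy()
    new[:, :-1] -= t
    new[:, 1:] += t
    return new

dem = np.zeros((3, 5)); dem[:, 2] = 0.2      # flat floodplain with a low obstacle row
depth = np.zeros((3, 5)); depth[:, 0] = 1.0  # initial flood water at the left edge
for _ in range(50):
    depth = storage_cell_step(dem, depth)
print(depth.sum(), depth[0])                 # total volume stays (numerically) at 3.0
```

At urban resolutions the DEM itself carries buildings and walls, so the same update rule routes water around structures; that is what makes the high-resolution LIDAR DEM the key model input.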
Regional-scale air quality models are being used to demonstrate attainment of the ozone air quality standard. In current regulatory applications, a regional-scale air quality model is applied for a base year and a future year with reduced emissions using the same meteorological ...
Chola, Lumbwe; Michalow, Julia; Tugendhaft, Aviva; Hofman, Karen
2015-04-17
Diarrhoea is one of the leading causes of morbidity and mortality in South African children, accounting for approximately 20% of under-five deaths. Though progress has been made in scaling up multiple interventions to reduce diarrhoea in the last decade, challenges still remain. In this paper, we model the cost and impact of scaling up 13 interventions to prevent and treat childhood diarrhoea in South Africa. Modelling was done using the Lives Saved Tool (LiST). Using 2014 as the baseline, intervention coverage was increased from 2015 until 2030. Three scale up scenarios were compared: by 2030, 1) coverage of all interventions increased by ten percentage points; 2) intervention coverage increased by 20 percentage points; 3) and intervention coverage increased to 99%. The model estimates 13 million diarrhoea cases at baseline. Scaling up intervention coverage averted between 3 million and 5.3 million diarrhoea cases. In 2030, diarrhoeal deaths are expected to reduce from an estimated 5,500 in 2014 to 2,800 in scenario one, 1,400 in scenario two and 100 in scenario three. The additional cost of implementing all 13 interventions will range from US$510 million (US$9 per capita) to US$960 million (US$18 per capita), of which the health system costs range between US$40 million (less than US$1 per capita) and US$170 million (US$3 per capita). Scaling up 13 essential interventions could have a substantial impact on reducing diarrhoeal deaths in South African children, which would contribute toward reducing child mortality in the post-MDG era. Preventive measures are key and the government should focus on improving water, sanitation and hygiene. The investments required to achieve these results seem feasible considering current health expenditure.
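The scale-up arithmetic behind such projections can be caricatured in a few lines: deaths averted scale with coverage gains times intervention effectiveness. The coverage gain and effectiveness numbers below are invented placeholders, not LiST parameters or outputs; only the 5,500 baseline figure comes from the abstract.

```python
# Hedged back-of-envelope sketch of coverage scale-up impact; the effectiveness
# value is invented for illustration, not a LiST estimate.
baseline_deaths = 5500.0   # estimated diarrhoeal deaths in 2014 (from the abstract)

def deaths_after_scaleup(coverage_gain, effectiveness, deaths=baseline_deaths):
    """Deaths remaining after raising coverage by `coverage_gain` of an
    intervention that removes `effectiveness` of deaths at full coverage."""
    averted_fraction = coverage_gain * effectiveness
    return deaths * (1.0 - averted_fraction)

# e.g. +20 percentage points coverage of a package assumed 60% effective
remaining = deaths_after_scaleup(0.20, 0.60)
print(remaining)   # about 4840 deaths remaining under these toy assumptions
```

LiST itself layers many such intervention-specific effects, with residual-impact accounting across interventions, which is why the scenarios in the abstract require the full tool rather than this arithmetic.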
Ataman, Meric
2017-01-01
Genome-scale metabolic reconstructions have proven to be valuable resources in enhancing our understanding of metabolic networks as they encapsulate all known metabolic capabilities of the organisms from genes to proteins to their functions. However, the complexity of these large metabolic networks often hinders their utility in various practical applications. Although reduced models are commonly used for modeling and for integrating experimental data, they are often inconsistent across different studies and laboratories due to differing criteria and levels of detail, which can compromise transferability of the findings and also integration of experimental data from different groups. In this study, we have developed a systematic semi-automatic approach to reduce genome-scale models into core models in a consistent and logical manner focusing on the central metabolism or subsystems of interest. The method minimizes the loss of information using an approach that combines graph-based search and optimization methods. The resulting core models are shown to be able to capture key properties of the genome-scale models and preserve consistency in terms of biomass and by-product yields, flux and concentration variability and gene essentiality. The development of these “consistently-reduced” models will help to clarify and facilitate integration of different experimental data to draw new understanding that can be directly extendable to genome-scale models. PMID:28727725
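The graph-based search step can be sketched on a hypothetical reaction graph: keep the subsystem of interest plus its immediate metabolic neighbourhood (the toy network and depth cut-off are illustrative assumptions, not the paper's algorithm):

```python
from collections import deque

def extract_core(graph, seeds, max_depth=1):
    """Return nodes within max_depth of any seed node (breadth-first).

    A toy stand-in for the graph-search step of model reduction: retain
    the metabolites of the chosen subsystem and their close neighbours.
    """
    keep, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        node, d = frontier.popleft()
        if d == max_depth:
            continue
        for nbr in graph.get(node, ()):
            if nbr not in keep:
                keep.add(nbr)
                frontier.append((nbr, d + 1))
    return keep

# Hypothetical directed metabolite graph around glucose-6-phosphate
toy = {"glc": ["g6p"], "g6p": ["f6p", "6pg"], "f6p": ["fbp"], "6pg": ["ru5p"]}
core = extract_core(toy, ["g6p"], max_depth=1)
```

A real reduction would follow this search with an optimization step (e.g. flux-consistency checks) rather than a plain depth cut-off.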
Quasi-coarse-grained dynamics: modelling of metallic materials at mesoscales
NASA Astrophysics Data System (ADS)
Dongare, Avinash M.
2014-12-01
A computationally efficient modelling method called quasi-coarse-grained dynamics (QCGD) is developed to expand the capabilities of molecular dynamics (MD) simulations to model behaviour of metallic materials at the mesoscales. This mesoscale method is based on solving the equations of motion for a chosen set of representative atoms from an atomistic microstructure and using scaling relationships for the atomic-scale interatomic potentials in MD simulations to define the interactions between representative atoms. The scaling relationships retain the atomic-scale degrees of freedom and therefore energetics of the representative atoms as would be predicted in MD simulations. The total energetics of the system is retained by scaling the energetics and the atomic-scale degrees of freedom of these representative atoms to account for the missing atoms in the microstructure. This scaling of the energetics permits improved time steps for the QCGD simulations. The success of the QCGD method is demonstrated by the prediction of the structural energetics, high-temperature thermodynamics, deformation behaviour of interfaces, phase transformation behaviour, plastic deformation behaviour, heat generation during plastic deformation, as well as the wave propagation behaviour, as would be predicted using MD simulations for a reduced number of representative atoms. The reduced number of atoms and the improved time steps enable the modelling of metallic materials at the mesoscale in extreme environments.
NASA Astrophysics Data System (ADS)
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study, reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied to the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein, three methods for hyper-reduction, differing in how the nonlinearity is approximated and in the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors, GNAT) is favoured to obtain an optimal projection and a robust reduced model.
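The POD step named above builds an orthonormal basis from solution snapshots. A minimal pure-Python sketch recovers the leading mode by power iteration on the snapshot correlation operator (toy snapshot data; practical implementations use an SVD of the snapshot matrix):

```python
def pod_dominant_mode(snapshots, iters=200):
    """Leading POD mode of a snapshot set via power iteration.

    snapshots: list of equal-length state vectors (columns of matrix X).
    Iterates v <- X X^T v and normalises; this converges to the first
    left singular vector of X, i.e. the most energetic POD mode.
    """
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        # w = X X^T v, computed as sum_k (x_k . v) x_k
        w = [0.0] * n
        for x in snapshots:
            c = sum(xi * vi for xi, vi in zip(x, v))
            for i, xi in enumerate(x):
                w[i] += c * xi
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Toy snapshots dominated by the direction [1, 1] / sqrt(2)
mode = pod_dominant_mode([[1.0, 1.0], [2.0, 2.1], [0.9, 1.0]])
```

The reduced model then evolves only the coefficients of a few such modes instead of the full micro-scale unknowns.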
INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL
The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...
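The refine-where-error-is-high idea can be sketched for a 1-D grid (the error indicator here is hypothetical; real air quality models drive refinement with estimates based on, e.g., concentration gradients):

```python
def adapt_grid(cells, indicator, tol=1.0):
    """One refinement pass of a 1-D adaptive grid.

    Splits every cell whose error indicator exceeds tol in half,
    concentrating resolution where the indicator is large.
    Coarsening of low-indicator cells is omitted for brevity.
    """
    new = []
    for a, b in cells:
        if indicator(a, b) > tol:
            m = 0.5 * (a + b)
            new += [(a, m), (m, b)]
        else:
            new.append((a, b))
    return new

# Hypothetical indicator: error proportional to cell width near x = 0.5
ind = lambda a, b: (b - a) * 10 if a <= 0.5 < b else 0.0
grid = adapt_grid([(0.0, 0.5), (0.5, 1.0)], ind, tol=1.0)
```

Repeating the pass until no cell exceeds the tolerance yields the locally refined mesh.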
Geometry and Reynolds-Number Scaling on an Iced Business-Jet Wing
NASA Technical Reports Server (NTRS)
Lee, Sam; Ratvasky, Thomas P.; Thacker, Michael; Barnhart, Billy P.
2005-01-01
A study was conducted to develop a method to scale the effect of ice accretion on a full-scale business jet wing model to a 1/12-scale model at greatly reduced Reynolds number. Full-scale, 5/12-scale, and 1/12-scale models of identical airfoil section were used in this study. Three types of ice accretion were studied: 22.5-minute ice protection system failure shape, 2-minute initial ice roughness, and a runback shape that forms downstream of a thermal anti-ice system. The results showed that the 22.5-minute failure shape could be scaled from full-scale to 1/12-scale through simple geometric scaling. The 2-minute roughness shape could be scaled by choosing an appropriate grit size. The runback ice shape exhibited greater Reynolds number effects and could not be scaled by simple geometric scaling of the ice shape.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.
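A toy version of the GA-based reduction idea, evolving the single pole of a reduced first-order model to minimise the step-response deviation from a "full" model (an illustrative sketch only; the paper's method additionally transforms the state matrix to upper triangular form and handles MIMO systems):

```python
import random

def step_response(a, n=20):
    """First-order model y[k+1] = a*y[k] + (1 - a), unit step input."""
    y, out = 0.0, []
    for _ in range(n):
        y = a * y + (1.0 - a)
        out.append(y)
    return out

def ga_reduce(target, pop_size=30, gens=40, seed=1):
    """Toy GA: evolve the pole of a reduced first-order model so its
    step response matches a full-order target (fitness = -SSE)."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]

    def fitness(a):
        return -sum((y - t) ** 2 for y, t in zip(step_response(a), target))

    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        # blend crossover between elites plus small Gaussian mutation
        pop = elite + [
            min(1.0, max(0.0, 0.5 * (rng.choice(elite) + rng.choice(elite))
                         + rng.gauss(0.0, 0.02)))
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

# The "full" model here is itself first-order with pole 0.8,
# so the GA should recover a pole near 0.8.
best = ga_reduce(step_response(0.8))
```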
Due to the computational cost of running regional-scale numerical air quality models, reduced form models (RFM) have been proposed as computationally efficient simulation tools for characterizing the pollutant response to many different types of emission reductions. The U.S. Envi...
A Bayesian method for assessing multiscale species-habitat relationships
Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.
2017-01-01
Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces computation time of studies. We present a simulation study to validate our method and apply our method to a case-study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships.
BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
NASA Astrophysics Data System (ADS)
Qi, D.; Majda, A.
2017-12-01
A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method is from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are also used to capture a crucial passive tracer field that is advected by the baroclinic turbulent flow. 
It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat-tails in the tracer probability density functions in the most important large scales can be captured efficiently with accuracy using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.
Male body dissatisfaction scale (MBDS): proposal for a reduced model.
da Silva, Wanderson Roberto; Marôco, João; Ochner, Christopher N; Campos, Juliana Alvares Duarte Bonini
2017-09-01
To evaluate the psychometric properties of the male body dissatisfaction scale (MBDS) in Brazilian and Portuguese university students; to present a reduced model of the scale; to compare two methods of computing global scores for participants' body dissatisfaction; and to estimate the prevalence of participants' body dissatisfaction. A total of 932 male students participated in this study. A confirmatory factor analysis (CFA) was used to assess the scale's psychometric properties. Multi-group analysis was used to test transnational invariance and invariance in independent samples. The body dissatisfaction score was calculated using two methods (mean and matrix of weights in the CFA), which were compared. Finally, individuals were classified according to level of body dissatisfaction, using the best method. The MBDS model did not show adequate fit for the sample and was, therefore, refined. Thirteen items were excluded and two factors were combined. A reduced model of 12 items and 2 factors was proposed and shown to have adequate psychometric properties. There was a significant difference (p < 0.001) between the methods for calculating the score for body dissatisfaction, since the mean overestimated the scores. Among student participants, the prevalence of body dissatisfaction with musculature and general appearance was 11.2% and 5.3%, respectively. The reduced bi-factorial model of the MBDS showed adequate validity, reliability, and transnational invariance and invariance in independent samples for Brazilian and Portuguese students. The new proposal for calculating the global score was able to more accurately show their body dissatisfaction. No level of evidence; basic science.
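The two scoring methods compared in the study can be sketched as follows (item responses and factor weights are hypothetical, not the MBDS estimates). With high responses on weakly loading items, the unweighted mean exceeds the weight-based score, illustrating how the mean can overestimate dissatisfaction:

```python
def mean_score(responses):
    """Simple mean of item responses (the method the study found to
    overestimate dissatisfaction)."""
    return sum(responses) / len(responses)

def weighted_score(responses, weights):
    """Score from a matrix of regression weights: items contribute in
    proportion to their estimated loadings (weights here are
    hypothetical, not those estimated in the MBDS CFA)."""
    total = sum(weights)
    return sum(r * w for r, w in zip(responses, weights)) / total

responses = [5, 4, 1, 1]          # hypothetical items, Likert-type 1-6
weights = [0.3, 0.2, 0.9, 0.8]    # weak items first, strong items last
```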
Scale Model of 9x6 Thermal Structures Tunnel
1960-07-19
Scale Model of 9x6 Thermal Structures Tunnel: Image L-7256.01 is a drawing, Figure 12 in NASA Document L-1265, showing the major components of the 9-by-6-Foot Thermal Structures Tunnel. The 97-foot-long diffuser was added in 1960 to reduce noise.
An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles
NASA Technical Reports Server (NTRS)
Brown, Clifford; Bridges, James
2003-01-01
Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.
NASA Astrophysics Data System (ADS)
Flamand, Olivier
2017-12-01
Wind engineering problems are commonly studied by wind tunnel experiments at a reduced scale. This introduces several limitations and calls for a careful planning of the tests and the interpretation of the experimental results. The talk first revisits the similitude laws and discusses how they are actually applied in wind engineering. It will also remind readers why different scaling laws govern in different wind engineering problems. Secondly, the paper focuses on the ways to simplify a detailed structure (bridge, building, platform) when fabricating the downscaled models for the tests. This will be illustrated by several examples from recent engineering projects. Finally, under the most severe weather conditions, manmade structures and equipment should remain operational. What “recreating the climate” means and aims to achieve will be illustrated through common practice in climatic wind tunnel modelling.
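One similitude constraint mentioned above can be made concrete: matching the full-scale Reynolds number at reduced scale requires the tunnel speed to rise by the inverse of the length scale (a simple sketch, assuming the same fluid unless the hypothetical nu_ratio parameter says otherwise):

```python
def model_velocity(full_velocity, scale, nu_ratio=1.0):
    """Wind-tunnel speed required to match the full-scale Reynolds number.

    Re = U * L / nu, so U_model = U_full * (L_full / L_model) * (nu_model / nu_full).
    At 1:50 scale in the same fluid this demands 50x the speed, which is
    one reason strict Reynolds similitude is usually relaxed in wind
    engineering and other scalings (e.g. Froude) govern instead.
    """
    return full_velocity * (1.0 / scale) * nu_ratio

u = model_velocity(25.0, scale=1 / 50)   # 25 m/s full scale, 1:50 model
```

The impractical result (1250 m/s, far beyond incompressible-flow limits) illustrates why the talk stresses careful planning and interpretation of reduced-scale tests.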
Minimal non-abelian supersymmetric Twin Higgs
Badziak, Marcin; Harigaya, Keisuke
2017-10-17
We propose a minimal supersymmetric Twin Higgs model that can accommodate tuning of the electroweak scale for heavy stops better than 10% with high mediation scales of supersymmetry breaking. A crucial ingredient of this model is a new SU(2)_X gauge symmetry which provides a D-term potential that generates a large SU(4)-invariant coupling for the Higgs sector, with only a small set of particles charged under SU(2)_X, which allows the model to be perturbative around the Planck scale. The new gauge interaction drives the top Yukawa coupling small at higher energy scales, which also reduces the tuning.
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, Katherine; Hamlington, Peter; Pinardi, Nadia; Zavatarelli, Marco
2017-04-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions that can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parameterizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17) that follows the chemical functional group approach, which allows for non-Redfield stoichiometric ratios and the exchange of matter through units of carbon, nitrate, and phosphate. This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time-series Study and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. 
Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
Reduced-Order Biogeochemical Flux Model for High-Resolution Multi-Scale Biophysical Simulations
NASA Astrophysics Data System (ADS)
Smith, K.; Hamlington, P.; Pinardi, N.; Zavatarelli, M.; Milliff, R. F.
2016-12-01
Biogeochemical tracers and their interactions with upper ocean physical processes such as submesoscale circulations and small-scale turbulence are critical for understanding the role of the ocean in the global carbon cycle. These interactions can cause small-scale spatial and temporal heterogeneity in tracer distributions which can, in turn, greatly affect carbon exchange rates between the atmosphere and interior ocean. For this reason, it is important to take into account small-scale biophysical interactions when modeling the global carbon cycle. However, explicitly resolving these interactions in an earth system model (ESM) is currently infeasible due to the enormous associated computational cost. As a result, understanding and subsequently parametrizing how these small-scale heterogeneous distributions develop and how they relate to larger resolved scales is critical for obtaining improved predictions of carbon exchange rates in ESMs. In order to address this need, we have developed the reduced-order, 17 state variable Biogeochemical Flux Model (BFM-17). This model captures the behavior of open-ocean biogeochemical systems without substantially increasing computational cost, thus allowing the model to be combined with computationally-intensive, fully three-dimensional, non-hydrostatic large eddy simulations (LES). In this talk, we couple BFM-17 with the Princeton Ocean Model and show good agreement between predicted monthly-averaged results and Bermuda testbed area field data (including the Bermuda-Atlantic Time Series and Bermuda Testbed Mooring). Through these tests, we demonstrate the capability of BFM-17 to accurately model open-ocean biochemistry. Additionally, we discuss the use of BFM-17 within a multi-scale LES framework and outline how this will further our understanding of turbulent biophysical interactions in the upper ocean.
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar model performance. The small-scale method strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
Study of providing omnidirectional vibration isolation to entire space shuttle payload packages
NASA Technical Reports Server (NTRS)
Chang, C. S.; Robinson, G. D.; Weber, D. E.
1974-01-01
Techniques to provide omnidirectional vibration isolation for a space shuttle payload package were investigated via a reduced-scale model. Development, design, fabrication, assembly and test evaluation of a 0.125-scale isolation model are described. Final drawings for fabricated mechanical components are identified, and prints of all drawings are included.
Recursive renormalization group theory based subgrid modeling
NASA Technical Reports Server (NTRS)
Zhou, Ye
1991-01-01
Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed will include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.
Numerical evaluation of the scale problem on the wind flow of a windbreak
Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong
2014-01-01
The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174
Genome-scale biological models for industrial microbial systems.
Xu, Nan; Ye, Chao; Liu, Liming
2018-04-01
The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, which predict the interactions among genetic materials, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize microbial growth and the production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Marchand, R.; Ackerman, T. P.
2016-12-01
Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. 
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
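The maximum-random overlap assumption discussed above has a standard closed form for total cloud cover, which a short sketch can contrast with purely random overlap (toy layer fractions):

```python
def total_cloud_fraction_maxran(c):
    """Total cloud cover of a column under maximum-random overlap.

    Vertically contiguous cloudy layers overlap maximally; layers
    separated by clear air overlap randomly (the standard assumption):
        C = 1 - (1 - c[0]) * prod_k (1 - max(c[k], c[k-1])) / (1 - c[k-1])
    Assumes no layer fraction equals 1 (avoids division by zero).
    """
    clear = 1.0 - c[0]
    for k in range(1, len(c)):
        clear *= (1.0 - max(c[k], c[k - 1])) / (1.0 - c[k - 1])
    return 1.0 - clear

def total_cloud_fraction_random(c):
    """Total cloud cover if every layer overlaps randomly."""
    clear = 1.0
    for ck in c:
        clear *= (1.0 - ck)
    return 1.0 - clear

# Toy profile: two contiguous cloudy layers, a clear gap, one more layer
layers = [0.3, 0.3, 0.0, 0.2]
```

Maximum-random overlap stacks the contiguous layers on top of each other and so yields less total cover than random overlap, which is exactly the kind of difference a subcolumn generator must reproduce before the simulators are applied.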
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford K.; Ortega, Jesus D.; Christian, Joshua Mark
Novel designs to increase light trapping and thermal efficiency of concentrating solar receivers at multiple length scales have been conceived, designed, and tested. The fractal-like geometries and features are introduced at both macro (meters) and meso (millimeters to centimeters) scales. Advantages include increased solar absorptance, reduced thermal emittance, and increased thermal efficiency. Radial and linear structures at the meso (tube shape and geometry) and macro (total receiver geometry and configuration) scales redirect reflected solar radiation toward the interior of the receiver for increased absorptance. Hotter regions within the interior of the receiver can reduce thermal emittance due to reduced local view factors to the environment, and higher concentration ratios can be employed with similar surface irradiances to reduce the effective optical aperture, footprint, and thermal losses. Coupled optical/fluid/thermal models have been developed to evaluate the performance of these designs relative to conventional designs. Modeling results showed that fractal-like structures and geometries can increase the effective solar absorptance by 5 – 20% and the thermal efficiency by several percentage points at both the meso and macro scales, depending on factors such as intrinsic absorptance. Meso-scale prototypes were fabricated using additive manufacturing techniques, and a macro-scale bladed receiver design was fabricated using Inconel 625 tubes. On-sun tests were performed using the solar furnace and solar tower at the National Solar Thermal Test facility. The test results demonstrated enhanced solar absorptance and thermal efficiency of the fractal-like designs.
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; however, an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model. 
This complex model then serves as the basis for comparing simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
2017-03-24
Complex physics and long computation time hinder the adoption of computer-aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation-of-time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to the particle, electrode, and cell length scales treated in the previous work, the present formulation extends to the bus bar and multi-cell module length scales. We provide example simulations for several variants of GH electrode-domain models.
Scale and modeling issues in water resources planning
Lins, H.F.; Wolock, D.M.; McCabe, G.J.
1997-01-01
Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models - the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slater, Lee; Ntarlagiannis, Dimitrios; Personna, Yves R.
2007-10-01
The authors measured Spectral Induced Polarization (SIP) signatures in sand columns during (1) FeS biomineralization produced by sulfate-reducing bacteria (D. vulgaris) under anaerobic conditions, and (2) subsequent biomineral dissolution upon return to an aerobic state. The low-frequency (0.1-10 Hz peak) relaxations produced during biomineralization can be modeled with a Cole-Cole formulation, from which the evolution of the polarization magnitude and relaxation length scale can be estimated. They find that the modeled time constant is consistent with the polarizable elements being biomineral-encrusted pores. Evolution of the model parameters is consistent with FeS surface area increases and pore-size reduction during biomineral growth, and subsequent biomineral dissolution (FeS surface area decreases and pore expansion) upon return to the aerobic state. They conclude that SIP signatures are diagnostic of pore-scale geometrical changes associated with FeS biomineralization by sulfate-reducing bacteria.
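As a rough illustration, the Cole-Cole formulation referred to above can be sketched in its common complex-resistivity form; all parameter values here are illustrative choices, not values fitted in the study:

```python
import numpy as np

def cole_cole_resistivity(freq_hz, rho0=100.0, m=0.05, tau=0.1, c=0.5):
    """Complex resistivity of the Cole-Cole relaxation model.

    rho0 : DC resistivity (ohm*m), m : chargeability,
    tau  : relaxation time constant (s), c : frequency exponent.
    All defaults are illustrative, not from the study.
    """
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

# Phase spectrum over the 0.1-10 Hz band discussed in the abstract;
# the (negative) phase peak marks the relaxation.
freqs = np.logspace(-1, 1, 50)
rho = cole_cole_resistivity(freqs)
phase_mrad = 1000.0 * np.angle(rho)
```

Fitting rho0, m, tau, and c to measured spectra is what allows the polarization magnitude and relaxation length scale to be tracked over time, as the abstract describes.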
Scale models: A proven cost-effective tool for outage planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, R.; Segroves, R.
1995-03-01
As generation costs for operating nuclear stations have risen, more nuclear utilities have initiated efforts to improve cost effectiveness. Nuclear plant owners are also being challenged by lower radiation exposure limits and newly revised radiation protection regulations (10 CFR 20), which place further stress on their budgets. As source term reduction activities continue to lower radiation fields, reducing the amount of time spent in radiation fields becomes one of the most cost-effective ways of reducing radiation exposure. An effective approach for minimizing time spent in radiation areas is to use a physical scale model for worker orientation, planning, and monitoring of maintenance, modifications, and outage activities. To meet the challenge of continued reduction in annual cumulative radiation exposures, new cost-effective tools are required. One field-tested and proven tool is the physical scale model.
Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.
2011-01-01
Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and development of the system-integrated finite element model are briefly described along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purposes of predicting the overall system response with external energy absorbers were not adequate under more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced 2-norm prediction error by 51% but did not improve impact shape orthogonality.
Intermittency and Alignment in Strong RMHD Turbulence
NASA Astrophysics Data System (ADS)
Chandran, B. D. G.; Schekochihin, A. A.; Mallet, A.
2015-12-01
Intermittency is one of the critical unsolved problems in solar-wind turbulence. Intermittency is important not just because it affects the observable properties of turbulence in the inertial range, but also because it modifies the nature of turbulent dissipation at small scales. In this talk, I will present recent work by colleagues A. Schekochihin, A. Mallet, and myself that focuses on the development of intermittency within the inertial range of solar-wind turbulence. We restrict our analysis to the transverse, non-compressive component of the turbulence. Previous work has shown that this component of the turbulence is anisotropic, varying most rapidly in directions perpendicular to the magnetic field. We argue that, deep within the inertial range, this component of the turbulence is well modeled by the equations of reduced magnetohydrodynamics (RMHD). We then develop an analytic model of intermittent, three-dimensional, strong, reduced magnetohydrodynamic turbulence with zero cross helicity. We take the fluctuation amplitudes to have a log-Poisson distribution and incorporate into the model a new phenomenology of scale-dependent dynamic alignment. The log-Poisson distribution in our model is characterized by two parameters. To calculate these parameters, we make use of two assumptions: that the energy cascade rate is independent of scale within the inertial range and that the most intense coherent structures at scale lambda are sheet-like with a volume filling factor proportional to lambda. We then compute the scalings of the power spectrum, the kurtosis, higher-order structure functions, and three different average alignment angles. We also carry out a direct numerical simulation of RMHD turbulence. The scalings in our model are similar to the scalings in this simulation as well as the structure-function scalings observed in the slow solar wind.
Hot-bench simulation of the active flexible wing wind-tunnel model
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.; Houck, Jacob A.
1990-01-01
Two simulations, one batch and one real-time, of an aeroelastically-scaled wind-tunnel model were developed. The wind-tunnel model was a full-span, free-to-roll model of an advanced fighter concept. The batch simulation was used to generate and verify the real-time simulation and to test candidate control laws prior to implementation. The real-time simulation supported hot-bench testing of a digital controller, which was developed to actively control the elastic deformation of the wind-tunnel model. Time scaling was required for hot-bench testing. The wind-tunnel model, the mathematical models for the simulations, the techniques employed to reduce the hot-bench time-scale factors, and the verification procedures are described.
NASA Astrophysics Data System (ADS)
De, S.; Agarwal, N. K.; Hazra, Anupam; Chaudhari, Hemantkumar S.; Sahai, A. K.
2018-04-01
The interaction between cloud and large-scale circulation is a much less explored area of climate science. Unfolding the mechanism of coupling between these two parameters is imperative for improved simulation of the Indian summer monsoon (ISM) and for reducing imprecision in the climate sensitivity of global climate models. This work explores this mechanism with CFSv2 climate model experiments in which cloud has been modified by changing the critical relative humidity (CRH) profile of the model during the ISM. The study reveals that the variable CRH in CFSv2 improves the nonlinear interactions between high- and low-frequency oscillations in the wind field (revealed as the internal dynamics of the monsoon) and modulates the spatial distribution of interactions over the Indian landmass more realistically during contrasting monsoon seasons than the existing CRH profile of CFSv2. The lower-tropospheric wind error energy in the variable-CRH simulation of CFSv2 appears to be minimal, owing to reduced nonlinear convergence of error into the planetary-scale range from the long and synoptic scales (another facet of internal dynamics), compared with the other CRH experiments in normal and deficient monsoons. Hence, the interplay between cloud and large-scale circulation through CRH may be manifested as a change in the internal dynamics of the ISM, revealed from scale-interactive quasi-linear and nonlinear kinetic energy exchanges in the frequency as well as the wavenumber domain during the monsoon period, which eventually modify the internal variance of the CFSv2 model. Conversely, the reduced wind bias and proper modulation of the spatial distribution of scale interaction between the synoptic and low-frequency oscillations improve the eastward and northward extent of water vapour flux over the Indian landmass, which in turn feeds back into a realistic simulation of cloud condensates, improving ISM rainfall in CFSv2.
NASA Technical Reports Server (NTRS)
Costen, Robert C.; Heinbockel, John H.; Miner, Gilda A.; Meador, Willard E., Jr.; Tabibi, Bagher M.; Lee, Ja H.; Williams, Michael D.
1995-01-01
A numerical rate equation model for a continuous wave iodine laser with longitudinally flowing gaseous lasant is validated by approximating two experiments that compare the perfluoroalkyl iodine lasants n-C3F7I and t-C4F9I. The salient feature of the simulations is that the production rate of the dimer (C4F9)2 is reduced by one order of magnitude relative to the dimer (C3F7)2. The model is then used to investigate the kinetic effects of this reduced dimer production, especially how it improves output power. Related parametric and scaling studies are also presented. When dimer production is reduced, more monomer radicals (t-C4F9) are available to combine with iodine ions, thus enhancing depletion of the laser lower level and reducing buildup of the principal quencher, molecular iodine. Fewer iodine molecules result in fewer downward transitions from quenching and more transitions from stimulated emission of lasing photons. Enhanced depletion of the lower level reduces the absorption of lasing photons. The combined result is more lasing photons and proportionally increased output power.
On the Scaling Laws for Jet Noise in Subsonic and Supersonic Flow
NASA Technical Reports Server (NTRS)
Vu, Bruce; Kandula, Max
2003-01-01
The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are examined with regard to their applicability to deduce full scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. It is shown that the jet Mach number (jet exit velocity/sound speed at jet exit) is a more general and convenient parameter for noise scaling purposes than the ratio of jet exit velocity to ambient speed of sound. A similarity spectrum is also proposed, which accounts for jet Mach number, angle to the jet axis, and jet density ratio. The proposed spectrum reduces nearly to the well-known similarity spectra proposed by Tam for the large-scale and the fine-scale turbulence noise in the appropriate limit.
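As a hedged illustration of how scale-model jet noise data are extrapolated, the sketch below uses the classic Lighthill U^8 velocity scaling for subsonic jets and constant-Strouhal-number frequency scaling; these are textbook relations, not the paper's Mach-number-based similarity spectrum:

```python
import math

def oaspl_shift_db(v_model, v_full, exponent=8.0):
    """Velocity-scaling shift in overall sound pressure level (dB).

    Classic Lighthill scaling: acoustic power ~ U^exponent for subsonic
    jets, so the OASPL shift is 10*exponent*log10(v_full/v_model).
    Illustrative only; the paper argues for jet-Mach-number scaling.
    """
    return 10.0 * exponent * math.log10(v_full / v_model)

def strouhal_frequency_ratio(d_model, d_full, v_model, v_full):
    """Ratio of full-scale to model-scale frequency at constant
    Strouhal number St = f*D/V, i.e. f scales with V/D."""
    return (v_full / d_full) / (v_model / d_model)
```

For example, at equal jet velocity a 10x larger nozzle shifts the spectrum down in frequency by a factor of 10, which is why small-scale data must be frequency-rescaled before comparison with full-scale spectra.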
On the Subgrid-Scale Modeling of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Squires, Kyle; Zeman, Otto
1990-01-01
A new subgrid-scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the subgrid-scale eddy viscosity which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved-scale velocity dilatations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independently of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for large-eddy simulation is also presented.
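The incompressible-limit base model referenced above (Smagorinsky 1963) can be sketched in a few lines; this is only the starting point that the paper extends, and the compressibility contributions it adds are not included here:

```python
import numpy as np

def smagorinsky_eddy_viscosity(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Incompressible Smagorinsky SGS eddy viscosity (2-D sketch).

    nu_t = (Cs*Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) built from
    the resolved strain-rate tensor. Cs = 0.17 is a commonly quoted
    value, used here only for illustration.
    """
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)          # symmetric off-diagonal strain
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag
```

Because nu_t depends only on the resolved strain rate, the base model cannot distinguish solenoidal from dilatational motions, which is consistent with the abstract's finding that it under-dissipates the compressive modes.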
Scaling laws for ignition at the National Ignition Facility from first principles.
Cheng, Baolian; Kwan, Thomas J T; Wang, Yi-Ming; Batha, Steven H
2013-10-01
We have developed an analytical physics model from fundamental physics principles and used the reduced one-dimensional model to derive a thermonuclear ignition criterion and implosion energy scaling laws applicable to inertial confinement fusion capsules. The scaling laws relate the fuel pressure and the minimum implosion energy required for ignition to the peak implosion velocity and the equation of state of the pusher and the hot fuel. When a specific low-entropy adiabat path is used for the cold fuel, our scaling laws recover the ignition threshold factor dependence on the implosion velocity, but when a high-entropy adiabat path is chosen, the model agrees with recent measurements.
Merriman-Hoehne, Katherine R.; Russell, Amy M.; Rachol, Cynthia M.; Daggupati, Prasad; Srinivasan, Raghavan; Hayhurst, Brett A.; Stuntebeck, Todd D.
2018-01-01
Subwatersheds within the Great Lakes “Priority Watersheds” were targeted by the Great Lakes Restoration Initiative (GLRI) to determine the effectiveness of the various best management practices (BMPs) from the U.S. Department of Agriculture-Natural Resources Conservation Service National Conservation Planning (NCP) Database. A Soil and Water Assessment Tool (SWAT) model is created for Alger Creek, a 50 km² tributary watershed to the Saginaw River in Michigan. Monthly calibration yielded very good Nash–Sutcliffe efficiency (NSE) ratings for flow, sediment, total phosphorus (TP), dissolved reactive phosphorus (DRP), and total nitrogen (TN) (0.90, 0.79, 0.87, 0.88, and 0.77, respectively), and a satisfactory NSE rating for nitrate (0.51). Two-year validation resulted in at least satisfactory NSE ratings for flow, sediment, TP, DRP, and TN (0.83, 0.54, 0.73, 0.53, and 0.60, respectively), and an unsatisfactory NSE rating for nitrate (0.28). The model estimates the effect of BMPs at the field and watershed scales. At the field scale, the most effective single practice at reducing sediment, TP, and DRP is no-tillage followed by cover crops (CC); CC are the most effective single practice at reducing nitrate. The most effective BMP combinations include filter strips, which can have a sizable effect on reducing sediment and phosphorus loads. At the watershed scale, model results indicate current NCP BMPs result in minimal sediment and nutrient reductions (<10%).
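For reference, the Nash–Sutcliffe efficiency used to rate the calibration above is a standard goodness-of-fit statistic that can be computed in a few lines (a generic sketch; the "very good"/"satisfactory" thresholds are common SWAT-calibration conventions, not defined in this study):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.

    NSE = 1 is a perfect fit; NSE <= 0 means the model predicts no
    better than the mean of the observations.
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Because the denominator is the observed variance, NSE heavily weights how well the model reproduces high-flow (or high-load) events, which is one reason monthly NSE values are typically higher than daily ones.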
High-resolution LES of the rotating stall in a reduced scale model pump-turbine
NASA Astrophysics Data System (ADS)
Pacot, Olivier; Kato, Chisachi; Avellan, François
2014-03-01
Extending the operating range of modern pump-turbines becomes increasingly important in the course of the integration of renewable energy sources into the existing power grid. However, at part-load conditions in pumping mode, the occurrence of rotating stall is critical to the operational safety of the machine and to grid stability. The understanding of the mechanisms behind this flow phenomenon yet remains vague and incomplete. Past numerical simulations using a RANS approach often led to inconclusive results concerning the physical background. For the first time, rotating stall is investigated by performing a large-scale LES calculation on the HYDRODYNA pump-turbine scale model, featuring approximately 100 million elements. The computations were performed on the PRIMEHPC FX10 of the University of Tokyo using the overset finite element open-source code FrontFlow/blue with the dynamic Smagorinsky turbulence model and the no-slip wall condition. The internal flow computed is that obtained when operating the pump-turbine at 76% of the best efficiency point in pumping mode, as previous experimental research showed the presence of four rotating cells. The rotating stall phenomenon is accurately reproduced at a reduced Reynolds number using the LES approach with acceptable computing resources. The results show excellent agreement with available experimental data from the reduced-scale model testing at the EPFL Laboratory for Hydraulic Machines; the number of stall cells as well as the propagation speed corroborate the experiment.
Conners' Teacher Rating Scale for Preschool Children: A Revised, Brief, Age-Specific Measure
ERIC Educational Resources Information Center
Purpura, David J.; Lonigan, Christopher J.
2009-01-01
The Conners' Teacher Rating Scale-Revised (CTRS-R) is one of the most commonly used measures of child behavior problems. However, the scale length and the appropriateness of some of the items on the scale may reduce the usefulness of the CTRS-R for use with preschoolers. In this study, a Graded Response Model analysis based on Item Response Theory…
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from deficiencies in the coupling methods, as well as from inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupled model is improved either by refining the grid or time-step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.
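The temporal part of the head-interpolation step described above can be sketched as simple linear interpolation of parent-scale heads onto the child model's boundary time steps; the function name and data layout are hypothetical, and the spatial interpolation step is omitted:

```python
import numpy as np

def interpolate_parent_head(parent_times, parent_heads, child_times):
    """Deliver parent-scale head solutions onto child boundary time steps.

    Linear interpolation in time, a minimal stand-in for the temporal
    head-interpolation step of a one-way coupling scheme.
    parent_heads has shape (n_parent_times, n_boundary_nodes); the
    returned array has shape (n_child_times, n_boundary_nodes).
    """
    parent_heads = np.asarray(parent_heads, dtype=float)
    out = np.empty((len(child_times), parent_heads.shape[1]))
    for j in range(parent_heads.shape[1]):
        out[:, j] = np.interp(child_times, parent_times, parent_heads[:, j])
    return out
```

Because the child model typically runs at a finer time step than the parent, interpolation like this is where the temporal truncation error discussed in the abstract enters, and why an adaptive local time-stepping scheme can reduce it.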
NASA Astrophysics Data System (ADS)
van Maanen, Barend; Nicholls, Robert J.; French, Jon R.; Barkwith, Andrew; Bonaldo, Davide; Burningham, Helene; Brad Murray, A.; Payo, Andres; Sutherland, James; Thornhill, Gillian; Townend, Ian H.; van der Wegen, Mick; Walkden, Mike J. A.
2016-03-01
Coastal and shoreline management increasingly needs to consider morphological change occurring at decadal to centennial timescales, especially that related to climate change and sea-level rise. This requires the development of morphological models operating at a mesoscale, defined by time and length scales of the order 10^1 to 10^2 years and 10^1 to 10^2 km. So-called 'reduced complexity' models that represent critical processes at scales not much smaller than the primary scale of interest, and are regulated by capturing the critical feedbacks that govern landform behaviour, are proving effective as a means of exploring emergent coastal behaviour at a landscape scale. Such models tend to be computationally efficient and are thus easily applied within a probabilistic framework. At the same time, reductionist models, built upon a more detailed description of hydrodynamic and sediment transport processes, are capable of application at increasingly broad spatial and temporal scales. More qualitative modelling approaches are also emerging that can guide the development and deployment of quantitative models, and these can be supplemented by varied data-driven modelling approaches that can achieve new explanatory insights from observational datasets. Such disparate approaches have hitherto been pursued largely in isolation by mutually exclusive modelling communities. Brought together, they have the potential to facilitate a step change in our ability to simulate the evolution of coastal morphology at scales that are most relevant to managing erosion and flood risk. Here, we advocate and outline a new integrated modelling framework that deploys coupled mesoscale reduced complexity models, reductionist coastal area models, data-driven approaches, and qualitative conceptual models.
Integration of these heterogeneous approaches gives rise to model compositions that can potentially resolve the decadal- to centennial-scale behaviour of diverse coupled open-coast, estuary, and inner-shelf settings. This vision is illustrated through an idealised composition of models for a ~70 km stretch of the Suffolk coast, eastern England. A key advantage of model linking is that it allows a wide range of real-world situations to be simulated from a small set of model components. However, this process involves more than just the development of software that allows for flexible model coupling: the compatibility of radically different modelling assumptions remains to be carefully assessed, and the testing and evaluation of uncertainties in model compositions require further attention.
Urban Stream Burial Increases Watershed-Scale Nitrate Export.
Beaulieu, Jake J; Golden, Heather E; Knightes, Christopher D; Mayer, Paul M; Kaushal, Sujay S; Pennino, Michael J; Arango, Clay P; Balz, David A; Elonen, Colleen M; Fritz, Ken M; Hill, Brian H
2015-01-01
Nitrogen (N) uptake in streams is an important ecosystem service that reduces nutrient loading to downstream ecosystems. Here we synthesize studies that investigated the effects of urban stream burial on N uptake in two metropolitan areas and use simulation modeling to scale our measurements to the broader watershed scale. We report that nitrate travels on average 18 times farther downstream in buried than in open streams before being removed from the water column, indicating that burial substantially reduces N uptake in streams. Simulation modeling suggests that as burial expands throughout a river network, N uptake rates increase in the remaining open reaches, which somewhat offsets the reduced N uptake in buried reaches. This is particularly true at low levels of stream burial. At higher levels of stream burial, however, open reaches become rare and cumulative N uptake across all open reaches in the watershed rapidly declines. As a result, watershed-scale N export increases slowly at low levels of stream burial, after which increases in export become more pronounced. Stream burial in the lower, more urbanized portions of the watershed had a greater effect on N export than an equivalent amount of stream burial in the upper watershed. We suggest that stream daylighting (i.e., uncovering buried streams) can increase watershed-scale N retention.
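The "18 times farther downstream" result translates naturally into the standard uptake-length framework, in which nitrate removal is first-order with distance. The sketch below assumes that framework; the open-reach uptake length is an illustrative value, not one reported in the study:

```python
import math

def fraction_remaining(reach_lengths_m, buried_flags, sw_open_m=500.0,
                       burial_factor=18.0):
    """Fraction of a nitrate input surviving a series of stream reaches.

    First-order downstream removal: f = exp(-L / Sw), where Sw is the
    uptake length. Per the abstract, nitrate travels ~18x farther in
    buried reaches, so Sw_buried = burial_factor * Sw_open.
    sw_open_m is an illustrative value, not from the study.
    """
    f = 1.0
    for length, buried in zip(reach_lengths_m, buried_flags):
        sw = sw_open_m * burial_factor if buried else sw_open_m
        f *= math.exp(-length / sw)
    return f
```

Chaining reaches this way shows why burial location matters: burying downstream reaches removes the last opportunities for uptake before export, matching the abstract's finding that burial in the lower watershed has the larger effect.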
NASA Astrophysics Data System (ADS)
Eekhout, Joris P. C.; de Vente, Joris
2017-04-01
Climate change has strong implications for many essential ecosystem services, such as the provision of drinking and irrigation water, soil erosion and flood control. Especially large impacts are expected in the Mediterranean, already characterised by frequent floods and droughts. The projected higher frequency of extreme weather events under climate change will lead to an increase of plant water stress, reservoir inflow and sediment yield. Sustainable Land Management (SLM) practices are increasingly promoted as a climate change adaptation strategy and to increase resilience against extreme events. However, surprisingly little is known about their impacts and trade-offs on ecosystem services at regional scales. The aim of this research is to provide insight into the potential of SLM for climate change adaptation, focusing on catchment-scale impacts on soil and water resources. We applied a spatially distributed hydrological model (SPHY), coupled with an erosion model (MUSLE), to the Segura River catchment (15,978 km²) in SE Spain. We ran the model for three periods: one reference (1981-2000) and two future scenarios (2031-2050 and 2081-2100). Climate input data for the future scenarios were based on output from 9 Regional Climate Models under two emission scenarios (RCP 4.5 and RCP 8.5). Realistic scenarios of SLM practices were developed based on a local stakeholder consultation process. The evaluated SLM scenarios focussed on reduced tillage and organic amendments under tree and cereal crops, covering 24% and 15% of the catchment, respectively. In the reference scenario, implementation of SLM at the field scale led to an increase of the infiltration capacity of the soil and a reduction of surface runoff of up to 29%, eventually reducing catchment-scale reservoir inflow by 6%. This led to a reduction of field-scale sediment yield of more than 50% and a reduced catchment-scale sediment flux to reservoirs of 5%.
SLM was able to fully mitigate the effect of climate change at the field-scale and partly at the catchment-scale. Therefore, we conclude that large-scale adoption of SLM can effectively contribute to climate change adaptation by increasing the soil infiltration capacity, the soil water retention capacity and soil moisture content in the rootzone, leading to less crop stress. These findings of regional scale impacts of SLM are of high relevance for land-owners, -managers and policy makers to design effective climate change adaptation strategies.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
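The core splitting idea above can be sketched for a single basis vector: cluster the state variables, then restrict the vector to each cluster so the children have disjoint support and sum back to the parent. This is a toy one-dimensional illustration with a hand-rolled k-means, not the paper's offline tree construction or online dual-weighted-residual selection:

```python
import numpy as np

def split_basis_vector(v, state_features, k=2, iters=20, seed=0):
    """Split one reduced-basis vector into k children with disjoint support.

    state_features holds one scalar feature per state variable (derived
    from snapshot data in the real method); states are clustered by a
    tiny 1-D k-means and v is restricted to each cluster.
    """
    rng = np.random.default_rng(seed)
    v = np.asarray(v, dtype=float)
    x = np.asarray(state_features, dtype=float)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean()
    # Restrict v to each cluster: children have disjoint support
    # and sum exactly back to the parent vector.
    return [np.where(labels == c, v, 0.0) for c in range(k)]
```

Because the children span a strictly larger space than the parent alone, repeatedly splitting can only enlarge the reduced basis, which is the mechanism behind the "any prescribed error tolerance" guarantee mentioned in the abstract.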
The impact of mesoscale convective systems on global precipitation: A modeling study
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo
2017-04-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) is conducted and results in both reduced surface rainfall and evaporation.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-01-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain with several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
Item Response Modeling of Forced-Choice Questionnaires
ERIC Educational Resources Information Center
Brown, Anna; Maydeu-Olivares, Alberto
2011-01-01
Multidimensional forced-choice formats can significantly reduce the impact of numerous response biases typically associated with rating scales. However, if scored with classical methodology, these questionnaires produce ipsative data, which lead to distorted scale relationships and make comparisons between individuals problematic. This research…
NASA Astrophysics Data System (ADS)
Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua
2015-07-01
A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of the turbulent flow of a viscoelastic fluid, respectively. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and of turbulent channel flow with surfactant additives based on the MCT SGS model shows excellent agreement with direct numerical simulation (DNS) results. Compared with LES results using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, the mixed MCT SGS model behaves better as simulation parameters such as the Reynolds number increase. Since turbulent flows at high Reynolds numbers are of interest in scientific and engineering research, the MCT model can be a more suitable model for LES of turbulent drag-reducing flows of viscoelastic fluids with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).
Oxley, Tim; Dore, Anthony J; ApSimon, Helen; Hall, Jane; Kryza, Maciej
2013-11-01
Integrated assessment modelling has evolved to support policy development in relation to air pollutants and greenhouse gases by providing integrated simulation tools able to produce quick and realistic representations of emission scenarios and their environmental impacts without the need to re-run complex atmospheric dispersion models. The UK Integrated Assessment Model (UKIAM) has been developed to investigate strategies for reducing UK emissions by bringing together information on projected UK emissions of SO2, NOx, NH3, PM10 and PM2.5, atmospheric dispersion, criteria for protection of ecosystems, urban air quality and human health, and data on potential abatement measures to reduce emissions, which may subsequently be linked to associated analyses of costs and benefits. We describe the multi-scale model structure ranging from continental to roadside, UK emission sources, atmospheric dispersion of emissions, implementation of abatement measures, integration with European-scale modelling, and environmental impacts. The model generates outputs from a national perspective which are used to evaluate alternative strategies in relation to emissions, deposition patterns, air quality metrics and ecosystem critical load exceedance. We present a selection of scenarios in relation to the 2020 Business-As-Usual projections and identify potential further reductions beyond those currently being planned. © 2013.
Spatial characterization of riparian buffer effects on sediment loads from watershed systems.
Momm, Henrique G; Bingner, Ronald L; Yuan, Yongping; Locke, Martin A; Wells, Robert R
2014-09-01
Understanding all watershed systems and their interactions is a complex, but critical, undertaking when developing practices designed to reduce topsoil loss and chemical/nutrient transport from agricultural fields. The presence of riparian buffer vegetation in agricultural landscapes can modify the characteristics of overland flow, promoting sediment deposition and nutrient filtering. Watershed simulation tools, such as the USDA-Annualized Agricultural Non-Point Source (AnnAGNPS) pollution model, typically require detailed information for each riparian buffer zone throughout the watershed describing the location, width, vegetation type, topography, and possible presence of concentrated flow paths through the riparian buffer zone. Research was conducted to develop GIS-based technology designed to spatially characterize riparian buffers and to estimate buffer efficiency in reducing sediment loads in a semiautomated fashion at watershed scale. The methodology combines modeling technology at different scales: at individual concentrated flow paths passing through the riparian zone, and at the watershed scale. At the concentrated flow path scale, vegetative filter strip models are applied to estimate the sediment-trapping efficiency for each individual flow path; these are aggregated based on the watershed subdivision and used in the determination of the overall impact of the riparian vegetation at the watershed scale. This GIS-based technology is combined with AnnAGNPS to demonstrate the effect of riparian vegetation on sediment loadings from sheet and rill and ephemeral gully sources. The effects of variability in the basic input parameters used to characterize riparian buffers on the generated outputs at the field scale (sediment-trapping efficiency) and at the watershed scale (sediment loadings from different sources) were evaluated and quantified.
The AnnAGNPS riparian buffer component represents an important step in understanding and accounting for the effect of riparian vegetation, existing and/or managed, in reducing sediment loads at the watershed scale. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
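The aggregation step described above — per-flow-path trapping efficiencies rolled up to a watershed-scale figure — can be sketched as a load-weighted average. The function below is an illustrative assumption about how such an aggregation might look, not the AnnAGNPS implementation:

```python
def watershed_trapping_efficiency(path_loads, path_efficiencies):
    """Load-weighted aggregate of per-concentrated-flow-path sediment-trapping
    efficiencies (fractions in [0, 1]).  Flow paths carrying more sediment
    contribute more to the watershed-scale efficiency."""
    total = sum(path_loads)
    trapped = sum(load * eff for load, eff in zip(path_loads, path_efficiencies))
    return trapped / total

# Two paths: 80 t at 50% trapping, 20 t at 90% trapping.
overall = watershed_trapping_efficiency([80.0, 20.0], [0.5, 0.9])  # 0.58
```

The load weighting matters: an efficient buffer on a minor flow path moves the watershed figure far less than a mediocre buffer on a dominant one.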
The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staebler, G. M.; Candy, J.; Howard, N. T.
2016-06-15
The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown to not apply to the electron scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
NASA Astrophysics Data System (ADS)
Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.
2016-04-01
The article describes the calculation of magnetic fields in diagnostic problems for technical systems based on full-scale model experiments. The use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field-calculation task and hence the calculation time. The method is implemented using fictitious magnetic charges. Much attention is also given to calculation accuracy: errors arise when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation, and give examples of this approach. The article presents the results of this research, which support recommending the approach within the method of fundamental solutions for full-scale model tests of technical systems.
A Ground-Based Research Vehicle for Base Drag Studies at Subsonic Speeds
NASA Technical Reports Server (NTRS)
Diebler, Corey; Smith, Mark
2002-01-01
A ground research vehicle (GRV) has been developed to study the base drag on large-scale vehicles at subsonic speeds. Existing models suggest that base drag is dependent upon vehicle forebody drag, and for certain configurations, the total drag of a vehicle can be reduced by increasing its forebody drag. Although these models work well for small projectile shapes, studies have shown that they do not provide accurate predictions when applied to large-scale vehicles. Experiments are underway at the NASA Dryden Flight Research Center to collect data at Reynolds numbers to a maximum of 3 x 10^7, and to formulate a new model for predicting the base drag of trucks, buses, motor homes, reentry vehicles, and other large-scale vehicles. Preliminary tests have shown errors as great as 70 percent compared to Hoerner's two-dimensional base drag prediction. This report describes the GRV and its capabilities, details the studies currently underway at NASA Dryden, and presents preliminary results of both the effort to formulate a new base drag model and the investigation into a method of reducing total drag by manipulating forebody drag.
Tropospheric ozone simulated by a global-multi-regional two-way coupling model system
NASA Astrophysics Data System (ADS)
Yan, Y.; Lin, J.; Chen, J.; Hu, L.
2015-12-01
Current global chemical transport models are limited by horizontal resolutions (100-500 km), and they cannot capture small-scale processes affecting tropospheric ozone (O3). Here we use a recently built two-way coupling system of GEOS-Chem to simulate the global tropospheric O3 in 2009. The system couples the global model (~ 200 km) and its three nested models (~ 50 km) covering Asia, North America and Europe, respectively. Benefiting from the high resolution, the nested models better capture small-scale processes than the global model alone. In the coupling system, the nested models provide results to modify the global model simulation within respective nested domains while taking the lateral boundary conditions from the global model. Due to the "coupling" effects, the two-way system significantly improves the tropospheric O3 simulation upon the global model alone, as found by comparisons with a suite of ground (1420 sites from WDCGG, GMD, EMEP, and AQS), aircraft (HIPPO and MOZAIC), and satellite measurements (two OMI products). Compared to the global model alone, the two-way coupled simulation enhances the correlation in day-to-day variation of afternoon mean O3 with the ground measurements from 0.53 to 0.68 and reduces the mean model bias from 10.8 to 6.7 ppb. Regionally, the coupled model reduces the bias by 4.6 ppb over Europe, 3.9 ppb over North America, and 3.1 ppb over other regions. The two-way coupling brings O3 vertical profiles much closer to the HIPPO and MOZAIC data, reducing the tropospheric (0-9 km) mean bias by 3-10 ppb at most MOZAIC sites and by 5.3 ppb for HIPPO profiles. The two-way coupled simulation also reduces the global tropospheric column ozone by 3.0 DU (9.5%), bringing them closer to the OMI data in all seasons. Simulation improvements are more significant in the northern hemisphere, and are primarily a result of improved representation of the nonlinear ozone chemistry, including but not limited to urban-rural contrast. 
The two-way coupled simulation also reduces the global tropospheric mean hydroxyl radical by 5%, with 5% enhancements in the lifetimes of methyl chloroform and methane, bringing them closer to observation-based estimates. Therefore, improving model representations of small-scale processes is a critical step forward in understanding global tropospheric chemistry.
Spectrum of perturbations in anisotropic inflationary universe with vector hair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Himmetoglu, Burak, E-mail: burak@physics.umn.edu
2010-03-01
We study both the background evolution and cosmological perturbations of anisotropic inflationary models supported by coupled scalar and vector fields. The models we study preserve the U(1) gauge symmetry associated with the vector field, and therefore do not possess instabilities associated with longitudinal modes (which instead plague some recently proposed models of vector inflation and curvaton). We first introduce a model in which the background anisotropy slowly decreases during inflation; we then confirm the stability of the background solution by studying the quadratic action for all the perturbations of the model. We then compute the spectrum of the h_× gravitational wave polarization. The spectrum we find breaks statistical isotropy at the largest scales and reduces to the standard nearly scale invariant form at small scales. We finally discuss the possible relevance of our results to the large scale CMB anomalies.
Meyer-Rath, Gesine; Over, Mead
2012-01-01
Policy discussions about the feasibility of massively scaling up antiretroviral therapy (ART) to reduce HIV transmission and incidence hinge on accurately projecting the cost of such scale-up in comparison to the benefits from reduced HIV incidence and mortality. We review the available literature on modelled estimates of the cost of providing ART to different populations around the world, and suggest alternative methods of characterising cost when modelling several decades into the future. In past economic analyses of ART provision, costs were often assumed to vary by disease stage and treatment regimen, but for treatment as prevention, in particular, most analyses assume a uniform cost per patient. This approach disregards variables that can affect unit cost, such as differences in factor prices (i.e., the prices of supplies and services) and the scale and scope of operations (i.e., the sizes and types of facilities providing ART). We discuss several of these variables, and then present a worked example of a flexible cost function used to determine the effect of scale on the cost of a proposed scale-up of treatment as prevention in South Africa. Adjusting previously estimated costs of universal testing and treatment in South Africa for diseconomies of small scale, i.e., more patients being treated in smaller facilities, adds 42% to the expected future cost of the intervention. PMID:22802731
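The effect of scale discussed above can be illustrated with a simple flexible cost function in which total cost grows sub-linearly with facility size, so that small facilities face a higher unit cost. The functional form and every parameter value below are illustrative assumptions; the paper's fitted South African cost function is not reproduced here:

```python
def total_cost(patients, unit_cost_at_ref=500.0, scale_elasticity=0.9,
               ref_size=1000):
    """Annual ART cost (currency units) of a facility treating `patients`.

    With scale_elasticity < 1 there are economies of scale: doubling the
    patient load less than doubles total cost.  All parameters are made up
    for illustration."""
    return unit_cost_at_ref * ref_size * (patients / ref_size) ** scale_elasticity

def unit_cost(patients, **kw):
    """Cost per patient; rises as facilities shrink (diseconomies of small scale)."""
    return total_cost(patients, **kw) / patients
```

Under this form, shifting a fixed national caseload from large clinics to many small ones raises aggregate cost, which is the mechanism behind the 42% adjustment the abstract reports.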
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valocchi, Albert; Werth, Charles; Liu, Wen-Tso
Bioreduction is being actively investigated as an effective strategy for subsurface remediation and long-term management of DOE sites contaminated by metals and radionuclides (i.e. U(VI)). These strategies require manipulation of the subsurface, usually through injection of chemicals (e.g., electron donor) which mix at varying scales with the contaminant to stimulate metal reducing bacteria. There is evidence from DOE field experiments suggesting that mixing limitations of substrates at all scales may affect biological growth and activity for U(VI) reduction. Although current conceptual models hold that biomass growth and reduction activity is limited by physical mixing processes, a growing body of literature suggests that reaction could be enhanced by cell-to-cell interaction occurring over length scales extending tens to thousands of microns. Our project investigated two potential mechanisms of enhanced electron transfer. The first is the formation of single- or multiple-species biofilms that transport electrons via direct electrical connections such as conductive pili (i.e. ‘nanowires’) through biofilms to where the electron acceptor is available. The second is through diffusion of electron carriers from syntrophic bacteria to dissimilatory metal reducing bacteria (DMRB). The specific objectives of this work are (i) to quantify the extent and rate at which electrons are transported between microorganisms in physical mixing zones between an electron donor and electron acceptor (e.g. U(VI)), (ii) to quantify the extent to which biomass growth and reaction are enhanced by interspecies electron transport, and (iii) to integrate mixing across scales (e.g., the microscopic scale of electron transfer and the macroscopic scale of diffusion) in an integrated numerical model to quantify these mechanisms' effects on overall U(VI) reduction rates.
We tested these hypotheses with five tasks that integrate microbiological experiments, unique micro-fluidics experiments, flow cell experiments, and multi-scale numerical models. Continuous fed-batch reactors were used to derive kinetic parameters for DMRB, and to develop an enrichment culture for elucidation of syntrophic relationships in a complex microbial community. Pore and continuum scale experiments using microfluidic and bench-top flow cells were used to evaluate the impact of cell-to-cell and microbial interactions on reaction enhancement in mixing-limited bioactive zones, and the mechanisms of this interaction. Some of the microfluidic experiments were used to develop and test models that consider direct cell-to-cell interactions during metal reduction. Pore scale models were incorporated into a multi-scale hybrid modeling framework that combines pore scale modeling at the reaction interface with continuum scale modeling. New computational frameworks for combining continuum and pore-scale models were also developed.
NASA Astrophysics Data System (ADS)
Adams, R.; Quinn, P. F.; Bowes, M. J.
2014-09-01
A model for simulating runoff pathways and water quality fluxes has been developed using the Minimum Information Requirement (MIR) approach. The model, the Catchment Runoff Attenuation Tool (CRAFT), is applicable to meso-scale catchments and focusses primarily on the hydrological pathways that mobilise nutrients. Hence CRAFT can be used to investigate the impact of management intervention strategies designed to reduce the loads of nutrients into receiving watercourses. The model can help policy makers, for example in Europe, meet water quality targets and consider methods to obtain "good" ecological status. A case study of the 414 km2 Frome catchment, Dorset, UK, is described here as an application of the CRAFT model. The model was primarily calibrated on ten years of weekly data to reproduce the observed flows and nutrient (nitrate nitrogen, N, and phosphorus, P) concentrations. Data from two years of sub-daily high-resolution monitoring at the same site were also analysed. These data highlighted some additional signals in the nutrient flux, particularly of soluble reactive phosphorus, which were not observable in the weekly data. This analysis prompted the choice of a daily timestep for this meso-scale modelling study as the minimum information requirement. A management intervention scenario was also run to show how the model can support catchment managers in investigating how reducing the concentrations of N and P in the various flow pathways affects water quality. This scale-appropriate modelling tool can help policy makers consider a range of strategies to meet the European Union (EU) water quality targets for this type of catchment.
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of the motor unit (MU)-specific electrical source, based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of a single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90%, inducing only small deviations in the same simulated HD-sEMG signals.
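The NRMSE comparison used above can be sketched as follows. Normalizing the RMSE by the reference signal's peak-to-peak range is an assumption for illustration, since the abstract does not state the exact normalization used:

```python
import numpy as np

def nrmse(reference, approximation):
    """RMSE between two signals, normalized by the reference's range.

    A common (but not unique) definition; the paper's exact normalization
    is not given in the abstract."""
    ref = np.asarray(reference, dtype=float)
    app = np.asarray(approximation, dtype=float)
    rmse = np.sqrt(np.mean((ref - app) ** 2))
    return rmse / (ref.max() - ref.min())
```

Applied to a fiber-source MUAP as `reference` and the MU-source MUAP as `approximation`, a value below 0.02 corresponds to the "less than 2% error" reported.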
NASA Astrophysics Data System (ADS)
Sela, S.; Woodbury, P. B.; van Es, H. M.
2018-05-01
The US Midwest is the largest and most intensive corn (Zea mays, L.) production region in the world. However, N losses from corn systems cause serious environmental impacts including dead zones in coastal waters, groundwater pollution, particulate air pollution, and global warming. New approaches to reducing N losses are urgently needed. N surplus is gaining attention as such an approach for multiple cropping systems. We combined experimental data from 127 on-farm field trials conducted in seven US states during the 2011–2016 growing seasons with biochemical simulations using the PNM model to quantify the benefits of a dynamic location-adapted management approach to reduce N surplus. We found that this approach allowed large reductions in N rate (32%) and N surplus (36%) compared to existing static approaches, without reducing yield and substantially reducing yield-scaled N losses (11%). Across all sites, yield-scaled N losses increased linearly with N surplus values above ~48 kg ha‑1. Using the dynamic model-based N management approach enabled growers to get much closer to this target than using existing static methods, while maintaining yield. Therefore, this approach can substantially reduce N surplus and N pollution potential compared to static N management.
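The reported response — yield-scaled N losses roughly flat below an N surplus of ~48 kg ha-1 and increasing linearly above it — can be sketched as a piecewise-linear function. The baseline and slope values below are made-up illustrative parameters, not the values fitted to the trial data:

```python
def yield_scaled_n_loss(n_surplus, threshold=48.0, base_loss=2.0, slope=0.05):
    """Piecewise-linear N-loss response: flat below the surplus threshold,
    linear above it.  Units and parameter values are illustrative only."""
    excess = max(0.0, n_surplus - threshold)
    return base_loss + slope * excess
```

Under such a response curve, the payoff of dynamic N management comes from pulling fields with large surpluses back toward the threshold, where further reductions yield little additional benefit.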
Forward-bias tunneling - A limitation to bipolar device scaling
NASA Technical Reports Server (NTRS)
Del Alamo, Jesus A.; Swanson, Richard M.
1986-01-01
Forward-bias tunneling is observed in heavily doped p-n junctions of bipolar transistors. A simple phenomenological model suitable to incorporation in device codes is developed. The model identifies as key parameters the space-charge-region (SCR) thickness at zero bias and the reduced doping level at its edges which can both be obtained from CV characteristics. This tunneling mechanism may limit the maximum gain achievable from scaled bipolar devices.
A Neural Network based Early Earthquake Warning model in the California region
NASA Astrophysics Data System (ADS)
Xiao, H.; MacAyeal, D. R.
2016-12-01
Early earthquake warning systems can reduce the loss of life and other economic impacts resulting from natural disasters or man-made calamities, and current systems could be further enhanced by neural network methods. A 3-layer neural network model combined with an onsite method was deployed in this paper to improve the recognition and detection times for large-scale earthquakes. The model adopted a vector feature design for sample events occurring within a 150 km radius of the epicenters. The dataset used in this paper contained both destructive events and small-scale events, all extracted from the IRIS database to properly train the model. In the training process, the backpropagation algorithm was used to adjust the weight and bias matrices during each iteration, with the information in all three channels of the seismometers serving as the input. Designed tests indicated that this model could correctly identify the scale of approximately 90 percent of the events, and the early detection could provide informative evidence for public authorities to make further decisions. This suggests that a neural network model has the potential to strengthen current early warning systems, since the onsite method may greatly reduce the response time and save more lives in such disasters.
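A 3-layer network trained by backpropagation, as described above, can be sketched as follows. The layer sizes, learning rate, squared-error loss, and the toy data in the usage are illustrative assumptions; the paper's seismic feature vectors and dataset are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=8, lr=1.0, epochs=5000):
    """One-hidden-layer network, squared-error loss, full-batch gradient descent.

    X: (n, d) feature vectors; y: (n, 1) binary labels."""
    n, d = X.shape
    W1 = 0.5 * rng.standard_normal((d, hidden)); b1 = np.zeros(hidden)
    W2 = 0.5 * rng.standard_normal((hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)             # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)           # forward pass, output layer
        d_out = (out - y) * out * (1 - out)  # backprop: output delta
        d_h = (d_out @ W2.T) * h * (1 - h)   # backprop: hidden delta
        W2 -= lr * h.T @ d_out / n; b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / n;  b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

def predict(X, params):
    W1, b1, W2, b2 = params
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

In the paper's setting the input vector would hold three-channel waveform features and the output a scale classification; here a tiny separable toy problem stands in for that data.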
NASA Technical Reports Server (NTRS)
Zalesak, J.
1975-01-01
A dynamic substructuring analysis, utilizing the component modes technique, of the 1/8 scale space shuttle orbiter finite element model is presented. The analysis was accomplished in 3 phases, using NASTRAN RIGID FORMAT 3, with appropriate Alters, on the IBM 360-370. The orbiter was divided into 5 substructures, each of which was reduced to interface degrees of freedom and generalized normal modes. The reduced substructures were coupled to yield the first 23 symmetric free-free orbiter modes, and the eigenvectors in the original grid point degree of freedom lineup were recovered. A comparison was made with an analysis which was performed with the same model using the direct coordinate elimination approach. Eigenvalues were extracted using the inverse power method.
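The inverse power method named above, applied to the generalized eigenproblem K x = λ M x, can be sketched as follows. The small diagonal test pair in the usage is arbitrary, not the orbiter stiffness and mass matrices:

```python
import numpy as np

def inverse_power(K, M, iters=200, seed=0):
    """Lowest generalized eigenvalue of K x = lam M x by inverse iteration.

    Repeatedly applying K^{-1} M amplifies the eigenvector whose 1/lam is
    largest, i.e. the lowest mode -- the modes of interest in a vibration
    analysis."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(K.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(K, M @ x)   # one inverse-power step
        x /= np.linalg.norm(x)          # normalize to avoid over/underflow
    lam = (x @ K @ x) / (x @ M @ x)     # Rayleigh quotient estimate
    return lam, x
```

Production codes deflate converged modes and shift the operator to sweep up a whole set of eigenpairs, as NASTRAN's implementation does; this sketch extracts only the lowest one.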
Hybrid stochastic simplifications for multiscale gene networks.
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-09-07
Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
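The exact jump-process simulation whose cost the hybrid simplifications reduce is, in its simplest form, the Gillespie algorithm. The birth-death gene-expression model below (production at rate k, degradation at rate g·x) is an illustrative stand-in for a full gene network:

```python
import numpy as np

def gillespie_birth_death(k=10.0, g=1.0, x0=0, t_end=50.0, seed=0):
    """Exact stochastic simulation of production/degradation of one species.

    Each loop iteration performs exactly one discrete jump, which is why the
    cost of exact simulation scales with the number of jumps -- the scaling
    the hybrid simplifications avoid for fast reactions."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        rates = (k, g * x)                     # production, degradation
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)      # exponential waiting time
        x += 1 if rng.random() * total < rates[0] else -1
        times.append(t); states.append(x)
    return np.array(times), np.array(states)
```

In a hybrid simplification, a fast species like this one would be replaced by a continuous (diffusion) approximation via partial Kramers-Moyal expansion, while rare discrete events are kept as jumps.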
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Ştefănescu, R.; Sandu, A.; Navon, I. M.
2015-08-01
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time POD, tensorial POD, and discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
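The two ingredients named above can be sketched on a generic snapshot set (not the shallow-water system of the paper): POD takes the leading left singular vectors of a snapshot matrix, and DEIM greedily selects interpolation indices so a nonlinear term can be reconstructed from a few sampled entries. The exponential snapshot family below is a hypothetical stand-in.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis: leading left singular vectors of the snapshot matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def deim_indices(U):
    """Greedy DEIM point selection over the columns of the basis U."""
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for i in range(1, U.shape[1]):
        # interpolate column i at the points chosen so far, pick max residual
        c = np.linalg.solve(U[np.ix_(p, range(i))], U[p, i])
        r = U[:, i] - U[:, :i] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Toy nonlinear snapshots: f(x; mu) = exp(-mu x) on a grid, mu in [1, 10]
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.exp(-mu * x) for mu in np.linspace(1, 10, 30)])
U = pod_basis(snaps, 8)
p = deim_indices(U)

# DEIM reconstruction of an unseen parameter from only 8 sampled entries
f = np.exp(-4.3 * x)
c = np.linalg.solve(U[p, :], f[p])
f_deim = U @ c
print(np.max(np.abs(f_deim - f)))
```

In a ROM, this means the full nonlinear term never has to be evaluated at all grid points: the 8 sampled entries determine the reduced-space representation.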
A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model
Chacon, Luis; Stanier, Adam John
2016-12-01
Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
Item Response Modeling of Paired Comparison and Ranking Data
ERIC Educational Resources Information Center
Maydeu-Olivares, Alberto; Brown, Anna
2010-01-01
The comparative format used in ranking and paired comparisons tasks can significantly reduce the impact of uniform response biases typically associated with rating scales. Thurstone's (1927, 1931) model provides a powerful framework for modeling comparative data such as paired comparisons and rankings. Although Thurstonian models are generally…
The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence
Staebler, Gary M.; Candy, John; Howard, Nathan T.; ...
2016-06-29
The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) ExB flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate by the zonal ExB velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. Finally, the zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
Integrated Tokamak modeling: When physics informs engineering and research planning
NASA Astrophysics Data System (ADS)
Poli, Francesca Maria
2018-05-01
Modeling tokamaks enables a deeper understanding of how to run and control our experiments and how to design stable and reliable reactors. We model tokamaks to understand the nonlinear dynamics of plasmas embedded in magnetic fields and contained by finite size, conducting structures, and the interplay between turbulence, magneto-hydrodynamic instabilities, and wave propagation. This tutorial guides through the components of a tokamak simulator, highlighting how high-fidelity simulations can guide the development of reduced models that can be used to understand how the dynamics at a small scale and short time scales affects macroscopic transport and global stability of plasmas. It discusses the important role that reduced models have in the modeling of an entire plasma discharge from startup to termination, the limits of these models, and how they can be improved. It discusses the important role that efficient workflows have in the coupling between codes, in the validation of models against experiments and in the verification of theoretical models. Finally, it reviews the status of integrated modeling and addresses the gaps and needs towards predictions of future devices and fusion reactors.
A data fusion approach for mapping daily evapotranspiration at field scale
USDA-ARS?s Scientific Manuscript database
The capability for mapping water consumption over cropped landscapes on a daily and seasonal basis is increasingly relevant given forecasted scenarios of reduced water availability. Prognostic modeling of water losses to the atmosphere, or evapotranspiration (ET), at field or finer scales in agricul...
Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals
Schmittfull, Marcel; Vlah, Zvonimir
2016-11-28
Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
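The core trick, switching between Fourier and position space so convolutions become cheap products, can be shown in a one-dimensional periodic toy (the paper's actual reduction involves radial Hankel-type integrals, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy "spectra" on a periodic 1D grid (random stand-ins)
n = 256
P1, P2 = rng.random(n), rng.random(n)

# Direct circular convolution: O(n^2) operations
direct = np.array([sum(P1[m] * P2[(k - m) % n] for m in range(n))
                   for k in range(n)])

# Convolution theorem: transform, multiply, transform back: O(n log n)
fast = np.fft.ifft(np.fft.fft(P1) * np.fft.fft(P2)).real

print(np.max(np.abs(direct - fast)))
```

The two results agree to round-off; the loop-integral reduction in the paper exploits the same identity, but in radial variables after the angular integrations are done analytically.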
Numerical modeling of the SNS H− ion source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan
Ion source rf antennas that produce H- ions can fail when plasma heating causes ablation of the insulating coating due to small structural defects such as cracks. Reducing antenna failures that reduce the operating capabilities of the Spallation Neutron Source (SNS) accelerator is one of the top priorities of the SNS H- Source Program at ORNL. Numerical modeling of ion sources can provide techniques for optimizing design in order to reduce antenna failures. There are a number of difficulties in developing accurate models of rf inductive plasmas. First, a large range of spatial and temporal scales must be resolved in order to accurately capture the physics of plasma motion, including the Debye length, rf frequencies on the order of tens of MHz, simulation time scales of many hundreds of rf periods, large device sizes of tens of cm, and ion motions that are thousands of times slower than electrons. This results in large simulation domains with many computational cells for solving plasma and electromagnetic equations, short time steps, and long-duration simulations. In order to reduce the computational requirements, one can develop implicit models for both fields and particle motions (e.g. divergence-preserving ADI methods), various electrostatic models, or magnetohydrodynamic models. We have performed simulations using all three of these methods and have found that fluid models have the greatest potential for giving accurate solutions while still being fast enough to perform long timescale simulations in a reasonable amount of time. We have implemented a number of fluid models with electromagnetics using the simulation tool USim and applied them to modeling the SNS H- ion source. We found that a reduced, single-fluid MHD model with an imposed magnetic field due to the rf antenna current and the confining multi-cusp field generated increased bulk plasma velocities of > 200 m/s in the region of the antenna where ablation is often observed in the SNS source.
We report here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, for both a related benchmark system describing an inductively coupled plasma reactor, and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.
NASA Astrophysics Data System (ADS)
Separovic, Leo; Husain, Syed Zahid; Yu, Wei
2015-09-01
Internal variability (IV) in dynamical downscaling with limited-area models (LAMs) represents a source of error inherent to the downscaled fields, which originates from the sensitive dependence of the models on arbitrarily small modifications. If IV is large it may impose the need for probabilistic verification of the downscaled information. Atmospheric spectral nudging (ASN) can reduce IV in LAMs as it constrains the large-scale components of LAM fields in the interior of the computational domain and thus prevents any considerable penetration of sensitively dependent deviations into the range of large scales. Using initial condition ensembles, the present study quantifies the impact of ASN on IV in LAM simulations in the range of fine scales that are not controlled by spectral nudging. Four simulation configurations that all include strong ASN but differ in the nudging settings are considered. In the fifth configuration, grid nudging of land surface variables toward high-resolution surface analyses is applied. The results show that the IV at scales larger than 300 km can be suppressed by selecting an appropriate ASN setup. At scales between 300 and 30 km, however, in all configurations, the hourly near-surface temperature, humidity, and winds are only partly reproducible. Nudging the land surface variables is found to have the potential to significantly reduce IV, particularly for fine-scale temperature and humidity. On the other hand, hourly precipitation accumulations at these scales are generally irreproducible in all configurations, and a probabilistic approach to downscaling is therefore recommended.
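The mechanics of spectral nudging can be sketched on a periodic 2D field (a simplifying assumption; real ASN acts inside the model time step on non-periodic LAM domains): only wavenumbers below a cutoff are relaxed toward the driving field, leaving the fine scales free.

```python
import numpy as np

def spectral_nudge(lam_field, driving_field, k_cut, alpha):
    """Relax wavenumbers |k| <= k_cut of lam_field toward driving_field.

    alpha in [0, 1] is the nudging strength per call; scales with
    |k| > k_cut are left untouched. Periodic 2D fields are assumed.
    """
    F_lam = np.fft.fft2(lam_field)
    F_drv = np.fft.fft2(driving_field)
    ny, nx = lam_field.shape
    ky = np.fft.fftfreq(ny) * ny
    kx = np.fft.fftfreq(nx) * nx
    kmag = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    mask = kmag <= k_cut
    F_lam[mask] += alpha * (F_drv[mask] - F_lam[mask])
    return np.fft.ifft2(F_lam).real

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))   # "LAM" field
b = rng.standard_normal((64, 64))   # "driving" field
nudged = spectral_nudge(a, b, k_cut=4, alpha=1.0)
```

With alpha = 1 the large scales of the result coincide with the driver while every fine-scale coefficient is exactly that of the original field, which is why IV at nudged scales collapses but fine-scale IV can persist.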
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Jiafu; Phipps, S.J.; Pitman, A.J.
The CSIRO Mk3L climate system model, a reduced-resolution coupled general circulation model, has previously been described in this journal. The model is configured for millennium scale or multiple century scale simulations. This paper reports the impact of replacing the relatively simple land surface scheme that is the default parameterisation in Mk3L with a sophisticated land surface model that simulates the terrestrial energy, water and carbon balance in a physically and biologically consistent way. An evaluation of the new model's near-surface climatology highlights strengths and weaknesses, but overall the atmospheric variables, including the near-surface air temperature and precipitation, are simulated well. The impact of the more sophisticated land surface model on existing variables is relatively small, but generally positive. More significantly, the new land surface scheme allows an examination of surface carbon-related quantities including net primary productivity which adds significantly to the capacity of Mk3L. Overall, results demonstrate that this reduced-resolution climate model is a good foundation for exploring long time scale phenomena. The addition of the more sophisticated land surface model enables an exploration of important Earth System questions including land cover change and abrupt changes in terrestrial carbon storage.
McLaren, Suzanne
2016-01-01
Internalized homophobia has been linked to depression among gay men, lesbians, and bisexuals. Relatively little research has investigated the link between internalized homophobia and suicidal thoughts and behaviors. The current research investigated the interrelations among internalized homophobia, depressive symptoms, and suicidal ideation by testing additive, mediation, and moderation models. Self-identified Australian gay men (n = 360), lesbians (n = 444), and bisexual women (n = 114) completed the Internalized Homophobia Scale, the Center for Epidemiological Studies Depression Scale, and the suicide subscale of the General Health Questionnaire. Results supported the additive and partial mediation models for gay men and the mediation and moderation models for lesbians. None of the models were supported for bisexual women. The findings imply that clinicians should focus on reducing internalized homophobia and depressive symptoms among gay men and lesbians, and depressive symptoms among bisexual women, to reduce suicidal ideation.
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
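The search strategy described above can be sketched with a toy genetic algorithm over candidate well subsets. The sensitivity matrix below is a random stand-in for the model-derived sensitivities that, in the paper, a POD-reduced model supplies cheaply; all GA settings are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Stand-in sensitivities of observed heads (rows: candidate wells) to
# unknown pumping parameters (columns); choose k wells out of n_wells.
n_wells, n_params, k = 12, 3, 4
J = rng.standard_normal((n_wells, n_params))

def fitness(design):
    """Maximal-information criterion: sum of squared sensitivities."""
    return float(np.sum(J[list(design)] ** 2))

def mutate(design):
    """Swap one selected well for a random unselected one."""
    d = set(design)
    d.remove(int(rng.choice(sorted(d))))
    d.add(int(rng.choice([w for w in range(n_wells) if w not in d])))
    return frozenset(d)

def ga(pop_size=30, gens=80):
    pop = [frozenset(rng.choice(n_wells, size=k, replace=False).tolist())
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # elitist selection
        pop = elite + [mutate(elite[rng.integers(len(elite))])
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

design = ga()
best = max(fitness(frozenset(c))
           for c in itertools.combinations(range(n_wells), k))
print(fitness(design), best)
```

On this deliberately tiny problem exhaustive search is feasible and the GA can be checked against it, mirroring the paper's small-scale validation before moving to the large-scale aquifer where only the GA (with POD-accelerated fitness evaluations) is affordable.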
NASA Astrophysics Data System (ADS)
Emery, Charlotte Marie; Paris, Adrien; Biancamaria, Sylvain; Boone, Aaron; Calmant, Stéphane; Garambois, Pierre-André; Santos da Silva, Joecila
2018-04-01
Land surface models (LSMs) are widely used to study the continental part of the water cycle. However, even though their accuracy is increasing, inherent model uncertainties cannot be avoided. In the meantime, remotely sensed observations of continental water cycle variables such as soil moisture, lakes and river elevations are more frequent and accurate. Therefore, those two different types of information can be combined, using data assimilation techniques, to reduce a model's uncertainties in its state variables and/or its input parameters. The objective of this study is to present a data assimilation platform that assimilates into the large-scale ISBA-CTRIP LSM a punctual river discharge product, derived from ENVISAT nadir altimeter water elevation measurements and rating curves, over the whole Amazon basin. To deal with the scale difference between the model and the observation, the study also presents an initial development for a localization treatment that allows one to limit the impact of observations to areas close to the observation and in the same hydrological network. This assimilation platform is based on the ensemble Kalman filter and can correct either the CTRIP river water storage or the discharge. Root mean square error (RMSE) compared to gauge discharges is reduced globally by up to 21 % and at Óbidos, near the outlet, RMSE is reduced by up to 52 % compared to ENVISAT-based discharge. Finally, it is shown that localization improves results along the main tributaries.
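A minimal stochastic EnKF analysis step with distance-based localization can illustrate the idea, as a sketch rather than the paper's platform: the Gaussian taper, the 1D "river" geometry, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(X, y_obs, obs_idx, obs_var, loc_radius, coords):
    """Stochastic EnKF analysis for one scalar discharge observation.

    X: (n_state, n_ens) ensemble of discharges; obs_idx: observed reach.
    Localization tapers the gain with a Gaussian function of distance.
    """
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)
    Hx = X[obs_idx]                              # H picks one state variable
    cov_xy = A @ (Hx - Hx.mean()) / (m - 1)      # state-obs covariances
    var_y = np.var(Hx, ddof=1) + obs_var
    rho = np.exp(-0.5 * ((coords - coords[obs_idx]) / loc_radius) ** 2)
    K = rho * cov_xy / var_y                     # localized Kalman gain
    y_pert = y_obs + np.sqrt(obs_var) * rng.standard_normal(m)
    return X + np.outer(K, y_pert - Hx)

# Toy river: 20 reaches, 50-member ensemble of discharge states
coords = np.arange(20.0)
X = 100.0 + 10.0 * rng.standard_normal((20, 50))
Xa = enkf_update(X, y_obs=120.0, obs_idx=10, obs_var=4.0,
                 loc_radius=3.0, coords=coords)
print(Xa[10].mean())
```

The observed reach is pulled strongly toward the observation while distant reaches are nearly untouched, which is the effect the paper's hydrological-network localization is designed to enforce.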
Order reduction for a model of marine bacteriophage evolution
NASA Astrophysics Data System (ADS)
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult. Reducing the order of a model is highly desirable when handling such a model. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time scales separation technique. Constructing the so-called quasi-steady-state approximation is the usual first step in applying the technique. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative, but not quantitative, fit.
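The quasi-steady-state construction can be illustrated on a generic slow-fast system (not the Beretta-Kuang model): the fast equation is replaced by its steady state, and the reduced model then tracks the slow variable to O(eps). The initial condition is chosen on the slow manifold.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3   # ratio of fast to slow time scales (assumed)

# Slow-fast system: x is slow; y is fast and relaxes to y = x^2
def full(t, z):
    x, y = z
    return [-x + y, (x**2 - y) / eps]

# QSSA reduction: set the fast equation to steady state, y = x^2
def reduced(t, z):
    x = z[0]
    return [-x + x**2]

t_eval = np.linspace(0, 5, 100)
zf = solve_ivp(full, (0, 5), [0.5, 0.25], t_eval=t_eval,
               method="LSODA", rtol=1e-8, atol=1e-10)
zr = solve_ivp(reduced, (0, 5), [0.5], t_eval=t_eval,
               rtol=1e-8, atol=1e-10)
err = np.max(np.abs(zf.y[0] - zr.y[0]))
print(err)
```

For this well-behaved example the reduction is quantitatively accurate; the paper's point is precisely that for the bacteriophage model the same straightforward construction yields only a qualitative fit.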
A model of Beijing drivers' scrambling behaviors.
Shi, Jing; Bai, Yun; Tao, Li; Atchley, Paul
2011-07-01
A major, but unstudied, cause of crashes in China is drivers that "scramble" to gain the right of way in violation of traffic regulations. The motivation of this study is to explore the features of drivers' scrambling behaviors and the attitudes and driving skills that influence them. In this study, we established a scrambling behavior scale, and developed a driving attitude scale and a driving skill scale using factor analysis of an Internet survey of 486 drivers in Beijing. A structural equation model of scrambling behavior toward cars and pedestrians/cyclists was developed with attitudes and skills as predictors of behavior. Skills and attitudes of approval toward violations of traffic rules did not predict scrambling behaviors, while the motivation for safety and attitudes against violating traffic rules led to reduced scrambling behaviors. The current work highlights this peculiar aspect of Chinese roads and suggests methods to reduce the behavior.
Online Knowledge-Based Model for Big Data Topic Extraction.
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
2016-01-01
Lifelong machine learning (LML) models learn with experience, maintaining a knowledge-base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost to half.
Some aspects of control of a large-scale dynamic system
NASA Technical Reports Server (NTRS)
Aoki, M.
1975-01-01
Techniques of predicting and/or controlling the dynamic behavior of large scale systems are discussed in terms of decentralized decision making. Topics discussed include: (1) control of large scale systems by dynamic team with delayed information sharing; (2) dynamic resource allocation problems by a team (hierarchical structure with a coordinator); and (3) some problems related to the construction of a model of reduced dimension.
ERIC Educational Resources Information Center
Ebesutani, Chad; Reise, Steven P.; Chorpita, Bruce F.; Ale, Chelsea; Regan, Jennifer; Young, John; Higa-McMillan, Charmaine; Weisz, John R.
2012-01-01
Using a school-based (N = 1,060) and clinic-referred (N = 303) youth sample, the authors developed a 25-item shortened version of the Revised Child Anxiety and Depression Scale (RCADS) using Schmid-Leiman exploratory bifactor analysis to reduce client burden and administration time and thus improve the transportability characteristics of this…
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grids in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
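The first-order multiscale idea that this paper generalizes can be sketched in 1D for a steady elliptic problem (the high-order, time-domain acoustic case is not shown): multiscale basis functions are built from local problems (a φ')' = 0, which in 1D are proportional to the cumulative integral of 1/a, and the fine-scale operator is Galerkin-projected onto that basis. The coefficient and mesh sizes are assumed values.

```python
import numpy as np

# Fine-scale problem: -(a(x) u')' = 1 on (0,1), u(0) = u(1) = 0,
# with a rough, oscillatory coefficient a(x)
Nc, s = 8, 64                 # coarse elements; fine cells per coarse element
nf = Nc * s
h = 1.0 / nf
xm = (np.arange(nf) + 0.5) * h            # fine-cell midpoints
a = 1.0 + 0.8 * np.sin(2 * np.pi * xm / 0.013)

# Fine P1 stiffness (interior nodes) and load for f = 1
main = (a[:-1] + a[1:]) / h
off = -a[1:-1] / h
Af = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
bf = np.full(nf - 1, h)
uf = np.linalg.solve(Af, bf)              # reference fine-scale solution

# First-order multiscale basis: local solutions of (a phi')' = 0
P = np.zeros((nf - 1, Nc - 1))            # fine interior nodes x coarse nodes
for j in range(1, Nc):
    e = j - 1                             # left element: phi rises 0 -> 1
    cum = np.concatenate(([0.0], np.cumsum(1.0 / a[e*s:(e+1)*s])))
    for t in range(1, s + 1):
        P[e*s + t - 1, j - 1] = cum[t] / cum[-1]
    e = j                                 # right element: phi falls 1 -> 0
    cum = np.concatenate(([0.0], np.cumsum(1.0 / a[e*s:(e+1)*s])))
    for t in range(1, s):
        P[e*s + t - 1, j - 1] = 1.0 - cum[t] / cum[-1]

# Galerkin projection onto the multiscale space, then prolongate back
Ac = P.T @ Af @ P
bc = P.T @ bf
uc = P @ np.linalg.solve(Ac, bc)
err = np.linalg.norm(uc - uf) / np.linalg.norm(uf)
print(err)
```

The coarse system is only (Nc-1) x (Nc-1), yet because the basis encodes the fine-scale coefficient, the error stays small where ordinary linear hat functions on the coarse mesh would fail; the paper's high-order bases sharpen exactly this trade-off.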
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-07-01
Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, biases are re-introduced at finer spatial scales by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages especially in the simulation of particularly low and high downscaled precipitation amounts.
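The bias-correction half of BCSD, and the rank-based viewpoint, can be sketched with empirical quantile mapping on synthetic data (gamma distributions are assumed stand-ins for precipitation; this is not the paper's full rank-BCSD procedure):

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical quantile mapping: map `values` from the model's
    distribution onto the observed distribution, preserving ranks."""
    mq = np.sort(model)
    oq = np.sort(obs)
    ranks = np.searchsorted(mq, values, side="right") / len(mq)
    return np.quantile(oq, np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 3.0, size=5000)                  # "fine-scale" obs
model = 2.0 * rng.gamma(2.0, 3.0, size=5000) + 1.0    # biased "GCM" output
corrected = quantile_map(model, obs, model)
print(corrected.mean(), obs.mean())
```

Because each model value is mapped through its rank, the corrected series inherits the observed distribution, including the tails, rather than just a corrected mean; modelling the disaggregation anomalies in rank-space applies the same logic one step later in the BCSD chain.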
Improved simulation of tropospheric ozone by a global-multi-regional two-way coupling model system
NASA Astrophysics Data System (ADS)
Yan, Y.-Y.; Lin, J.-T.; Chen, J.; Hu, L.
2015-09-01
Small-scale nonlinear chemical and physical processes over pollution source regions affect the global ozone (O3) chemistry, but these processes are not captured by current global chemical transport models (CTMs) and chemistry-climate models that are limited by coarse horizontal resolutions (100-500 km, typically 200 km). These models tend to contain large (and mostly positive) tropospheric O3 biases in the Northern Hemisphere. Here we use a recently built two-way coupling system of the GEOS-Chem CTM to simulate the global tropospheric O3 in 2009. The system couples the global model (at 2.5° long. × 2° lat.) and its three nested models (at 0.667° long. × 0.5° lat.) covering Asia, North America and Europe, respectively. Benefiting from the high resolution, the nested models better capture small-scale processes than the global model alone. In the coupling system, the nested models provide results to modify the global model simulation within respective nested domains while taking the lateral boundary conditions from the global model. Due to the "coupling" effects, the two-way system significantly improves the tropospheric O3 simulation upon the global model alone, as found by comparisons with a suite of ground (1420 sites from WDCGG, GMD, EMEP, and AQS), aircraft (HIPPO and MOZAIC), and satellite measurements (two OMI products). Compared to the global model alone, the two-way coupled simulation enhances the correlation in day-to-day variation of afternoon mean O3 with the ground measurements from 0.53 to 0.68, and it reduces the mean model bias from 10.8 to 6.7 ppb in annual average afternoon O3. Regionally, the coupled model reduces the bias by 4.6 ppb over Europe, 3.9 ppb over North America, and 3.1 ppb over other regions. The two-way coupling brings O3 vertical profiles much closer to the HIPPO (for remote areas) and MOZAIC (for polluted regions) data, reducing the tropospheric (0-9 km) mean bias by 3-10 ppb at most MOZAIC sites and by 5.3 ppb for HIPPO profiles. 
The two-way coupled simulation also reduces the global tropospheric column ozone by 3.0 DU (9.5 %, annual mean), bringing it closer to the OMI data in all seasons. Simulation improvements are more significant in the Northern Hemisphere, and are primarily a result of improved representation of urban-rural contrast and other small-scale processes. The two-way coupled simulation also reduces the global tropospheric mean hydroxyl radical by 5 % with enhancements by 5 % in the lifetimes of methyl chloroform (from 5.58 to 5.87 yr) and methane (from 9.63 to 10.12 yr), bringing them closer to observation-based estimates. Improving model representations of small-scale processes is a critical step toward understanding the global tropospheric chemistry.
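The coupling logic described above (the nested model takes lateral boundary conditions from the global model, and its solution is fed back to overwrite the global solution inside the nest) can be sketched with a toy 1-D diffusion problem. The grids, refinement factor, and solver are illustrative assumptions, not the GEOS-Chem coupling scheme itself:

```python
# Toy two-way nesting for 1-D diffusion: a coarse grid covers the domain and
# a 3x-refined nest covers its middle third. Each coupling step, the nest
# takes boundary values from the coarse solution, then its cell averages
# overwrite the coarse cells inside the nested domain.

def diffuse(u, r):
    # one explicit step of u_t = k*u_xx with endpoints held fixed (r = k*dt/dx^2)
    v = u[:]
    for i in range(1, len(u) - 1):
        v[i] = u[i] + r * (u[i-1] - 2.0*u[i] + u[i+1])
    return v

NC, REF = 9, 3                       # coarse cells; nest refinement factor
nest = range(3, 6)                   # coarse indices covered by the nest
coarse = [1.0 if i == 4 else 0.0 for i in range(NC)]      # initial "plume"
# fine grid: one ghost cell on each side plus REF cells per nested coarse cell
fine = [0.0] + [coarse[3 + j // REF] for j in range(3 * REF)] + [0.0]

for step in range(20):
    coarse = diffuse(coarse, 0.2)
    # lateral boundary conditions for the nest come from the coarse solution
    fine[0], fine[-1] = coarse[2], coarse[6]
    for _ in range(REF):             # sub-steps on the finer grid (toy numbers)
        fine = diffuse(fine, 0.2)
    # two-way feedback: nest averages overwrite the coarse cells they cover
    for k, i in enumerate(nest):
        coarse[i] = sum(fine[1 + k*REF : 1 + (k+1)*REF]) / REF
```

The feedback step is what makes the coupling "two-way": without the final loop, the nest would be a conventional one-way nested model.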
NASA Astrophysics Data System (ADS)
Bossuyt, Juliaan; Howland, Michael F.; Meneveau, Charles; Meyers, Johan
2017-01-01
Unsteady loading and spatiotemporal characteristics of power output are measured in a wind tunnel experiment of a microscale wind farm model with 100 porous disk models. The model wind farm is placed in a scaled turbulent boundary layer, and six different layouts, varied from aligned to staggered, are considered. The measurements are done by making use of a specially designed small-scale porous disk model, instrumented with strain gages. The frequency response of the measurements goes up to the natural frequency of the model, which corresponds to a reduced frequency of 0.6 when normalized by the diameter and the mean hub height velocity. The equivalent range of timescales, scaled to field-scale values, is 15 s and longer. The accuracy and limitations of the acquisition technique are documented and verified with hot-wire measurements. The spatiotemporal measurement capabilities of the experimental setup are used to study the cross-correlation in the power output of various porous disk models of wind turbines. A significant correlation is confirmed between streamwise aligned models, while staggered models show an anti-correlation.
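The cross-correlation statistic used to compare power output between turbine models can be sketched as a plain Pearson correlation at a given time lag. The series below are synthetic, not the wind-tunnel measurements:

```python
# Pearson cross-correlation of two power time series at integer lag:
# corr(x[t], y[t + lag]); values near +1 indicate the downstream signal
# follows the upstream one, negative values indicate anti-correlation.

def xcorr(x, y, lag):
    xs = x[:len(x) - lag] if lag > 0 else x[-lag:]
    ys = y[lag:] if lag > 0 else y[:len(y) + lag]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

upstream = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
downstream = [0.0, 0.0] + upstream[:-2]   # hypothetical two-step advection delay
lagged = xcorr(upstream, downstream, 2)   # near 1.0 at the convective lag
```

For streamwise-aligned turbines the peak correlation appears at a positive lag set by the convection time between rows, which is the behavior the experiment quantifies.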
Ascarrunz, F G; Kisley, M A; Flach, K A; Hamilton, R W; MacGregor, R J
1995-07-01
This paper applies a general mathematical system for characterizing and scaling functional connectivity and information flow across the diffuse (EC) and discrete (DG) input junctions to the CA3 hippocampus. Both gross connectivity and coordinated multiunit informational firing patterns are quantitatively characterized in terms of 32 defining parameters interrelated by 17 equations, and then scaled down according to rules for uniformly proportional scaling and for partial representation. The diffuse EC-CA3 junction is shown to be uniformly scalable with realistic representation of both essential spatiotemporal cooperativity and coordinated firing patterns down to populations of a few hundred neurons. Scaling of the discrete DG-CA3 junction can be effected with a two-step process, which necessarily deviates from uniform proportionality but nonetheless produces a valuable and readily interpretable reduced model, also utilizing a few hundred neurons in the receiving population. Partial representation produces a reduced model of only a portion of the full network where each model neuron corresponds directly to a biological neuron. The mathematical analysis illustrated here shows that although omissions and distortions are inescapable in such an application, satisfactorily complete and accurate models the size of pattern modules are possible. Finally, the mathematical characterization of these junctions generates a theory which sees the DG as a definer of the fine structure of embedded traces in the hippocampus and entire coordinated patterns of sequences of 14-cell links in CA3 as triggered by the firing of sequences of individual neurons in DG.
USDA-ARS?s Scientific Manuscript database
Streambank stabilization techniques are often implemented to reduce sediment loads from unstable streambanks. Process-based models can predict sediment yields with stabilization scenarios prior to implementation. However, a framework does not exist on how to effectively utilize these models to evalu...
NASA Technical Reports Server (NTRS)
Booth, Earl R., Jr.; Coston, Calvin W., Jr.
2005-01-01
Tests were performed on a 1/20th-scale model of the Low Speed Aeroacoustic Wind Tunnel to determine the performance effects of insertion of acoustic baffles in the tunnel inlet, replacement of the existing collector with a new collector design in the open jet test section, and addition of flow splitters to the acoustic baffle section downstream of the test section. As expected, the inlet baffles caused a reduction in facility performance. About half of the performance loss was recovered by the addition of flow splitters to the downstream baffles. All collectors tested reduced facility performance. However, test chamber recirculation flow was reduced by the new collector designs, and shielding of some of the microphones was reduced owing to the smaller size of the new collector. Overall performance loss in the facility is expected to be a 5 percent reduction in top flow speed, but the facility will meet OSHA limits for external noise levels, and recirculation in the test section will be reduced.
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
Covic, Tanya; Pallant, Julie F; Tennant, Alan; Cox, Sally; Emery, Paul; Conaghan, Philip G
2009-01-01
Background: Depression is common in rheumatoid arthritis (RA); however, reported prevalence varies considerably. Two frequently used instruments to identify depression are the Center for Epidemiological Studies Depression (CES-D) scale and the Hospital Anxiety and Depression Scale (HADS). The objectives of this study were to test whether the CES-D and HADS-D (a) satisfy current modern psychometric standards for unidimensional measurement in an early RA sample; (b) measure the same construct (i.e. depression); and (c) identify similar levels of depression. Methods: Data from the two scales completed by patients with early RA were fitted to the Rasch measurement model to show that (a) each scale satisfies the criteria of fit to the model, including strict unidimensionality; (b) the scales can be co-calibrated onto a single underlying continuum of depression; and (c) to examine the location of the cut points on the underlying continuum as an indication of the prevalence of depression. Results: Ninety-two patients with early RA (62% female; mean age = 56.3, SD = 13.7) gave 141 sets of paired CES-D and HADS-D data. Fit of the data from the CES-D was found to be poor, and the scale had to be reduced to 13 items to satisfy Rasch measurement criteria, whereas the HADS-D met model expectations from the outset. The 20 items combined (CES-D13 and HADS-D) satisfied Rasch model expectations. The CES-D gave a much higher prevalence of depression than the HADS-D. Conclusion: The CES-D in its present form is unsuitable for use in patients with early RA, and needs to be reduced to a 13-item scale. The HADS-D is valid for early RA, and the two scales measure the same underlying construct, but their cut points lead to different estimates of the level of depression. Revised cut points on the CES-D13 provide comparative prevalence rates. PMID:19200388
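The Rasch model underlying this co-calibration places persons and items on one shared continuum. The CES-D and HADS-D items are polytomous, so the actual analysis uses a polytomous Rasch formulation; the dichotomous sketch below, with hypothetical item locations, carries the core idea of a common metric:

```python
import math

# Dichotomous Rasch model: P(endorse) = 1 / (1 + exp(-(theta - b))), where
# theta is the person's location and b the item's location on the same
# underlying continuum -- this shared metric is what allows items from two
# different scales to be co-calibrated.

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, item_locations):
    # expected raw score over a set of items for a person at location theta
    return sum(rasch_p(theta, b) for b in item_locations)

items = [-1.5, -0.5, 0.0, 0.8, 2.0]   # hypothetical item difficulties (logits)
score = expected_score(0.0, items)
```

Once both scales' items sit on this common logit scale, each instrument's cut point maps to a location on the continuum, and differences between those locations explain the differing prevalence estimates.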
General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae).
Camacho, Ernesto Robayo; Chong, Juang-Horng
We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance.
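A degree-day model of the kind referenced for timing crawler emergence can be sketched as follows; the base temperature and emergence threshold below are hypothetical placeholders, not values from the review:

```python
# Simple-average degree-day accumulation: each day contributes
# max(0, (Tmax + Tmin)/2 - Tbase); crawler emergence is predicted when the
# cumulative total crosses a species-specific threshold, marking the window
# for contact insecticide applications.

def degree_days(tmax, tmin, tbase):
    return max(0.0, (tmax + tmin) / 2.0 - tbase)

def emergence_day(daily_temps, tbase, threshold):
    total = 0.0
    for day, (tmax, tmin) in enumerate(daily_temps, start=1):
        total += degree_days(tmax, tmin, tbase)
        if total >= threshold:
            return day          # first day the threshold is crossed
    return None                 # threshold not reached this season

# hypothetical spring warm-up: (Tmax, Tmin) pairs in degrees C
season = [(12, 4), (14, 6), (16, 8), (18, 10), (20, 12), (22, 14)]
day = emergence_day(season, tbase=10.0, threshold=12.0)
```

Plant phenological indicators serve the same purpose as the threshold here: both are proxies for the accumulated heat that drives crawler development.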
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
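The core of such a linear allocation model is ranking programs by health benefit per dollar and funding them in that order up to each program's scale-up cap. The sketch below uses hypothetical costs and effectiveness rates, not the paper's calibrated values, and omits the diseconomies-of-scale and subadditivity terms:

```python
# Greedy allocation for a linear cost-effectiveness model: fund programs in
# order of QALYs gained per dollar until the budget is exhausted, respecting
# each program's maximum absorbable spend (a stand-in for scale-up limits).

def allocate(budget, programs):
    # programs: name -> (qalys_per_dollar, max_spend)
    plan = {}
    remaining = budget
    ranked = sorted(programs, key=lambda p: programs[p][0], reverse=True)
    for name in ranked:
        rate, cap = programs[name]
        spend = min(cap, remaining)
        plan[name] = spend
        remaining -= spend
    return plan

programs = {                      # hypothetical values (USD, QALYs/USD)
    "CBE":  (0.004, 5e6),         # community-based education
    "ART":  (0.001, 40e6),        # antiretroviral therapy scale-up
    "PrEP": (0.0002, 30e6),       # preexposure prophylaxis
}
plan = allocate(30e6, programs)   # CBE funded first, then ART, then PrEP
```

In the full model, diseconomies of scale make the effectiveness rate decline with spending level, which can lower the optimal investment in a program below its cap; a pure greedy pass does not capture that effect.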
NASA Astrophysics Data System (ADS)
Laleian, A.; Valocchi, A. J.; Werth, C. J.
2017-12-01
Multiscale models of reactive transport in porous media are capable of capturing complex pore-scale processes while leveraging the efficiency of continuum-scale models. In particular, porosity changes caused by biofilm development yield complex feedbacks between transport and reaction that are difficult to quantify at the continuum scale. Pore-scale models, needed to accurately resolve these dynamics, are often impractical for applications due to their computational cost. To address this challenge, we are developing a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled with a mortar method providing continuity at interfaces. We explore two decompositions of coupled pore-scale and continuum-scale regions to study biofilm growth in a transverse mixing zone. In the first decomposition, all reaction is confined to a pore-scale region extending the transverse mixing zone length. Only solute transport occurs in the surrounding continuum-scale regions. Relative to a fully pore-scale result, we find the multiscale model with this decomposition has a reduced run time and consistent result in terms of biofilm growth and solute utilization. In the second decomposition, reaction occurs in both an up-gradient pore-scale region and a down-gradient continuum-scale region. To quantify clogging, the continuum-scale model implements empirical relations between porosity and continuum-scale parameters, such as permeability and the transverse dispersion coefficient. Solutes are sufficiently mixed at the end of the pore-scale region, such that the initial reaction rate is accurately computed using averaged concentrations in the continuum-scale region. Relative to a fully pore-scale result, we find accuracy of biomass growth in the multiscale model with this decomposition improves as the interface between pore-scale and continuum-scale regions moves downgradient where transverse mixing is more fully developed. 
Also, this decomposition poses additional challenges with respect to mortar coupling. We explore these challenges and potential solutions. While recent work has demonstrated growing interest in multiscale models, further development is needed for their application to field-scale subsurface contaminant transport and remediation.
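The mortar idea of enforcing continuity at subdomain interfaces can be illustrated with a toy 1-D steady diffusion problem; this is not the authors' formulation, which couples 2-D pore-scale and continuum-scale codes, but it shows the interface iteration in miniature:

```python
# Mortar-style coupling sketch: two subdomain "solvers" (here, exact 1-D
# steady diffusion on [0, 0.5] and [0.5, 1] with different diffusivities)
# exchange only interface data. The mortar condition enforces flux
# continuity at x = 0.5 by iterating on the shared interface value g.

def left_flux(g, k_left=2.0, u_left=1.0):
    # outflow flux of the left subdomain when its right boundary is held at g
    return k_left * (u_left - g) / 0.5

def right_flux(g, k_right=1.0, u_right=0.0):
    # inflow flux required by the right subdomain with left boundary g
    return k_right * (g - u_right) / 0.5

def mortar_solve(tol=1e-12):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        g = 0.5 * (lo + hi)
        if left_flux(g) > right_flux(g):
            lo = g          # interface value too low: left supplies excess flux
        else:
            hi = g
    return 0.5 * (lo + hi)

g = mortar_solve()          # analytic answer: k_left / (k_left + k_right) = 2/3
```

Each subdomain solver is treated as a black box that reports its interface flux; the mortar layer only adjusts the shared interface value until the two fluxes agree, which is what makes the approach attractive for coupling solvers at different scales.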
Modeling urban building energy use: A review of modeling approaches and procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wenliang; Zhou, Yuyu; Cetin, Kristen
With rapid urbanization and economic development, the world has been experiencing an unprecedented increase in energy consumption and greenhouse gas (GHG) emissions. While reducing energy consumption and GHG emissions is a common interest shared by major developed and developing countries, actions to enable these global reductions are generally implemented at the city scale. This is because baseline information from individual cities plays an important role in identifying economical options for improving building energy efficiency and reducing GHG emissions. Numerous approaches have been proposed for modeling urban building energy use in the past decades. This paper aims to provide an up-to-date review of the broad categories of energy models for urban buildings and describes the basic workflow of physics-based, bottom-up models and their applications in simulating urban-scale building energy use. Because there are significant differences across models with varied potential for application, strengths and weaknesses of the reviewed models are also presented. This is followed by a discussion of challenging issues associated with model preparation and calibration.
Modeling urban building energy use: A review of modeling approaches and procedures
Li, Wenliang; Zhou, Yuyu; Cetin, Kristen; ...
2017-11-13
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. 
Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.
2012-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with.
We are deploying within SciReduce a versatile set of python operators for data lookup, access, subsetting, co-registration, mining, fusion, and statistical analysis. All operators take in sets of geo-located arrays and generate more arrays. Large, multi-year satellite and model datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of granules) can be compared or fused in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP or webification URLs, thereby minimizing the size of the stored input and intermediate datasets. A typical map function might assemble and quality control AIRS Level-2 water vapor profiles for a year of data in parallel, then a reduce function would average the profiles in lat/lon bins (again, in parallel), and a final reduce would aggregate the climatology and write it to output files. We are using SciReduce to automate the production of multiple versions of a multi-year water vapor climatology (AIRS & MODIS), stratified by Cloudsat cloud classification, and compare it to models (ECMWF & MERRA reanalysis). We will present the architecture of SciReduce, describe the achieved "clock time" speedups in fusing huge datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer.
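The map/reduce pattern described above (parallel per-granule assembly, then binned averaging) can be sketched in plain Python. The granule structure and bin scheme below are illustrative, not SciReduce's actual API:

```python
# Map/reduce sketch of a binned climatology: "map" emits (bin, value) pairs
# per granule; "reduce" averages each bin's values. In a SciReduce-style
# system the same two phases run across cluster nodes; here they run
# in-process over toy (latitude, retrieval) observations.

from collections import defaultdict

def map_granule(granule):
    # emit (latitude-band, retrieval) pairs from one granule of (lat, value) obs
    for lat, value in granule:
        yield (int(lat // 10) * 10, value)   # 10-degree latitude bins

def reduce_bins(pairs):
    sums = defaultdict(lambda: [0.0, 0])
    for key, value in pairs:
        sums[key][0] += value
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

granules = [
    [(12.0, 1.0), (14.0, 3.0)],     # granule 1
    [(15.0, 5.0), (27.0, 2.0)],     # granule 2
]
pairs = [pair for g in granules for pair in map_granule(g)]
climatology = reduce_bins(pairs)
```

The final reduce in the text plays the role of `reduce_bins` here, aggregating the per-bin partial results into the output climatology.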
Large-scale structure in superfluid Chaplygin gas cosmology
NASA Astrophysics Data System (ADS)
Yang, Rongjia
2014-03-01
We investigate the growth of the large-scale structure in the superfluid Chaplygin gas (SCG) model. Both linear and nonlinear growth, such as σ8 and the skewness S3, are discussed. We find the growth factor of SCG reduces to the Einstein-de Sitter case at early times, while it differs from the cosmological constant model (ΛCDM) case in the large-a limit. We also find there will be more structure growth on large scales in the SCG scenario than in ΛCDM, and the variations of σ8 and S3 between SCG and ΛCDM cannot be distinguished.
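For a flat model with matter and a cosmological constant, the linear growth factor being compared here admits the standard integral solution D(a) ∝ H(a) ∫ da'/(a'H(a'))³. The numerical sketch below is illustrative of that baseline only; the SCG computation requires the SCG expansion history instead:

```python
import math

# Linear growth factor D(a) for flat matter + Lambda cosmologies via the
# integral solution D(a) ∝ H(a) * ∫_0^a da' / (a' H(a'))^3, normalized so
# that D(a) -> a in the matter-dominated (Einstein-de Sitter) limit.

def hubble(a, om, ol):
    return math.sqrt(om / a**3 + ol)        # H(a)/H0 for a flat universe

def growth_factor(a, om, ol, n=20000):
    da = a / n
    integral = sum(da / (ap * hubble(ap, om, ol))**3
                   for ap in (da * (i + 0.5) for i in range(n)))
    return 2.5 * om * hubble(a, om, ol) * integral

d_eds = growth_factor(0.5, 1.0, 0.0)        # EdS: D(a) = a, so ~0.5
d_lcdm = growth_factor(0.5, 0.3, 0.7)       # Lambda suppresses late growth
```

The EdS limit recovering D(a) = a is the "early times" behavior cited in the abstract; the departure of the ΛCDM value from the EdS one illustrates the kind of late-time difference the SCG comparison probes.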
NASA Astrophysics Data System (ADS)
Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.
2013-09-01
Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically derived formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations.
By modifying the reaction stoichiometry and biomass yield coefficient, we were able to fit an equivalent Monod model that effectively matched results of the genome-scale simulation of microbial behavior under excess nutrient conditions; however, predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. They also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
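The Monod-type lumped formulation that the genome-scale model is compared against can be sketched generically. The following illustrative dual-Monod rate law (function and parameter names are assumptions for illustration, not the paper's calibrated values) shows why rates saturate under excess nutrients and are throttled when one substrate becomes limiting:

```python
def dual_monod_rate(r_max, acetate, iron, k_acetate, k_iron):
    """Generic dual-Monod kinetics: the maximum rate is scaled by one
    saturating term per substrate (electron donor and electron acceptor)."""
    return r_max * (acetate / (k_acetate + acetate)) * (iron / (k_iron + iron))

# Under excess nutrients both terms approach 1 and the rate approaches r_max;
# when one substrate is scarce, its term throttles the overall rate.
excess = dual_monod_rate(1.0, acetate=10.0, iron=10.0, k_acetate=0.01, k_iron=0.01)
limited = dual_monod_rate(1.0, acetate=0.01, iron=10.0, k_acetate=0.01, k_iron=0.01)
```

When a substrate sits exactly at its half-saturation constant, its term contributes a factor of one half, which is the regime where the fitted Monod model and the genome-scale model can diverge.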
NASA Astrophysics Data System (ADS)
Scheibe, T. D.; Tartakovsky, G.; Tartakovsky, A. M.; Fang, Y.; Mahadevan, R.; Lovley, D. R.
2012-12-01
Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations.
By modifying the reaction stoichiometry and biomass yield coefficient, we were able to fit an equivalent Monod model that effectively matched results of the genome-scale simulation of microbial behavior under excess nutrient conditions; however, predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. They also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2013-09-07
Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations.
By modifying the reaction stoichiometry and biomass yield coefficient, we were able to fit an equivalent Monod model that effectively matched results of the genome-scale simulation of microbial behavior under excess nutrient conditions; however, predictions of the fitted Monod model deviated from those of the genome-scale model under conditions in which one or more nutrients were limiting. The fitted Monod kinetic model was also applied at the Darcy scale; that is, to simulate average reaction processes at the scale of the entire pore-scale model domain. As we expected, even under excess nutrient conditions for which the Monod and genome-scale models predicted equal reaction rates at the pore scale, the Monod model over-predicted the rates of biomass growth and iron and acetate utilization when applied at the Darcy scale. This discrepancy is caused by an inherent assumption of perfect mixing over the Darcy-scale domain, which is clearly violated in the pore-scale models. These results help to explain the need to modify the flux constraint parameters in order to match observations in previous applications of the genome-scale model at larger scales. They also motivate further investigation of quantitative multi-scale relationships between fundamental behavior at the pore scale (where genome-scale models are appropriately applied) and observed behavior at larger scales (where predictions of reactive transport phenomena are needed).
Inverse finite-size scaling for high-dimensional significance analysis
NASA Astrophysics Data System (ADS)
Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki
2018-06-01
We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from surrogate data of much smaller scale than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale and that fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation of variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that under some conditions there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum for a fish population distributed among both small and large clusters of schools.
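The dome-shaped yield-effort relationship implied by such an aggregated model can be illustrated with a generic Gordon-Schaefer-type sketch (a stand-in for intuition only, not the authors' five-equation, three-time-scale system; parameter values are invented):

```python
def equilibrium_yield(effort, r, K, q):
    """Sustainable yield Y(E) = q*E*n* at the logistic equilibrium
    n* = K*(1 - q*E/r) of the slow dynamics dn/dt = r*n*(1 - n/K) - q*E*n."""
    stock = K * (1.0 - q * effort / r)
    return q * effort * stock if stock > 0 else 0.0

# Yield peaks at intermediate effort E = r/(2q) (here 2.5 for r=1, q=0.2),
# giving the maximum sustainable yield r*K/4; beyond E = r/q the stock collapses.
peak = equilibrium_yield(2.5, r=1.0, K=100.0, q=0.2)
```

The aggregated equilibrium analysis in the paper plays the same role: it identifies conditions under which a nonzero sustainable equilibrium exists.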
NASA Astrophysics Data System (ADS)
Alligné, S.; Maruzewski, P.; Dinh, T.; Wang, B.; Fedorov, A.; Iosfin, J.; Avellan, F.
2010-08-01
The growing development of renewable energies, combined with the process of privatization, has led to a change in economic energy-market strategies. Instantaneous pricing of electricity as a function of demand or forecasts induces profitable peak production, which is mainly covered by hydroelectric power plants. Therefore, operators harness more hydroelectric facilities at full-load operating conditions. However, the Francis turbine features an axi-symmetric vortex rope leaving the runner which may act under certain conditions as an internal energy source leading to instability. Undesired power and pressure fluctuations are induced which may limit the maximum available power output. BC Hydro experiences such constraints in a hydroelectric power plant consisting of four 435 MW Francis turbine generating units, located in Canada's province of British Columbia. Under specific full-load operating conditions, one unit experiences power and pressure fluctuations at 0.46 Hz. The aim of the paper is to present a methodology allowing prediction of this prototype instability frequency from investigations on the reduced scale model. A new hydroacoustic vortex rope model has been developed in the SIMSEN software, taking into account the energy dissipation due to the thermodynamic exchange between the gas and the surrounding liquid. A combination of measurements, CFD simulations and computation of eigenmodes of the reduced scale model installed on the test rig allows accurate calibration of the vortex rope model parameters at the model scale. Then, transposition of the parameters to the prototype according to similitude laws is applied and a stability analysis of the power plant is performed. The eigenfrequency of 0.39 Hz related to the first eigenmode of the power plant is determined to be unstable. The predicted frequency of the full-load power and pressure fluctuations at the unit's unstable operating point is found to be in general agreement with the prototype measurements.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedures implemented in MATLAB to simulate agent-based models, and are run on computing clusters that provide high-performance parallel computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
NASA Astrophysics Data System (ADS)
Yang, Guijun; Yang, Hao; Jin, Xiuliang; Pignatti, Stefano; Casa, Raffaele; Silvestro, Paolo Cosmo
2016-08-01
Drought is among the most costly natural disasters in China and worldwide. It is very important to evaluate drought-induced crop yield losses and further improve water use efficiency at the regional scale. Firstly, crop biomass was estimated by the combined use of Synthetic Aperture Radar (SAR) and optical remote sensing data. Then the estimated biophysical variable was assimilated into the crop growth model (FAO AquaCrop) by the Particle Swarm Optimization (PSO) method, from the farmland scale to the regional scale. At the farmland scale, the most important crop parameters of the AquaCrop model were identified in order to reduce the number of parameters used in the assimilation procedure. The Extended Fourier Amplitude Sensitivity Test (EFAST) method was used to assess the contribution of different crop parameters to the model output. Moreover, the AquaCrop model was calibrated using experimental data from Xiaotangshan, Beijing. At the regional scale, spatial application of our methods was carried out and validated in the rural area of Yangling, Shaanxi Province, in 2014. This study provides guidance for irrigation decisions that balance water consumption and yield loss.
Hybrid stochastic simplifications for multiscale gene networks
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-01-01
Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
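The exact jump-process simulation that such hybrid schemes simplify is the Gillespie stochastic simulation algorithm (SSA). A minimal sketch for a birth-death gene-expression model (the model and rate values are illustrative, not taken from the paper) shows the per-event loop whose cost the hybrid simplifications aim to reduce; the time-averaged copy number approaches the stationary mean k/gamma:

```python
import random

def ssa_birth_death(k, gamma, t_max, seed=1):
    """Exact SSA for  0 --k--> P,  P --gamma*n--> 0.
    Returns the time-averaged copy number over [0, t_max]."""
    rng = random.Random(seed)
    t, n, avg = 0.0, 0, 0.0
    while t < t_max:
        total = k + gamma * n               # total propensity
        dt = rng.expovariate(total)         # exponential waiting time to next jump
        avg += n * min(dt, t_max - t)       # time-weighted accumulation
        t += dt
        if t >= t_max:
            break
        # birth with probability k/total, death otherwise
        n += 1 if rng.random() * total < k else -1
    return avg / t_max
```

Every single molecular event costs one loop iteration here, which is exactly why replacing abundant species by continuous variables, as in the partial Kramers-Moyal expansion, pays off.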
Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics
NASA Astrophysics Data System (ADS)
Marcé, R.; Armengol, J.
2009-01-01
One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. 
Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.
Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G
2004-08-01
Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, and experimental results obtained with this model to assess the influence of furniture and people on the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects from losses of molecular diffusion, small-scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like and figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and fluctuation intensity in the measurement plane. PRACTICAL IMPLICATION: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor-level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room.
Adding furniture and occupants can increase this spatial variation by another factor of 3.
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equations and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine scale soil moisture fields across large extents, based on coarse scale observations. A likely application of this approach is generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
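The nudging idea at the core of CDA can be illustrated on a scalar toy problem (purely illustrative; neither HYDRUS nor the actual interpolant operator is reproduced, and all names and values are invented): adding a relaxation term mu*(x_obs - x) to the model tendency pulls a badly initialized model toward the observed trajectory far faster than the free-running model converges on its own.

```python
import math

def final_error(mu, x0, dt=0.01, t_end=10.0):
    """Euler-integrate a 'truth' trajectory dx/dt = -x + cos(t) and a model
    dx/dt = -x + cos(t) + mu*(x_true - x) started from a wrong state x0;
    x_true plays the role of the coarse observations. mu = 0 is the free run.
    Returns the final absolute error."""
    x_true, x = 1.0, x0
    for i in range(int(t_end / dt)):
        t = i * dt
        x_true, x = (x_true + dt * (-x_true + math.cos(t)),
                     x + dt * (-x + math.cos(t) + mu * (x_true - x)))
    return abs(x - x_true)
```

The error obeys e' = -(1 + mu)e here, so the nudged run contracts at rate 1 + mu instead of 1, which is the one-dimensional analogue of constraining only the large-scale variability by observations.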
Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.
2002-01-01
Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ştefănescu, R., E-mail: rstefane@vt.edu; Sandu, A., E-mail: sandu@cs.vt.edu; Navon, I.M., E-mail: inavon@fsu.edu
2015-08-15
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush–Kuhn–Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov–Galerkin projections. For the first time POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov–Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
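The POD step shared by these approaches amounts to a singular value decomposition of a snapshot matrix. A toy sketch (illustrative only; this is not the shallow-water system from the paper, and the snapshot family is invented) shows how a few dominant left singular vectors capture a smoothly parameterized snapshot family:

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis: the r dominant left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# Toy snapshot family: sine profiles with slowly varying frequency.
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.sin(np.pi * (1.0 + 0.02 * k) * x) for k in range(20)])

Phi = pod_basis(snaps, 3)            # rank-3 reduced basis
recon = Phi @ (Phi.T @ snaps)        # orthogonal (Galerkin-style) projection
rel_err = np.linalg.norm(snaps - recon) / np.linalg.norm(snaps)
```

The Galerkin reduced model then evolves only the 3 basis coefficients instead of the 200 grid values; DEIM plays the analogous role for the nonlinear terms, which plain POD projection alone does not compress.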
Estimating Skin Cancer Risk: Evaluating Mobile Computer-Adaptive Testing.
Djaja, Ngadiman; Janda, Monika; Olsen, Catherine M; Whiteman, David C; Chien, Tsair-Wei
2016-01-22
Response burden is a major detriment to questionnaire completion rates. Computer adaptive testing may offer advantages over non-adaptive testing, including a reduction in the number of items required for precise measurement. Our aim was to compare the efficiency of non-adaptive testing (NAT) and computer adaptive testing (CAT) facilitated by Partial Credit Model (PCM)-derived calibration to estimate skin cancer risk. We used a random sample from a population-based Australian cohort study of skin cancer risk (N=43,794). All 30 items of the skin cancer risk scale were calibrated with the Rasch PCM. A total of 1000 cases generated following a normal distribution (mean [SD] 0 [1]) were simulated using three Rasch models with three fixed-item (dichotomous, rating scale, and partial credit) scenarios, respectively. We calculated the comparative efficiency and precision of CAT and NAT (shortening of questionnaire length and the count difference number ratio less than 5% using independent t tests). We found that use of CAT led to a smaller person standard error of the estimated measure than NAT, with substantially higher efficiency but no loss of precision, reducing response burden by 48%, 66%, and 66% for the dichotomous, Rating Scale, and Partial Credit models, respectively. CAT-based administrations of the skin cancer risk scale could substantially reduce participant burden without compromising measurement precision. A mobile computer adaptive test was developed to help people efficiently assess their skin cancer risk.
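For the dichotomous case mentioned above, the CAT item-selection rule can be sketched as choosing the item with maximal Rasch information p(1-p) at the current ability estimate (a generic sketch; the study's instrument uses the polytomous Partial Credit Model, and the item bank below is hypothetical):

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the dichotomous Rasch model,
    for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a dichotomous Rasch item: p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def next_item(theta, bank):
    """Maximum-information selection: pick the item whose difficulty is
    most informative (closest to theta) from the remaining bank."""
    return max(bank, key=lambda b: item_information(theta, b))
```

Because information peaks when difficulty matches ability, each administered item is more informative than a fixed-form item on average, which is the mechanism behind the 48-66% reduction in response burden reported above.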
Wuellner, M R; Bramblett, R G; Guy, C S; Zale, A V; Roberts, D R; Johnson, J
2013-05-01
The objectives of this study were (1) to determine whether the presence or absence of prairie fishes can be modelled using habitat and biotic characteristics measured at the reach and catchment scales and (2) to identify which scale (i.e. reach, catchment or a combination of variables measured at both scales) best explains the presence or absence of fishes. Reach and catchment information from 120 sites sampled from 1999 to 2004 were incorporated into tree classifiers for 20 prairie fish species, and multiple criteria were used to evaluate models. Fewer than six models were considered significant when modelling individual fish occurrences at the reach, catchment or combined scale, and only one species was successfully modelled at all three scales. The scarcity of significant models is probably related to the rigorous criteria by which these models were evaluated as well as the prevalence of tolerant, generalist fishes in these stochastic and intermittent streams. No significant differences in the amount of reduced deviance, mean misclassification error rates (MER), and mean improvement in MER metrics were detected among the three scales. Results from this study underscore the importance of continued habitat assessment at smaller scales to further understand prairie-fish occurrences as well as further evaluations of modelling methods to examine habitat relationships for tolerant, ubiquitous species. Incorporation of such suggestions in the future may help provide more accurate models that will allow for better management and conservation of prairie-fish species. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.
Reduced-form air quality modeling for community-scale ...
Transportation plays an important role in modern society, but its impact on air quality has been shown to have significant adverse effects on public health. Numerous reviews (HEI, CDC, WHO) summarizing findings of hundreds of studies conducted mainly in the last decade conclude that exposures to traffic emissions near roads are a public health concern. The Community LINE Source Model (C-LINE) is a web-based model designed to inform the community user of local air quality impacts due to roadway vehicles in their region of interest using a simplified modeling approach. Reduced-form air quality modeling is a useful tool for examining what-if scenarios of changes in emissions, such as those due to changes in traffic volume, fleet mix, or vehicle speed. Examining various scenarios of air quality impacts in this way can identify potentially at-risk populations located near roadways, and the effects that a change in traffic activity may have on them. C-LINE computes dispersion of primary mobile source pollutants using meteorological conditions for the region of interest and computes air-quality concentrations corresponding to these selected conditions. C-LINE functionality has been expanded to model emissions from port-related activities (e.g. ships, trucks, cranes, etc.) in a reduced-form modeling system for local-scale near-port air quality analysis. This presentation describes the community modeling tools C-LINE and C-PORT that are intended to be used by local governments.
NASA Technical Reports Server (NTRS)
Dittmar, J. H.
1985-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from the peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement, as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.
Improving parallel I/O autotuning with performance modeling
Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...
2014-01-01
Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique to tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving a 54X I/O performance speedup.
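The pruning idea described here can be sketched as a surrogate-guided search: rank the full configuration space with a cheap predictive model and run the expensive benchmark only on the highest-ranked candidates. The parameter names and the synthetic response surface below are illustrative assumptions, not the paper's actual framework:

```python
import itertools

# Hypothetical tunables for a parallel I/O stack (names are illustrative):
space = {
    "stripe_count":   [4, 8, 16, 32],
    "stripe_size_mb": [1, 4, 16, 64],
    "cb_nodes":       [1, 2, 4, 8],
}

def measured_bandwidth(cfg):
    """Stand-in for an expensive I/O benchmark run (synthetic surface)."""
    sc, ss, cb = cfg["stripe_count"], cfg["stripe_size_mb"], cfg["cb_nodes"]
    return sc * ss / (1 + abs(cb - 4))

def surrogate(cfg):
    """Cheap predictive model used to rank configurations before any run."""
    return cfg["stripe_count"] * cfg["stripe_size_mb"]

configs = [dict(zip(space, v)) for v in itertools.product(*space.values())]

# Rank the full space with the surrogate, then benchmark only the top few:
top = sorted(configs, key=surrogate, reverse=True)[:4]
best = max(top, key=measured_bandwidth)
print(best, measured_bandwidth(best))
```

Only 4 of the 64 configurations are "benchmarked", which is the point: the surrogate trades a little model error for a large reduction in expensive runs.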
The potential for agricultural land use change to reduce flood risk in a large watershed
USDA-ARS?s Scientific Manuscript database
Effects of agricultural land management practices on surface runoff are evident at local scales, but evidence for watershed-scale impacts is limited. In this study, we used the Soil and Water Assessment Tool model to assess changes in downstream flood risks under different land uses for the large, ...
Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment
ERIC Educational Resources Information Center
Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas
2015-01-01
Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…
Scale Free Reduced Rank Image Analysis.
ERIC Educational Resources Information Center
Horst, Paul
In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…
Two-field analysis of no-scale supergravity inflation
Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; ...
2015-01-08
Since the building blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values << 0.1. We also calculate the non-Gaussianity measure f_NL, finding that it is well below the current experimental sensitivity.
Structure and modeling of turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novikov, E.A.
The "vortex strings" scale l_s ~ L Re^(-3/10) (L: external scale; Re: Reynolds number) is suggested as a grid scale for large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments, and infinitely divisible distributions. The major request from the energy, naval, aerospace, and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for large-eddy simulations (LES).
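The proposed grid-scale relation is easy to evaluate numerically; a quick check of l_s ~ L·Re^(-3/10) for a few Reynolds numbers (L normalized to 1):

```python
# Vortex-string grid scale l_s ~ L * Re**(-3/10) from the abstract,
# evaluated for a few Reynolds numbers with L normalized to 1.
def vortex_string_scale(L, Re):
    return L * Re ** (-3.0 / 10.0)

for Re in (1e4, 1e6, 1e8):
    print(f"Re = {Re:.0e}:  l_s/L = {vortex_string_scale(1.0, Re):.4f}")
```

The scale shrinks slowly with Re (a factor of ~4 per two decades of Re), which is what makes it attractive as an LES grid scale at high Reynolds number.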
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yongfeng
2016-09-01
U3Si2 and FeCrAl have been proposed as fuel and cladding concepts, respectively, for accident-tolerant fuels with higher tolerance to accident scenarios than UO2. However, many key physics and material properties governing their in-pile performance are yet to be explored. To accelerate understanding and reduce the cost of experimental studies, multiscale modeling and simulation are used to develop physics-based materials models that assist engineering-scale fuel performance modeling. In this report, the lower-length-scale efforts in method and material model development supported by the Accident Tolerant Fuel (ATF) high-impact problem (HIP) under the NEAMS program are summarized. Significant progress has been made on interatomic potentials, phase field models for phase decomposition and gas bubble formation, and thermal conductivity for U3Si2 fuel, and on precipitation in FeCrAl cladding. These accomplishments provide atomistic and mesoscale tools, improve the current understanding, and deliver engineering-scale models for these two ATF concepts.
Intermittency in small-scale turbulence: a velocity gradient approach
NASA Astrophysics Data System (ADS)
Meneveau, Charles; Johnson, Perry
2017-11-01
Intermittency of small-scale motions is a ubiquitous facet of turbulent flows, and predicting this phenomenon from reduced models derived from first principles remains an important open problem. Here, a multiple-time-scale stochastic model is introduced for the Lagrangian evolution of the full velocity gradient tensor in fluid turbulence at arbitrarily high Reynolds numbers. This low-dimensional model differs fundamentally from prior shell models and other empirically motivated models of intermittency because the nonlinear gradient self-stretching and rotation term A^2, vital to the energy cascade and intermittency development, is represented exactly from the Navier-Stokes equations. With only one adjustable parameter needed to determine the model's effective Reynolds number, numerical solutions of the resulting set of stochastic differential equations show that the model predicts anomalous scaling for moments of the velocity gradient components and negative derivative skewness. It also predicts signature topological features of the velocity gradient tensor, such as vorticity alignment trends with the eigendirections of the strain rate. This research was made possible by a graduate fellowship from the National Science Foundation and by a grant from The Gulf of Mexico Research Initiative.
Transition between inverse and direct energy cascades in multiscale optical turbulence.
Malkin, V M; Fisch, N J
2018-03-01
Multiscale turbulence naturally develops and plays an important role in many fluid, gas, and plasma phenomena. Statistical models of multiscale turbulence usually employ the Kolmogorov hypotheses of spectral locality of interactions (meaning that interactions occur primarily between pulsations of comparable scales) and scale invariance of turbulent pulsations. However, optical turbulence described by the nonlinear Schrödinger equation exhibits breaking of both Kolmogorov locality and scale invariance. A weaker form of spectral locality that holds for multiscale optical turbulence enables a derivation of simplified evolution equations that reduce the problem to single-scale modeling. We present the derivation of these equations for Kerr media with random inhomogeneities. We then find an analytical solution that exhibits a transition between inverse and direct energy cascades in optical turbulence.
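For reference, a standard normalized form of the nonlinear Schrödinger equation governing paraxial propagation in a Kerr medium (the abstract does not state the exact form used, so this is the textbook convention):

```latex
i\,\partial_z \psi + \nabla_\perp^2 \psi + |\psi|^2 \psi = 0
```

Here \(\psi\) is the slowly varying field envelope, \(z\) the propagation coordinate, and the cubic term is the Kerr nonlinearity; random inhomogeneities would enter as an additional potential term.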
Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in hig...
NASA Technical Reports Server (NTRS)
Lim, Young-Kwon; Stefanova, Lydia B.; Chan, Steven C.; Schubert, Siegfried D.; OBrien, James J.
2010-01-01
This study assesses the regional-scale summer precipitation produced by the dynamical downscaling of analyzed large-scale fields. The main goal of this study is to investigate how much smaller-scale precipitation information the regional model adds that the large-scale fields do not resolve. The modeling region for this study covers the southeastern United States (Florida, Georgia, Alabama, South Carolina, and North Carolina), where the summer climate is subtropical in nature, with a heavy influence of regional-scale convection. The coarse-resolution (2.5° latitude/longitude) large-scale atmospheric variables from the National Center for Environmental Prediction (NCEP)/DOE reanalysis (R2) are downscaled using the NCEP Environmental Climate Prediction Center regional spectral model (RSM) to produce precipitation at 20 km resolution for 16 summer seasons (1990-2005). The RSM produces realistic details in the regional summer precipitation at 20 km resolution. Compared to R2, the RSM-produced monthly precipitation shows better agreement with observations. There is a reduced wet bias and a more realistic spatial pattern of the precipitation climatology compared with the interpolated R2 values. The root mean square errors of the monthly R2 precipitation are reduced over 93% (1,697) of all the grid points in the five states (1,821). The temporal correlation also improves over 92% (1,675) of all grid points, such that the domain-averaged correlation increases from 0.38 (R2) to 0.55 (RSM). The RSM accurately reproduces the first two observed eigenmodes, compared with the R2 product, for which the second mode is not properly reproduced. The spatial patterns for wet versus dry summer years are also successfully simulated in RSM. For shorter time scales, the RSM resolves heavy rainfall events and their frequency better than R2.
Correlation and categorical classification (above/near/below average) for the monthly frequency of heavy precipitation days is also significantly improved by the RSM.
Online Knowledge-Based Model for Big Data Topic Extraction
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
2016-01-01
Lifelong machine learning (LML) models learn with experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while halving the processing cost. PMID:27195004
Data reduction of room tests for zone model validation
M. Janssens; H. C. Tran
1992-01-01
Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...
Program Helps Generate Boundary-Element Mathematical Models
NASA Technical Reports Server (NTRS)
Goldberg, R. K.
1995-01-01
Composite Model Generation-Boundary Element Method (COM-GEN-BEM) computer program significantly reduces time and effort needed to construct boundary-element mathematical models of continuous-fiber composite materials at the micromechanical (constituent) scale. Generates boundary-element models compatible with the BEST-CMS boundary-element code for analysis of the micromechanics of composite materials. Written in PATRAN Command Language (PCL).
NASA Astrophysics Data System (ADS)
Jiang, L.
2017-12-01
Climate change is considered to be one of the greatest environmental threats. Global climate models (GCMs) are the primary tool used for studying climate change. However, GCMs are limited by their coarse spatial resolution and inability to resolve important sub-grid-scale features such as terrain and clouds. Statistical downscaling methods can be used to downscale large-scale variables to the local scale. In this study, we assess the applicability of the Statistical Downscaling Model (SDSM) in downscaling the outputs of the Beijing Normal University Earth System Model (BNU-ESM). The study focuses on the Loess Plateau, China, and the variables for downscaling include daily mean temperature (TMEAN), maximum temperature (TMAX), and minimum temperature (TMIN). The results show that SDSM performs well for these three climatic variables on the Loess Plateau. After downscaling, the root mean square errors of TMEAN, TMAX, and TMIN for BNU-ESM were reduced by 70.9%, 75.1%, and 67.2%, respectively. All the rates of change in TMEAN, TMAX, and TMIN during the 21st century decreased after SDSM downscaling. We also show that SDSM can effectively reduce uncertainty compared with the raw model outputs. TMEAN uncertainty was reduced by 27.1%, 26.8%, and 16.3% for the future scenarios RCP 2.6, RCP 4.5, and RCP 8.5, respectively. The corresponding reductions in uncertainty were 23.6%, 30.7%, and 18.7% for TMAX, and 37.6%, 31.8%, and 23.2% for TMIN.
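The RMSE reductions quoted above come from regression-based correction of raw model output against station observations. A minimal, self-contained illustration of how a linear correction of this kind shrinks RMSE (synthetic data; this is a stand-in, not SDSM itself):

```python
import math
import random

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(0)
# Synthetic "observed" temperatures and a biased, damped "GCM" series:
observed = [15 + 10 * math.sin(i / 5) for i in range(100)]
raw_gcm = [o * 0.6 + 8 + random.gauss(0, 1) for o in observed]

# Ordinary least-squares correction, a minimal stand-in for the
# regression step of a statistical downscaling model:
n = len(observed)
mx, my = sum(raw_gcm) / n, sum(observed) / n
slope = sum((x - mx) * (y - my) for x, y in zip(raw_gcm, observed)) / \
        sum((x - mx) ** 2 for x in raw_gcm)
corrected = [my + slope * (x - mx) for x in raw_gcm]

print(f"RMSE raw:       {rmse(raw_gcm, observed):.2f}")
print(f"RMSE corrected: {rmse(corrected, observed):.2f}")
```

The corrected series removes the systematic bias and amplitude damping, leaving only the irreducible noise, which is the mechanism behind the large percentage reductions reported in the abstract.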
Bench-Scale Silicone Process for Low-Cost CO{sub 2} Capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Benjamin; Genovese, Sarah; Perry, Robert
2013-12-31
A bench-scale system was designed and built to test an aminosilicone-based solvent. A model was built of the bench-scale system, and this model was scaled up to model the performance of a carbon capture unit, using aminosilicones, for CO2 capture and sequestration (CCS) for a pulverized coal (PC) boiler at 550 MW. System and economic analysis for the carbon capture unit demonstrates that the aminosilicone solvent has significant advantages relative to a monoethanolamine (MEA)-based system. The CCS energy penalty for MEA is 35.9%, and the energy penalty for the aminosilicone solvent is 30.4% using a steam temperature of 395 °C (743 °F). If the steam temperature is lowered to 204 °C (400 °F), the energy penalty for the aminosilicone solvent is reduced to 29%. The increase in cost of electricity (COE) over the non-capture case for MEA is ~109%, and the increase in COE for the aminosilicone solvent is ~98 to 103%, depending on the solvent cost, at a steam temperature of 395 °C (743 °F). If the steam temperature is lowered to 204 °C (400 °F), the increase in COE for the aminosilicone solvent is reduced to ~95-100%.
NASA Astrophysics Data System (ADS)
Tassi, R.; Lorenzini, F.; Allasia, D. G.
2015-06-01
In the last decades, new approaches have been adopted to manage stormwater as close to its source as possible, through technologies and devices that preserve and recreate natural landscape features. Green roofs (GR) are examples of such devices and are incentivized by cities' stormwater management plans. Several studies show that GR decrease on-site runoff from impervious surfaces; however, the effect of widespread implementation of GR on flood characteristics at the urban basin scale in subtropical areas is little discussed, mainly because of the absence of data. This paper therefore presents results from the monitoring of an extensive modular GR under subtropical weather conditions, the development of a rainfall-runoff model based on the modified Curve Number (CN) and SCS Triangular Unit Hydrograph (TUH) methods, and the analysis of the large-scale impact of GR by modelling different basins. The model was calibrated against observed data and showed that GR absorbed almost all the smaller storms and reduced runoff even during the most intense rainfall. The overall CN was estimated at 83 (consistent with the available literature), with the shape of the hydrographs well reproduced. Large-scale modelling (in basins ranging from 0.03 ha to several square kilometers) showed that the widespread use of GR reduced peak flows (volumes) by around 57% (48%) at the source and 38% (32%) at the basin scale. Thus, this research validated a tool for the assessment of structural management measures (specifically GR) to address changes in flood characteristics in city water management planning. From the application of this model it was concluded that even though the efficiency of GR decreases as the basin scale increases, they still provide a good option to cope with the impact of urbanization.
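The Curve Number method underlying the model follows the standard SCS runoff equation. A sketch using the CN of 83 estimated in the study (the authors' specific modification is not given in the abstract, so this is the textbook form):

```python
def scs_runoff_mm(P, CN):
    """SCS Curve Number direct runoff (mm) for a storm of depth P (mm)."""
    S = 25400.0 / CN - 254.0   # potential maximum retention, mm
    Ia = 0.2 * S               # initial abstraction (standard 0.2*S)
    if P <= Ia:
        return 0.0             # storm fully absorbed, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

# CN = 83 as estimated for the monitored green roof:
for P in (10, 25, 50, 100):
    print(f"P = {P:3d} mm -> Q = {scs_runoff_mm(P, 83):6.2f} mm")
```

Small storms (below the initial abstraction, here about 10.4 mm) produce no runoff at all, which matches the observation that the green roof absorbed almost all the smaller storms.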
Liu, Shuguang; Bond-Lamberty, Ben; Hicke, Jeffrey A.; Vargas, Rodrigo; Zhao, Shuqing; Chen, Jing; Edburg, Steven L.; Hu, Yueming; Liu, Jinxun; McGuire, A. David; Xiao, Jingfeng; Keane, Robert; Yuan, Wenping; Tang, Jianwu; Luo, Yiqi; Potter, Christopher; Oeding, Jennifer
2011-01-01
Forest disturbances greatly alter the carbon cycle at various spatial and temporal scales. It is critical to understand disturbance regimes and their impacts to better quantify regional and global carbon dynamics. This review of the status and major challenges in representing the impacts of disturbances in modeling carbon dynamics across North America revealed some major advances and challenges. First, significant advances have been made in the representation, scaling, and characterization of disturbances that should be included in regional modeling efforts. Second, there is a need to develop effective and comprehensive process-based procedures and algorithms to quantify the immediate and long-term impacts of disturbances on ecosystem succession, soils, microclimate, and cycles of carbon, water, and nutrients. Third, our capability to simulate the occurrence and severity of disturbances is very limited. Fourth, scaling issues have rarely been addressed in continental-scale model applications. It is not fully understood which finer-scale processes and properties need to be scaled to coarser spatial and temporal scales. Fifth, there are inadequate databases on disturbances at the continental scale to support the quantification of their effects on the carbon balance in North America. Finally, procedures are needed to quantify the uncertainty of model inputs, model parameters, and model structures, and thus to estimate their impacts on overall model uncertainty. Working together, the scientific community interested in disturbance and its impacts can identify the most uncertain issues surrounding the role of disturbance in the North American carbon budget and develop working hypotheses to reduce the uncertainty.
Transport Barriers in Bootstrap Driven Tokamaks
NASA Astrophysics Data System (ADS)
Staebler, Gary
2017-10-01
Maximizing the bootstrap current in a tokamak, so that it drives a high fraction of the total current, reduces the external power required to drive current by other means. Improved energy confinement, relative to empirical scaling laws, enables a reactor to take fuller advantage of the bootstrap-driven tokamak. Experiments have demonstrated improved energy confinement due to the spontaneous formation of an internal transport barrier in high bootstrap fraction discharges. Gyrokinetic analysis, and quasilinear predictive modeling, demonstrates that the observed transport barrier is due to the suppression of turbulence primarily by the large Shafranov shift. ExB velocity shear does not play a significant role in the transport barrier because of the high safety factor. It will be shown that the Shafranov shift can produce a bifurcation to improved confinement in regions of positive magnetic shear, or a continuous reduction in transport for weak or negative magnetic shear. Operation at high safety factor lowers the pressure gradient threshold for the Shafranov shift driven barrier formation. The ion energy transport is reduced to neoclassical, and electron energy and particle transport is reduced, but still turbulent, within the barrier. Deeper into the plasma, very large levels of electron transport are observed. The observed electron temperature profile is shown to be close to the threshold for the electron temperature gradient (ETG) mode. A large ETG-driven energy transport is qualitatively consistent with recent multi-scale gyrokinetic simulations showing that reducing the ion-scale turbulence can lead to a large increase in the electron-scale transport. A new saturation model for the quasilinear TGLF transport code, fitted to these multi-scale gyrokinetic simulations, can match the data if the impact of zonal flow mixing on the ETG modes is reduced at high safety factor. This work was supported by the U.S. Department of Energy under DE-FG02-95ER54309 and DE-FC02-04ER54698.
An Equation-Free Reduced-Order Modeling Approach to Tropical Pacific Simulation
NASA Astrophysics Data System (ADS)
Wang, Ruiwen; Zhu, Jiang; Luo, Zhendong; Navon, I. M.
2009-03-01
The “equation-free” (EF) method is often used in complex, multi-scale problems. In such cases it is necessary to know the closed form of the required evolution equations for macroscopic variables within some applied fields. Conceptually such equations exist; however, they are not available in closed form. The EF method can bypass this difficulty. This method obtains macroscopic information by implementing models at a microscopic level. Given an initial macroscopic variable, through lifting we can obtain the associated microscopic variable, which may be evolved using Direct Numerical Simulations (DNS); by restriction, we can obtain the necessary macroscopic information, and projective integration then yields the desired quantities. In this paper we apply the EF POD-assisted method to the reduced modeling of a large-scale upper ocean circulation in the tropical Pacific domain. The computational cost is reduced dramatically. Compared with the POD method, this method provided more accurate results and did not require the availability of any explicit equations or the right-hand side (RHS) of the evolution equation.
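The lift-evolve-restrict-project cycle of the EF method can be sketched on a toy relaxation problem. The microscopic simulator, ensemble size, and step sizes below are illustrative stand-ins, not the ocean model of the paper:

```python
import random

random.seed(1)

def lift(U, n=200):
    """Macro -> micro: an ensemble consistent with macroscopic value U."""
    return [U + random.gauss(0, 0.01) for _ in range(n)]

def micro_step(ens, dt=0.01):
    """Microscopic simulator (here a toy relaxation du/dt = -u)."""
    return [u - u * dt for u in ens]

def restrict(ens):
    """Micro -> macro: ensemble mean."""
    return sum(ens) / len(ens)

def projective_step(U, dt_micro=0.01, n_micro=10, dt_macro=0.5):
    """One EF cycle: lift, run a short burst of micro steps, restrict
    twice to estimate the macroscopic derivative, then extrapolate."""
    ens = lift(U)
    for _ in range(n_micro):
        ens = micro_step(ens, dt_micro)
    U1 = restrict(ens)
    ens = micro_step(ens, dt_micro)
    U2 = restrict(ens)
    slope = (U2 - U1) / dt_micro        # estimated macroscopic dU/dt
    return U2 + dt_macro * slope        # projective (extrapolation) step

U = 1.0
for _ in range(5):
    U = projective_step(U)
print(U)   # relaxes toward 0, tracking dU/dt = -U without its closed form
```

The macroscopic time step (0.5) is fifty times the microscopic one, which is where the dramatic cost reduction comes from: the closed macroscopic equation is never written down, only sampled.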
NASA Astrophysics Data System (ADS)
Wu, M. J.; Guo, P.; Fu, N. F.; Xu, T. L.; Xu, X. S.; Jin, H. L.; Hu, X. G.
2016-06-01
The ionospheric scale height is one of the most significant ionospheric parameters, containing information about the ion and electron temperatures and the dynamics of the upper ionosphere. In this paper, an empirical orthogonal function (EOF) analysis method is applied to all the GPS/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) ionospheric radio occultations from 2007 to 2011 to reconstruct a global ionospheric scale height model. This monthly median model has a spatial resolution of 5° in geomagnetic latitude (-87.5° ~ 87.5°) and a temporal resolution of 2 h in local time. The EOF analysis preserves the characteristics of the scale height quite well in the geomagnetic latitudinal, annual, seasonal, and diurnal variations. In comparison with COSMIC measurements from 2012, the reconstructed model shows reasonable accuracy. To improve the topside model of the International Reference Ionosphere (IRI), we adopt the scale height model in the Bent topside model by applying a scale factor q as an additional constraint. With the factor q acting in the exponential profile of the topside ionosphere, the IRI scale height is forced to equal the precise COSMIC measurements. In this way, the IRI topside profile can be improved to get closer to realistic density profiles. An internal quality check of this approach is carried out by comparing COSMIC measurements with IRI with and without the correction. In general, the initial IRI model overestimates the topside electron density to some extent, and with the correction introduced by the COSMIC scale height model, the deviation in vertical total electron content (VTEC) between them is reduced. Furthermore, independent validation against Global Ionospheric Maps VTEC implies a reasonable improvement in the IRI VTEC with the topside model correction.
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
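An S-system represents each net rate as a difference of two products of power laws, which is what makes the linearization described above tractable. A minimal example for one intermediate of a straight pathway, integrated to its analytic steady state (all parameter values are illustrative, not from the paper):

```python
# Minimal S-system for a straight two-step pathway  X0 --> X1 --> X2
# with X0 held constant:
#   dX1/dt = a1*X0**g10 - b1*X1**h11
# i.e. production minus consumption, each flux a product of power laws.
def simulate(X0=2.0, a1=1.0, g10=0.5, b1=0.5, h11=1.0, dt=0.01, steps=5000):
    X1 = 0.1
    for _ in range(steps):
        dX1 = a1 * X0 ** g10 - b1 * X1 ** h11
        X1 += dt * dX1              # forward Euler integration
    return X1

# At steady state a1*X0**g10 = b1*X1**h11, so X1 = (a1*X0**g10 / b1)**(1/h11).
X1_ss = simulate()
print(X1_ss)   # converges to the analytic value 2**0.5 / 0.5 ~= 2.828
```

Because the fluxes are power laws, taking logarithms at steady state turns the balance condition into a linear system, which is the property the proposed Jacobian-based estimation exploits to avoid iterative nonlinear curve fitting.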
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. 
In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss of accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
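At the core of the UKF used in the dissertation is the unscented transform: propagate deterministically chosen sigma points through the nonlinearity and re-estimate the mean and variance from weighted samples. A simplified one-dimensional version (shared mean/covariance weights; the reduced-rank machinery is not shown):

```python
import math

def unscented_transform(mean, var, f, alpha=1.0, kappa=2.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinear f using
    sigma points -- the core operation the UKF repeats each filter step.
    Simplified: the same weights are used for mean and covariance."""
    lam = alpha ** 2 * (1 + kappa) - 1          # n = 1 here
    spread = math.sqrt((1 + lam) * var)
    sigma = [mean, mean + spread, mean - spread]
    w = [lam / (1 + lam), 1 / (2 * (1 + lam)), 1 / (2 * (1 + lam))]
    y = [f(s) for s in sigma]
    y_mean = sum(wi * yi for wi, yi in zip(w, y))
    y_var = sum(wi * (yi - y_mean) ** 2 for wi, yi in zip(w, y))
    return y_mean, y_var

# Sanity check: for a linear f the transform is exact.
m, v = unscented_transform(2.0, 0.25, lambda x: 3 * x + 1)
print(m, v)   # 7.0 and 9 * 0.25 = 2.25
```

Unlike a linearized (extended) Kalman filter, no Jacobian of the model is required, which is what makes the approach attractive for estimating parameters like transient climate sensitivity through an arbitrary forward model.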
General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae)
Camacho, Ernesto Robayo; Chong, Juang-Horng
2015-01-01
We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance. PMID:26823990
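Degree-day models of the kind mentioned for crawler emergence accumulate heat units above a base temperature until a species-specific threshold is reached. A sketch using the simple averaging method; the base temperature and threshold values here are hypothetical, not taken from the review:

```python
def daily_degree_days(t_min, t_max, base):
    """One day's degree-day accumulation (simple averaging method)."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

def emergence_day(temps, base, threshold):
    """Return the day index when cumulative degree-days reach the
    threshold (e.g., predicted crawler emergence), or None if never."""
    total = 0.0
    for day, (lo, hi) in enumerate(temps, start=1):
        total += daily_degree_days(lo, hi, base)
        if total >= threshold:
            return day
    return None

# Illustrative daily (min, max) temperatures in deg C:
temps = [(8, 18), (10, 22), (12, 24), (11, 25), (13, 27)]
print(emergence_day(temps, base=10.0, threshold=25.0))   # -> 4
```

In practice such a prediction is used to time contact insecticide applications to the crawler stage, since that is the life stage most vulnerable to treatment.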
Field-scale and wellbore modeling of compaction-induced casing failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilbert, L.B. Jr.; Gwinn, R.L.; Moroney, T.A.
1999-06-01
Presented in this paper are the results and verification of field- and wellbore-scale large deformation, elasto-plastic, geomechanical finite element models of reservoir compaction and associated casing damage. The models were developed as part of a multidisciplinary team project to reduce the number of costly well failures in the diatomite reservoir of the South Belridge Field near Bakersfield, California. Reservoir compaction of high porosity diatomite rock induces localized shearing deformations on horizontal weak-rock layers and geologic unconformities. The localized shearing deformations result in casing damage or failure. Two-dimensional, field-scale finite element models were used to develop relationships between field operations, surfacemore » subsidence, and shear-induced casing damage. Pore pressures were computed for eighteen years of simulated production and water injection, using a three-dimensional reservoir simulator. The pore pressures were input to the two-dimensional geomechanical field-scale model. Frictional contact surfaces were used to model localized shear deformations. To capture the complex casing-cement-rock interaction that governs casing damage and failure, three-dimensional models of a wellbore were constructed, including a frictional sliding surface to model localized shear deformation. Calculations were compared to field data for verification of the models.« less
Random Blume-Emery-Griffiths model on the Bethe lattice
NASA Astrophysics Data System (ADS)
Albayrak, Erhan
2015-12-01
The random phase transitions of the Blume-Emery-Griffiths (BEG) model for the spin-1 system are investigated on the Bethe lattice, and the phase diagrams of the model are obtained. The biquadratic exchange interaction (K) is turned on, i.e. the BEG model, with probability p, either attractively (K > 0) or repulsively (K < 0), and turned off, which leads to the BC model, with probability (1 - p) throughout the Bethe lattice. By taking the bilinear exchange interaction parameter J as a scaling parameter, the combined effects of the reduced crystal field (D/J), the reduced biquadratic exchange interaction (K/J), and the reduced temperature (kT/J) are studied in detail for given values of the probability when the coordination number is q = 4, i.e. corresponding to a square lattice.
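For orientation, the bond-diluted BEG Hamiltonian described above can be written as follows; the sign conventions are a common textbook choice, assumed here rather than taken from the abstract:

```latex
H = -J \sum_{\langle ij \rangle} s_i s_j
    - \sum_{\langle ij \rangle} K_{ij}\, s_i^2 s_j^2
    + D \sum_i s_i^2,
\qquad
K_{ij} =
\begin{cases}
K, & \text{with probability } p \quad \text{(BEG bond)},\\
0, & \text{with probability } 1 - p \quad \text{(BC bond)},
\end{cases}
```

with spins s_i in {-1, 0, +1}; dividing through by J gives the reduced parameters D/J, K/J, and kT/J that appear in the phase diagrams.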
NASA Astrophysics Data System (ADS)
Jamroz, Ben; Julien, Keith; Knobloch, Edgar
2008-12-01
Taking advantage of disparate spatio-temporal scales relevant to astrophysics and laboratory experiments, we derive asymptotically exact reduced partial differential equation models for the magnetorotational instability. These models extend recent single-mode formulations leading to saturation in the presence of weak dissipation, and are characterized by a back-reaction on the imposed shear. Numerical simulations performed for a broad class of initial conditions indicate an initial phase of growth dominated by the optimal (fastest growing) magnetorotational instability fingering mode, followed by a vertical coarsening to a box-filling mode.
Theoretical Systematics of Future Baryon Acoustic Oscillation Surveys
NASA Astrophysics Data System (ADS)
Ding, Zhejie; Seo, Hee-Jong; Vlah, Zvonimir; Feng, Yu; Schmittfull, Marcel; Beutler, Florian
2018-05-01
Future Baryon Acoustic Oscillation surveys aim at observing galaxy clustering over a wide range of redshifts and galaxy populations at great precision, reaching tenths of a percent, in order to detect any deviation of dark energy from the ΛCDM model. We utilize a set of paired quasi-N-body FastPM simulations that were designed to mitigate the sample variance effect on the BAO feature and evaluated the BAO systematics as precisely as ~0.01%. We report anisotropic BAO scale shifts before and after density field reconstruction in the presence of redshift-space distortions over a wide range of redshifts, galaxy/halo biases, and shot noise levels. We test different reconstruction schemes and different smoothing filter scales, and introduce physically motivated BAO fitting models. For the first time, we derive a Galilean-invariant infrared resummed model for halos in real and redshift space. We test these models from the perspective of robust BAO measurements and non-BAO information such as growth rate and nonlinear bias. We find that the pre-reconstruction BAO scale has moderate fitting-model dependence at the level of 0.1%-0.2% for matter, while the dependence is substantially reduced to less than 0.07% for halos. We find that post-reconstruction BAO shifts are generally reduced to below 0.1% in the presence of galaxy/halo bias and show much smaller fitting-model dependence. Different reconstruction conventions can potentially make a much larger difference in the line-of-sight BAO scale, up to 0.3%. Meanwhile, the precision (error) of the BAO measurements is quite consistent regardless of the choice of fitting model or reconstruction convention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDaniel, Dwayne; Dulikravich, George; Cizmas, Paul
2017-11-27
This report summarizes the objectives, tasks, and accomplishments made during the three-year duration of this research project. The report presents the results obtained by applying advanced computational techniques to develop reduced-order models (ROMs) for reacting multiphase flows, based on high fidelity numerical simulation of gas-solids flow structures in risers and vertical columns obtained with the Multiphase Flow with Interphase eXchanges (MFIX) software. The research includes a numerical investigation of reacting and non-reacting gas-solids flow systems and computational analysis involving model development to accelerate the scale-up process for the design of fluidization systems by providing accurate solutions that match the full-scale models. The computational work contributes to the development of a methodology for obtaining ROMs that is applicable to the system of gas-solid flows. Finally, the validity of the developed ROMs is evaluated by comparing the results against those obtained using the MFIX code. Additionally, the robustness of existing POD-based ROMs for multiphase flows is improved by avoiding non-physical solutions of the gas void fraction and ensuring that the reduced kinetics models used for reactive flows in fluidized beds are thermodynamically consistent.
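The POD-based ROM construction referenced above can be sketched with a snapshot SVD followed by Galerkin projection. This is a generic linear-system illustration under assumptions of my own (an energy-based truncation criterion and a linear full-order operator), not the MFIX-specific implementation:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD basis from a snapshot matrix (n_dof x n_snap) via thin SVD,
    truncated to capture the given fraction of snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

def galerkin_project(A, b, Phi):
    """Solve a linear full-order system A x = b in the POD subspace:
    (Phi^T A Phi) x_r = Phi^T b, then lift x_r back to full space."""
    Ar = Phi.T @ A @ Phi
    br = Phi.T @ b
    xr = np.linalg.solve(Ar, br)
    return Phi @ xr
```

The dimension reduction comes from solving an r x r system instead of an n x n one; for nonlinear terms (as in reacting flows) an additional interpolation step such as DEIM is typically needed.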
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud-resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model version 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m²) and longwave cloud forcing (~5 W/m²) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune such models.
Improved dual-porosity models for petrophysical analysis of vuggy reservoirs
NASA Astrophysics Data System (ADS)
Wang, Haitao
2017-08-01
A new vug interconnection, the isolated vug (IVG), was investigated through resistivity modeling, and the dual-porosity model for connected-vug (CVG) vuggy reservoirs was tested. The vuggy models were built by pore-scale modeling, and their electrical resistivity was calculated by the finite difference method. For CVG vuggy reservoirs, the CVG reduced formation factors and increased the porosity exponents, and the existing dual-porosity model failed to match these results. Based on the existing dual-porosity model, a conceptual dual-porosity model for CVG was developed by introducing a decoupled term to reduce the resistivity of the model. For IVG vuggy reservoirs, IVG increased the formation factors and porosity exponents. The existing dual-porosity model succeeded due to accurate calculation of the formation factors of the deformed interparticle porous media caused by the insertion of the IVG. Based on the existing dual-porosity model, a new porosity model for IVG vuggy reservoirs was developed by simultaneously recalculating the formation factors of the altered interparticle pore-scale models. The formation factors and porosity exponents from the improved and extended dual-porosity models for CVG and IVG vuggy reservoirs matched the simulated formation factors and porosity exponents well. This work is helpful for understanding the influence of connected and disconnected vugs on resistivity factors—an issue of particular importance in carbonates.
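The formation-factor and porosity-exponent language above follows Archie-type petrophysics. A minimal sketch of the underlying relation, assuming the simple Archie form with a unit tortuosity factor (my simplification, not the paper's dual-porosity model):

```python
import math

def formation_factor(phi, m, a=1.0):
    """Archie-type formation factor F = a * phi**(-m): the ratio of
    brine-saturated rock resistivity to brine resistivity."""
    return a * phi ** (-m)

def porosity_exponent(F, phi):
    """Invert Archie's law (with a = 1) for the porosity exponent m.
    The paper's dual-porosity models describe how vug connectivity
    shifts both F and m away from this simple single-porosity form."""
    return -math.log(F) / math.log(phi)
```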
Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...
Can longer forest harvest intervals increase summer streamflow for salmon recovery?
The Mashel Streamflow Modeling Project in the Mashel River Basin, Washington, is using a watershed-scale ecohydrological model to assess whether longer forest harvest intervals can remediate summer low flow conditions that have contributed to sharply reduced runs of spawning Chin...
Ganguly, Arnab; Alexeenko, Alina A; Schultz, Steven G; Kim, Sherry G
2013-10-01
A physics-based model for the sublimation-transport-condensation processes occurring in pharmaceutical freeze-drying, coupling product attributes and equipment capabilities into a unified simulation framework, is presented. The system-level model is used to determine the effect of operating conditions such as shelf temperature, chamber pressure, and load size on the occurrence of choking for a production-scale dryer. Several data sets corresponding to production-scale runs with a load from 120 to 485 L have been compared with simulations. A subset of data is used for calibration, whereas another data set corresponding to a load of 150 L is used for model validation. The model predictions for both the onset and extent of choking as well as for the measured product temperature agree well with the production-scale measurements. Additionally, we study the effect of resistance to vapor transport presented by the duct with a valve and a baffle in the production-scale freeze-dryer. Computational Fluid Dynamics (CFD) techniques augmented with a system-level unsteady heat and mass transfer model allow prediction of dynamic process conditions, taking specific dryer design into consideration. CFD modeling of flow structure in the duct, presented here for a production-scale freeze-dryer, quantifies the benefit of reducing the obstruction to the flow through several design modifications. It is found that the use of a combined valve-baffle system can increase vapor flow rate by a factor of 2.2. Moreover, minor design changes such as moving the baffle downstream by about 10 cm can increase the flow rate by 54%. The proposed design changes can increase drying rates, improve efficiency, and reduce cycle times due to fewer obstructions in the vapor flow path. The comprehensive simulation framework combining the system-level model and the detailed CFD computations can provide a process analytical tool for more efficient and robust freeze-drying of bio-pharmaceuticals.
Copyright © 2013 Elsevier B.V. All rights reserved.
A Preliminary Study for a New Model of Sense of Community
ERIC Educational Resources Information Center
Tartaglia, Stefano
2006-01-01
Although Sense of Community (SOC) is usually defined as a multidimensional construct, most SOC scales are unidimensional. To reduce the split between theory and empirical research, the present work identifies a multifactor structure for the Italian Sense of Community Scale (ISCS) that has already been validated as a unitary index of SOC. This…
Peridynamic Multiscale Finite Element Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Timothy; Bond, Stephen D.; Littlewood, David John
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between (a) quantum electronic structure calculations and molecular dynamics and between (b) molecular dynamics and local partial differential equation models at the design scale. The second step, (b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales, limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method.
Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain, which may be computed in parallel, taking advantage of the heterogeneous nature of next-generation computing platforms. Additionally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.
Scaling-law equilibria for calcium in canopy-type models of the solar chromosphere
NASA Technical Reports Server (NTRS)
Jones, H. P.
1982-01-01
Scaling laws for resonance line formation are used to obtain approximate excitation and ionization equilibria for a three-level model of singly ionized calcium. The method has been developed for and is applied to the study of magnetograph response in the 8542 A infrared triplet line to magnetostatic canopies which schematically model diffuse, nearly horizontal fields in the low solar chromosphere. For this application, the method is shown to be efficient and semi-quantitative, and the results indicate the type and range of effects on calcium-line radiation which result from reduced gas pressure inside the magnetic regions.
Describing Ecosystem Complexity through Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Shope, C. L.; Tenhunen, J. D.; Peiffer, S.
2011-12-01
Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The approach of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy driven land management practices and the value of sustainable resources to all stakeholders.
NASA Astrophysics Data System (ADS)
Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B.
2004-11-01
Large Eddy Simulations (LES) of atmospheric boundary-layer air movement in urban environments are especially challenging due to complex ground topography. Typically in such applications, fairly coarse grids must be used, where the subgrid-scale (SGS) model is expected to play a crucial role. An LES code using pseudo-spectral discretization in horizontal planes and second-order differencing in the vertical is implemented in conjunction with the immersed boundary method to incorporate complex ground topography, with the classic equilibrium log-law boundary condition in the near-wall region, and with several versions of the eddy-viscosity model: (1) the constant-coefficient Smagorinsky model, (2) the dynamic, scale-invariant Lagrangian model, and (3) the dynamic, scale-dependent Lagrangian model. Other planar-averaged type dynamic models are not suitable because spatial averaging is not possible without directions of statistical homogeneity. These SGS models are tested in LES of flow around a square cylinder and of flow over surface-mounted cubes. Effects on the mean flow are documented and found not to be major. Dynamic Lagrangian models give a physically more realistic SGS viscosity field, and in general, the scale-dependent Lagrangian model produces a larger Smagorinsky coefficient than the scale-invariant one, leading to reduced distributions of resolved rms velocities, especially in the boundary layers near the bluff bodies.
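The constant-coefficient Smagorinsky model listed as option (1) computes an eddy viscosity from the resolved strain-rate magnitude, nu_t = (Cs * Delta)^2 * |S|. A minimal 2-D sketch; the coefficient value and the 2-D restriction are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.16):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| from the
    resolved 2-D velocity gradients, with |S| = sqrt(2 S_ij S_ij)."""
    s11 = dudx                      # normal strain components
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)       # symmetric shear component
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag
```

The dynamic Lagrangian variants compared in the abstract replace the fixed cs with a coefficient computed on the fly by averaging along fluid trajectories.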
Large-Scale Ocean Circulation-Cloud Interactions Reduce the Pace of Transient Climate Change
NASA Technical Reports Server (NTRS)
Trossman, D. S.; Palter, J. B.; Merlis, T. M.; Huang, Y.; Xia, Y.
2016-01-01
Changes to the large scale oceanic circulation are thought to slow the pace of transient climate change due, in part, to their influence on radiative feedbacks. Here we evaluate the interactions between CO2-forced perturbations to the large-scale ocean circulation and the radiative cloud feedback in a climate model. Both the change of the ocean circulation and the radiative cloud feedback strongly influence the magnitude and spatial pattern of surface and ocean warming. Changes in the ocean circulation reduce the amount of transient global warming caused by the radiative cloud feedback by helping to maintain low cloud coverage in the face of global warming. The radiative cloud feedback is key in affecting atmospheric meridional heat transport changes and is the dominant radiative feedback mechanism that responds to ocean circulation change. Uncertainty in the simulated ocean circulation changes due to CO2 forcing may contribute a large share of the spread in the radiative cloud feedback among climate models.
NASA Astrophysics Data System (ADS)
Bartholomeus, Ruud; van den Eertwegh, Gé; Simons, Gijs
2015-04-01
Agricultural crop yields depend largely on the soil moisture conditions in the root zone. Drought, but especially an excess of water in the root zone and the resulting limited availability of soil oxygen, reduces crop yield. With ongoing climate change, more prolonged dry periods alternate with more intensive rainfall events, which changes soil moisture dynamics. With unaltered water management practices, reduced crop yield due to both drought stress and waterlogging will increase. Therefore, both farmers and water management authorities need to be provided with opportunities to reduce the risks of decreasing crop yields. In The Netherlands, agricultural production of crops represents a market exceeding 2 billion euros annually. Given the increased variability in meteorological conditions and the resulting larger variations in soil moisture contents, it is of great economic importance to provide farmers and water management authorities with tools to mitigate risks of reduced crop yield by anticipatory water management, both at field and at regional scale. We describe the development and the field application of a decision support system (DSS), which allows crop yield to be optimized by timely anticipation of drought and waterlogging situations. By using this DSS, we will minimize plant water stress through automated drainage and irrigation management. In order to optimize soil moisture conditions for crop growth, the interacting processes in the soil-plant-atmosphere system need to be considered explicitly. Our study comprises both the set-up and application of the DSS on a pilot plot in The Netherlands, in order to evaluate its implementation into daily agricultural practice. The DSS focuses on anticipatory water management at the field scale, i.e. the unit scale of interest to a farmer.
We combine parallel field measurements ('observe'), process-based model simulations ('predict'), and the novel Climate Adaptive Drainage (CAD) system ('adjust') to optimize soil moisture conditions. CAD is used both for controlled drainage practices and for sub-irrigation. The core of the DSS is the plot-scale SWAP (soil-water-atmosphere-plant) model, extended with a process-based module for the simulation of oxygen stress for plant roots. This module involves macro-scale and micro-scale gas diffusion, as well as the plant physiological demand for oxygen, to simulate transpiration reduction due to limited oxygen availability. Continuous measurements of soil moisture content, groundwater level, and drainage level are used to calibrate the SWAP model each day. This leads to an optimal reproduction of the actual soil moisture conditions by data assimilation in the first step of the DSS process. During the next step, near-future (+10 days) soil moisture conditions and drought and oxygen stress are predicted using weather forecasts. Finally, optimal drainage levels to minimize stress are simulated, which can be established by CAD. Linkage to a grid-based hydrological simulation model (SPHY) facilitates studying the spatial dynamics of soil moisture and associated implications for management at the regional scale. Thus, by using local-scale measurements, process-based models, and weather forecasts to anticipate near-future conditions, not only field-scale water management but also regional surface water management can be optimized in both space and time.
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA), and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications.
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, used, and compared to the performance of the KL approach and the brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT, COBRA-TF, ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
Vieira, D C S; Serpa, D; Nunes, J P C; Prats, S A; Neves, R; Keizer, J J
2018-08-01
Wildfires have become a recurrent threat for many Mediterranean forest ecosystems. The characteristics of the Mediterranean climate, with its warm and dry summers and mild and wet winters, make this a region prone to wildfire occurrence as well as to post-fire soil erosion. This threat is expected to be aggravated in the future due to climate change and land management practices and planning. The wide recognition of wildfires as a driver for runoff and erosion in burnt forest areas has created a strong demand for model-based tools for predicting the post-fire hydrological and erosion response and, in particular, for predicting the effectiveness of post-fire management operations to mitigate these responses. In this study, the effectiveness of two post-fire treatments (hydromulch and natural pine needle mulch) in reducing post-fire runoff and soil erosion was evaluated against control conditions (i.e. untreated conditions), at different spatial scales. The main objective of this study was to use field data to evaluate the ability of different erosion models: (i) empirical (RUSLE), (ii) semi-empirical (MMF), and (iii) physically-based (PESERA), to predict the hydrological and erosive response as well as the effectiveness of different mulching techniques in fire-affected areas. The results of this study showed that all three models were reasonably able to reproduce the hydrological and erosive processes occurring in burned forest areas. In addition, it was demonstrated that the models can be calibrated at a small spatial scale (0.5 m²) but provide accurate results at greater spatial scales (10 m²). From this work, the RUSLE model seems to be ideal for fast and simple applications (i.e. prioritization of areas-at-risk), mainly due to its simplicity and reduced data requirements.
On the other hand, the more complex MMF and PESERA models would be valuable as the basis of a possible tool for assessing the risk of water contamination in fire-affected water bodies and for testing different land management scenarios. Copyright © 2018 Elsevier Inc. All rights reserved.
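RUSLE's appeal for fast prioritization comes from it being a simple product of five factors. A minimal sketch; the factor values in the test are purely illustrative, and representing a mulch treatment through a lowered cover-management factor C is a common convention, assumed here rather than taken from the paper:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE average annual soil loss A = R * K * LS * C * P, where
    R: rainfall erosivity, K: soil erodibility, LS: slope length and
    steepness, C: cover-management, P: support practice factor.
    Units follow the chosen factor definitions (e.g. t ha^-1 yr^-1)."""
    return R * K * LS * C * P
```

Because the model is multiplicative, halving C (e.g., by mulching a burned hillslope) halves the predicted soil loss, which is why C is the natural lever for representing post-fire treatments.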
Genetic Analysis of Reduced γ-Tocopherol Content in Ethiopian Mustard Seeds.
García-Navarro, Elena; Fernández-Martínez, José M; Pérez-Vich, Begoña; Velasco, Leonardo
2016-01-01
Ethiopian mustard (Brassica carinata A. Braun) line BCT-6, with reduced γ-tocopherol content in the seeds, has been previously developed. The objective of this research was to conduct a genetic analysis of seed tocopherols in this line. BCT-6 was crossed with the conventional line C-101 and the F1, F2, and BC plant generations were analyzed. Generation mean analysis using individual scaling tests indicated that reduced γ-tocopherol content fitted an additive-dominant genetic model with predominance of additive effects and absence of epistatic interactions. This was confirmed through a joint scaling test and additional testing of the goodness of fit of the model. Conversely, epistatic interactions were identified for total tocopherol content. Estimation of the minimum number of genes suggested that both γ- and total tocopherol content may be controlled by two genes. A positive correlation between total tocopherol content and the proportion of γ-tocopherol was identified in the F2 generation. Additional research on the feasibility of developing germplasm with high tocopherol content and reduced concentration of γ-tocopherol is required.
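The joint scaling test mentioned above fits the additive-dominance model to observed generation means. A generic sketch using the standard Mather-Jinks coefficients and ordinary least squares; the real test weights each generation mean by its variance and tests goodness of fit, both omitted here for brevity:

```python
import numpy as np

# Expected contributions of the mean (m), additive [d], and dominance [h]
# effects for the six standard generations (Mather-Jinks model).
DESIGN = {
    "P1":  (1.0,  1.0, 0.0),
    "P2":  (1.0, -1.0, 0.0),
    "F1":  (1.0,  0.0, 1.0),
    "F2":  (1.0,  0.0, 0.5),
    "BC1": (1.0,  0.5, 0.5),
    "BC2": (1.0, -0.5, 0.5),
}

def joint_scaling_test(gen_means):
    """Least-squares fit of the additive-dominance model to observed
    generation means; returns estimates of (m, [d], [h])."""
    X = np.array([DESIGN[g] for g in gen_means])
    y = np.array([gen_means[g] for g in gen_means])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

If this three-parameter model fits the observed means adequately, epistatic interactions can be excluded, which is the conclusion the abstract reaches for γ-tocopherol content.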
Pau, G. S. H.; Bisht, G.; Riley, W. J.
2014-09-17
Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (>10^3) with very small relative approximation error (<0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (<1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape.
By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.
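The POD mapping idea can be sketched on a toy one-parameter family of "soil moisture" profiles: build a POD basis from fine-resolution training snapshots, then learn a linear map from coarse-resolution snapshots to the fine-basis coefficients. Everything below (field shapes, resolutions, mode count) is invented for illustration and is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fine" (200 cells) and "coarse" (20 cells) snapshots of a
# one-parameter family of profiles standing in for model solutions
n_fine, n_coarse, n_snap = 200, 20, 40
x_f = np.linspace(0.0, 1.0, n_fine)
x_c = np.linspace(0.0, 1.0, n_coarse)
params = rng.uniform(0.5, 2.0, n_snap)
fine = np.array([np.sin(p * np.pi * x_f) + 0.1 * p for p in params]).T
coarse = np.array([np.sin(p * np.pi * x_c) + 0.1 * p for p in params]).T

# POD basis of the fine-resolution snapshots via truncated SVD
U, s, _ = np.linalg.svd(fine, full_matrices=False)
r = 5                      # retained POD modes
Ur = U[:, :r]              # (n_fine, r) basis
coeff_fine = Ur.T @ fine   # POD coefficients of each training snapshot

# Least-squares map from coarse snapshots to fine POD coefficients
M, *_ = np.linalg.lstsq(coarse.T, coeff_fine.T, rcond=None)  # (n_coarse, r)

# Reconstruct a held-out fine solution from its coarse counterpart only
p_test = 1.3
f_true = np.sin(p_test * np.pi * x_f) + 0.1 * p_test
c_test = np.sin(p_test * np.pi * x_c) + 0.1 * p_test
f_rec = Ur @ (M.T @ c_test)
rel_err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
```

The computational saving comes from the same source as in the paper: after training, only the cheap coarse solution is needed to recover an approximation of the expensive fine-resolution field.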
Postglacial Terrestrial Carbon Dynamics and Atmospheric CO2
NASA Astrophysics Data System (ADS)
Prentice, C. I.; Harrison, S. P.; Kaplan, J. O.
2002-12-01
Combining PMIP climate model results from the last glacial maximum (LGM) with biome modelling indicates the involvement of both cold, dry climate and physiological effects of low atmospheric CO2 in reducing tree cover on the continents. Further results with the LPJ dynamic vegetation model agree with independent evidence for greatly reduced terrestrial carbon storage at LGM, and suggest that terrestrial carbon storage continued to increase during the Holocene. These results point to predominantly oceanic explanations for preindustrial changes in atmospheric CO2, although land changes after the LGM may have contributed indirectly by reducing the aeolian marine Fe source and (on a longer time scale) by triggering CaCO3 compensation in the ocean.
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.
Ochs, Alison; Fewell, Jennifer H.; Harrison, Jon F.
2017-01-01
Metabolic rates of individual animals and social insect colonies generally scale hypometrically, with mass-specific metabolic rates decreasing with increasing size. Although this allometry has wide-ranging effects on social behaviour, ecology and evolution, its causes remain controversial. Because it is difficult to experimentally manipulate body size of organisms, most studies of metabolic scaling depend on correlative data, limiting their ability to determine causation. To overcome this limitation, we experimentally reduced the size of harvester ant colonies (Pogonomyrmex californicus) and quantified the consequent increase in mass-specific metabolic rates. Our results clearly demonstrate a causal relationship between colony size and hypometric changes in metabolic rate that could not be explained by changes in physical density. These findings provide evidence against prominent models arguing that the hypometric scaling of metabolic rate is primarily driven by constraints on resource delivery or surface area/volume ratios, because colonies were provided with excess food and colony size does not affect individual oxygen or nutrient transport. We found that larger colonies had lower median walking speeds and relatively more stationary ants; including walking speed as a variable in the mass-scaling allometry greatly reduced the amount of residual variation in the model, reinforcing the role of behaviour in metabolic allometry. Following the experimental size reduction, however, the proportion of stationary ants increased, demonstrating that variation in locomotory activity cannot solely explain hypometric scaling of metabolic rates in these colonies. Based on prior studies of this species, the increase in metabolic rate in size-reduced colonies could be due to increased anabolic processes associated with brood care and colony growth. PMID:28228514
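Hypometric scaling is conventionally quantified by the exponent b of B = aM^b, estimated as the slope of a log-log regression of metabolic rate on mass. A synthetic-data sketch (the exponent, masses and noise level are invented, not the Pogonomyrmex measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic colonies obeying B = a * M^b with a hypometric exponent b < 1
a_true, b_true = 1.2, 0.8
mass = rng.uniform(1.0, 100.0, 60)                 # colony mass (g), illustrative
rate = a_true * mass**b_true * np.exp(rng.normal(0.0, 0.05, 60))  # lognormal noise

# Log-log regression recovers the scaling exponent as the slope
b_fit, log_a_fit = np.polyfit(np.log(mass), np.log(rate), 1)

# Under hypometric scaling, mass-specific rate (B/M ~ M^(b-1)) falls with size
mass_specific = rate / mass
```

Adding a behavioural covariate such as walking speed to this regression, as in the study, amounts to fitting log B against both log M and the covariate and comparing residual variances.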
Siapka, Mariana; Remme, Michelle; Obure, Carol Dayo; Maier, Claudia B; Dehne, Karl L
2014-01-01
Abstract Objective To synthesize the data available – on costs, efficiency and economies of scale and scope – for the six basic programmes of the UNAIDS Strategic Investment Framework, to inform those planning the scale-up of human immunodeficiency virus (HIV) services in low- and middle-income countries. Methods The relevant peer-reviewed and “grey” literature from low- and middle-income countries was systematically reviewed. Search and analysis followed Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines. Findings Of the 82 empirical costing and efficiency studies identified, nine provided data on economies of scale. Scale explained much of the variation in the costs of several HIV services, particularly those of targeted HIV prevention for key populations and HIV testing and treatment. There is some evidence of economies of scope from integrating HIV counselling and testing services with several other services. Cost efficiency may also be improved by reducing input prices, task shifting and improving client adherence. Conclusion HIV programmes need to optimize the scale of service provision to achieve efficiency. Interventions that may enhance the potential for economies of scale include intensifying demand-creation activities, reducing the costs for service users, expanding existing programmes rather than creating new structures, and reducing attrition of existing service users. Models for integrated service delivery – which is, potentially, more efficient than the implementation of stand-alone services – should be investigated further. Further experimental evidence is required to understand how to best achieve efficiency gains in HIV programmes and assess the cost–effectiveness of each service-delivery model. PMID:25110375
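The economies-of-scale mechanism, fixed programme costs spread over more service users, can be made concrete with a toy average-cost function (all figures invented, not drawn from the reviewed studies):

```python
def unit_cost(clients, fixed=50_000.0, variable=25.0):
    """Average cost per client for a programme with a fixed set-up cost
    and a constant variable cost per client served (illustrative numbers)."""
    return fixed / clients + variable

# Unit cost falls as the fixed cost is spread over more clients
small = unit_cost(500)      # small programme
large = unit_cost(10_000)   # scaled-up programme
```

The hyperbolic fixed-cost term is why the review finds scale explaining so much cost variation at low volumes, and why expanding existing programmes tends to be cheaper per client than creating new structures.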
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makedonska, Nataliia; Kwicklis, Edward Michael; Birdsell, Kay Hanson
This progress report for fiscal year 2015 (FY15) describes the development of discrete fracture network (DFN) models for Pahute Mesa. DFN models will be used to upscale parameters for simulations of subsurface flow and transport in fractured media in Pahute Mesa. The research focuses on modeling of groundwater flow and contaminant transport using DFNs generated according to fracture characteristics observed in the Topopah Spring Aquifer (TSA) and the Lava Flow Aquifer (LFA). This work will improve the representation of radionuclide transport processes in large-scale, regulatory-focused models with a view to reduce pessimistic bounding approximations and provide more realistic contaminant boundary calculations that can be used to describe the future extent of contaminated groundwater. Our goal is to refine a modeling approach that can translate parameters to larger-scale models that account for local-scale flow and transport processes, which tend to attenuate migration.
Simulating Catchment Scale Afforestation for Mitigating Flooding
NASA Astrophysics Data System (ADS)
Barnes, M. S.; Bathurst, J. C.; Quinn, P. F.; Birkinshaw, S.
2016-12-01
After the 2013-14 and, more recently, the 2015-16 winter floods in the UK, there were calls to 'forest the uplands' as a solution for reducing flood risk across the nation. However, the role of forests as a natural flood management practice remains highly controversial, due to a distinct lack of robust evidence of its effectiveness in reducing flood risk during extreme events. This project aims to improve the understanding of the impacts of upland afforestation on flood risk at the sub-catchment and full catchment scales. This will be achieved through an integrated fieldwork and modelling approach, with the use of a series of process-based hydrological models to scale up and examine the effects forestry can have on flooding. Furthermore, there is a need to analyse the extent to which land management practices, catchment system engineering and the installation of runoff attenuation features (RAFs), such as engineered log jams, in headwater catchments can attenuate flood-wave movement and potentially reduce downstream flood risk. Additionally, the proportion of a catchment or riparian reach that would need to be forested in order to achieve a significant impact on reducing downstream flooding will be defined. The consequential impacts of a corresponding reduction in agriculturally productive farmland and the potential decline of water resource availability will also be considered in order to safeguard the UK's food security and satisfy the global demand on water resources.
APEX Model Simulation for Row Crop Watersheds with Agroforestry and Grass Buffers
USDA-ARS?s Scientific Manuscript database
Watershed model simulation has become an important tool in studying ways and means to reduce transport of agricultural pollutants. Conducting field experiments to assess buffer influences on water quality are constrained by the large-scale nature of watersheds, high experimental costs, private owner...
Mercury capture within coal-fired power plant electrostatic precipitators: model evaluation.
Clack, Herek L
2009-03-01
Efforts to reduce anthropogenic mercury emissions worldwide have recently focused on a variety of sources, including mercury emitted during coal combustion. Toward that end, much ongoing research seeks to develop new processes for reducing coal combustion mercury emissions. Among air pollution control processes that can be applied to coal-fired boilers, electrostatic precipitators (ESPs) are by far the most common, both on a global scale and among the principal countries of India, China, and the U.S. that burn coal for electric power generation. A previously reported theoretical model of in-flight mercury capture within ESPs is herein evaluated against data from a number of full-scale tests of activated carbon injection for mercury emissions control. By using the established particle size distribution of the activated carbon and actual or estimated values of its equilibrium mercury adsorption capacity, the incremental reduction in mercury concentration across each ESP can be predicted and compared to experimental results. Because the model does not incorporate kinetics associated with gas-phase mercury transformation or surface adsorption, the model predictions represent the mass-transfer-limited performance. Comparing field data to model results reveals many facilities performing at or near the predicted mass-transfer-limited maximum, particularly at low rates of sorbent injection. Where agreement is poor between field data and model predictions, additional chemical or physical phenomena may be responsible for reducing mercury removal efficiencies.
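A deliberately simplified caricature of mass-transfer-limited in-flight capture treats uptake as first-order in the gas-phase mercury concentration, with the rate set by the external surface area of the injected sorbent. This is not the paper's model, and every parameter value below is invented:

```python
import numpy as np

def mass_transfer_limited_removal(k_m, sorbent_conc, d_p, rho_p, t_res):
    """Fraction of gas-phase Hg removed when uptake is limited by external
    mass transfer to monodisperse spherical sorbent particles.
    k_m: gas-side mass-transfer coefficient (m/s)
    sorbent_conc: sorbent loading (kg per m^3 of flue gas)
    d_p: particle diameter (m); rho_p: particle density (kg/m^3)
    t_res: in-flight residence time (s)"""
    # External surface area of spheres per unit gas volume: 6*c/(rho*d)
    area_per_volume = 6.0 * sorbent_conc / (rho_p * d_p)
    return 1.0 - np.exp(-k_m * area_per_volume * t_res)

# Illustrative numbers: fine activated carbon, a few seconds of flight time
eta = mass_transfer_limited_removal(k_m=0.05, sorbent_conc=2e-3,
                                    d_p=20e-6, rho_p=800.0, t_res=2.0)
```

The 6/(rho*d) dependence shows why the particle size distribution matters so much in the full model: halving the particle diameter doubles the available external area per unit sorbent mass.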
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-06-15
Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.
Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator
NASA Astrophysics Data System (ADS)
Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei
2017-09-01
This paper reports the nonlinear dynamic modeling and optimal trajectory planning of a flexure-based macro-micro manipulator dedicated to large-scale, high-speed tasks. In particular, a macro-micro manipulator composed of a servo motor, a rigid arm and a compliant microgripper is considered, and both flexure hinges and flexible beams are taken into account. By combining the pseudo-rigid-body-model method, the assumed mode method and the Lagrange equation, the overall dynamic model is derived. Then, the rigid-flexible coupling characteristics are analyzed by numerical simulations. After that, the microscopic-scale vibration excited by the large-scale motion is reduced through a trajectory planning approach. In particular, a fitness function based on the comprehensive excitation torque of the compliant microgripper is proposed. The reference curve and the interpolation curve use quintic polynomial trajectories. Afterwards, an improved genetic algorithm is used to identify the optimal trajectory by minimizing the fitness function. Finally, numerical simulations and experiments validate the feasibility and effectiveness of the established dynamic model and the trajectory planning approach. The amplitude of the residual vibration is reduced by approximately 54.9%, and the settling time decreases by 57.1%. Therefore, operation efficiency and manipulation stability are significantly improved.
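Quintic polynomial trajectories are typically chosen because they make position, velocity and acceleration continuous at both endpoints, which limits the vibration excited in flexible structures. A sketch of the standard rest-to-rest quintic profile (normalized; the motion time is illustrative, not the paper's trajectory):

```python
def quintic_profile(t, T):
    """Rest-to-rest quintic trajectory s(t) on [0, T], normalized so s(T) = 1,
    with zero velocity and acceleration at both ends: the classic
    s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5 interpolant, tau = t/T."""
    tau = t / T
    s = 10*tau**3 - 15*tau**4 + 6*tau**5
    v = (30*tau**2 - 60*tau**3 + 30*tau**4) / T      # ds/dt
    a = (60*tau - 180*tau**2 + 120*tau**3) / T**2    # d2s/dt2
    return s, v, a

T = 0.5  # motion time (s), illustrative
s0, v0, a0 = quintic_profile(0.0, T)   # start of motion
s1, v1, a1 = quintic_profile(T, T)     # end of motion
```

Optimizing over such trajectories, as the genetic algorithm does here, reshapes the interior of the profile while these rest-to-rest boundary conditions are preserved by construction.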
Methods for Scaling Icing Test Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.
1995-01-01
This report presents the results of tests at NASA Lewis to evaluate several methods to establish suitable alternative test conditions when the test facility limits the model size or operating conditions. The first method was proposed by Olsen. It can be applied when full-size models are tested and all the desired test conditions except liquid-water content can be obtained in the facility. The other two methods discussed are: a modification of the French scaling law and the AEDC scaling method. Icing tests were made with cylinders at both reference and scaled conditions representing mixed and glaze ice in the NASA Lewis Icing Research Tunnel. Reference and scale ice shapes were compared to evaluate each method. The Olsen method was tested with liquid-water content varying from 1.3 to 0.8 g/m^3. Over this range, ice shapes produced using the Olsen method were unchanged. The modified French and AEDC methods produced scaled ice shapes which approximated the reference shapes when model size was reduced to half the reference size for the glaze-ice cases tested.
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools that couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
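NMF-based blind source separation factorizes a nonnegative data matrix V into source signatures W and mixing weights H. A minimal numpy-only sketch using the classic Lee-Seung multiplicative updates on synthetic mixtures (this is a generic illustration, not the MADS implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 3 nonnegative "source" signatures mixed into 30 samples
n_features, n_sources, n_samples = 50, 3, 30
W_true = rng.random((n_features, n_sources))
H_true = rng.random((n_sources, n_samples))
V = W_true @ H_true                     # observed nonnegative mixtures

# Lee-Seung multiplicative updates for V ~ W @ H (Frobenius objective);
# starting from positive matrices keeps W and H nonnegative throughout
W = rng.random((n_features, n_sources)) + 0.1
H = rng.random((n_sources, n_samples)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the groundwater application, the columns of W would play the role of end-member water types and H their mixing fractions at each observation well; the k-means step then clusters the recovered signatures across restarts.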
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
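The precision experiments can be mimicked in ordinary software by integrating the Lorenz '95 equations in float64 and float32 and tracking the divergence of the two trajectories. A single-scale sketch (the paper uses the two-scale variant on FPGA hardware; the parameters here are illustrative):

```python
import numpy as np

def lorenz95_step(x, dt, F):
    """One RK4 step of the single-scale Lorenz '95 model
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (cyclic indices)."""
    def rhs(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * dt * k1)
    k3 = rhs(x + 0.5 * dt * k2)
    k4 = rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

n, F, dt, steps = 40, 8.0, 0.01, 50
x0 = np.full(n, F)
x0[0] += 0.01                       # perturb the unstable equilibrium

x64 = x0.astype(np.float64)
x32 = x0.astype(np.float32)         # reduced-precision twin
for _ in range(steps):
    x64 = lorenz95_step(x64, dt, F)
    x32 = lorenz95_step(x32, np.float32(dt), np.float32(F))

# Rounding error accumulates and is amplified by the chaotic dynamics
drift = np.max(np.abs(x64 - x32.astype(np.float64)))
```

Over short windows like this the float32 run stays close to the float64 reference, which is the behaviour the authors exploit when they use short integrations to bound the precision needed for long ones.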
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1994-01-01
We calculate reduced moments ξ̄_q of the matter density fluctuations, up to order q = 5, from counts in cells produced by particle-mesh numerical simulations with scale-free Gaussian initial conditions. We use power-law spectra P(k) ∝ k^n with indices n = -3, -2, -1, 0, 1. Due to the supposed absence of characteristic times or scales in our models, all quantities are expected to depend on a single scaling variable. For each model, the moments at all times can be expressed in terms of the variance ξ̄_2 alone. We look for agreement with the hierarchical scaling ansatz, according to which ξ̄_q ∝ ξ̄_2^(q-1). For n ≤ -2 models, we find strong deviations from the hierarchy, which are mostly due to the presence of boundary problems in the simulations. A small, residual signal of deviation from the hierarchical scaling is, however, also found in n ≥ -1 models. The wide range of spectra considered and the large dynamic range, with careful checks of scaling and shot-noise effects, allow us to reliably detect evolution away from the perturbation theory result.
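The hierarchical ansatz under test can be written compactly as follows (standard large-scale-structure convention; here ξ̄_q denotes the volume-averaged connected q-point correlation of the density contrast):

```latex
\bar{\xi}_q \propto \bar{\xi}_2^{\,q-1},
\qquad
S_q \equiv \frac{\bar{\xi}_q}{\bar{\xi}_2^{\,q-1}} = \text{const},
\qquad q = 3, 4, 5 .
```

Deviations from the hierarchy then appear as a dependence of the amplitudes S_q on the variance ξ̄_2, i.e. on time or smoothing scale.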
Bellamy, Chloe; Altringham, John
2015-01-01
Conservation increasingly operates at the landscape scale. For this to be effective, we need landscape scale information on species distributions and the environmental factors that underpin them. Species records are becoming increasingly available via data centres and online portals, but they are often patchy and biased. We demonstrate how such data can yield useful habitat suitability models, using bat roost records as an example. We analysed the effects of environmental variables at eight spatial scales (500 m - 6 km) on roost selection by eight bat species (Pipistrellus pipistrellus, P. pygmaeus, Nyctalus noctula, Myotis mystacinus, M. brandtii, M. nattereri, M. daubentonii, and Plecotus auritus) using the presence-only modelling software MaxEnt. Modelling was carried out on a selection of 418 data centre roost records from the Lake District National Park, UK. Target group pseudoabsences were selected to reduce the impact of sampling bias. Multi-scale models, combining variables measured at their best performing spatial scales, were used to predict roosting habitat suitability, yielding models with useful predictive abilities. Small areas of deciduous woodland consistently increased roosting habitat suitability, but other habitat associations varied between species and scales. Pipistrellus were positively related to built environments at small scales, and depended on large-scale woodland availability. The other, more specialist, species were highly sensitive to human-altered landscapes, avoiding even small rural towns. The strength of many relationships at large scales suggests that bats are sensitive to habitat modifications far from the roost itself. The fine resolution, large extent maps will aid targeted decision-making by conservationists and planners. We have made available an ArcGIS toolbox that automates the production of multi-scale variables, to facilitate the application of our methods to other taxa and locations. 
Habitat suitability modelling has the potential to become a standard tool for supporting landscape-scale decision-making as relevant data and open source, user-friendly, and peer-reviewed software become widely available.
NASA Astrophysics Data System (ADS)
Pak, A.; Dewald, E. L.; Landen, O. L.; Milovich, J.; Strozzi, D. J.; Berzak Hopkins, L. F.; Bradley, D. K.; Divol, L.; Ho, D. D.; MacKinnon, A. J.; Meezan, N. B.; Michel, P.; Moody, J. D.; Moore, A. S.; Schneider, M. B.; Town, R. P. J.; Hsing, W. W.; Edwards, M. J.
2015-12-01
Temporally resolved measurements of the hohlraum radiation flux asymmetry incident onto a bismuth coated surrogate capsule have been made over the first two nanoseconds of ignition relevant laser pulses. Specifically, we study the P2 asymmetry of the incoming flux as a function of cone fraction, defined as the inner-to-total laser beam power ratio, for a variety of hohlraums with different scales and gas fills. This work was performed to understand the relevance of recent experiments, conducted in new reduced-scale neopentane gas filled hohlraums, to full scale helium filled ignition targets. Experimental measurements, matched by 3D view factor calculations, are used to infer differences in symmetry, relative beam absorption, and cross beam energy transfer (CBET), employing an analytic model. Despite differences in hohlraum dimensions and gas fill, as well as in laser beam pointing and power, we find that laser absorption, CBET, and the cone fraction, at which a symmetric flux is achieved, are similar to within 25% between experiments conducted in the reduced and full scale hohlraums. This work demonstrates a close surrogacy in the dynamics during the first shock between reduced-scale and full scale implosion experiments and is an important step in enabling the increased rate of study for physics associated with inertial confinement fusion.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multiple-objective water predictive systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions where spatial variability is low and fine resolutions as required. Model uncertainty is reduced by decreasing the number of computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.
NASA Astrophysics Data System (ADS)
Lin, Jiang; Miao, Chiyuan
2017-04-01
Climate change is considered to be one of the greatest environmental threats, which has urged scientific communities to focus on this topic. Global climate models (GCMs) are the primary tool used for studying climate change. However, GCMs are limited by their coarse spatial resolution and inability to resolve important sub-grid scale features such as terrain and clouds. Statistical downscaling methods can be used to downscale large-scale variables to the local scale. In this study, we assess the applicability of the widely used Statistical Downscaling Model (SDSM) for the Loess Plateau, China. The observed variables were daily mean temperature (TMEAN), maximum temperature (TMAX) and minimum temperature (TMIN) from 1961 to 2005. The daily atmospheric predictors were taken from reanalysis data from 1961 to 2005 and from Beijing Normal University Earth System Model (BNU-ESM) global climate model outputs from 1961 to 2099. The results show that SDSM performs well for these three climatic variables on the Loess Plateau. After downscaling, the root mean square errors of TMEAN, TMAX and TMIN for BNU-ESM were reduced by 70.9%, 75.1%, and 67.2%, respectively. All the rates of change in TMEAN, TMAX and TMIN during the 21st century decreased after SDSM downscaling. We also show that SDSM can effectively reduce uncertainty compared with the raw model outputs. TMEAN uncertainty was reduced by 27.1%, 26.8%, and 16.3% for the future scenarios RCP 2.6, RCP 4.5 and RCP 8.5, respectively. The corresponding reductions in uncertainty were 23.6%, 30.7%, and 18.7% for TMAX, and 37.6%, 31.8%, and 23.2% for TMIN.
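Regression-based statistical downscaling of the SDSM kind can be illustrated with a synthetic predictor-predictand pair: calibrate a linear transfer function between the large-scale variable and the local observation on one period, then measure the RMSE reduction on an independent period. All series below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily series: "observed" local temperature (seasonal cycle plus
# weather noise) and a biased, noisy large-scale GCM proxy of it
n_days = 2000
obs = 10.0 + 12.0 * np.sin(2*np.pi*np.arange(n_days)/365.25) \
      + rng.normal(0.0, 2.0, n_days)
gcm = 0.7 * obs + 5.0 + rng.normal(0.0, 2.0, n_days)

# Calibrate the transfer function on the first half of the record,
# validate the RMSE reduction on the second half
half = n_days // 2
slope, intercept = np.polyfit(gcm[:half], obs[:half], 1)
downscaled = slope * gcm[half:] + intercept

rmse_raw = np.sqrt(np.mean((gcm[half:] - obs[half:])**2))
rmse_down = np.sqrt(np.mean((downscaled - obs[half:])**2))
reduction = 1.0 - rmse_down / rmse_raw   # fractional RMSE improvement
```

SDSM additionally screens among many candidate atmospheric predictors and models daily variability stochastically, but the bias-removing regression step sketched here is the core of why the downscaled RMSE drops so sharply.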
Muley, Pranjali D; Boldor, Dorin
2012-01-01
Use of advanced microwave technology for biodiesel production from vegetable oil is relatively new. Microwave dielectric heating increases process efficiency and reduces reaction time. Microwave heating depends on various factors such as material properties (dielectric and thermo-physical), frequency of operation and system design. Although lab-scale results are promising, it is important to study these parameters and optimize the process before scaling up. A numerical modeling approach can be applied to predict heating and temperature profiles, including at larger scales. The process can thus be studied for optimization without actually performing the experiments, reducing the amount of experimental work required. A basic numerical model of continuous electromagnetic heating of biodiesel precursors was developed. A finite element model was built in COMSOL Multiphysics 4.2 by coupling the electromagnetic problem with the fluid flow and heat transfer problems. The chemical reaction was not taken into account. Material dielectric properties were obtained experimentally, while the thermal properties were obtained from the literature (all properties were temperature dependent). The model was tested at two power levels, 4000 W and 4700 W, at a constant flow rate of 840 mL/min. The electric field, electromagnetic power density flow and temperature profiles were studied. The resulting temperature profiles were validated against temperatures measured at specific locations in the experiment. The results obtained were in good agreement with the experimental data.
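The thermal side of such a model is constrained by a simple steady-state energy balance, which gives an upper bound on the outlet temperature if all microwave power were absorbed by the fluid. A sketch with rough oil properties (the density, heat capacity and absorption efficiency are all assumed here, not the paper's measured values):

```python
def outlet_temperature(T_in, power, flow_ml_min, rho=0.92, cp=2000.0, eff=1.0):
    """Steady-state outlet temperature (deg C) of a continuously heated liquid
    from the energy balance P*eff = m_dot * cp * (T_out - T_in).
    rho (g/mL) and cp (J/kg/K) are rough vegetable-oil values; eff is the
    assumed fraction of microwave power absorbed by the fluid."""
    m_dot = flow_ml_min * rho / 1000.0 / 60.0   # mass flow rate, kg/s
    return T_in + eff * power / (m_dot * cp)

# Upper-bound outlet temperatures at the two tested power levels, 840 mL/min
T_out_4000 = outlet_temperature(T_in=25.0, power=4000.0, flow_ml_min=840.0)
T_out_4700 = outlet_temperature(T_in=25.0, power=4700.0, flow_ml_min=840.0)
```

The full finite element model distributes this absorbed power over the flow field via the computed electric field, so its local temperatures differ, but the bulk temperature rise it predicts must respect this balance.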
NASA Astrophysics Data System (ADS)
French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter
2016-03-01
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. 
However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed 'appropriate complexity modelling' of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.
NASA Technical Reports Server (NTRS)
Kazin, S. B.; Paas, J. E.; Minzner, W. R.
1973-01-01
A scale model of the bypass flow region of a 1.5 pressure ratio, single-stage, low-tip-speed fan was tested with a serrated rotor leading edge to determine its effects on noise generation. The serrated rotor was produced by cutting teeth into the leading edge of the nominal rotor blades. The effects of speed and exhaust nozzle area on the scale model's noise characteristics were investigated with both the nominal rotor and the serrated rotor. Acoustic results indicate the serrations reduced front-quadrant PNLs at takeoff power. In particular, the 200 foot (61.0 m) sideline noise was reduced by 3 to 4 PNdB at 40 deg for nominal and large nozzle operation. However, the rear-quadrant maximum sideline PNLs were increased 1.5 to 3 PNdB at approach thrust and up to 2 PNdB at takeoff thrust with these serrated rotor blades. The configuration with the serrated rotor produced the lowest maximum 200 foot (61.0 m) sideline PNL for any given thrust when the large nozzle (116% of design area) was employed.
Web based visualization of large climate data sets
Alder, Jay R.; Hostetler, Steven W.
2015-01-01
We have implemented the USGS National Climate Change Viewer (NCCV), which is an easy-to-use web application that displays future projections from global climate models over the United States at the state, county and watershed scales. We incorporate the NASA NEX-DCP30 statistically downscaled temperature and precipitation for 30 global climate models being used in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and hydrologic variables we simulated using a simple water-balance model. Our application summarizes very large, complex data sets at scales relevant to resource managers and citizens and makes climate-change projection information accessible to users of varying skill levels. Tens of terabytes of high-resolution climate and water-balance data are distilled to compact binary format summary files that are used in the application. To alleviate slow response times under high loads, we developed a map caching technique that reduces the time it takes to generate maps by several orders of magnitude. The reduced access time scales to >500 concurrent users. We provide code examples that demonstrate key aspects of data processing, data exporting/importing and the caching technique used in the NCCV.
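The caching idea described above — generate a map once, then serve it from a fast store — can be sketched generically. The function names, keys, and render step below are illustrative, not the NCCV's actual code:

```python
import time

# Hypothetical render step standing in for expensive map generation.
def render_map(model, scenario, variable, year):
    time.sleep(0.01)  # simulate slow data access and rendering
    return f"map:{model}/{scenario}/{variable}/{year}"

_cache = {}  # key-value store of pre-rendered maps

def get_map(model, scenario, variable, year):
    """Serve a cached map if available; render and cache it otherwise."""
    key = (model, scenario, variable, year)
    if key not in _cache:
        _cache[key] = render_map(model, scenario, variable, year)  # slow path
    return _cache[key]

t0 = time.perf_counter(); get_map("CCSM4", "rcp85", "tmax", 2050)
cold = time.perf_counter() - t0
t0 = time.perf_counter(); get_map("CCSM4", "rcp85", "tmax", 2050)
warm = time.perf_counter() - t0
assert warm < cold  # cached lookup avoids regeneration entirely
```

In a deployed service the dictionary would be replaced by files or a shared cache keyed the same way, which is how orders-of-magnitude speedups under load are obtained.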
In the United States, regional-scale photochemical models are being used to design emission control strategies needed to meet the relevant National Ambient Air Quality Standards (NAAQS) within the framework of the attainment demonstration process. Previous studies have shown that...
Developing a Drosophila Model of Schwannomatosis
2012-08-01
the entire Drosophila melanogaster genome and compared...et al., 2009; Hanahan and Weinberg, 2011). Over the last decade, the fruit fly Drosophila melanogaster has become an important model system for cancer...studies. Reduced redundancy in the Drosophila genome compared with that of humans, coupled with the ability to conduct large-scale genetic screens
Terry Turbopump Analytical Modeling Efforts in Fiscal Year 2016 - Progress Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, Douglas; Ross, Kyle; Cardoni, Jeffrey N
This document details the Fiscal Year 2016 modeling efforts to define the true operating limitations (margins) of the Terry turbopump systems used in the nuclear industry for Milestone 3 (full-scale component experiments) and Milestone 4 (Terry turbopump basic science experiments) experiments. The overall multinational-sponsored program creates the technical basis to: (1) reduce and defer additional utility costs, (2) simplify plant operations, and (3) provide a better understanding of the true margin which could reduce overall risk of operations.
Generation of Fullspan Leading-Edge 3D Ice Shapes for Swept-Wing Aerodynamic Testing
NASA Technical Reports Server (NTRS)
Camello, Stephanie C.; Lee, Sam; Lum, Christopher; Bragg, Michael B.
2016-01-01
The deleterious effect of ice accretion on aircraft is often assessed through dry-air flight and wind tunnel testing with artificial ice shapes. This paper describes a method to create fullspan swept-wing artificial ice shapes from partial-span ice segments acquired in the NASA Glenn Icing Research Tunnel for aerodynamic wind-tunnel testing. Full-scale ice accretion segments were laser scanned from the Inboard, Midspan, and Outboard wing station models of the 65% scale Common Research Model (CRM65) aircraft configuration. These were interpolated and extrapolated using a weighted averaging method to generate fullspan ice shapes from the root to the tip of the CRM65 wing. The results showed that this interpolation method was able to preserve many of the highly three-dimensional features typically found on swept-wing ice accretions. The interpolated fullspan ice shapes were then scaled to fit the leading edge of an 8.9% scale version of the CRM65 wing for aerodynamic wind-tunnel testing. Reduced-fidelity versions of the fullspan ice shapes were also created in which most of the local three-dimensional features were removed. The fullspan artificial ice shapes and the reduced-fidelity versions were manufactured using stereolithography.
Application of fracture toughness scaling models to the ductile-to- brittle transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, R.E.; Joyce, J.A.
1996-01-01
An experimental investigation of fracture toughness in the ductile-brittle transition range was conducted. A large number of ASTM A533, Grade B steel, bend and tension specimens with varying crack lengths were tested throughout the transition region. Cleavage fracture toughness scaling models were utilized to correct the data for the loss of constraint in short-crack specimens and tension geometries. The toughness scaling models were effective in reducing the scatter in the data, but tended to over-correct the results for the short-crack bend specimens. A proposed ASTM Test Practice for Fracture Toughness in the Transition Range, which employs a master curve concept, was applied to the results. The proposed master curve overpredicted the fracture toughness in the mid-transition, and a modified master curve was developed that more accurately modeled the transition behavior of the material. Finally, the modified master curve and the fracture toughness scaling models were combined to predict the as-measured fracture toughness of the short-crack bend and tension specimens. It was shown that when the scaling models over-correct the data for loss of constraint, they can also lead to non-conservative estimates of the increase in toughness for low-constraint geometries.
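For context, the master curve referenced above is commonly written (in the form cited from ASTM E1921) as a median cleavage toughness that depends only on the offset from a material reference temperature T0; a sketch assuming those standard coefficients:

```python
import math

def kjc_median(T, T0):
    """Median cleavage fracture toughness, MPa*sqrt(m), in the master-curve
    form commonly cited from ASTM E1921: 30 + 70*exp(0.019*(T - T0)).
    T and the reference temperature T0 are in degrees Celsius."""
    return 30.0 + 70.0 * math.exp(0.019 * (T - T0))

# At T = T0 the median toughness is 100 MPa*sqrt(m) by construction,
# and toughness rises steeply through the transition above T0.
assert abs(kjc_median(-50.0, -50.0) - 100.0) < 1e-9
assert kjc_median(-30.0, -50.0) > kjc_median(-50.0, -50.0)
```

The "modified master curve" of the study adjusts this shape; the exact modification is not given in the abstract.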
NASA Astrophysics Data System (ADS)
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km² in the US, Canada, and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
Understanding and Controlling Sialylation in a CHO Fc-Fusion Process
Lewis, Amanda M.; Croughan, William D.; Aranibar, Nelly; Lee, Alison G.; Warrack, Bethanne; Abu-Absi, Nicholas R.; Patel, Rutva; Drew, Barry; Borys, Michael C.; Reily, Michael D.; Li, Zheng Jian
2016-01-01
A Chinese hamster ovary (CHO) bioprocess, where the product is a sialylated Fc-fusion protein, was operated at pilot and manufacturing scale and significant variation of sialylation level was observed. In order to more tightly control glycosylation profiles, we sought to identify the cause of variability. Untargeted metabolomics and transcriptomics methods were applied to select samples from the large scale runs. Lower sialylation was correlated with elevated mannose levels, a shift in glucose metabolism, and increased oxidative stress response. Using a 5-L scale model operated with a reduced dissolved oxygen set point, we were able to reproduce the phenotypic profiles observed at manufacturing scale, including lower sialylation, higher lactate, and lower ammonia levels. Targeted transcriptomics and metabolomics confirmed that reduced oxygen levels resulted in increased mannose levels, a shift towards glycolysis, and increased oxidative stress response similar to the manufacturing scale. Finally, we propose a biological mechanism linking large scale operation and sialylation variation. Oxidative stress results from gas transfer limitations at large scale and the presence of oxygen dead-zones, inducing upregulation of glycolysis and mannose biosynthesis, and downregulation of hexosamine biosynthesis and acetyl-CoA formation. The lower flux through the hexosamine pathway and reduced intracellular pools of acetyl-CoA led to reduced formation of N-acetylglucosamine and N-acetylneuraminic acid, both key building blocks of N-glycan structures. This study reports for the first time a link between oxidative stress and mammalian protein sialylation. In this study, process, analytical, metabolomic, and transcriptomic data at manufacturing, pilot, and laboratory scales were taken together to develop a systems-level understanding of the process and identify oxygen limitation as the root cause of glycosylation variability. PMID:27310468
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications, such as reactive transport and contaminant remediation, depends strongly on the macroscopic mixing occurring in the aquifer. In remediation activities, it is fundamental to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for understanding the mixing process is not well studied, partially because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations over a range of subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need to develop computationally efficient models that accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning. These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; however, the method by which ROMs are constructed is different. Here, we present a physics-informed machine learning framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, support vector machines (SVMs) are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield.
Scaling law parameters dependence on model inputs are evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable for analyses of alternative site remediation scenarios.
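The two-step workflow described above (input screening followed by surrogate fitting) can be illustrated with a deliberately simplified stand-in: linear correlation screening in place of random forests/F-test/mutual information, and a least-squares fit in place of SVMs, on synthetic data rather than high-fidelity simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))   # 4 candidate model inputs
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + 0.01 * rng.normal(size=200)  # QoI samples

# Step 1 (stand-in for RF/F-test/mutual information): rank inputs by
# absolute correlation with the QoI and retain the two strongest.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(4)])
keep = np.argsort(scores)[::-1][:2]

# Step 2 (stand-in for the SVM fit): least-squares surrogate in the
# retained inputs, evaluable far faster than a high-fidelity simulation.
Z = np.column_stack([np.ones(len(y)), X[:, keep]])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ coef
assert set(keep.tolist()) == {0, 1}  # the informative inputs are recovered
assert np.std(resid) < 0.05          # surrogate reproduces the QoI closely
```

The real framework replaces both steps with nonlinear learners, but the structure — screen inputs, then fit a cheap predictor of the QoI — is the same.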
Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.
Salama, Mhd Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full- and reduced-resolution MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors.
Characterization of double continuum formulations of transport through pore-scale information
NASA Astrophysics Data System (ADS)
Porta, G.; Ceriotti, G.; Bijeljic, B.
2016-12-01
Information on pore-scale characteristics is becoming increasingly available, at unprecedented levels of detail, from modern visualization and data-acquisition techniques. These advancements are not fully matched by corresponding operational procedures through which theoretical findings can be engineered to reduce the uncertainty associated with the outputs of continuum-scale models employed at large scales. We present here a modeling approach that rests on pore-scale information to achieve a complete characterization of a double continuum model of transport and fluid-fluid reactive processes. Our model makes full use of pore-scale velocity distributions to identify mobile and immobile regions. We do so on the basis of a pointwise (in the pore space) evaluation of the relative strength of advection and diffusion time scales, as rendered by spatially variable values of local Péclet numbers. After mobile and immobile regions are demarcated, we build a simplified unit cell which is employed as a representative proxy of the real porous domain. This model geometry is then employed to simplify the computation of the effective parameters embedded in the double continuum transport model, while retaining relevant information from the pore-scale characterization of the geometry and velocity field. We document results which illustrate the applicability of the methodology to predict transport of a passive tracer within two- and three-dimensional media upon comparison with direct pore-scale numerical simulation of transport in the same geometrical settings. We also show preliminary results on the extension of this model to fluid-fluid reactive transport processes. In this context, we focus on results obtained in two-dimensional porous systems. We discuss the impact of critical quantities required as input to our modeling approach to obtain continuum-scale outputs.
We identify the key limitations of the proposed methodology and discuss its capability also in comparison with alternative approaches grounded, e.g., on nonlocal and particle-based approximations.
Impact of Adsorption on Gas Transport in Nanopores.
Wu, Tianhao; Zhang, Dongxiao
2016-03-29
Given the complex nature of the interaction between gas and solid atoms, the development of nanoscale science and technology has engendered a need for further understanding of gas transport behavior through nanopores and more tractable models for large-scale simulations. In the present paper, we utilize molecular dynamic simulations to demonstrate the behavior of gas flow under the influence of adsorption in nano-channels consisting of illite and graphene, respectively. The results indicate that velocity oscillation exists along the cross-section of the nano-channel, and the total mass flow could be either enhanced or reduced depending on variations in adsorption under different conditions. The mechanisms can be explained by the extra average perturbation stress arising from density oscillation via the novel perturbation model for micro-scale simulation, and approximated via the novel dual-region model for macro-scale simulation, which leads to a more accurate permeability correction model for industrial applications than is currently available.
Satellite-Scale Snow Water Equivalent Assimilation into a High-Resolution Land Surface Model
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle J.M.; Reichle, Rolf H.; Houser, Paul R.; Arsenault, Kristi R.; Verhoest, Niko E.C.; Paulwels, Valentijn R.N.
2009-01-01
An ensemble Kalman filter (EnKF) is used in a suite of synthetic experiments to assimilate coarse-scale (25 km) snow water equivalent (SWE) observations (typical of satellite retrievals) into fine-scale (1 km) model simulations. Coarse-scale observations are assimilated directly using an observation operator for mapping between the coarse and fine scales or, alternatively, after disaggregation (re-gridding) to the fine-scale model resolution prior to data assimilation. In either case, observations are assimilated either simultaneously or independently for each location. Results indicate that assimilating disaggregated fine-scale observations independently (method 1D-F1) is less efficient than assimilating a collection of neighboring disaggregated observations (method 3D-Fm). Direct assimilation of coarse-scale observations is superior to a priori disaggregation. Independent assimilation of individual coarse-scale observations (method 3D-C1) can bring the overall mean analyzed field close to the truth, but does not necessarily improve estimates of the fine-scale structure. There is a clear benefit to simultaneously assimilating multiple coarse-scale observations (method 3D-Cm) even when the entire domain is observed, indicating that underlying spatial error correlations can be exploited to improve SWE estimates. Method 3D-Cm avoids artificial transitions at the coarse observation pixel boundaries and can reduce the RMSE by 60% when compared to the open loop in this study.
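A toy version of direct coarse-scale assimilation — one coarse observation, an averaging observation operator, and a perturbed-observation EnKF update — can be sketched as follows. Dimensions, error statistics, and the one-observation setup are illustrative only, not the experimental configuration of the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_fine, n_ens, obs_err = 25, 100, 0.1
truth = np.sin(np.linspace(0, np.pi, n_fine))          # fine-scale truth
H = np.full((1, n_fine), 1.0 / n_fine)                 # coarse pixel = spatial mean
ens = truth[None, :] + 0.5 * rng.normal(size=(n_ens, n_fine))  # prior ensemble
y = H @ truth + obs_err * rng.normal()                 # coarse observation

Xp = ens - ens.mean(axis=0)                            # ensemble perturbations
P_HT = Xp.T @ (Xp @ H.T) / (n_ens - 1)                 # cross-covariance P H^T
S = H @ P_HT + obs_err**2                              # innovation covariance
K = P_HT / S                                           # Kalman gain (single obs)
perturbed_y = y + obs_err * rng.normal(size=(n_ens, 1))
analysis = ens + (perturbed_y - ens @ H.T) @ K.T       # EnKF update

prior_innov = abs((H @ ens.mean(0) - y).item())
post_innov = abs((H @ analysis.mean(0) - y).item())
# The analysis mean is pulled toward the coarse observation.
assert post_innov < prior_innov + 0.05
```

The interesting regimes in the paper (3D-Cm vs. 3D-C1, spatially correlated errors) correspond to stacking many such coarse rows into H and updating jointly.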
Normal forms for reduced stochastic climate models
Majda, Andrew J.; Franzke, Christian; Crommelin, Daan
2009-01-01
The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
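While the exact normal-form coefficients must be estimated from data, a scalar SDE with the qualitative ingredients named above — strong cubic damping plus correlated additive and multiplicative (CAM) noise driven by a single Wiener process — can be simulated with a standard Euler–Maruyama scheme. The coefficients here are illustrative, not fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 2e-3, 100_000
a, c = 0.5, 1.0           # linear and cubic damping coefficients
sig_a, sig_m = 0.3, 0.2   # additive and multiplicative noise amplitudes
x = np.empty(n); x[0] = 0.0
dW = rng.normal(scale=np.sqrt(dt), size=n - 1)  # Wiener increments
for k in range(n - 1):
    drift = -a * x[k] - c * x[k] ** 3           # strong cubic dissipation
    noise = (sig_a + sig_m * x[k]) * dW[k]      # CAM: one increment, both terms
    x[k + 1] = x[k] + drift * dt + noise        # Euler-Maruyama step
burn = x[n // 10:]  # discard the transient
assert abs(burn.mean()) < 0.2
assert 0.05 < burn.std() < 1.0
```

The correlation of the additive and multiplicative terms (both multiplying the same dW) is what produces the skewed, non-Gaussian statistics these normal forms are designed to capture.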
Module Degradation Mechanisms Studied by a Multi-Scale Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Steve; Al-Jassim, Mowafak; Hacke, Peter
2016-11-21
A key pathway to meeting the Department of Energy SunShot 2020 goals is to reduce financing costs by improving investor confidence through improved photovoltaic (PV) module reliability. A comprehensive approach to further understand and improve PV reliability includes characterization techniques and modeling from module to atomic scale. Imaging techniques, which include photoluminescence, electroluminescence, and lock-in thermography, are used to locate localized defects responsible for module degradation. Small area samples containing such defects are prepared using coring techniques and are then suitable and available for microscopic study and specific defect modeling and analysis.
NASA Technical Reports Server (NTRS)
Stewart, E. C.; Doggett, R. V., Jr.
1978-01-01
Some experimental results are presented from wind tunnel studies of a dynamic model equipped with an aeromechanical gust alleviation system for reducing the normal acceleration response of light airplanes. The gust alleviation system consists of two auxiliary aerodynamic surfaces that deflect the wing flaps through mechanical linkages when a gust is encountered to maintain nearly constant airplane lift. The gust alleviation system was implemented on a 1/6-scale, rod-mounted, free-flying model that is geometrically and dynamically representative of small, four-place, high-wing, single-engine light airplanes. The effects of flaps with different spans, two sizes of auxiliary aerodynamic surfaces, plain and double-hinged flaps, and a flap-elevator interconnection were studied. The model test results are presented in terms of predicted root mean square response of the full-scale airplane to atmospheric turbulence. The results show that the gust alleviation system reduces the root mean square normal acceleration response by 30 percent in comparison with the response in the flaps-locked condition. Small reductions in pitch-rate response were also obtained. It is believed that substantially larger reductions in normal acceleration can be achieved by reducing the rather high levels of mechanical friction which were extant in the alleviation system of the present model.
Air injection test on a Kaplan turbine: prototype - model comparison
NASA Astrophysics Data System (ADS)
Angulo, M.; Rivetti, A.; Díaz, L.; Liscia, S.
2016-11-01
Air injection is a well-known means of reducing pressure pulsation magnitude in turbines, especially of the Francis type. In large Kaplan designs, although less common, it can be a solution to mitigate the vibrations that arise when tip vortex cavitation becomes erosive and induces structural vibrations. In order to study this alternative, aeration tests were performed on a Kaplan turbine at model and prototype scales. The research focused on the efficiency of different injected air flow rates in reducing vibrations, especially at the draft tube and the discharge ring, and on the magnitude of the efficiency drop. It was found that results at both scales present the same trend, in particular for vibration levels at the discharge ring. The efficiency drop was overestimated in model tests, while on the prototype it was less than 0.2% for all power outputs. On the prototype, air has a beneficial effect in reducing pressure fluctuations at air flow rates up to 0.2‰. On the model, high-speed image processing helped to quantify the volume of tip vortex cavitation, which is strongly correlated with the vibration level. The hydrophone measurements did not capture the cavitation intensity when air was injected; on the prototype, however, it was detected by a sonometer installed in the draft tube access gallery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crater, Jason; Galleher, Connor; Lievense, Jeff
NREL is developing an advanced aerobic bubble column model using Aspen Custom Modeler (ACM). The objective of this work is to integrate the new fermentor model with existing techno-economic models in Aspen Plus and Excel to establish a new methodology for guiding process design. To assist this effort, NREL has contracted Genomatica to critique and make recommendations for improving NREL's bioreactor model and large-scale aerobic bioreactor design for biologically producing lipids at commercial scale. Genomatica has highlighted a few areas for improving the functionality and effectiveness of the model. Genomatica recommends using a compartment model approach with an integrated black-box kinetic model of the production microbe. We also suggest including calculations for stirred tank reactors to extend the model's functionality and adaptability for future process designs. Genomatica also suggests making several modifications to NREL's large-scale lipid production process design. The recommended process modifications are based on Genomatica's internal techno-economic assessment experience and are focused primarily on minimizing capital and operating costs. These recommendations include selecting/engineering a thermotolerant yeast strain with lipid excretion; using bubble column fermentors; increasing the size of production fermentors; reducing the number of vessels; employing semi-continuous operation; and recycling cell mass.
Nontidal Loading Applied in VLBI Geodetic Analysis
NASA Astrophysics Data System (ADS)
MacMillan, D. S.
2015-12-01
We investigate the application of nontidal atmospheric pressure, hydrology, and ocean loading series in the analysis of VLBI data. The annual amplitude of the VLBI scale variation is reduced to less than 0.1 ppb as a result of the annual components of the vertical loading series. VLBI site vertical scatter and baseline length scatter are reduced when these loading models are applied. We operate nontidal loading services for hydrology loading (GLDAS model), atmospheric pressure loading (NCEP), and nontidal ocean loading (JPL ECCO model). As an alternative validation, we compare these loading series with corresponding series generated by other analysis centers.
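Estimating the annual amplitude of a scale (or station height) series, as in the sub-0.1 ppb figure above, reduces to a least-squares fit of a one-cycle-per-year harmonic plus an offset; a sketch with synthetic data:

```python
import numpy as np

def annual_amplitude(t_years, series):
    """Amplitude of the annual harmonic of a series sampled at decimal years,
    from a least-squares fit of [offset, cos, sin] at one cycle per year."""
    w = 2 * np.pi  # one cycle per year
    A = np.column_stack([np.ones_like(t_years),
                         np.cos(w * t_years), np.sin(w * t_years)])
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    return float(np.hypot(coef[1], coef[2]))  # amplitude from cos/sin parts

t = np.linspace(2000.0, 2010.0, 500)
series = 0.3 + 0.25 * np.cos(2 * np.pi * (t - 0.12))  # known 0.25 amplitude
assert abs(annual_amplitude(t, series) - 0.25) < 1e-6
```

Comparing this amplitude before and after applying the vertical loading corrections quantifies how much of the annual scale signal the loading models absorb.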
Transition between inverse and direct energy cascades in multiscale optical turbulence
Malkin, V. M.; Fisch, N. J.
2018-03-06
Multiscale turbulence naturally develops and plays an important role in many fluid, gas, and plasma phenomena. Statistical models of multiscale turbulence usually employ Kolmogorov hypotheses of spectral locality of interactions (meaning that interactions primarily occur between pulsations of comparable scales) and scale-invariance of turbulent pulsations. However, optical turbulence described by the nonlinear Schrödinger equation exhibits breaking of both the Kolmogorov locality and scale-invariance. A weaker form of spectral locality that holds for multiscale optical turbulence enables a derivation of simplified evolution equations that reduce the problem to a single-scale modeling. We present the derivation of these equations for Kerr media with random inhomogeneities. Then, we find the analytical solution that exhibits a transition between inverse and direct energy cascades in optical turbulence.
The influence of large-scale wind power on global climate.
Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J
2004-11-16
Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.
Medvigy, David; Moorcroft, Paul R
2012-01-19
Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.
Reducing the convective losses of cavity receivers
NASA Astrophysics Data System (ADS)
Flesch, Robert; Grobbel, Johannes; Stadler, Hannes; Uhlig, Ralf; Hoffschmidt, Bernhard
2016-05-01
Convective losses reduce the efficiency of cavity receivers used in solar power towers, especially under windy conditions. Therefore, measures should be taken to reduce these losses. In this paper two different measures are analyzed: an air curtain and a partial window which covers one third of the aperture opening. The cavity without modifications and the cavity with a partial window were analyzed in a cryogenic wind tunnel at -173°C. The cryogenic environment allows transferring the results from the small model cavity to a large-scale receiver with Gr ≈ 3.9·10^10. The cavity with the two modifications in the wind tunnel environment was also analyzed with a CFD model, which was validated by comparing the numerical and experimental results. Both modifications are capable of reducing the convection losses; in the best case a reduction of about 50% was achieved.
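The Grashof-number similarity argument behind the cryogenic approach can be sketched numerically: cooling the air raises the expansion coefficient (β = 1/T) and sharply lowers the kinematic viscosity, so a small model can match the Grashof number of a full-scale receiver. All property values and lengths below are illustrative assumptions, not data from the paper.

```python
def grashof(beta, dT, L, nu, g=9.81):
    """Grashof number Gr = g * beta * dT * L**3 / nu**2 (buoyancy vs. viscous forces)."""
    return g * beta * dT * L**3 / nu**2

# Illustrative air properties (assumed values):
beta_amb, nu_amb = 1 / 300, 1.6e-5   # ambient air, ~300 K
beta_cry, nu_cry = 1 / 100, 2.0e-6   # cryogenic air, ~100 K (-173 deg C)

# Solve Gr_model = Gr_full for the model length at the same temperature difference:
L_full = 10.0  # assumed receiver length scale, m
L_model = L_full * ((beta_amb / beta_cry) * (nu_cry / nu_amb) ** 2) ** (1 / 3)
```

With these assumed properties the model can be roughly a sixth of full scale at equal Gr, which is the similarity rationale for testing a small cavity in a cryogenic tunnel.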
Large scale structures in the kinetic gravity braiding model that can be unbraided
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, Rampei; Yamamoto, Kazuhiro, E-mail: rampei@theo.phys.sci.hiroshima-u.ac.jp, E-mail: kazuhiro@hiroshima-u.ac.jp
2011-04-01
We study cosmological consequences of a kinetic gravity braiding model, which is proposed as an alternative to the dark energy model. The kinetic braiding model we study is characterized by a parameter n, which corresponds to the original galileon cosmological model for n = 1. We find that the background expansion of the universe of the kinetic braiding model is the same as the Dvali-Turner model, which reduces to that of the standard cold dark matter model with a cosmological constant (ΛCDM model) for n equal to infinity. We also find that the evolution of the linear cosmological perturbation in the kinetic braiding model reduces to that of the ΛCDM model for n = ∞. Then, we focus our study on the growth history of the linear density perturbation as well as the spherical collapse in the nonlinear regime of the density perturbations, which might be important in order to distinguish between the kinetic braiding model and the ΛCDM model when n is finite. The theoretical prediction for the large scale structure is confronted with the multipole power spectrum of the luminous red galaxy sample of the Sloan Digital Sky Survey. We also discuss future prospects of constraining the kinetic braiding model using a future redshift survey like the WFMOS/SuMIRe PFS survey as well as the cluster redshift distribution in the South Pole Telescope survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.
Establishing an integrated human milk banking approach to strengthen newborn care.
DeMarchis, A; Israel-Ballard, K; Mansen, Kimberly Amundson; Engmann, C
2017-05-01
The provision of donor human milk can significantly reduce morbidity and mortality among vulnerable infants and is recommended by the World Health Organization as the next best option when a mother's own milk is unavailable. Regulated human milk banks can meet this need, however, scale-up has been hindered by the absence of an appropriate model for resource-limited settings and a lack of policy support for human milk banks and for the operational procedures supporting them. To reduce infant mortality, human milk banking systems need to be scaled up and integrated with other components of newborn care. This article draws on current guidelines and best practices from human milk banks to offer a compilation of universal requirements that provide a foundation for an integrated model of newborn care that is appropriate for low- and high-resource settings alike.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggerone, G.T.; Rogers, D.E.
Adult sockeye salmon scales, which provide an index of annual salmon growth in fresh and marine waters during 1965-1997, were measured to examine the effects on growth and adult returns of large spawning escapements influenced by the Exxon Valdez oil spill. Scale growth in freshwater was significantly reduced by the large 1989 spawning escapements in the Kenai River system, Red Lake, and Akalura Lake, but not in Chignik Lake. These data suggest that sockeye growth in freshwater may be less stable following the large escapement. Furthermore, the observation that a large escapement adversely affects growth of adjacent brood years of salmon has important implications for stock-recruitment modeling. In Prince William Sound, Coghill Lake sockeye salmon that migrated through oil-contaminated waters did not exhibit noticeably reduced marine growth, but a model was developed that might explain low adult returns in recent years.
Yokokura, Ana Valéria Carvalho Pires; Silva, Antônio Augusto Moura da; Fernandes, Juliana de Kássia Braga; Del-Ben, Cristina Marta; Figueiredo, Felipe Pinheiro de; Barbieri, Marco Antonio; Bettiol, Heloisa
2017-12-18
This study aimed to assess the dimensional structure, reliability, convergent validity, discriminant validity, and scalability of the Perceived Stress Scale (PSS). The sample consisted of 1,447 pregnant women in São Luís (Maranhão State) and 1,400 in Ribeirão Preto (São Paulo State), Brazil. The 14- and 10-item versions of the scale were assessed using confirmatory factor analysis with weighted least squares mean and variance adjusted (WLSMV) estimation. In both cities, the two-factor models (a positive factor measuring resilience to stressful situations and a negative factor measuring stressful situations) showed better fit than the single-factor models. The two-factor models for the complete (PSS14) and reduced (PSS10) scales showed good internal consistency (Cronbach's alpha ≥ 0.70). All the factor loadings were ≥ 0.50, except for items 8 and 12 of the negative dimension and item 13 of the positive dimension. The correlations between both dimensions of stress and psychological violence showed the expected magnitude (0.46-0.59), providing evidence of adequate convergent construct validity. The correlations between the scales' positive and negative dimensions were around 0.74-0.78, less than 0.85, which suggests adequate discriminant validity. Extracted mean variance and scalability were slightly higher for the PSS10 than for the PSS14. The results were consistent in both cities. In conclusion, the single-factor solution is not recommended for assessing stress in pregnant women. The reduced, 10-item two-factor scale appears to be more appropriate for measuring perceived stress in pregnant women.
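The internal-consistency criterion used above (Cronbach's alpha ≥ 0.70) follows a standard formula and can be sketched directly; the response data below are hypothetical, invented purely for illustration.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    items: one list of responses per scale item, all the same length,
    with respondents in the same order across items.
    """
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical 4-item scale, 5 respondents (Likert-type responses):
responses = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 1, 5, 3],
    [3, 4, 2, 4, 5],
]
alpha = cronbach_alpha(responses)
```

Items that all track the same underlying construct push alpha toward 1; a value of at least 0.70, as reported for both PSS versions, is the conventional threshold for acceptable internal consistency.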
Use of Item Models in a Large-Scale Admissions Test: A Case Study
ERIC Educational Resources Information Center
Sinharay, Sandip; Johnson, Matthew S.
2008-01-01
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996, 2002). They have the potential to produce large numbers of high-quality items at reduced cost. This article introduces data from an…
The Effect of Large Scale Salinity Gradient on Langmuir Turbulence
NASA Astrophysics Data System (ADS)
Fan, Y.; Jarosz, E.; Yu, Z.; Jensen, T.; Sullivan, P. P.; Liang, J.
2017-12-01
Langmuir circulation (LC) is believed to be one of the leading-order causes of turbulent mixing in the upper ocean. It is important for momentum and heat exchange across the mixed layer (ML) and directly impacts the dynamics and thermodynamics in the upper ocean and lower atmosphere, including the vertical distributions of chemical, biological, optical, and acoustic properties. Based on Craik and Leibovich (1976) theory, large eddy simulation (LES) models have been developed to simulate LC in the upper ocean, yielding new insights that could not be obtained from field observations and turbulence closure models. Due to their high computational cost, LES models are usually limited to small domain sizes and cannot resolve large-scale flows. Furthermore, most LES models used in LC simulations apply periodic boundary conditions in the horizontal direction, which assumes that the physical properties (i.e., temperature and salinity) and expected flow patterns in the area of interest are of a periodically repeating nature, so that the limited small LES domain is representative of the larger area. Using periodic boundary conditions can significantly reduce computational effort, and it is a good assumption for isotropic shear turbulence. However, LC is anisotropic (McWilliams et al. 1997) and was observed to be modulated by crosswind tidal currents (Kukulka et al. 2011). Using symmetrical domains, idealized LES studies also indicate LC could interact with oceanic fronts (Hamlington et al. 2014) and standing internal waves (Chini and Leibovich, 2005). The present study expands our previous LES modeling investigations of Langmuir turbulence to real ocean conditions with large-scale environmental motion that features fresh water inflow into the study region. Large-scale gradient forcing is introduced to the NCAR LES model through scale separation analysis.
The model is applied to a field observation in the Gulf of Mexico in July, 2016 when the measurement site was impacted by large fresh water inflow due to flooding from the Mississippi river. Model results indicate that the strong salinity gradient can reduce the mean flow in the ML and inhibit the turbulence in the planetary boundary layer. The Langmuir cells are also rotated clockwise by the pressure gradient.
The Grand Challenge of Basin-Scale Groundwater Quality Management Modelling
NASA Astrophysics Data System (ADS)
Fogg, G. E.
2017-12-01
The last 50+ years of agricultural, urban and industrial land and water use practices have accelerated the degradation of groundwater quality in the upper portions of many major aquifer systems upon which much of the world relies for water supply. In the deepest and most extensive systems (e.g., sedimentary basins) that typically have the largest groundwater production rates and hold fresh groundwaters on decadal to millennial time scales, most of the groundwater is not yet contaminated. Predicting the long-term future groundwater quality in such basins is a grand scientific challenge. Moreover, determining what changes in land and water use practices would avert future, irreversible degradation of these massive freshwater stores is a grand challenge both scientifically and societally. It is naïve to think that the problem can be solved by eliminating or reducing enough of the contaminant sources, for human exploitation of land and water resources will likely always result in some contamination. The key lies in both reducing the contaminant sources and more proactively managing recharge in terms of both quantity and quality, such that the net influx of contaminants is sufficiently moderate and appropriately distributed in space and time to reverse ongoing groundwater quality degradation. Just as sustainable groundwater quantity management is greatly facilitated with groundwater flow management models, sustainable groundwater quality management will require the use of groundwater quality management models. This new genre of hydrologic model does not yet exist, partly because of the lack of modeling tools and the supporting research to model non-reactive as well as reactive transport on large space and time scales.
It is essential that the contaminant hydrogeology community, which has heretofore focused almost entirely on point-source, plume-scale problems, direct its efforts toward the development of process-based transport modeling tools and analyses capable of appropriately upscaling advection-dispersion and reactions at the basin scale (10^2 km). A road map for research and development in groundwater quality management modeling and its application toward securing future groundwater resources will be discussed.
NASA Astrophysics Data System (ADS)
Mei, Chao; Liu, Jiahong; Wang, Hao; Shao, Weiwei; Xia, Lin; Xiang, Chenyao; Zhou, Jinjun
2018-06-01
Urban inundation is a serious challenge that increasingly confronts the residents of many cities, as well as policymakers, in the context of rapid urbanization and climate change worldwide. In recent years, source control measures (SCMs) such as green roofs, permeable pavements, rain gardens, and vegetative swales have been implemented to address flood inundation in urban settings, and have proven to be cost-effective and sustainable. In order to investigate the ability of SCMs to reduce inundation in a community-scale urban drainage system, a dynamic rainfall-runoff model of such a system was developed based on SWMM. SCM implementation scenarios were modelled under six design rainstorm events with return periods ranging from 2 to 100 years, and inundation risks of the drainage system were evaluated before and after the proposed implementation of SCMs with a risk-evaluation method based on SWMM and the analytic hierarchy process (AHP). Results show that SCM implementation significantly reduced the hydrological indexes related to inundation risk: across the six rainfall events, reduction rates of average flow, peak flow, and total flooded volume of the drainage system ranged over 28.1-72.1%, 19.0-69.2%, and 33.9-56.0%, respectively. Correspondingly, the inundation risks of the drainage system were significantly reduced after SCM implementation, with risk values falling below 0.2 when the rainfall return period was less than 10 years. The simulations confirm the effectiveness of SCMs in mitigating inundation and quantify their potential for reducing inundation risks in the urban drainage system, providing scientific references for implementing SCMs for inundation control in the study area.
Suryawanshi, Gajendra W.; Hoffmann, Alexander
2015-01-01
Human immunodeficiency virus-1 (HIV-1) employs accessory proteins to evade innate immune responses by neutralizing the anti-viral activity of host restriction factors. Apolipoprotein B mRNA-editing enzyme 3G (APOBEC3G, A3G) and bone marrow stromal cell antigen 2 (BST2) are host resistance factors that potentially inhibit HIV-1 infection. BST2 reduces viral production by tethering budding HIV-1 particles to virus producing cells, while A3G inhibits the reverse transcription (RT) process and induces viral genome hypermutation through cytidine deamination, generating fewer replication competent progeny virus. Two HIV-1 proteins counter these cellular restriction factors: Vpu, which reduces surface BST2, and Vif, which degrades cellular A3G. The contest between these host and viral proteins influences whether HIV-1 infection is established and progresses towards AIDS. In this work, we present an age-structured multi-scale viral dynamics model of in vivo HIV-1 infection. We integrated the intracellular dynamics of anti-viral activity of the host factors and their neutralization by HIV-1 accessory proteins into the virus/cell population dynamics model. We calculate the basic reproductive ratio (Ro) as a function of host-viral protein interaction coefficients, and numerically simulated the multi-scale model to understand HIV-1 dynamics following host factor-induced perturbations. We found that reducing the influence of Vpu triggers a drop in Ro, revealing the impact of BST2 on viral infection control. Reducing Vif’s effect reveals the restrictive efficacy of A3G in blocking RT and in inducing lethal hypermutations, however, neither of these factors alone is sufficient to fully restrict HIV-1 infection. Interestingly, our model further predicts that BST2 and A3G function synergistically, and delineates their relative contribution in limiting HIV-1 infection and disease progression. 
We provide a robust modeling framework for devising novel combination therapies that target HIV-1 accessory proteins and boost antiviral activity of host factors. PMID:26385832
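The role of the basic reproductive ratio in this kind of analysis can be illustrated with a much simpler stand-in than the paper's age-structured multi-scale model: the classic target-cell-limited model, with efficacy parameters loosely representing A3G (reducing infectivity) and BST2 (reducing virion release). All parameter values are hypothetical, chosen only to show how the two restriction factors multiply.

```python
def r0(beta, p, lam, d, delta, c, eps_a3g=0.0, eps_bst2=0.0):
    """Basic reproductive ratio of the classic target-cell-limited model

        dT/dt = lam - d*T - beta*T*V
        dI/dt = beta*T*V - delta*I
        dV/dt = p*I - c*V

    with A3G efficacy eps_a3g scaling down infectivity beta and BST2
    efficacy eps_bst2 scaling down virion release p. T0 = lam/d is the
    uninfected steady-state target-cell count.
    """
    T0 = lam / d
    return (1 - eps_a3g) * beta * (1 - eps_bst2) * p * T0 / (delta * c)

# Hypothetical parameter values for illustration only:
base = dict(beta=5e-7, p=100.0, lam=1e4, d=0.01, delta=1.0, c=23.0)
r0_unrestricted = r0(**base)
# Weakening Vpu leaves more surface BST2 (higher eps_bst2); weakening Vif
# leaves more A3G (higher eps_a3g). The efficacies multiply, so together
# they can push R0 below the epidemic threshold of 1:
r0_restricted = r0(**base, eps_a3g=0.4, eps_bst2=0.4)
```

With these numbers neither efficacy alone drops R0 below 1, but the two together do, mirroring the synergy the abstract describes.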
Ice Accretion with Varying Surface Tension
NASA Technical Reports Server (NTRS)
Bilanin, Alan J.; Anderson, David N.
1995-01-01
During an icing encounter of an aircraft in flight, super-cooled water droplets impinging on an airfoil may splash before freezing. This paper reports tests performed to determine if this effect is significant and uses the results to develop an improved scaling method for use in icing test facilities. Simple laboratory tests showed that drops splash on impact at the Reynolds and Weber numbers typical of icing encounters. Further confirmation of droplet splash came from icing tests performed in the NASA Lewis Icing Research Tunnel (IRT) with a surfactant added to the spray water to reduce the surface tension. The resulting ice shapes were significantly different from those formed when no surfactant was added to the water. These results suggested that the droplet Weber number must be kept constant to properly scale icing test conditions. Finally, the paper presents a Weber-number-based scaling method and reports results from scaling tests in the IRT in which model size was reduced by up to a factor of 3. Scale and reference ice shapes are shown which confirm the effectiveness of this new scaling method.
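The constant-Weber-number constraint can be sketched as follows. The reference condition and the assumption that droplet size is reduced in proportion to the model are illustrative only, not the paper's actual test matrix.

```python
import math

def weber(rho, v, d, sigma):
    """Droplet Weber number We = rho * v**2 * d / sigma (inertia vs. surface tension)."""
    return rho * v**2 * d / sigma

# Illustrative reference (full-scale) condition -- assumed values:
rho_w, sigma = 1000.0, 0.072   # water density kg/m^3, surface tension N/m
v_ref, d_ref = 70.0, 40e-6     # airspeed m/s, droplet median volume diameter m
we_ref = weber(rho_w, v_ref, d_ref, sigma)

# If droplet size is reduced with a 1/3-scale model, holding We constant
# fixes the scaled airspeed: v_scale = v_ref * sqrt(d_ref / d_scale).
scale = 3.0
d_scale = d_ref / scale
v_scale = v_ref * math.sqrt(d_ref / d_scale)
we_scale = weber(rho_w, v_scale, d_scale, sigma)
```

The scaled test runs faster than the reference to compensate for the smaller droplets, keeping the splash physics (inertia versus surface tension) similar between model and full scale.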
NASA Astrophysics Data System (ADS)
Saksena, S.; Merwade, V.; Singhofen, P.
2017-12-01
There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood-risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking part during a flood event to provide accurate flood outputs. Even though the advantages of integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the 2D integrated model (NSE = 0.88), while the computational time is halved. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
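The NSE scores used to compare the two models follow the standard Nash-Sutcliffe definition, which is easy to state in code; the hydrograph values below are hypothetical, not data from the study.

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((o - s)**2) / sum((o - mean(o))**2).

    1.0 is a perfect fit; 0.0 means the model is no better than always
    predicting the observed mean; negative values are worse than the mean.
    """
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical hydrographs (discharge, m^3/s) for illustration:
obs = [10.0, 14.0, 22.0, 35.0, 27.0, 18.0, 12.0]
sim = [11.0, 13.0, 21.0, 33.0, 28.5, 18.5, 13.0]
score = nse(obs, sim)
```

Scores near 0.87-0.88, as reported for the hybrid and fully integrated models, indicate that both reproduce most of the variance in the reference hydrograph.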
Modeling of Nano-Scale Transistors and Memory Devices for Low Power Applications
NASA Astrophysics Data System (ADS)
Cao, Xi
As the feature size of transistors is scaled down to sub-20 nm, the continuous scaling of power has become one of the main challenges of the semiconductor industry. The power issue arises from the barely scalable supply voltage and a limitation on the subthreshold swing (SS) of the conventional metal-oxide-semiconductor field-effect transistor (MOSFET). In this work, self-consistent quantum transport device simulators are developed to examine nanoscale transistors based on black phosphorus (BP) materials. The scaling limit of double-gated BP MOSFETs is assessed. To reduce the SS below the thermionic limit for ultra-steep switching, tunnel FETs (TFETs) and vertical ballistic impact ionization FETs based on BP and its heterojunctions are investigated. Furthermore, the ferroelectric tunneling junction (FTJ) is modeled and examined for potential low-power memory applications. For BP MOSFETs, the device physics at the ultimate scaling limit are examined. The performance of monolayer BP MOSFETs is projected to sub-10 nm and compared with the International Technology Roadmap for Semiconductors (ITRS) requirements, and the interplay of quantum mechanical effects and the highly anisotropic bandstructure of BP at this scale is investigated. By choice of layer number and crystalline direction, BP materials can offer a range of bandgap and effective mass values, which is attractive for TFET applications. Therefore, scaling behaviors of BP TFETs near and below the 10 nm scale are studied. The gate oxide thickness scaling and the effect of high-k dielectrics are compared between the TFETs and the MOSFETs. For TFETs with gate lengths beyond 10 nm and at the sub-10 nm scale, direct source-to-drain tunneling issues are evaluated, and different strategies to achieve ultra-steep switching are specified. In a sub-10 nm graphene-BP-graphene heterojunction transistor, sharp turn-on behavior was observed under a small source-drain bias of 0.1 V.
The fast switch is attributed to a ballistic energy-dependent impact ionization mechanism. A device model is developed, which shows agreement with experimental results. The model is applied to explore the gate oxide scaling behavior and the effect of graphene doping, and to optimize the device for low-power applications. Finally, to keep the integrity of the computing system, the FTJ is studied for its possible use as a low-power memory device. A compact model for the FTJ, covering both static and dynamic behavior, is developed and compared with experimental data. The write energy consumed by the memory cell, comprising one transistor and one FTJ, is estimated by applying the compact model to circuit simulation, and a way to reduce the write energy is suggested.
Hypochondriacal attitudes comprise heterogeneous non-illness-related cognitions.
Schwenzer, Michael; Mathiak, Klaus
2012-10-17
Hypochondriacal attitudes have been associated with cognitions not related to illness: social fears, low self-esteem, and a reduced warm glow effect, i.e., less positive appraisal of familiar stimuli. Only a single study had so far investigated the correlation of hypochondriacal attitudes with the warm glow effect, and the present study aimed to corroborate this association. In particular, the present investigation tested for the first time whether social fears, low self-esteem, and a reduced warm glow effect represent distinct or related biases in hypochondriacal attitudes. Fifty-five volunteers filled in the Hypochondriacal Beliefs and Disease Phobia scales of the Illness Attitude Scales, two scales enquiring about social fears of criticism and intimacy, and the Rosenberg Self-Esteem Scale. The interaction of valence and spontaneous familiarity ratings of Chinese characters indicated the warm glow effect. A stepwise regression model revealed specific covariance of social fears and warm glow with hypochondriacal attitudes, each independent of the respective other variable. The correlation between low self-esteem and hypochondriacal attitudes did not reach significance. Hypochondriacal attitudes are embedded in a heterogeneous cluster of non-illness-related cognitions. Social fears and a reduced cognitive capacity to associate two features (positive appraisal and familiarity) could each diminish the susceptibility to safety signals such as medical reassurance. To compensate for reduced susceptibility to safety signals, multifocal treatment and repeated consultations appear advisable.
Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Marks, D. G.; Gurney, R. J.
2009-12-01
The spatial organization and scaling relationships of snow distribution in mountain environs is ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations to their scaling. A base model simulation characterized these processes with 10m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - were independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
NASA Astrophysics Data System (ADS)
Herrington, A. R.; Reed, K. A.
2018-02-01
A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to reductions in forcing scale that is known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising-bubble experiments, in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture by incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values result in severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.
Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kraucunas, Ian; Molina, Mario J.; Nenes, Athanasios; Penner, Joyce E.; Prather, Kimberly A.; Ramanathan, V.; Ramaswamy, Venkatachalam; Rasch, Philip J.; Ravishankara, A. R.; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-01-01
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol−cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol−cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol−cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty. PMID:27222566
Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.
Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin
2017-02-01
The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
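The abstract's point that averaged metrics can hide large localized misfit is easy to reproduce with toy numbers: two fits with the same mean fractional error can have very different worst-case errors. A sketch, using entirely invented measurement and fit values rather than the paper's dielectric data:

```python
# Two hypothetical fits to the same "measured" values: fit A has a uniform
# ~5% fractional error at every point; fit B is near-perfect except for one
# large localized error. Their average fractional errors coincide, so the
# averaged metric cannot distinguish them.

measured = [10.0, 8.0, 6.0, 4.0, 2.0, 1.0]
fit_a = [10.5, 8.4, 6.3, 4.2, 2.1, 1.05]   # 5% off everywhere
fit_b = [10.0, 8.0, 6.0, 4.0, 2.0, 1.30]   # exact except the last point

def fractional_errors(meas, fit):
    return [abs(f - m) / m for m, f in zip(meas, fit)]

def avg(xs):
    return sum(xs) / len(xs)

ea = fractional_errors(measured, fit_a)
eb = fractional_errors(measured, fit_b)
print(f"fit A: mean {avg(ea):.3f}, max {max(ea):.3f}")
print(f"fit B: mean {avg(eb):.3f}, max {max(eb):.3f}")

# Nearly identical means, very different worst cases.
assert abs(avg(ea) - avg(eb)) < 0.01
assert max(eb) > 4 * max(ea)
```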
Finite-size scaling for discontinuous nonequilibrium phase transitions
NASA Astrophysics Data System (ADS)
de Oliveira, Marcelo M.; da Luz, M. G. E.; Fiore, Carlos E.
2018-06-01
A finite-size scaling theory, originally developed only for transitions to absorbing states [Phys. Rev. E 92, 062126 (2015), 10.1103/PhysRevE.92.062126], is extended to distinct sorts of discontinuous nonequilibrium phase transitions. Expressions for quantities such as response functions, reduced cumulants, and equal-area probability distributions are derived from phenomenological arguments. Irrespective of system details, all these quantities scale with the volume, establishing the dependence on size. The generality of the approach is illustrated through the analysis of different models. The present results are a relevant step toward a unified description of the scaling behavior of nonequilibrium transition processes.
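The stated volume scaling can be illustrated with the simplest caricature of a discontinuous transition: a bimodal order-parameter distribution concentrated on two coexisting phase values. The phase values and weight below are hypothetical, chosen only to show that the susceptibility-like response grows linearly with volume:

```python
# At a discontinuous transition the order-parameter distribution is bimodal:
# two phases with values m1, m2 are occupied with weights p and 1 - p.
# The response function chi = V * (<m^2> - <m>^2) then grows linearly with
# the volume V, the hallmark scaling described above. Illustrative values.

def chi(V, m1, m2, p):
    mean = p * m1 + (1 - p) * m2
    second = p * m1**2 + (1 - p) * m2**2
    return V * (second - mean**2)

m1, m2, p = 0.9, 0.1, 0.5   # coexisting phase values and weight (hypothetical)
for V in (100, 200, 400):
    print(f"V = {V:4d}  chi = {chi(V, m1, m2, p):.1f}")

# Doubling the volume doubles the peak response: chi ~ V.
assert abs(chi(200, m1, m2, p) / chi(100, m1, m2, p) - 2.0) < 1e-12
```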
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of large-scale reservoir systems is time-consuming due to their intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to address the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules with an aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensionality reduction, and (3) reducing computational cost and speeding up the search with WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII, and WMO-ASMO is conducted for the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, improved by 3.87% (from 1.879 to 1.809), with 500 simulations, because of the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. The proposed method is therefore more efficient and provides a better Pareto frontier.
Pizzitutti, Francesco; Pan, William; Feingold, Beth; Zaitchik, Ben; Álvarez, Carlos A; Mena, Carlos F
2018-01-01
Though malaria control initiatives have markedly reduced malaria prevalence in recent decades, global eradication is far from actuality. Recent studies show that environmental and social heterogeneities in low-transmission settings have an increased weight in shaping malaria micro-epidemiology. New integrated and more localized control strategies should be developed and tested. Here we present a set of agent-based models designed to study the influence of local-scale human movements on local-scale malaria transmission in a typical Amazon environment, where malaria transmission is low and strongly connected with seasonal riverine flooding. The agent-based simulations show that the overall malaria incidence is essentially not influenced by local-scale human movements. In contrast, the locations of malaria high-risk spatial hotspots heavily depend on human movements, because simulated malaria hotspots are mainly centered on farms, where laborers work during the day. The agent-based models are then used to test the effectiveness of two different malaria control strategies, both designed to reduce local-scale malaria incidence by targeting hotspots. The first control scenario consists of treating against mosquito bites those people who, during the simulation, enter at least once into hotspots identified from the actual sites where individuals were infected. The second scenario involves the treatment of people entering hotspots computed by assuming that the infection site of every infected individual is the household where that individual lives. Simulations show that both scenarios perform better in controlling malaria than a randomized treatment, although targeting household hotspots shows slightly better performance.
Monthly streamflow forecasting at varying spatial scales in the Rhine basin
NASA Astrophysics Data System (ADS)
Schick, Simon; Rössler, Ole; Weingartner, Rolf
2018-02-01
Model output statistics (MOS) methods can be used to empirically relate an environmental variable of interest to predictions from earth system models (ESMs). This variable often belongs to a spatial scale not resolved by the ESM. Here, using the linear model fitted by least squares, we regress monthly mean streamflow of the Rhine River at Lobith and Basel against seasonal predictions of precipitation, surface air temperature, and runoff from the European Centre for Medium-Range Weather Forecasts. To address potential effects of a scale mismatch between the ESM's horizontal grid resolution and the hydrological application, the MOS method is further tested with an experiment conducted at the subcatchment scale. This experiment applies the MOS method to 133 additional gauging stations located within the Rhine basin and combines the forecasts from the subcatchments to predict streamflow at Lobith and Basel. In doing so, the MOS method is tested for catchment areas covering four orders of magnitude. Using data from the period 1981-2011, the results show that skill, with respect to climatology, is restricted on average to the first month ahead. This result holds both for the predictor combination that mimics the initial conditions and for the predictor combinations that additionally include the dynamical seasonal predictions. The latter, however, reduce the mean absolute error of the former by 5 to 12%, which is consistently reproduced at the subcatchment scale. An additional experiment conducted for 5-day mean streamflow indicates that the dynamical predictions help to reduce uncertainties up to about 20 days ahead, but it also reveals some shortcomings of the present MOS method.
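A minimal sketch of the MOS step described above, with synthetic data standing in for the Rhine records and a single ESM predictor: ordinary least squares relates the predictor to streamflow, and skill is judged by mean absolute error against a climatological (mean) forecast. All coefficients and noise levels are invented:

```python
import random

random.seed(1)

# Synthetic "truth": streamflow depends linearly on one ESM predictor
# (e.g., a runoff anomaly) plus noise. Coefficients are hypothetical.
n = 200
x = [random.gauss(0.0, 1.0) for _ in range(n)]             # ESM predictor
y = [2.0 + 0.8 * xi + random.gauss(0.0, 0.5) for xi in x]  # streamflow

# Ordinary least squares for one predictor (closed form).
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

# Skill versus climatology, measured by mean absolute error (MAE).
mae_mos = sum(abs(yi - (intercept + slope * xi)) for xi, yi in zip(x, y)) / n
mae_clim = sum(abs(yi - ybar) for yi in y) / n
print(f"MAE: MOS {mae_mos:.3f} vs climatology {mae_clim:.3f}")

# The dynamical predictor reduces the error relative to climatology.
assert mae_mos < mae_clim
```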
Modeling effects of climate change and phase shifts on detrital production of a kelp bed.
Krumhansl, Kira A; Lauzon-Guay, Jean-Sébastien; Scheibling, Robert E
2014-03-01
The exchange of energy and nutrients between ecosystems (i.e., resource subsidies) plays a central role in ecological dynamics over a range of spatial and temporal scales. Little attention has been paid to the role of anthropogenic impacts on natural systems in altering the magnitude, timing, and quality of resource subsidies. Kelp ecosystems are highly productive on a local scale and export over 80% of kelp primary production as detritus, subsidizing consumers across broad spatial scales. Here, we generate a model of detrital production from a kelp bed in Nova Scotia to hindcast trends in detrital production based on temperature and wave height recorded in the study region from 1976 to 2009, and to project changes in detrital production that may result from future climate change. Historical and projected increases in temperature and wave height led to higher rates of detrital production through increased blade breakage and kelp dislodgment from the substratum, but this reduced kelp biomass and led to a decline in detrital production in the long-term. We also used the model to demonstrate that the phase shift from a highly productive kelp bed to a low-productivity barrens, driven by the grazing activity of sea urchins, reduces kelp detrital production by several orders of magnitude, an effect that would be exacerbated by projected increases in temperature and wave action. These results indicate that climate-mediated changes in ecological dynamics operating on local scales may alter the magnitude of resource subsidies to adjacent ecosystems, affecting ecological dynamics on regional scales.
On the climate impacts from the volcanic and solar forcings
NASA Astrophysics Data System (ADS)
Varotsos, Costas A.; Lovejoy, Shaun
2016-04-01
Both observational and model estimates show that the main forcings on the atmosphere are of volcanic and solar origin, which act, however, in opposite ways: the former can be very strong but decays at short time scales, whereas the latter increases with time scale. The observed fluctuations in temperature likewise increase at long scales (e.g., centennial and millennial), as do the solar forcings. The common practice is to reduce forcings to radiative equivalents under the assumption that they combine linearly. In order to clarify the validity of the linearity assumption and determine its range of validity, we systematically compare the statistical properties of solar-only, volcanic-only, and combined solar and volcanic forcings over the range of time scales from one to 1000 years. Additionally, we investigate plausible reasons for the discrepancies observed between the measured and modelled anomalies of tropospheric temperatures in the tropics by analysing both the measured and the modelled time series. The results show that the measured temperature fluctuations exhibit white-noise behaviour, while the modelled ones exhibit long-range power-law correlations. We suggest that this persistent signal should be removed from the modelled values in order to achieve better agreement with observations. Keywords: scaling, nonlinear variability, climate system, solar radiation
Persistence of initial conditions in continental scale air quality simulations
NASA Astrophysics Data System (ADS)
Hogrefe, Christian; Roselle, Shawn J.; Bash, Jesse O.
2017-07-01
This study investigates the effect of initial conditions (IC) for pollutant concentrations in the atmosphere and soil on simulated air quality for two continental-scale Community Multiscale Air Quality (CMAQ) model applications. One of these applications was performed for springtime and the second for summertime. Results show that a spin-up period of ten days commonly used in regional-scale applications may not be sufficient to reduce the effects of initial conditions to less than 1% of seasonally-averaged surface ozone concentrations everywhere while 20 days were found to be sufficient for the entire domain for the spring case and almost the entire domain for the summer case. For the summer case, differences were found to persist longer aloft due to circulation of air masses and even a spin-up period of 30 days was not sufficient to reduce the effects of ICs to less than 1% of seasonally-averaged layer 34 ozone concentrations over the southwestern portion of the modeling domain. Analysis of the effect of soil initial conditions for the CMAQ bidirectional NH3 exchange model shows that during springtime they can have an important effect on simulated inorganic aerosols concentrations for time periods of one month or longer. The effects are less pronounced during other seasons. The results, while specific to the modeling domain and time periods simulated here, suggest that modeling protocols need to be scrutinized for a given application and that it cannot be assumed that commonly-used spin-up periods are necessarily sufficient to reduce the effects of initial conditions on model results to an acceptable level. What constitutes an acceptable level of difference cannot be generalized and will depend on the particular application, time period and species of interest. 
Moreover, as the application of air quality models is expanded to cover larger geographical domains, and as these models are increasingly coupled with other modeling systems to better represent air-surface-water exchanges, the effects of model initialization in such applications need to be studied in future work.
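As a back-of-envelope companion to the spin-up findings above: if the initial condition's influence on a simulated concentration decays roughly exponentially with an effective flushing time tau (days), the spin-up needed to fall below a fraction eps follows directly as t = tau ln(1/eps). The tau values below are hypothetical, not fitted to the CMAQ runs; the paper shows the effective decay differs by region, season, and altitude:

```python
import math

# Spin-up length needed for an exponentially decaying initial-condition
# influence to drop below a fraction eps: t = tau * ln(1 / eps).
# The flushing times tau below are illustrative, not CMAQ results.

def spinup_days(tau, eps=0.01):
    """Days until the IC influence falls below eps, given flushing time tau."""
    return tau * math.log(1.0 / eps)

for tau in (2.0, 4.0, 6.5):  # hypothetical effective flushing times (days)
    print(f"tau = {tau:4.1f} d  ->  spin-up ~ {spinup_days(tau):5.1f} d")

# A 10-day spin-up suffices only for fast-flushing regions at the 1% level;
# slower reservoirs (aloft circulation, soil NH3 pools) need far longer.
assert spinup_days(2.0) < 10 < spinup_days(4.0)
```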
Dow, Christopher B; Collins, Brandon M; Stephens, Scott L
2016-03-01
Finding novel ways to plan and implement landscape-level forest treatments that protect sensitive wildlife and other key ecosystem components, while also reducing the risk of large-scale, high-severity fires, can prove difficult. We examined alternative approaches to landscape-scale fuel-treatment design for the same landscape. These approaches included two treatment scenarios generated from an optimization algorithm that reduces modeled fire spread across the landscape, one with resource-protection constraints and one without; the fuel-treatment network that was actually implemented; and a no-treatment scenario. For all four scenarios, we modeled hazardous fire potential based on conditional burn probabilities and projected fire emissions. Results demonstrate that in all three active treatment scenarios, hazardous fire potential, fire area, and emissions were reduced by approximately 50% relative to the untreated condition. They also show that incorporating resource-protection constraints is more effective at reducing modeled fire outputs, possibly due to the greater aggregation of treatments, which creates greater continuity of fuel-treatment blocks across the landscape. The implementation of fuel-treatment networks using different planning techniques that incorporate real-world constraints can reduce the risk of large problematic fires, allow for landscape-level heterogeneity that can provide necessary ecosystem services, create mixed forest stand structures on a landscape, and promote resilience in the uncertain future of climate change.
The use of imprecise processing to improve accuracy in weather & climate prediction
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, T. N.
2014-08-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. 
This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
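The bit-flip fault model mentioned above can be sketched by flipping random low-order mantissa bits of a 64-bit float: confining the flips to low-order bits keeps the relative error small, analogous to restricting inexact hardware to the smallest resolved scales. The bit range chosen below is illustrative, not the emulator's actual fault model:

```python
import random
import struct

random.seed(0)

# Emulate a hardware fault as a single random bit flip in the low-order
# significand bits of an IEEE 754 double. Flips confined to the lowest
# `max_bit` mantissa bits perturb the value only at the level of a few
# hundred thousand ulps, i.e., a tiny relative error.

def flip_low_mantissa_bit(x, max_bit=20):
    """Flip one random bit among the lowest `max_bit` mantissa bits of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << random.randrange(max_bit)
    (y,) = struct.unpack("<d", struct.pack("<Q", bits))
    return y

x = 1.2345678901234567
faulty = [flip_low_mantissa_bit(x) for _ in range(1000)]
worst = max(abs(f - x) / abs(x) for f in faulty)
print(f"worst relative error over 1000 faults: {worst:.3e}")

# Flipping at most mantissa bit 19 of a number near 1 perturbs it by
# no more than ~2^19 * 2^-52, i.e., well under 1e-9 relative error.
assert 0 < worst < 1e-9
```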
Schachtel, Bernard; Aspley, Sue; Shephard, Adrian; Shea, Timothy; Smith, Gary; Schachtel, Emily
2014-07-03
The sore throat pain model has been conducted by different clinical investigators to demonstrate the efficacy of acute analgesic drugs in single-dose randomized clinical trials. The model used here was designed to study the multiple-dose safety and efficacy of lozenges containing flurbiprofen at 8.75 mg. Adults (n=198) with moderate or severe acute sore throat and findings of pharyngitis on a Tonsillo-Pharyngitis Assessment (TPA) were randomly assigned to use either flurbiprofen 8.75 mg lozenges (n=101) or matching placebo lozenges (n=97) under double-blind conditions. Patients sucked one lozenge every three to six hours as needed, up to five lozenges per day, and rated symptoms on 100-mm scales: the Sore Throat Pain Intensity Scale (STPIS), the Difficulty Swallowing Scale (DSS), and the Swollen Throat Scale (SwoTS). Reductions in pain (lasting for three hours) and in difficulty swallowing and throat swelling (for four hours) were observed after a single dose of the flurbiprofen 8.75 mg lozenge (P<0.05 compared with placebo). After using multiple doses over 24 hours, flurbiprofen-treated patients experienced a 59% greater reduction in throat pain, 45% less difficulty swallowing, and 44% less throat swelling than placebo-treated patients (all P<0.01). There were no serious adverse events. Utilizing the sore throat pain model with multiple doses over 24 hours, flurbiprofen 8.75 mg lozenges were shown to be an effective, well-tolerated treatment for sore throat pain. Other pharmacologic actions (reduced difficulty swallowing and reduced throat swelling) and overall patient satisfaction from the flurbiprofen lozenges were also demonstrated in this multiple-dose implementation of the sore throat pain model. This trial was registered with ClinicalTrials.gov, registration number: NCT01048866, registration date: January 13, 2010.
Koldsø, Heidi; Reddy, Tyler; Fowler, Philip W; Duncan, Anna L; Sansom, Mark S P
2016-09-01
The cytoskeleton underlying cell membranes may influence the dynamic organization of proteins and lipids within the bilayer by immobilizing certain transmembrane (TM) proteins and forming corrals within the membrane. Here, we present coarse-grained resolution simulations of a biologically realistic membrane model of asymmetrically organized lipids and TM proteins. We determine the effects of a model of cytoskeletal immobilization of selected membrane proteins using long time scale coarse-grained molecular dynamics simulations. By introducing compartments with varying degrees of restraints within the membrane models, we are able to reveal how compartmentalization caused by cytoskeletal immobilization leads to reduced and anomalous diffusional mobility of both proteins and lipids. This in turn results in a reduced rate of protein dimerization within the membrane and of hopping of membrane proteins between compartments. These simulations provide a molecular realization of hierarchical models often invoked to explain single-molecule imaging studies of membrane proteins.
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.; Lakshmi, V.
2017-12-01
Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, owing to their computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is affected by (1) the formulation of hydrologic response units (HRUs) and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment-scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and the simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulating ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, this approach has not been fully explored for catchment-scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated several soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work will focus on assessing the performance of SMART against remotely sensed soil moisture observations using spatially based model evaluation metrics.
NASA Astrophysics Data System (ADS)
Tao, Wei-Kuo; Chern, Jiun-Dar
2017-06-01
The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals: MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain of several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multiscale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. The impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE, a CRM) model and the Goddard MMF, which uses the GCE as its embedded CRM. Both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, the mean zonal and regional vertical velocities, the surface evaporation, and the amount of surface rainfall are weaker or reduced in the Goddard MMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes in determining the surface rainfall amount in the Goddard MMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.
NASA Astrophysics Data System (ADS)
Olson, Mitchell R.; Sale, Tom C.
2015-06-01
Soil remediation is often inhibited by subsurface heterogeneity, which constrains contaminant/reagent contact. Use of soil mixing techniques for reagent delivery provides a means to overcome contaminant/reagent contact limitations. Furthermore, soil mixing reduces the permeability of treated soils, thus extending the time for reactions to proceed. This paper describes research conducted to evaluate the implications of soil mixing for remediation of non-aqueous phase liquid (NAPL) source zones. The research consisted of column studies and subsequent modeling of field-scale systems. For the column studies, clean influent water was flushed through columns containing homogenized soils, granular zero-valent iron (ZVI), and trichloroethene (TCE) NAPL. Within the columns, NAPL depletion occurred due to dissolution, followed by either column-effluent discharge or ZVI-mediated degradation. Complete removal of TCE NAPL from the columns occurred in 6-8 pore volumes of flow. However, most of the TCE (> 96%) was discharged in the column effluent; less than 4% was degraded. The low fraction of TCE degraded is attributed to the short hydraulic residence time (< 4 days) in the columns. Subsequently, modeling was conducted to scale up the column results. By scaling up to field-relevant system sizes (> 10 m) and reducing permeability by one or more orders of magnitude, the residence time could be greatly extended, potentially to periods of years to decades. Model output indicates that the fraction of TCE degraded can be increased to > 99.9%, given typical post-mixing soil permeability values. These results suggest that remediation performance can be greatly enhanced by combining contaminant degradation with an extended residence time.
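The residence-time argument in the abstract can be checked with a back-of-envelope Darcy calculation plus first-order degradation kinetics. All rates, lengths, gradients, and conductivities below are hypothetical, chosen only so that both regimes are visible: a short column-scale residence (< 4 days) degrading only a few percent, and a low-permeability field-scale residence degrading > 99.9%:

```python
import math

# Darcy residence time t = L * porosity / (K * i) grows as the hydraulic
# conductivity K is reduced by soil mixing, and first-order ZVI-mediated
# degradation removes a fraction 1 - exp(-k * t) of the dissolved TCE.
# Every parameter value here is hypothetical, not from the paper.

def residence_time_days(L, K, i=0.01, porosity=0.3):
    """Residence time (days) for length L (m), conductivity K (m/d)."""
    return L * porosity / (K * i)

def fraction_degraded(k_per_day, t_days):
    return 1.0 - math.exp(-k_per_day * t_days)

k = 0.005   # first-order TCE degradation rate (1/d), hypothetical
L = 10.0    # treated-zone length (m), hypothetical

# Column-scale residence (< 4 d) degrades only ~2% of the TCE.
print(f"column, t = 4 d: degraded = {fraction_degraded(k, 4.0):.4f}")

# Field scale: cutting K by two orders of magnitude raises t 100-fold.
for K in (1.0, 0.1, 0.01):   # pre- vs post-mixing conductivity (m/d)
    t = residence_time_days(L, K)
    print(f"K = {K:5.2f} m/d -> t = {t:8.0f} d, degraded = {fraction_degraded(k, t):.6f}")

assert fraction_degraded(k, 4.0) < 0.04
assert fraction_degraded(k, residence_time_days(L, 0.01)) > 0.999
```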
Modeling fuel treatment impacts on fire suppression cost savings: A review
Matthew P. Thompson; Nathaniel M. Anderson
2015-01-01
High up-front costs and uncertain return on investment make it difficult for land managers to economically justify large-scale fuel treatments, which remove trees and other vegetation to improve conditions for fire control, reduce the likelihood of ignition, or reduce potential damage from wildland fire if it occurs. In the short-term, revenue from harvested forest...
Evaluating effects of vegetation on the acoustical environment by physical scale-modeling
Richard H. Lyon; Cristopher N. Blair; Richard G. DeJong
1977-01-01
It is generally assumed that vegetation is beneficial acoustically, as well as esthetically, in that it may act as a shield to reduce highway noise impact on a community as in a sound absorber to reduce reverberant noise levels in city streets. Contradictory evidence exists, however, that noise may be increased because of vegetation. We performed field studies and...
Simulation of double layers in a model auroral circuit with nonlinear impedance
NASA Technical Reports Server (NTRS)
Smith, R. A.
1986-01-01
A reduced circuit description of the U-shaped potential structure of a discrete auroral arc, consisting of the flank transmission line plus parallel-electric-field region, is used to provide the boundary condition for one-dimensional simulations of the double-layer evolution. The model yields asymptotic scalings of the double-layer potential, as a function of an anomalous transport coefficient alpha and of the perpendicular length scale l(a) of the arc. The arc potential phi(DL) scales approximately linearly with alpha and, for fixed alpha, approximately as l(a) to the z power. Using parameters appropriate to the auroral zone acceleration region, potentials of phi(DL) of about 10 kV scale to projected ionospheric dimensions of about 1 km, with power flows of the order of magnitude of substorm dissipation rates.
Factor Structure of Scores from the Conners’ Rating Scales–Revised Among Nepali Children
Pendergast, Laura L.; Vandiver, Beverly J.; Schaefer, Barbara A.; Cole, Pamela M.; Murray-Kolb, Laura M.; Christian, Parul
2014-01-01
This study used exploratory and confirmatory factor analyses to examine the structures of scores from the Conners’ Teacher and Parent Rating Scales–Revised (CTRS-R and CPRS-R, respectively; Conners, 1997). The scales were administered to 1,835 parents and 1,387 teachers of children in Nepal's Sarlahi district – a region where no other measures of child psychopathology have been studied. With this Nepali sample, the findings indicate that reduced two-factor models for the Conners’ scales are superior to the models identified in the scale development research. The hyperactivity and inattention factors were comparable to those identified in prior research, while other factors (e.g., social problems) differed substantially. Implications for use of the Conners’ scales in Nepal and cross-cultural issues in the assessment of ADHD symptoms are discussed. PMID:25574454
Insertion loss of noise barriers on an aboveground, full-scale model longwall coal mining shearer.
Sweeney, Daniel D; Slagley, Jeremy M; Smith, David A
2010-05-01
The U.S. mining industry struggles with hazardous noise and dust exposures in underground mining. Specifically, longwall coal mine shearer operators are routinely exposed to noise levels at 151% of the allowable daily dose, and approximately 20% exceed regulatory dust levels. In the current study, a partial barrier was mounted on the full-scale mock shearer at the National Institute for Occupational Safety and Health Pittsburgh Research Laboratory. A simulated, full-scale, coal mine longwall shearer operation was employed to test the feasibility of utilizing a barrier to separate the shearer operator from the direct path of the noise and dust source during mining operations. In this model, noise levels at the operators' positions were reduced by 2.6 to 8.2 A-weighted decibels (dBA) from the application of the test barriers. Estimated insertion loss underground was 1.7 to 7.3 dBA. The barrier should be tested in an underground mining operation to determine if it can reduce shearer operators' noise exposure to below regulatory limits.
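The dose arithmetic behind these numbers can be illustrated with the 5-dB exchange rate and 90 dBA criterion used in U.S. mining noise regulation; the back-calculated time-weighted average (TWA) is an assumption for the example, not a measurement from the study:

```python
import math

def allowed_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """Permissible exposure time under a 90 dBA criterion with a 5 dB
    exchange rate (the MSHA/OSHA-style dose convention)."""
    return 8.0 * 2.0 ** ((criterion - level_dba) / exchange_rate)

def dose_percent(level_dba, hours=8.0):
    """Percent of the allowable daily noise dose for a steady exposure."""
    return 100.0 * hours / allowed_hours(level_dba)

# Back-calculate the TWA implied by a 151% dose, then apply the largest
# estimated underground insertion loss from the study (7.3 dBA)
twa = 90.0 + 5.0 * math.log2(1.51)        # ≈ 93.0 dBA
print(round(dose_percent(twa), 1))        # ≈ 151.0
print(round(dose_percent(twa - 7.3), 1))  # ≈ 54.9, i.e. under the limit
```

This is why a single-digit dBA insertion loss matters: every 5 dBA of reduction halves the dose under this convention.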
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding during the June and July 2007 floods. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem to be inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
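The inertial ("local inertia") formulation referenced here can be sketched in one dimension with the semi-implicit friction treatment used in LISFLOOD-FP-type codes; the roughness, depth, slope, and time step below are illustrative assumptions, not values from the study:

```python
def inertial_flux_update(q, h_flow, surface_slope, dt, n=0.03, g=9.81):
    """One explicit step of the local inertial momentum equation for unit-width
    discharge q, with the Manning friction term treated semi-implicitly."""
    numerator = q - g * h_flow * dt * surface_slope
    denominator = 1.0 + g * dt * n**2 * abs(q) / h_flow ** (7.0 / 3.0)
    return numerator / denominator

# Spin up to steady uniform flow: 1 m flow depth, water-surface slope -0.001
q = 0.0
for _ in range(1000):
    q = inertial_flux_update(q, h_flow=1.0, surface_slope=-0.001, dt=1.0)
print(round(q, 3))  # → 1.054, matching Manning's q = h^(5/3) * sqrt(S) / n
```

The semi-implicit friction denominator is what keeps the scheme stable at fine grids, which is the source of the large speed-up over diffusive solvers reported in the abstract.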
2011-01-01
Background: Knowledge in natural sciences generally predicts study performance in the first two years of the medical curriculum. In order to reduce delay and dropout in the preclinical years, Hamburg Medical School decided to develop a natural science test (HAM-Nat) for student selection. In the present study, two different approaches to scale construction are presented: a unidimensional scale and a scale composed of three subject specific dimensions. Their psychometric properties and relations to academic success are compared. Methods: 334 first year medical students of the 2006 cohort responded to 52 multiple choice items from biology, physics, and chemistry. For the construction of scales we generated two random subsamples, one for development and one for validation. In the development sample, unidimensional item sets were extracted from the item pool by means of weighted least squares (WLS) factor analysis, and subsequently fitted to the Rasch model. In the validation sample, the scales were subjected to confirmatory factor analysis and, again, Rasch modelling. The outcome measure was academic success after two years. Results: Although the correlational structure within the item set is weak, a unidimensional scale could be fitted to the Rasch model. However, psychometric properties of this scale deteriorated in the validation sample. A model with three highly correlated subject specific factors performed better. All summary scales predicted academic success with an odds ratio of about 2.0. Prediction was independent of high school grades and there was a slight tendency for prediction to be better in females than in males. Conclusions: A model separating biology, physics, and chemistry into different Rasch scales seems to be more suitable for item bank development than a unidimensional model, even when these scales are highly correlated and enter into a global score. 
When such a combination scale is used to select the upper quartile of applicants, the proportion of successful completion of the curriculum after two years is expected to rise substantially. PMID:21999767
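The Rasch model at the core of both scale constructions can be illustrated in a few lines; the ability value and the item difficulties below are invented for the example, not estimates from the HAM-Nat data:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability that a person of ability theta answers an
    item of difficulty b correctly (both on the same logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Expected raw score for one examinee over a small, made-up item bank
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
theta = 0.5
expected_score = sum(rasch_p(theta, b) for b in difficulties)
print(round(expected_score, 2))  # → 3.05
```

Because every item shares the same slope, the Rasch model makes raw scores sufficient statistics for ability, which is why it is attractive for item bank development.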
Hissbach, Johanna C; Klusmann, Dietrich; Hampe, Wolfgang
2011-10-14
Knowledge in natural sciences generally predicts study performance in the first two years of the medical curriculum. In order to reduce delay and dropout in the preclinical years, Hamburg Medical School decided to develop a natural science test (HAM-Nat) for student selection. In the present study, two different approaches to scale construction are presented: a unidimensional scale and a scale composed of three subject specific dimensions. Their psychometric properties and relations to academic success are compared. 334 first year medical students of the 2006 cohort responded to 52 multiple choice items from biology, physics, and chemistry. For the construction of scales we generated two random subsamples, one for development and one for validation. In the development sample, unidimensional item sets were extracted from the item pool by means of weighted least squares (WLS) factor analysis, and subsequently fitted to the Rasch model. In the validation sample, the scales were subjected to confirmatory factor analysis and, again, Rasch modelling. The outcome measure was academic success after two years. Although the correlational structure within the item set is weak, a unidimensional scale could be fitted to the Rasch model. However, psychometric properties of this scale deteriorated in the validation sample. A model with three highly correlated subject specific factors performed better. All summary scales predicted academic success with an odds ratio of about 2.0. Prediction was independent of high school grades and there was a slight tendency for prediction to be better in females than in males. A model separating biology, physics, and chemistry into different Rasch scales seems to be more suitable for item bank development than a unidimensional model, even when these scales are highly correlated and enter into a global score. 
When such a combination scale is used to select the upper quartile of applicants, the proportion of successful completion of the curriculum after two years is expected to rise substantially.
A 100,000 Scale Factor Radar Range.
Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser
2017-12-19
The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutters, this common approach is not practical. The use of computer simulations is also not viable since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties, and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000, and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example of measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility to perform fast, convenient, and inexpensive measurements.
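The scaling relations underlying the technique are straightforward: shrinking the model by a factor s requires shrinking the illuminating wavelength by s, and the measured cross section (an area) maps back by s². A sketch, with an assumed 3 GHz full-scale radar (the paper does not specify this frequency):

```python
def scaled_wavelength(full_scale_freq_hz, scale_factor):
    """Shrink the model by `scale_factor` and the illuminating wavelength
    must shrink by the same factor to preserve the electromagnetics."""
    c = 299_792_458.0  # speed of light, m/s
    return (c / full_scale_freq_hz) / scale_factor

def full_scale_rcs(measured_rcs_m2, scale_factor):
    """Radar cross section has units of area, so it maps back to full
    scale with the square of the scale factor."""
    return measured_rcs_m2 * scale_factor ** 2

# A 3 GHz (10 cm wavelength) radar maps to ~1 micron illumination on a
# 1:100,000 model, i.e. into the near infrared
print(scaled_wavelength(3e9, 100_000))  # ≈ 1e-6 m
```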
NASA Astrophysics Data System (ADS)
Nolte, C. G.; Otte, T. L.; Bowden, J. H.; Otte, M. J.
2010-12-01
There is disagreement in the regional climate modeling community as to the appropriateness of the use of interior nudging. Some investigators argue that the regional model should be minimally constrained and allowed to respond to regional-scale forcing, while others have noted that in the absence of interior nudging, significant large-scale discrepancies develop between the regional model solution and the driving coarse-scale fields. These discrepancies lead to reduced confidence in the ability of regional climate models to dynamically downscale global climate model simulations under climate change scenarios, and detract from the usability of the regional simulations for impact assessments. The advantages and limitations of interior nudging schemes for regional climate modeling are investigated in this study. Multi-year simulations using the WRF model driven by reanalysis data over the continental United States at 36km resolution are conducted using spectral nudging, grid point nudging, and for a base case without interior nudging. The means, distributions, and inter-annual variability of temperature and precipitation will be evaluated in comparison to regional analyses.
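Spectral nudging as described here relaxes only the large-scale part of the model state toward the driving fields, leaving smaller scales free. A minimal 1-D sketch, in which the relaxation timescale, cutoff wavenumber, and fields are all illustrative assumptions rather than WRF settings:

```python
import numpy as np

def spectral_nudge(field, target, dt, tau=3600.0, cutoff=3):
    """Relax only wavenumbers <= cutoff of a periodic 1-D field toward the
    driving analysis; smaller scales are left free to evolve."""
    error = np.fft.rfft(field - target)
    error[cutoff + 1:] = 0.0                      # keep large-scale error only
    large_scale_error = np.fft.irfft(error, n=field.size)
    return field - (dt / tau) * large_scale_error

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
target = np.sin(x)                                # large-scale driving field
field = np.sin(x) + 0.5 + 0.3 * np.sin(10 * x)    # biased + fine-scale detail
for _ in range(200):
    field = spectral_nudge(field, target, dt=600.0)
print(round(abs(float(np.mean(field - target))), 6))  # → 0.0 (bias removed)
```

After nudging, the constant large-scale bias is damped away while the wavenumber-10 detail is untouched, which is exactly the selling point of spectral over grid-point nudging.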
Green infrastructure retrofits on residential parcels: Ecohydrologic modeling for stormwater design
NASA Astrophysics Data System (ADS)
Miles, B.; Band, L. E.
2014-12-01
To meet water quality goals stormwater utilities and not-for-profit watershed organizations in the U.S. are working with citizens to design and implement green infrastructure on residential land. Green infrastructure, as an alternative and complement to traditional (grey) stormwater infrastructure, has the potential to contribute to multiple ecosystem benefits including stormwater volume reduction, carbon sequestration, urban heat island mitigation, and to provide amenities to residents. However, in small (1-10-km2) medium-density urban watersheds with heterogeneous land cover it is unclear whether stormwater retrofits on residential parcels contribute significantly to reducing stormwater volume at the watershed scale. In this paper, we seek to improve understanding of how small-scale redistribution of water at the parcel scale as part of green infrastructure implementation affects urban water budgets and stormwater volume across spatial scales. As study sites we use two medium-density headwater watersheds in Baltimore, MD and Durham, NC. We develop ecohydrology modeling experiments to evaluate the effectiveness of redirecting residential rooftop runoff to un-altered pervious surfaces and to engineered rain gardens to reduce stormwater runoff. As baselines for these experiments, we performed field surveys of residential rooftop hydrologic connectivity to adjacent impervious surfaces, and found low rates of connectivity. Through simulations of pervasive adoption of downspout disconnection to un-altered pervious areas or to rain garden stormwater control measures (SCM) in these catchments, we find that most parcel-scale changes in stormwater fate are attenuated at larger spatial scales and that neither SCM alone is likely to provide significant changes in streamflow at the watershed scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pak, A.; Dewald, E. L.; Landen, O. L.
2015-12-15
Temporally resolved measurements of the hohlraum radiation flux asymmetry incident onto a bismuth coated surrogate capsule have been made over the first two nanoseconds of ignition relevant laser pulses. Specifically, we study the P2 asymmetry of the incoming flux as a function of cone fraction, defined as the inner-to-total laser beam power ratio, for a variety of hohlraums with different scales and gas fills. This work was performed to understand the relevance of recent experiments, conducted in new reduced-scale neopentane gas filled hohlraums, to full scale helium filled ignition targets. Experimental measurements, matched by 3D view factor calculations, are used to infer differences in symmetry, relative beam absorption, and cross beam energy transfer (CBET), employing an analytic model. Despite differences in hohlraum dimensions and gas fill, as well as in laser beam pointing and power, we find that laser absorption, CBET, and the cone fraction, at which a symmetric flux is achieved, are similar to within 25% between experiments conducted in the reduced and full scale hohlraums. This work demonstrates a close surrogacy in the dynamics during the first shock between reduced-scale and full scale implosion experiments and is an important step in enabling the increased rate of study for physics associated with inertial confinement fusion.
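The P2 asymmetry quoted here is the second Legendre coefficient of the flux over the capsule, normalized by the mean (P0) term. A sketch of that projection on a synthetic polar profile; the 10% imposed asymmetry is an assumed test value, not data from the experiments:

```python
import numpy as np

def _integral(y, x):
    """Trapezoidal integral (avoids version-specific numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def p2_over_p0(theta, flux):
    """Project a polar flux profile onto Legendre modes P0 and P2 and
    return the P2/P0 ratio quoted for hohlraum drive symmetry."""
    mu = np.cos(theta)
    p2_mu = 0.5 * (3.0 * mu**2 - 1.0)
    sin_t = np.sin(theta)
    a0 = 0.5 * _integral(flux * sin_t, theta)          # mean (P0) component
    a2 = 2.5 * _integral(flux * p2_mu * sin_t, theta)  # P2 coefficient
    return a2 / a0

theta = np.linspace(0.0, np.pi, 181)
flux = 1.0 + 0.1 * 0.5 * (3.0 * np.cos(theta)**2 - 1.0)  # 10% P2 imposed
print(round(p2_over_p0(theta, flux), 3))  # → 0.1, recovering the input
```

The factors 1/2 and 5/2 come from the Legendre orthogonality relation; the projection recovers the imposed asymmetry regardless of the P0 amplitude.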
Dynamic effective connectivity in cortically embedded systems of recurrently coupled synfire chains.
Trengove, Chris; Diesmann, Markus; van Leeuwen, Cees
2016-02-01
As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.
NASA Technical Reports Server (NTRS)
Cole, T. W.; Rathburn, E. A.
1974-01-01
A static acoustic and propulsion test of a small radius Jacobs-Hurkamp and a large radius Flex Flap combined with four upper surface blowing (USB) nozzles was performed. Nozzle force and flow data, flap trailing edge total pressure survey data, and acoustic data were obtained. Jacobs-Hurkamp flap surface pressure data, flow visualization photographs, and spoiler acoustic data from the limited mid-year tests are reported. A pressure ratio range of 1.2 to 1.5 was investigated for the USB nozzles and for the auxiliary blowing slots. The acoustic data were scaled to a four-engine STOL airplane of roughly 50,000 kilograms or 110,000 pounds gross weight, corresponding to a model scale of approximately 0.2 for the nozzles without deflector. The model nozzle scale is actually reduced to about 0.17 with deflector although all results in this report assume the 0.2 scale factor. Trailing edge pressure surveys indicated that poor flow attachment was obtained even at large flow impingement angles unless a nozzle deflector plate was used. Good attachment was obtained with the aspect ratio four nozzle with deflector, confirming the small scale wind tunnel tests.
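Acoustic scale-model data are mapped to full scale by shifting frequencies in proportion to the geometric scale factor, since wavelengths scale with geometry. A minimal sketch; the 10 kHz model-scale band is an assumed example, not a band from the report:

```python
def full_scale_frequency(model_freq_hz, model_scale):
    """Scale-model acoustics: wavelengths scale with geometry, so a tone
    measured on the model maps to model_freq * scale at full size."""
    return model_freq_hz * model_scale

# A 10 kHz band on the 0.2-scale USB nozzle model corresponds to 2 kHz at
# full scale; at the true with-deflector scale of ~0.17 it maps to 1.7 kHz
print(round(full_scale_frequency(10_000, 0.2)))   # → 2000
print(round(full_scale_frequency(10_000, 0.17)))  # → 1700
```

This is why the 0.2-versus-0.17 distinction in the abstract matters: assuming the wrong scale factor shifts every scaled spectrum by the ratio of the two factors.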
NASA Astrophysics Data System (ADS)
Rougier, Esteban; Patton, Howard J.
2015-05-01
Reduced displacement potentials (RDPs) for chemical explosions of the Source Physics Experiments (SPE) in granite at the Nevada Nuclear Security Site are estimated from free-field ground motion recordings. Far-field P wave source functions are proportional to the time derivative of RDPs. Frequency domain comparisons between measured source functions and model predictions show that high-frequency amplitudes roll off as ω^-2, but models fail to predict the observed seismic moment, corner frequency, and spectral overshoot. All three features are fit satisfactorily for the SPE-2 test after cavity radius Rc is reduced by 12%, elastic radius is reduced by 58%, and peak-to-static pressure ratio on the elastic radius is increased by 100%, all with respect to the Mueller-Murphy model modified with the Denny-Johnson Rc scaling law. A large discrepancy is found between the cavity volume inferred from RDPs and the volume estimated from laser scans of the emplacement hole. The measurements imply a scaled Rc of ~5 m/kt1/3, more than a factor of 2 smaller than nuclear explosions. Less than 25% of the seismic moment can be attributed to cavity formation. A breakdown of the incompressibility assumption due to shear dilatancy of the source medium around the cavity is the likely explanation. New formulas are developed for volume changes due to medium bulking (or compaction). A 0.04% decrease of average density inside the elastic radius accounts for the missing volumetric moment. Assuming incompressibility, established Rc scaling laws predicted the moment reasonably well, but this was only fortuitous because dilation of the source medium compensated for the small cavity volume.
Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa
2015-01-01
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study where a cell culture unit operation in bioreactors using one-sided pH control and their satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least square, orthogonal partial least square, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at 3-L scale. By reducing the initial sparge rate in 3-L bioreactor, process performance and product quality data moved closer to that of large scale. © 2015 American Institute of Chemical Engineers.
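The principal component analysis used to separate scales can be sketched on synthetic process data; the variable names, set points, and the elevated-pCO2/low-pH offset at 3-L scale are illustrative assumptions echoing the study's hypothesis, not its actual data:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Principal component scores of mean-centered data
    (rows = bioreactor runs, columns = process/quality variables)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

rng = np.random.default_rng(0)
# Columns stand in for, e.g., titer, pCO2, final pH (values invented)
large_scale = rng.normal([100.0, 40.0, 7.0], 0.5, size=(6, 3))
shake_flask = rng.normal([100.0, 40.0, 7.0], 0.5, size=(6, 3))
bench_3l    = rng.normal([100.0, 80.0, 6.8], 0.5, size=(6, 3))  # high pCO2
scores = pca_scores(np.vstack([large_scale, shake_flask, bench_3l]))
# The 3-L runs separate from the large-scale and shake-flask runs along the
# leading component, mirroring the kind of clustering the study describes
```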
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Gregory S; Braun, Robert J; Ma, Zhiwen
This project was motivated by the potential of reducible perovskite oxides for high-temperature, thermochemical energy storage (TCES) to provide dispatchable renewable heat for concentrating solar power (CSP) plants. This project sought to identify and characterize perovskites from earth-abundant cations with high reducibility below 1000 °C for coupling TCES of solar energy to super-critical CO2 (s-CO2) plants that operate above temperature limits (< 600 °C) of current molten-salt storage. Specific TCES > 750 kJ/kg for storage cycles between 500 and 900 °C was targeted with a system cost goal of $15/kWhth. To realize feasibility of TCES systems based on reducible perovskites, our team focused on designing and testing a lab-scale concentrating solar receiver, wherein perovskite particles capture solar energy by fast O2 release and sensible heating at a thermal efficiency of 90% and wall temperatures below 1100 °C. System-level models of the receiver and reoxidation reactor coupled to validated thermochemical materials models can assess approaches to scale-up a full TCES system based on reduction/oxidation cycles of perovskite oxides at large scales. After characterizing many Ca-based perovskites for TCES, our team identified strontium-doped calcium manganite Ca1-xSrxMnO3-δ (with x ≤ 0.1) as a composition with adequate stability and specific TCES capacity (> 750 kJ/kg for Ca0.95Sr0.05MnO3-δ) for cycling between air at 500 °C and low-PO2 (10-4 bar) N2 at 900 °C. Substantial kinetic tests demonstrated that resident times of several minutes in low-PO2 gas were needed for these materials to reach the specific TCES goals with particles of reasonable size for large-scale transport (diameter dp > 200 μm). On the other hand, fast reoxidation kinetics in air enables subsequent rapid heat release in a fluidized reoxidation reactor/ heat recovery unit for driving s-CO2 power plants. 
Validated material thermochemistry coupled to radiation and convective particle-gas transport models facilitated full TCES system analysis for CSP and results showed that receiver efficiencies approaching 85% were feasible with wall-to-particle heat transfer coefficients observed in laboratory experiments. Coupling these reactive particle-gas transport models to external SolTrace and CFD models drove design of a reactive-particle receiver with indirect heating through flux spreading. A lab-scale receiver using Ca0.9Sr0.1MnO3-δ was demonstrated at NREL’s High Flux Solar Furnace, with particle temperatures reaching 900 °C, wall temperatures remaining below 1100 °C, and approximately 200 kJ/kg of chemical energy storage achieved. These first demonstrations of on-sun perovskite reduction and the robust modeling tools from this program provide a basis for going forward with improved receiver designs to increase heat fluxes and solar-energy capture efficiencies. Measurements and modeling tools from this project provide the foundations for advancing TCES for CSP and other applications using reducible perovskite oxides from low-cost, earth-abundant elements. A perovskite composition has been identified that has the thermodynamic potential to meet the targeted TCES capacity of 750 kJ/kg over a range of temperatures amenable for integration with s-CO2 cycles. Further research needs to explore ways of accelerating effective particle kinetics through variations in composition and/or reactor/receiver design. Initial demonstrations of on-sun particle reduction for TCES show a need for testing at larger scales with reduced heat losses and improved particle-wall heat transfer. The gained insight into particle-gas transport and reactor design can launch future development of cost-effective, large-scale particle-based TCES as a technology for enabling increased renewable energy penetration.
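The specific TCES target combines a chemical term from the oxygen off-stoichiometry swing with a sensible-heat term over the 500-900 °C cycle. A sketch with assumed, order-of-magnitude property values (the delta swing, reduction enthalpy, molar mass, and heat capacity are not measured values from the project):

```python
def specific_tces_kj_per_kg(delta_change, dh_kj_per_mol_o,
                            molar_mass_kg_per_mol, cp_kj_per_kg_k, delta_t_k):
    """Stored heat per kg of perovskite = chemical term (oxygen
    off-stoichiometry swing x reduction enthalpy per mol O, per kg of
    oxide) + sensible term over the cycle temperature range."""
    chemical = delta_change * dh_kj_per_mol_o / molar_mass_kg_per_mol
    sensible = cp_kj_per_kg_k * delta_t_k
    return chemical + sensible

# Assumed: delta swing 0.2, 250 kJ/mol-O reduction enthalpy, M ~ 0.143
# kg/mol, cp ~ 0.8 kJ/kg-K over the 400 K swing (500 -> 900 C)
print(round(specific_tces_kj_per_kg(0.2, 250.0, 0.143, 0.8, 400.0)))  # → 670
```

Even with these rough numbers, the sensible term contributes roughly as much as the chemical term, which is why the receiver exploits both O2 release and particle heating.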
Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors
Salama, Mhd. Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observation to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between synthesized satellite pixel and point measurements with R2 > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors. PMID:22163615
Catchment scale afforestation for mitigating flooding
NASA Astrophysics Data System (ADS)
Barnes, Mhari; Quinn, Paul; Bathurst, James; Birkinshaw, Stephen
2016-04-01
After the 2013-14 floods in the UK there were calls to 'forest the uplands' as a solution to reducing flood risk across the nation. At present, 1 in 6 homes in Britain are at risk of flooding and current EU legislation demands a sustainable, 'nature-based solution'. However, the role of forests as a natural flood management technique remains highly controversial, due to a distinct lack of robust evidence on its effectiveness in reducing flood risk during extreme events. SHETRAN physically-based, spatially-distributed hydrological models of the Irthing catchment and Wark forest sub-catchments (northern England) have been developed in order to test hypotheses about the effect trees have on flood magnitude. The models have been designed to capture scale-related responses from 1, through 10, to 100 km2; this is a first study of the extent to which afforestation and woody debris runoff attenuation features (RAFs) may help to mitigate floods at the full catchment scale (100-1000 km2) and on a national basis. Furthermore, there is a need to analyse the extent to which land management practices, and the installation of nature-based RAFs, such as woody debris dams, in headwater catchments can attenuate flood-wave movement, and potentially reduce downstream flood risk. The impacts of riparian planting and the benefits of adding large woody debris of several designs and on differing sizes of channels have also been simulated using advanced hydrodynamic (HiPIMS) and hydrological (SHETRAN) modelling. With the aim of determining the effect forestry may have on flood frequency, 1000 years of generated rainfall data representative of current conditions has been used to determine the difference between current land-cover, different distributions of forest cover and the defining scenarios - complete forest removal and complete afforestation of the catchment. 
The simulations show the percentage of forestry required to have a significant impact on mitigating downstream flood risk at sub-catchment and catchment scale. Key words: Flood peak, nature-based solutions, forest hydrology, hydrological modelling, SHETRAN, flood frequency, flood magnitude, land-cover change, upland afforestation.
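The flood-frequency comparison described above reduces to estimating return-period peaks from long annual-maxima series. A sketch using synthetic Gumbel-distributed stand-ins for two land-cover scenarios; the distribution parameters are invented for illustration and are not SHETRAN outputs:

```python
import numpy as np

def return_period_flow(annual_maxima, return_period_years):
    """Empirical estimate of the peak flow exceeded on average once per
    `return_period_years`, from a long annual-maxima series."""
    return float(np.quantile(annual_maxima, 1.0 - 1.0 / return_period_years))

rng = np.random.default_rng(1)
# Stand-ins for 1000 simulated annual peak flows (m^3/s) per scenario
current_cover = rng.gumbel(loc=100.0, scale=25.0, size=1000)
afforested    = rng.gumbel(loc=80.0,  scale=20.0, size=1000)
q100_now   = return_period_flow(current_cover, 100)
q100_trees = return_period_flow(afforested, 100)
reduction = 100.0 * (q100_now - q100_trees) / q100_now
# `reduction` is the percent drop in the simulated 100-year peak
```

The value of a 1000-year synthetic series is exactly this: rare return periods can be read off empirically rather than extrapolated from a short gauge record.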
Persistence of initial conditions in continental scale air quality ...
This study investigates the effect of initial conditions (IC) for pollutant concentrations in the atmosphere and soil on simulated air quality for two continental-scale Community Multiscale Air Quality (CMAQ) model applications. One of these applications was performed for springtime and the second for summertime. Results show that a spin-up period of ten days commonly used in regional-scale applications may not be sufficient to reduce the effects of initial conditions to less than 1% of seasonally-averaged surface ozone concentrations everywhere while 20 days were found to be sufficient for the entire domain for the spring case and almost the entire domain for the summer case. For the summer case, differences were found to persist longer aloft due to circulation of air masses and even a spin-up period of 30 days was not sufficient to reduce the effects of ICs to less than 1% of seasonally-averaged layer 34 ozone concentrations over the southwestern portion of the modeling domain. Analysis of the effect of soil initial conditions for the CMAQ bidirectional NH3 exchange model shows that during springtime they can have an important effect on simulated inorganic aerosols concentrations for time periods of one month or longer. The effects are less pronounced during other seasons. The results, while specific to the modeling domain and time periods simulated here, suggest that modeling protocols need to be scrutinized for a given application and that it cannot be assum
Techno-economic analysis of a transient plant-based platform for monoclonal antibody production
Nandi, Somen; Kwong, Aaron T.; Holtz, Barry R.; Erwin, Robert L.; Marcel, Sylvain; McDonald, Karen A.
2016-01-01
Plant-based biomanufacturing of therapeutic proteins is a relatively new platform with a small number of commercial-scale facilities, but offers advantages of linear scalability, reduced upstream complexity, reduced time to market, and potentially lower capital and operating costs. In this study we present a detailed process simulation model for a large-scale new “greenfield” biomanufacturing facility that uses transient agroinfiltration of Nicotiana benthamiana plants grown hydroponically indoors under light-emitting diode lighting for the production of a monoclonal antibody. The model was used to evaluate the total capital investment, annual operating cost, and cost of goods sold as a function of mAb expression level in the plant (g mAb/kg fresh weight of the plant) and production capacity (kg mAb/year). For the Base Case design scenario (300 kg mAb/year, 1 g mAb/kg fresh weight, and 65% recovery in downstream processing), the model predicts a total capital investment of $122 million and cost of goods sold of $121/g including depreciation. Compared with traditional biomanufacturing platforms that use mammalian cells grown in bioreactors, the model predicts significant reductions in capital investment and >50% reduction in cost of goods compared with published values at similar production scales. The simulation model can be modified or adapted by others to assess the profitability of alternative designs, implement different process assumptions, and help guide process development and optimization. PMID:27559626
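The Base Case mass balance and cost figures can be reproduced with simple arithmetic; note that the $36.3M annual operating cost below is back-calculated from the reported $121/g and 300 kg/year, not a figure stated in the abstract:

```python
def annual_biomass_kg(mab_kg_per_year, expression_g_per_kg_fw, dsp_recovery):
    """Fresh-weight plant biomass needed per year to hit a mAb target,
    given expression level and downstream processing recovery."""
    return mab_kg_per_year * 1000.0 / (expression_g_per_kg_fw * dsp_recovery)

def cogs_per_gram(annual_operating_cost_usd, mab_kg_per_year):
    """Cost of goods sold per gram of purified mAb."""
    return annual_operating_cost_usd / (mab_kg_per_year * 1000.0)

# Base Case: 300 kg mAb/yr at 1 g/kg fresh weight and 65% recovery
print(round(annual_biomass_kg(300.0, 1.0, 0.65)))  # → 461538 kg FW/yr
# The reported $121/g COGS implies ~$36.3M/yr in operating cost
print(round(cogs_per_gram(36.3e6, 300.0), 1))      # → 121.0
```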
NASA Astrophysics Data System (ADS)
Fu, Xiangwen; Liu, Junfeng; Ban-Weiss, George A.; Zhang, Jiachen; Huang, Xin; Ouyang, Bin; Popoola, Olalekan; Tao, Shu
2017-09-01
Street canyons are ubiquitous in urban areas. Traffic-related air pollutants in street canyons can adversely affect human health. In this study, an urban-scale traffic pollution dispersion model is developed considering street distribution, canyon geometry, background meteorology, traffic assignment, traffic emissions and air pollutant dispersion. In the model, vehicle exhausts generated from traffic flows first disperse inside street canyons along the micro-scale wind field generated by a computational fluid dynamics (CFD) model. Then, pollutants leave the street canyon and further disperse over the urban area. On the basis of this model, the effects of canyon geometry on the distribution of NOx and CO from traffic emissions were studied over the center of Beijing. We found that an increase in building height leads to heavier pollution inside canyons and lower pollution outside canyons at pedestrian level, resulting in higher domain-averaged concentrations over the area. In addition, canyons with highly even or highly uneven building heights on each side of the street tend to lower the urban-scale air pollution concentrations at pedestrian level. Further, increasing street widths tends to lead to lower pollutant concentrations by reducing emissions and enhancing ventilation simultaneously. Our results indicate that canyon geometry strongly influences human exposure to traffic pollutants in the populated urban area. Carefully planning street layout and canyon geometry while considering traffic demand as well as local weather patterns may significantly reduce inhalation of unhealthy air by urban residents.
Variable Grid Traveltime Tomography for Near-surface Seismic Imaging
NASA Astrophysics Data System (ADS)
Cai, A.; Zhang, J.
2017-12-01
We present a new traveltime tomography algorithm that images the subsurface with grids that vary automatically according to geological structure. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach to near-surface imaging. However, regularization on a regular, even grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, whereas details along geological boundaries are difficult to resolve. Therefore, we solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within them for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried basalt model with one horizontal layer. With the variable-grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and in the basalt blocks, while using fewer grid cells. The field data were collected in an oil field in China. The survey was performed in an area where the subsurface structures were predominantly layered. The data set includes 476 shots with a 10 meter spacing and 1735 receivers with a 10 meter spacing. First-arrival traveltimes were picked from the seismograms for tomography. The reciprocal errors of most shots are between 2 ms and 6 ms. Conventional tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with resolved flat layers and fewer artifacts. In addition, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids within small structures.
The variable grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.
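The core idea of aggregating cells within large-scale structures can be illustrated in one dimension: consecutive cells whose velocities differ by less than a tolerance are merged into a single unknown. This is only a sketch of the concept; the tolerance and grouping rule are our assumptions, not the authors' algorithm:

```python
import numpy as np

def aggregate_cells(velocity, tol=50.0):
    """Group consecutive cells of a 1-D velocity column into blocks when
    neighbouring velocities differ by less than `tol` (m/s). Returns one
    label per cell; the number of distinct labels is the reduced number
    of unknowns for that column."""
    labels = np.zeros(len(velocity), dtype=int)
    for i in range(1, len(velocity)):
        labels[i] = labels[i - 1] + (abs(velocity[i] - velocity[i - 1]) >= tol)
    return labels

# A layered column: slow overburden over a fast basalt block over sediment.
col = np.array([800.0, 810.0, 805.0, 2500.0, 2510.0, 2495.0, 1200.0, 1210.0])
lab = aggregate_cells(col)
n_unknowns = lab.max() + 1   # 3 blocks instead of 8 cells
```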
LASSIE: simulating large-scale models of biochemical systems on GPUs.
Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo
2017-05-10
Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, to achieve a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute over the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species.
Notably, thanks to its smaller memory footprint, LASSIE is able to perform fast simulations of even larger models, for which the tested CPU implementation of LSODA failed to reach termination. LASSIE is therefore expected to enable an important advance in Systems Biology applications, supporting faster and more in-depth computational analyses of large-scale models of complex biological systems.
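The ODE-generation step that LASSIE automates can be sketched on the CPU: given stoichiometry matrices for reactants and products, the mass-action rates follow directly, and a stiffness-switching integrator (here SciPy's LSODA, standing in for LASSIE's RKF/BDF switching) solves the system. The toy enzyme-kinetics model and rate constants below are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy reaction-based model (Michaelis-Menten scheme), species [S, E, ES, P]:
#   S + E -> ES (k1),   ES -> S + E (k2),   ES -> E + P (k3)
reactants = np.array([[1, 1, 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 1, 0]])
products = np.array([[0, 0, 1, 0],
                     [1, 1, 0, 0],
                     [0, 1, 0, 1]])
k = np.array([1e-3, 1e-4, 0.1])
stoich = products - reactants          # net stoichiometry (reactions x species)

def mass_action_rhs(t, x):
    # rate of reaction j: k_j * prod_i x_i ** reactants[j, i]
    rates = k * np.prod(x ** reactants, axis=1)
    return stoich.T @ rates

x0 = np.array([1000.0, 100.0, 0.0, 0.0])
sol = solve_ivp(mass_action_rhs, (0.0, 200.0), x0, method="LSODA")
```

Conservation of total enzyme (E + ES) and total substrate material (S + ES + P) provides a quick sanity check on the generated system.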
Kinetic roughening and porosity scaling in film growth with subsurface lateral aggregation.
Reis, F D A Aarão
2015-06-01
We study surface and bulk properties of porous films produced by a model in which particles are incident perpendicular to a substrate, interact with previously deposited neighbors along their trajectory, and aggregate laterally with probability of order a at each position. The model generalizes ballistic-like models by allowing attachment to particles below the outer surface. For small values of a, a crossover from uncorrelated deposition (UD) to correlated growth is observed. Simulations are performed in 1+1 and 2+1 dimensions. Extrapolation of effective exponents and comparison of roughness distributions confirm Kardar-Parisi-Zhang roughening of the outer surface for a>0. A scaling approach for small a predicts crossover times scaling as a^(-2/3) and local height fluctuations as a^(-1/3) at the crossover, independent of substrate dimension. These relations differ from those of all previously studied models with crossovers from UD to correlated growth, owing to the subsurface aggregation, which reduces scaling exponents. The same approach predicts the porosity and average pore height scaling as a^(1/3) and a^(-1/3), respectively, in good agreement with simulation results in 1+1 and 2+1 dimensions. These results may be useful for modeling samples with desired porosity and long pores.
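A loose illustration of this class of models (not the paper's exact rules): a particle falls in a random column and, with probability a, sticks where it first finds a lateral neighbor, possibly leaving vacancies below the outer surface; otherwise it lands on top of its column. Lattice size, particle flux and the value of a are arbitrary choices here:

```python
import random

def grow(L=64, n=5000, a=0.2, seed=7):
    """1+1-dimensional ballistic-like growth with lateral aggregation.

    A falling particle may stick, with probability `a`, at any height where
    a lateral neighbor is occupied; otherwise it lands on top of its column.
    Returns the porosity (empty fraction below the surface) and the heights.
    """
    random.seed(seed)
    occupied = [set() for _ in range(L)]   # occupied heights per column
    h = [0] * L                            # surface height per column
    for _ in range(n):
        i = random.randrange(L)
        left, right = (i - 1) % L, (i + 1) % L
        z = max(h[left], h[right], h[i]) + 1
        # descend: at each level, a lateral contact allows sticking w.p. a
        while z > h[i] + 1:
            z -= 1
            if (z in occupied[left] or z in occupied[right]) and random.random() < a:
                break
        occupied[i].add(z)
        h[i] = z                           # z is always above the old h[i]
    sites_below_surface = sum(h)
    porosity = (sites_below_surface - n) / sites_below_surface
    return porosity, h

porosity, heights = grow()
```

Sweeping a and fitting the measured porosity against the predicted a^(1/3) law would be the natural next step in such a toy study.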
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
Development of a Decisional Balance Scale for Young Adult Marijuana Use
Elliott, Jennifer C.; Carey, Kate B.; Scott-Sheldon, Lori A. J.
2010-01-01
This study describes the development and validation of a decisional balance scale for marijuana use in young adults. Scale development was accomplished in four phases. First, 53 participants (70% female, 68% freshman) provided qualitative data that yielded content for an initial set of 47 items. In the second phase, an exploratory factor analysis on the responses of 260 participants (52% female, 68% freshman) revealed two factors, corresponding to pros and cons. Items that did not load well on the factors were omitted, resulting in a reduced set of 36 items. In the third phase, 182 participants (49% female, 37% freshmen) completed the revised scale and an evaluation of factor structure led to scale revisions and model respecification to create a good-fitting model. The final scales consisted of 8 pros (α = 0.91) and 16 cons (α = 0.93), and showed evidence of validity. In the fourth phase (N = 248, 66% female, 70% freshman), we confirmed the factor structure, and provided further evidence for reliability and validity. The Marijuana Decisional Balance Scale enhances our ability to study motivational factors associated with marijuana use among young adults. PMID:21261405
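The reliability figures quoted above (α = 0.91 and 0.93) are Cronbach's alpha values; the coefficient can be computed from an item-response matrix as below. The synthetic single-factor data are purely illustrative:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_var / total_var)

# Synthetic 8-item "pros"-like scale driven by a single latent factor.
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))
responses = latent + 0.5 * rng.normal(size=(200, 8))
alpha = cronbach_alpha(responses)
```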
Detection and analysis of part load and full load instabilities in a real Francis turbine prototype
NASA Astrophysics Data System (ADS)
Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme
2017-04-01
Francis turbines in many cases operate away from their best efficiency point in order to regulate their output power according to the instantaneous energy demand of the grid. Therefore, it is of paramount importance to analyse and determine the unstable operating points for these kinds of units. In the framework of the HYPERBOLE project (FP7-ENERGY-2013-1; Project number 608532), a large Francis unit was investigated numerically, experimentally in a reduced scale model, and also experimentally and numerically in the real prototype. This paper shows the unstable operating points identified during the experimental tests on the real Francis unit and the analysis of the main characteristics of these instabilities. Finally, it is shown that similar phenomena have been identified in previous research at the LMH (Laboratory for Hydraulic Machines, Lausanne) with the reduced scale model.
Integrated analysis of the effects of agricultural management on nitrogen fluxes at landscape scale.
Kros, J; Frumau, K F A; Hensen, A; de Vries, W
2011-11-01
The integrated modelling system INITIATOR was applied to a landscape in the northern part of the Netherlands to assess current nitrogen fluxes to air and water and the impact of various agricultural measures on these fluxes, using spatially explicit input data on animal numbers, land use, agricultural management, meteorology and soil. Average model results on NH3 deposition and N concentrations in surface water appear to be comparable to observations, but the deviation can be large at local scale, despite the use of high-resolution data. Evaluated measures include: air scrubbers reducing NH3 emissions from poultry and pig housing systems, low-protein feeding, reduced fertilizer amounts and low-emission stables for cattle. Low-protein feeding and restrictive fertilizer application had the largest effect on both N inputs and N losses, resulting in N deposition reductions on Natura 2000 sites of 10% and 12%, respectively. Copyright © 2011 Elsevier Ltd. All rights reserved.
Establishing an integrated human milk banking approach to strengthen newborn care
DeMarchis, A; Israel-Ballard, K; Mansen, Kimberly Amundson; Engmann, C
2017-01-01
The provision of donor human milk can significantly reduce morbidity and mortality among vulnerable infants and is recommended by the World Health Organization as the next best option when a mother's own milk is unavailable. Regulated human milk banks can meet this need; however, scale-up has been hindered by the absence of an appropriate model for resource-limited settings and a lack of policy support for human milk banks and for the operational procedures supporting them. To reduce infant mortality, human milk banking systems need to be scaled up and integrated with other components of newborn care. This article draws on current guidelines and best practices from human milk banks to offer a compilation of universal requirements that provide a foundation for an integrated model of newborn care that is appropriate for low- and high-resource settings alike. PMID:27831549
Helicopter main-rotor noise: Determination of source contributions using scaled model data
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Jolly, J. Ralph, Jr.; Marcolini, Michael A.
1988-01-01
Acoustic data from a test of a 40 percent model MBB BO-105 helicopter main rotor are scaled to equivalent full-scale flyover cases. The test was conducted in the anechoic open test section of the German-Dutch Windtunnel (DNW). The measured data are in the form of acoustic pressure time histories and spectra from two out-of-flow microphones underneath and forward of the model. These are scaled to correspond to measurements made at locations 150 m below the flight path of a full-scale rotor. For the scaled data, a detailed analysis is given for the identification in the data of the noise contributions from different rotor noise sources. Key results include a component breakdown of the noise contributions, in terms of noise criteria calculations of a weighted sound pressure level (dBA) and perceived noise level (PNL), as functions of rotor advance ratio and descent angle. It is shown for the scaled rotor that, during descent, impulsive blade-vortex interaction (BVI) noise is the dominant contributor to the noise. In level flight and mild climb, broadband blade-turbulent wake interaction (BWI) noise is dominant due to the absence of BVI activity. At high climb angles, BWI is reduced and self-noise from blade boundary-layer turbulence becomes the most prominent.
Verification and Calibration of a Reduced Order Wind Farm Model by Wind Tunnel Experiments
NASA Astrophysics Data System (ADS)
Schreiber, J.; Nanos, E. M.; Campagnolo, F.; Bottasso, C. L.
2017-05-01
In this paper an adaptation of the FLORIS approach is considered that models the wind flow and power production within a wind farm. In preparation to the use of this model for wind farm control, this paper considers the problem of its calibration and validation with the use of experimental observations. The model parameters are first identified based on measurements performed on an isolated scaled wind turbine operated in a boundary layer wind tunnel in various wind-misalignment conditions. Next, the wind farm model is verified with results of experimental tests conducted on three interacting scaled wind turbines. Although some differences in the estimated absolute power are observed, the model appears to be capable of identifying with good accuracy the wind turbine misalignment angles that, by deflecting the wake, lead to maximum power for the investigated layouts.
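FLORIS-type models combine an engineering wake-deficit model with a yaw-dependent power loss. A minimal sketch of these ingredients (a top-hat Jensen wake and a cos^p yaw correction; every parameter value below is an illustrative assumption, not a calibrated value from the paper):

```python
import numpy as np

def jensen_deficit(ct, x_downstream, k=0.05, diameter=1.0):
    """Top-hat Jensen velocity deficit, x_downstream in rotor diameters."""
    a = 0.5 * (1.0 - np.sqrt(1.0 - ct))                  # axial induction
    return 2.0 * a / (1.0 + 2.0 * k * x_downstream / diameter) ** 2

def rotor_power(u, yaw_deg, cp=0.45, rho=1.225, area=np.pi / 4, p_exp=1.88):
    """Rotor power with an empirical cos^p loss for yaw misalignment."""
    return 0.5 * rho * area * cp * u**3 * np.cos(np.radians(yaw_deg)) ** p_exp

u_inf = 8.0                                              # free-stream wind (m/s)
deficit = jensen_deficit(ct=0.8, x_downstream=5.0)       # 5 diameters downstream
u_wake = u_inf * (1.0 - deficit)
p_front = rotor_power(u_inf, yaw_deg=0.0)
p_waked = rotor_power(u_wake, yaw_deg=0.0)
```

Calibration, as in the paper, amounts to identifying parameters such as the wake expansion k and the yaw-loss exponent from measured turbine power data.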
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
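The block-wise structure of the decomposition can be illustrated with truncated SVDs: a single global-scale low-rank fit leaves a residual dominated by block-local structure. The paper recovers the components jointly via a convex program; the greedy SVD sketch below only illustrates the multi-scale block model:

```python
import numpy as np

def blockwise_lowrank(X, block, rank):
    """Truncated-SVD rank-`rank` fit of each (block x block) tile of X."""
    out = np.zeros_like(X)
    for i in range(0, X.shape[0], block):
        for j in range(0, X.shape[1], block):
            tile = X[i:i + block, j:j + block]
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            out[i:i + block, j:j + block] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out

# A matrix with a global rank-1 component plus one localized rank-1 block.
rng = np.random.default_rng(0)
n = 16
X = np.outer(rng.normal(size=n), rng.normal(size=n))
X[:4, :4] += np.outer(rng.normal(size=4), rng.normal(size=4))
coarse = blockwise_lowrank(X, block=n, rank=1)   # global-scale component
fine = X - coarse                                # residual at the small scale
```

In the convex formulation, the scales are recovered simultaneously by minimizing a weighted sum of block-wise nuclear norms rather than by this greedy subtraction.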
Bioimmobilization of uranium-practical tools for field applications
NASA Astrophysics Data System (ADS)
Istok, J. D.
2011-12-01
Extensive laboratory and field research has conclusively demonstrated that it is possible to stimulate indigenous microbial activity and create conditions favorable for the reductive precipitation of uranium from groundwater, reducing aqueous U concentrations below regulatory levels. A wide variety of complex and coupled biogeochemical processes have been identified and specific reaction mechanisms and parameters have been quantified for a variety of experimental systems including pure, mixed, and natural microbial cultures, and single mineral, artificial, and natural sediments, and groundwater aquifers at scales ranging from very small (10s nm) to very large (10s m). Multicomponent coupled reactive transport models have also been developed to simulate various aspects of this process in 3D heterogeneous environments. Nevertheless, full-scale application of reductive bioimmobilization of uranium (and other radionuclides and metals) remains problematical because of the technical and logistical difficulties in creating and maintaining reducing environment in the many large U contaminated groundwater aquifers currently under aerobic and oxidizing conditions and often containing high concentrations of competing and more energetically favorable electron acceptors (esp. nitrate). This talk will discuss how simple tools, including small-scale in situ testing and geochemical reaction path modeling, can be used to quickly assess the feasibility of applying bioimmobilization to remediate U contaminated groundwater aquifers and provide data needed for full-scale design.
Effects of coupled dark energy on the Milky Way and its satellites
NASA Astrophysics Data System (ADS)
Penzo, Camilla; Macciò, Andrea V.; Baldi, Marco; Casarini, Luciano; Oñorbe, Jose; Dutton, Aaron A.
2016-09-01
We present the first numerical simulations in coupled dark energy cosmologies with high enough resolution to investigate the effects of the coupling on galactic and subgalactic scales. We choose two constant couplings and a time-varying coupling function and we run simulations of three Milky Way-sized haloes (~10^12 M⊙), a lower mass halo (6 × 10^11 M⊙) and a dwarf galaxy halo (5 × 10^9 M⊙). We resolve each halo with several million dark matter particles. On all scales, the coupling causes lower halo concentrations and a reduced number of substructures with respect to Λ cold dark matter (ΛCDM). We show that the reduced concentrations are not due to different formation times. We ascribe them to the extra terms that appear in the equations describing the gravitational dynamics. On the scale of the Milky Way satellites, we show that the lower concentrations can help in reconciling observed and simulated rotation curves, but the coupling values necessary to have a significant difference from ΛCDM are outside the current observational constraints. On the other hand, if other modifications to the standard model allowing a higher coupling (e.g. massive neutrinos) are considered, coupled dark energy can become an interesting scenario to alleviate the small-scale issues of the ΛCDM model.
Statistical simulation of the magnetorotational dynamo.
Squire, J; Bhattacharjee, A
2015-02-27
Turbulence and dynamo induced by the magnetorotational instability (MRI) are analyzed using quasilinear statistical simulation methods. It is found that homogeneous turbulence is unstable to a large-scale dynamo instability, which saturates to an inhomogeneous equilibrium with a strong dependence on the magnetic Prandtl number (Pm). Despite its enormously reduced nonlinearity, the dependence of the angular momentum transport on Pm in the quasilinear model is qualitatively similar to that of nonlinear MRI turbulence. This demonstrates the importance of the large-scale dynamo and suggests how dramatically simplified models may be used to gain insight into the astrophysically relevant regimes of very low or high Pm.
NASA Technical Reports Server (NTRS)
Hayden, R. E.; Kadman, Y.; Chanaud, R. C.
1972-01-01
The feasibility of quieting the externally-blown-flap (EBF) noise sources which are due to interaction of jet exhaust flow with deployed flaps was demonstrated on a 1/15-scale 3-flap EBF model. Sound field characteristics were measured and noise reduction fundamentals were reviewed in terms of source models. Tests of the 1/15-scale model showed broadband noise reductions of up to 20 dB resulting from a combination of variable-impedance flap treatment and mesh grids placed in the jet flow upstream of the flaps. Steady-state lift, drag, and pitching moment were measured with and without noise reduction treatment.
NASA Astrophysics Data System (ADS)
Smith, B.; Wårlind, D.; Arneth, A.; Hickler, T.; Leadley, P.; Siltberg, J.; Zaehle, S.
2013-11-01
The LPJ-GUESS dynamic vegetation model uniquely combines an individual- and patch-based representation of vegetation dynamics with ecosystem biogeochemical cycling from regional to global scales. We present an updated version that includes plant and soil N dynamics, analysing the implications of accounting for C-N interactions on predictions and performance of the model. Stand structural dynamics and allometric scaling of tree growth suggested by global databases of forest stand structure and development were well-reproduced by the model in comparison to an earlier multi-model study. Accounting for N cycle dynamics improved the goodness-of-fit for broadleaved forests. N limitation associated with low N mineralisation rates reduces productivity of cold-climate and dry-climate ecosystems relative to mesic temperate and tropical ecosystems. In a model experiment emulating free-air CO2 enrichment (FACE) treatment for forests globally, N-limitation associated with low N mineralisation rates of colder soils reduces CO2-enhancement of NPP for boreal forests, while some temperate and tropical forests exhibit increased NPP enhancement. Under a business-as-usual future climate and emissions scenario, ecosystem C storage globally was projected to increase by c. 10%; additional N requirements to match this increasing ecosystem C were within the high N supply limit estimated on stoichiometric grounds in an earlier study. Our results highlight the importance of accounting for C-N interactions not only in studies of global terrestrial C cycling, but to understand underlying mechanisms on local scales and in different regional contexts.
NASA Astrophysics Data System (ADS)
Smith, B.; Wårlind, D.; Arneth, A.; Hickler, T.; Leadley, P.; Siltberg, J.; Zaehle, S.
2014-04-01
The LPJ-GUESS dynamic vegetation model uniquely combines an individual- and patch-based representation of vegetation dynamics with ecosystem biogeochemical cycling from regional to global scales. We present an updated version that includes plant and soil N dynamics, analysing the implications of accounting for C-N interactions on predictions and performance of the model. Stand structural dynamics and allometric scaling of tree growth suggested by global databases of forest stand structure and development were well reproduced by the model in comparison to an earlier multi-model study. Accounting for N cycle dynamics improved the goodness of fit for broadleaved forests. N limitation associated with low N-mineralisation rates reduces productivity of cold-climate and dry-climate ecosystems relative to mesic temperate and tropical ecosystems. In a model experiment emulating free-air CO2 enrichment (FACE) treatment for forests globally, N limitation associated with low N-mineralisation rates of colder soils reduces CO2 enhancement of net primary production (NPP) for boreal forests, while some temperate and tropical forests exhibit increased NPP enhancement. Under a business-as-usual future climate and emissions scenario, ecosystem C storage globally was projected to increase by ca. 10%; additional N requirements to match this increasing ecosystem C were within the high N supply limit estimated on stoichiometric grounds in an earlier study. Our results highlight the importance of accounting for C-N interactions in studies of global terrestrial N cycling, and as a basis for understanding mechanisms on local scales and in different regional contexts.
Flint, Lorraine E.; Flint, Alan L.
2012-01-01
The methodology, which includes a sequence of rigorous analyses and calculations, is intended to reduce the addition of uncertainty to the climate data as a result of the downscaling while providing the fine-scale climate information necessary for ecological analyses. It results in new but consistent data sets for the US at 4 km, the southwest US at 270 m, and California at 90 m and illustrates the utility of fine-scale downscaling to analyses of ecological processes influenced by topographic complexity.
NASA Astrophysics Data System (ADS)
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh
2015-12-01
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
Kinematic responses and injuries of pedestrian in car-pedestrian collisions
NASA Astrophysics Data System (ADS)
Teng, T. L.; Liang, C. C.; Hsu, C. Y.; Tai, S. F.
2017-10-01
Protecting pedestrians and reducing collision injuries has become a growing focus of automotive safety research worldwide, and many engineering studies aim to reduce the pedestrian injuries caused by traffic accidents. Physical testing with impactor models and full-scale pedestrian models is costly. This study constructs a vehicle-pedestrian collision model using MADYMO. To verify the accuracy of the proposed vehicle-pedestrian collision model, experimental data are used in the pedestrian model test. The model is then applied to analyze the kinematic responses and injuries of pedestrians in collisions. The results can help assess the pedestrian friendliness of vehicles and assist in the future development of pedestrian-friendly vehicle technologies.
NASA Astrophysics Data System (ADS)
Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien
2017-04-01
Existing global and continental scale river models, mainly designed for integration with global climate models, are of very coarse spatial resolution and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called the Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes the major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model to compute water diversion for irrigation use and associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory model performance, with a median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and median annual bias of less than 1% for the calibration period (1970-1991), and a median daily NSE of 0.69 and median annual bias of 12% for the validation period (1992-2014). The results also demonstrate that model performance is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental-scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted difference volume (gain/loss).
Measuring the effect of fuel treatments on forest carbon using landscape risk analysis
A.A. Ager; M.A. Finney; A. McMahan; J. Carthcart
2010-01-01
Wildfire simulation modelling was used to examine whether fuel reduction treatments can potentially reduce future wildfire emissions and provide carbon benefits. In contrast to previous reports, the current study modelled landscape-scale effects of fuel treatments on fire spread and intensity, and used a probabilistic framework to quantify wildfire effects on carbon...
Recent assessments have analyzed the health impacts of PM2.5 from emissions from different locations and sectors using simplified or reduced-form air quality models. Here we present an alternative approach using the adjoint of the Community Multiscale Air Quality (CMAQ) model, wh...
Global hydrodynamic modelling of flood inundation in continental rivers: How can we achieve it?
NASA Astrophysics Data System (ADS)
Yamazaki, D.
2016-12-01
Global-scale modelling of river hydrodynamics is essential for understanding the global hydrological cycle, and is also required in interdisciplinary research fields. Global river models have been developed continuously for more than two decades, but modelling river flow at a global scale is still a challenging topic because surface water movement in continental rivers is a multi-spatial-scale phenomenon. We have to consider the basin-wide water balance (>1000 km scale), while hydrodynamics in river channels and floodplains is regulated by much smaller-scale topography (<100 m scale). For example, heavy precipitation in upstream regions may later cause flooding in the farthest downstream reaches. In order to realistically simulate the timing and amplitude of flood wave propagation over long distances, consideration of detailed local topography is unavoidable. I have developed the global hydrodynamic model CaMa-Flood to overcome this scale discrepancy of continental river flow. CaMa-Flood divides river basins into multiple "unit-catchments", and assumes the water level is uniform within each unit-catchment. One unit-catchment is assigned to each grid-box defined at the typical spatial resolution of global climate models (10-100 km scale). Adopting a uniform water level in a >10 km river segment seems to be a big assumption, but it is actually a good approximation for hydrodynamic modelling of continental rivers. The number of grid points required for global hydrodynamic simulations is largely reduced by this "unit-catchment assumption". As an alternative to calculating two-dimensional floodplain flows as in regional flood models, CaMa-Flood treats floodplain inundation in a unit-catchment as sub-grid physics. The water level and inundated area in each unit-catchment are diagnosed from water volume using topography parameters derived from high-resolution digital elevation models.
Thus, CaMa-Flood is at least 1000 times more computationally efficient than regional flood inundation models while retaining realistic simulated flood dynamics. I will explain in detail how the CaMa-Flood model has been constructed from high-resolution topography datasets, and how the model can be used for various interdisciplinary applications.
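The storage-to-level diagnosis at the heart of the sub-grid scheme can be sketched as follows. This is a minimal illustration of the idea, not CaMa-Flood source code: the function name, the stepwise hypsometric profile, and the parameter layout are all hypothetical.

```python
import numpy as np

def diagnose_level(storage, chan_w, chan_d, chan_len, catch_area, elev_profile):
    """Diagnose water surface level (above channel bed) and flooded fraction
    from total storage, CaMa-Flood-style (simplified, hypothetical layout).

    elev_profile: floodplain elevation [m above bank top] at evenly spaced
    fractional areas 0..1 of the unit-catchment, monotonically increasing.
    """
    chan_cap = chan_w * chan_d * chan_len          # channel storage capacity [m3]
    if storage <= chan_cap:
        # water confined to the channel: level below the bank top
        return storage / (chan_w * chan_len), 0.0
    # fill the floodplain slab by slab following the hypsometric profile;
    # within slab i the inundated area is approximated as frac[i] * catch_area
    vol = storage - chan_cap
    n = len(elev_profile)
    frac = np.linspace(0.0, 1.0, n)
    flood_vol = 0.0
    for i in range(1, n):
        dz = elev_profile[i] - elev_profile[i - 1]
        area_below = frac[i] * catch_area
        dv = area_below * dz                       # volume held by this slab
        if flood_vol + dv >= vol:
            # water surface sits inside this slab: interpolate the level
            z = elev_profile[i - 1] + (vol - flood_vol) / area_below
            return chan_d + z, frac[i]
        flood_vol += dv
    # floodplain fully inundated; remainder spreads over the whole area
    z = elev_profile[-1] + (vol - flood_vol) / catch_area
    return chan_d + z, 1.0
```

Inverting a one-dimensional storage-level curve like this is what replaces the explicit 2-D floodplain solve of regional models.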
The effects of ballast on the sound radiation from railway track
NASA Astrophysics Data System (ADS)
Zhang, Xianying; Thompson, David; Jeong, Hongseok; Squicciarini, Giacomo
2017-07-01
In a conventional railway track, the rails are laid on sleepers, usually made of concrete, which are supported by a layer of coarse stones known as ballast. This paper focuses on quantifying the influence that the ballast has on the noise produced by the vibration of the track, particularly on the rail and sleeper radiation ratios. A one-fifth scale model of a railway track has been used to conduct acoustic and vibration measurements. This includes reduced-scale ballast produced with stone sizes in the correct proportions. Two different scaling factors (1:√5 and 1:5) have been adopted for the stone sizes in an attempt to reproduce approximately the acoustic properties of full-scale ballast. It is shown that, although a scale factor of 1:√5 gives a better scaling of the acoustic properties, the stones scaled at 1:5 also give acceptable results. The flow resistivity and porosity of this ballast sample have been measured. These have been used in a local reaction model based on the Johnson-Allard formulation to predict the ballast absorption, showing good agreement with measurements of the absorption coefficient. The effects of the presence of the ballast on the noise radiation from a reduced-scale steel rail and concrete sleeper have been investigated experimentally with the ballast located on a rigid foundation. Comparisons are made with the corresponding numerical predictions obtained by using the boundary element method, in which the ballast is represented by a surface impedance. Additionally, the finite element method has been used, in which the porous medium is considered as an equivalent fluid. From these results it is shown that the extended reaction model gives better agreement with the measurements. Finally, the effects of the ballast vibration on the sleeper radiation have also been investigated for a case of three sleepers embedded in ballast.
The ballast vibration is shown to increase the sound radiation by between 1 and 4.5 dB for frequencies between 20 and 300 Hz at full scale whereas at higher frequencies the effect is negligible.
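The general workflow of predicting a rigid-backed porous layer's absorption from its measured flow resistivity can be sketched as follows. Note this sketch uses the simpler Delany-Bazley empirical model as a stand-in for the Johnson-Allard formulation applied in the paper, and the flow resistivity and depth values are illustrative, not the measured ballast properties.

```python
import numpy as np

def absorption_rigid_backed(freq, sigma, depth, rho0=1.213, c0=343.0):
    """Normal-incidence absorption coefficient of a rigid-backed porous layer.

    Delany-Bazley empirical model (valid roughly for 0.01 < rho0*f/sigma < 1);
    sigma = flow resistivity [Pa.s/m2], depth = layer thickness [m].
    """
    f = np.asarray(freq, dtype=float)
    X = rho0 * f / sigma                         # dimensionless frequency
    Z0 = rho0 * c0                               # characteristic impedance of air
    # characteristic impedance and complex wavenumber of the porous medium
    Zc = Z0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    # surface impedance of a layer of given depth on a rigid backing
    Zs = -1j * Zc / np.tan(k * depth)
    R = (Zs - Z0) / (Zs + Z0)                    # pressure reflection coefficient
    return 1 - np.abs(R)**2
```

The same surface impedance Zs is the quantity that would be handed to a boundary element model as a local-reaction boundary condition.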
Recent assimilation developments of FOAM the Met Office ocean forecast system
NASA Astrophysics Data System (ADS)
Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert
2015-04-01
FOAM is the Met Office's operational ocean forecasting system. The system comprises a range of models, from a 1/4-degree-resolution global model to 1/12-degree-resolution regional models and shelf-seas models at 7 km resolution. It is made up of the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE and the NEMOVAR assimilation run in 3D-VAR FGAT mode. Work is ongoing to transition to both a higher resolution global ocean model at 1/12 degrees and to run FOAM in coupled models. The FOAM system generally performs well. One area of concern, however, is the performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which in the extra-tropics uses geostrophic balance to produce velocity increments that balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that there are sometimes persistent temperature and salinity errors which are not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius, which means that observations in the extra-tropics influence the model only over short length scales. In order to maximise the information extracted from the observations and to correct large-scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients.
A related scheme which varies the correlation length scale in the shelf seas is also described.
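The multiple length-scale idea can be illustrated with a background-error correlation built as a weighted sum of two Gaussians: one short (Rossby-radius-like) and one long (for large-scale bias correction). The weights and length scales below are invented for illustration and are not the NEMOVAR settings.

```python
import numpy as np

def two_scale_correlation(dist_km, w_short=0.7, L_short=50.0, L_long=400.0):
    """Background-error correlation as a weighted sum of a short
    (Rossby-radius-like) scale and a long (bias-correcting) scale.
    Weights and length scales are illustrative, not operational values."""
    d = np.asarray(dist_km, dtype=float)
    c_short = np.exp(-0.5 * (d / L_short) ** 2)   # local observation influence
    c_long = np.exp(-0.5 * (d / L_long) ** 2)     # spreads information further
    return w_short * c_short + (1.0 - w_short) * c_long
```

With a single 50 km scale an observation's influence is negligible at 300 km; the long component keeps a usable correlation there, which is how the scheme corrects large-scale bias.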
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2018-03-26
In this paper we present a framework for the reduction and linking of physiologically based pharmacokinetic (PBPK) models with models of systems biology to describe the effects of drug administration across multiple scales. To address the issue of model complexity, we propose the reduction of each type of model separately prior to being linked. We highlight the use of balanced truncation in reducing the linear components of PBPK models, whilst proper lumping is shown to be efficient in reducing typically nonlinear systems biology type models. The overall methodology is demonstrated via two example systems: a model of bacterial chemotactic signalling in Escherichia coli and a model of extracellular regulatory kinase activation mediated via the extracellular growth factor and nerve growth factor receptor pathways. Each system is tested under the simulated administration of three hypothetical compounds: a strong base, a weak base, and an acid, mirroring the parameterisation of pindolol, midazolam, and thiopental, respectively. Our method can produce up to an 80% decrease in simulation time, allowing substantial speed-up for computationally intensive applications including parameter fitting or agent-based modelling. The approach provides a straightforward means to construct simplified Quantitative Systems Pharmacology models that still provide significant insight into the mechanisms of drug action. Such a framework can potentially bridge pre-clinical and clinical modelling, providing an intermediate level of model granularity between classical, empirical approaches and mechanistic systems describing the molecular scale.
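Balanced truncation of a stable linear component can be sketched with the standard square-root (Gramian-based) algorithm: solve the two Lyapunov equations, balance the Gramians via an SVD of the Cholesky factors, and keep the states with the largest Hankel singular values. This is a generic textbook implementation, not the authors' code, and any system fed to it here is synthetic.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system
    dx/dt = A x + B u, y = C x. Keeps the r states with the largest
    Hankel singular values; truncation error is bounded by twice the
    sum of the discarded singular values."""
    # controllability and observability Gramians: AP + PA' = -BB', A'Q + QA = -C'C
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # square-root factors and the balancing SVD
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)           # s holds the Hankel singular values
    S12 = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S12             # projection onto the dominant subspace
    Ti = S12 @ U[:, :r].T @ Lo.T        # left inverse: Ti @ T = I_r
    return Ti @ A @ T, Ti @ B, C @ T, s
```

For a PBPK-like compartment chain, the discarded singular values tell you directly how much input-output behaviour (e.g. plasma-concentration response) the reduction can cost.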
Physicochemical heterogeneity controls on uranium bioreduction rates at the field scale.
Li, Li; Gawande, Nitin; Kowalsky, Michael B; Steefel, Carl I; Hubbard, Susan S
2011-12-01
It has been demonstrated in laboratory systems that U(VI) can be reduced to immobile U(IV) by bacteria in natural environments. The ultimate efficacy of bioreduction at the field scale, however, is often challenging to quantify and depends on site characteristics. In this work, uranium bioreduction rates at the field scale are quantified, for the first time, using an integrated approach. The approach combines field data, inverse and forward hydrological and reactive transport modeling, and quantification of reduction rates at different spatial scales. The approach is used to explore the impact of local-scale (tens of centimeters) parameters and processes on field-scale (tens of meters) system responses to biostimulation treatments, and the controls of physicochemical heterogeneity on bioreduction rates. Using the biostimulation experiments at the Department of Energy Old Rifle site, our results show that the spatial distributions of hydraulic conductivity and solid-phase Fe(III) minerals play a critical role in determining the field-scale bioreduction rates. Due to the dependence on Fe-reducing bacteria, field-scale U(VI) bioreduction rates were found to be largely controlled by the abundance of Fe(III) minerals in the vicinity of the injection wells and by the presence of preferential flow paths connecting injection wells to down-gradient Fe(III)-abundant areas.
Lab-scale study on the application of In-Adit-Sulfate-Reducing System for AMD control.
Ji, S W; Kim, S J
2008-12-30
In a study of the 29 operating passive systems for acid mine drainage (AMD) treatment, 19 systems showed various performance problems. Some systems showed very low efficiency even without visible leakage or overflow. Though the systems showed fairly good efficiency in metal removal (mainly iron) and pH control, sulfate removal rates were very low, indicating very poor sulfate reduction by sulfate-reducing bacteria (SRB). As an alternative, the In-Adit-Sulfate-Reducing System (IASRS), in which the SAPS is placed inside the adit to keep the temperature constant at about 15 degrees C, was suggested. Lab-scale model experiments of IASRS were carried out; models 1 and 2 were run at 15 degrees C and 25 degrees C, respectively. Model 1 initially contained about half the COD of model 2. Metal removal ratios were higher than 90% in both systems. The two systems showed sulfate removal ratios of 23% and 27%, respectively, which, although higher than those of presently operating systems, were still considerably low. However, since the synthetic AMD used was very low in pH (2.8) and very high in sulfate concentration, it is presumed that the sulfate removal ratio would have increased if the suggested modifications were applied to the standard design.
NASA Technical Reports Server (NTRS)
Kelley, Henry L.
1990-01-01
Performance of a 27 percent scale model rotor designed for the AH-64 helicopter (alternate rotor) was measured in hover and forward flight and compared against an AH-64 baseline rotor model. Thrust, rotor tip Mach number, advance ratio, and ground proximity were varied. In hover, at a nominal thrust coefficient of 0.0064, the power savings was about 6.4 percent for the alternate rotor compared to the baseline. The corresponding thrust increase at this condition was approximately 4.5 percent, which represents an equivalent full-scale increase in lift capability of about 660 lbs. Comparable results were noted in forward flight except for the high thrust, high speed cases investigated, where the baseline rotor was slightly superior. Reduced performance at the higher thrusts and speeds was likely due to Reynolds number effects and blade elasticity differences.
Sun, Xiaowei; Li, Wei; Xie, Yulei; Huang, Guohe; Dong, Changjuan; Yin, Jianguang
2016-11-01
A model based on economic structure adjustment and pollutant mitigation was proposed and applied in Urumqi. Best-worst case analysis and scenario analysis were performed in the model to ensure parameter accuracy and to analyze the effect of changes in emission reduction styles. Results indicated that pollutant mitigation in the electric power industry, the iron and steel industry, and traffic relied mainly on technological transformation measures, engineering transformation measures and structural emission reduction measures, respectively; pollutant mitigation in the cement industry relied mainly on structural emission reduction and technological transformation measures; and pollutant mitigation in the thermal industry relied on all four mitigation measures. The results also indicated that structural emission reduction was the better measure for pollutant mitigation in Urumqi. The iron and steel industry contributed greatly to SO2, NOx and PM (particulate matter) emission reduction and should be given special attention in pollutant emission reduction. In addition, the scale of the iron and steel industry should be reduced with the decrease of SO2 mitigation amounts; the scales of traffic and the electric power industry should be reduced with the decrease of NOx mitigation amounts; and the scales of the cement industry and the iron and steel industry should be reduced with the decrease of PM mitigation amounts. The study can provide references for pollutant mitigation schemes to decision-makers for regional economic and environmental development in the 12th Five-Year Plan on National Economic and Social Development of Urumqi.
Zhuang, Kai; Izallalen, Mounir; Mouser, Paula; Richter, Hanno; Risso, Carla; Mahadevan, Radhakrishnan; Lovley, Derek R
2011-02-01
The advent of rapid complete genome sequencing, and the potential to capture this information in genome-scale metabolic models, provide the possibility of comprehensively modeling microbial community interactions. For example, Rhodoferax and Geobacter species are acetate-oxidizing Fe(III)-reducers that compete in anoxic subsurface environments and this competition may have an influence on the in situ bioremediation of uranium-contaminated groundwater. Therefore, genome-scale models of Geobacter sulfurreducens and Rhodoferax ferrireducens were used to evaluate how Geobacter and Rhodoferax species might compete under diverse conditions found in a uranium-contaminated aquifer in Rifle, CO. The model predicted that at the low rates of acetate flux expected under natural conditions at the site, Rhodoferax will outcompete Geobacter as long as sufficient ammonium is available. The model also predicted that when high concentrations of acetate are added during in situ bioremediation, Geobacter species would predominate, consistent with field-scale observations. This can be attributed to the higher expected growth yields of Rhodoferax and the ability of Geobacter to fix nitrogen. The modeling predicted relative proportions of Geobacter and Rhodoferax in geochemically distinct zones of the Rifle site that were comparable to those that were previously documented with molecular techniques. The model also predicted that under nitrogen fixation, higher carbon and electron fluxes would be diverted toward respiration rather than biomass formation in Geobacter, providing a potential explanation for enhanced in situ U(VI) reduction in low-ammonium zones. These results show that genome-scale modeling can be a useful tool for predicting microbial interactions in subsurface environments and shows promise for designing bioremediation strategies.
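Growth predictions from genome-scale metabolic models come from flux balance analysis (FBA): a linear program that maximizes biomass flux subject to steady-state mass balance and flux bounds. The sketch below is a deliberately tiny toy network, not the published Geobacter or Rhodoferax reconstructions; the reactions, stoichiometry, and uptake bound are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize biomass flux v3 subject to S v = 0 and flux bounds.
# Hypothetical reactions (not the published reconstructions):
#   v0: acetate uptake        -> Ac
#   v1: Ac -> 2 NADH          (acetate oxidation)
#   v2: NADH -> ATP           (respiration, e.g. via Fe(III) reduction)
#   v3: Ac + 2 ATP -> biomass
S = np.array([
    # v0   v1   v2   v3
    [ 1., -1.,  0., -1.],   # Ac balance
    [ 0.,  2., -1.,  0.],   # NADH balance
    [ 0.,  0.,  1., -2.],   # ATP balance
])
bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10
c = np.zeros(4); c[3] = -1.0       # maximize v3 (linprog minimizes)
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
print(round(res.x[3], 6))          # optimal biomass flux, limited by uptake
```

Competition predictions like those in the abstract come from solving such programs for each organism under shared substrate (here, acetate) availability.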
A new synoptic scale resolving global climate simulation using the Community Earth System Model
NASA Astrophysics Data System (ADS)
Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana
2014-12-01
High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, costing 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.
Li, Harbin; McNulty, Steven G
2007-10-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
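A common way to obtain this kind of variance apportionment is Monte Carlo propagation through the mass balance. The sketch below uses a simplified four-term SMBE and illustrative means and coefficients of variation, not the paper's 17 parameters or its fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
# Simplified SMBE: CL = BCdep + BCw - Bcu - ANCle
# (means and CVs below are illustrative, not the paper's values)
params = {
    "BCdep": (50.0, 0.10),   # base cation deposition (mean, CV)
    "BCw":   (100.0, 0.30),  # base cation weathering
    "Bcu":   (20.0, 0.15),   # base cation uptake
    "ANCle": (40.0, 0.25),   # critical ANC leaching
}
samples = {k: rng.normal(m, m * cv, N) for k, (m, cv) in params.items()}
CL = samples["BCdep"] + samples["BCw"] - samples["Bcu"] - samples["ANCle"]

# apportion output variance by squared correlation (exact for additive models)
for k, x in samples.items():
    share = np.corrcoef(x, CL)[0, 1] ** 2
    print(f"{k}: {100 * share:.0f}% of CL variance")
```

With these illustrative numbers the weathering term dominates the output variance, mirroring the paper's finding that BC(w) uncertainty controls CAL uncertainty.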
40 CFR 1054.5 - Which nonroad engines are excluded from this part's requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW, SMALL NONROAD SPARK-IGNITION... described in § 1054.20. (d) Engines used in reduced-scale models of vehicles that are not capable of...
Production of black holes and their angular momentum distribution in models with split fermions
NASA Astrophysics Data System (ADS)
Dai, De-Chang; Starkman, Glenn D.; Stojkovic, Dejan
2006-05-01
In models with TeV-scale gravity it is expected that mini black holes will be produced in near-future accelerators. On the other hand, TeV-scale gravity is plagued with many problems like fast proton decay, unacceptably large n-n¯ oscillations, flavor changing neutral currents, large mixing between leptons, etc. Most of these problems can be solved if different fermions are localized at different points in the extra dimensions. We study the cross section for the production of black holes and their angular momentum distribution in these models with “split” fermions. We find that, for a fixed value of the fundamental mass scale, the total production cross section is reduced compared with models where all the fermions are localized at the same point in the extra dimensions. Fermion splitting also implies that the bulk component of the black hole angular momentum must be taken into account in studies of the black hole decay via Hawking radiation.
Noise Testing of an Experimental Augmentor Wing
1974-06-21
The augmentor wing concept was introduced during the early 1960s to enhance the performance of vertical and short takeoff (VSTOL) aircraft. The leading edge of the wing has full-span vertical flaps, and the trailing edge has double-slotted flaps. This provides aircraft with more control in takeoff and landing conditions. The augmentor wing also produced lower noise levels than other VSTOL designs. In the early 1970s Boeing Corporation built a Buffalo C-8A augmentor wing research aircraft for Ames Research Center. Researchers at Lewis Research Center concentrated their efforts on reducing the noise levels of the wing. They initially used small-scale models to develop optimal nozzle screening methods. They then examined the nozzle designs on a large-scale model, seen here on an external test stand. This test stand included an airflow system, nozzle, the augmentor wing, and a muffler system below to reduce the atmospheric noise levels. The augmentor was lined with noise-reducing acoustic panels. The Lewis researchers were able to adjust the airflow to simulate conditions at takeoff and landing. Once the conditions were stabilized, they took noise measurements from microphones placed in all directions around the wing, including positions representing an aircraft flying over. They found that the results coincided with the earlier small-scale studies for landing situations but not takeoffs. The acoustic panels were found to be successful.
NASA Astrophysics Data System (ADS)
Higginbottom, Thomas P.; Symeonakis, Elias; Meyer, Hanna; van der Linden, Sebastian
2018-05-01
Increasing attention is being directed at mapping the fractional woody cover of savannahs using Earth-observation data. In this study, we test the utility of Landsat TM/ETM-based spectral-temporal variability metrics for mapping regional-scale woody cover in the Limpopo Province of South Africa, for 2010. We employ a machine learning framework to compare the accuracies of Random Forest models derived using metrics calculated from different seasons. We compare these results to those from fused Landsat-PALSAR data to establish if seasonal metrics can compensate for structural information from the PALSAR signal. Furthermore, we test the applicability of a statistical variable selection method, recursive feature elimination (RFE), in the automation of the model building process in order to reduce model complexity and processing time. All of our tests were repeated at four scales (30, 60, 90, and 120 m pixels) to investigate the role of spatial resolution on modelled accuracies. Our results show that multi-seasonal composites combining imagery from both the dry and wet seasons produced the highest accuracies (R2 = 0.77, RMSE = 9.4, at the 120 m scale). When using a single season of observations, dry season imagery performed best (R2 = 0.74, RMSE = 9.9, at the 120 m resolution). Combining Landsat and radar imagery was only marginally beneficial, offering a mean relative improvement of 1% in accuracy at the 120 m scale. However, this improvement was concentrated in areas with lower densities of woody coverage (<30%), which are areas of concern for environmental monitoring. At finer spatial resolutions, the inclusion of SAR data actually reduced accuracies. Overall, the RFE was able to produce the most accurate model (R2 = 0.8, RMSE = 8.9, at the 120 m pixel scale). For mapping savannah woody cover at the 30 m pixel scale, we suggest that monitoring methodologies continue to exploit the Landsat archive, but should aim to use multi-seasonally derived information.
When the coarser 120 m pixel scale is adequate, integration of Landsat and SAR data should be considered, especially in areas with lower woody cover densities. The use of multiple seasonal compositing periods offers promise for large-area mapping of savannahs, even in regions with a limited historical Landsat coverage.
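The recursive feature elimination step can be sketched with scikit-learn's `RFE` wrapper around a random forest, here on synthetic data standing in for the spectral-temporal metrics; the feature count, signal structure, and noise level are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

# Synthetic stand-in for seasonal spectral-temporal metrics: 20 candidate
# predictors, of which only the first three carry signal for "woody cover" y.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = 3 * X[:, 0] + 2 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500)

# RFE repeatedly fits the forest, drops the least-important features
# (step=2 per round), and refits, until only 3 predictors remain.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
selector = RFE(rf, n_features_to_select=3, step=2).fit(X, y)
print(np.flatnonzero(selector.support_))   # indices of retained predictors
```

This is the automation the study tests: the analyst sets only the target feature count (or lets cross-validation pick it), and the elimination loop prunes the metric set.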
Nanosecond Plasma Enhanced H2/O2/N2 Premixed Flat Flames
2014-01-01
Simulations are conducted with a one-dimensional, multi-scale, pulsed-discharge model with detailed plasma-combustion kinetics to develop additional insight... model framework. The reduced electric field, E/N, during each pulse varies inversely with number density. A significant portion of the input energy is... dimensional numerical model [4, 12] capable of resolving electric field transients over nanosecond timescales (during each discharge pulse) and radical
ERIC Educational Resources Information Center
Truckenmiller, James L.
The Health, Education and Welfare (HEW) Office of Youth Development's National Strategy for Youth Development model was promoted as a community-based planning and procedural tool for enhancing positive youth development and reducing delinquency. To test the applicability of the model as a function of delinquency level, the program's Impact Scales…
Realization of State-Space Models for Wave Propagation Simulations
2012-01-01
reduction techniques can be applied to reduce the dimension of the model further if warranted. INFRASONIC PROPAGATION MODEL Infrasound is sound below 20... capable of scattering and blocking the propagation. This is because the infrasound wavelengths are near the scales of topographic features. These... and Development Center (ERDC) Big Black Test Site (BBTS) and an infrasound-sensing array at the ERDC Waterways Experiment Station (WES). Both are
Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R
2013-01-01
This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and the computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory in C++ greatly reduce the cumulative memory demands of solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
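The corridor idea behind DDDP can be sketched for a single reservoir: instead of discretizing the full storage range, each DP pass visits only a narrow band of states around the current trial trajectory, then narrows the band once the trajectory stops improving. The benefit function, parameter values, and convergence rule below are illustrative only, not the 10-reservoir model of the paper.

```python
import numpy as np

def dddp(inflow, price, s0, s_min, s_max, trial, delta=8.0, iters=40):
    """Discrete differential dynamic programming, single-reservoir toy sketch.
    Each iteration runs a forward DP over a 3-state corridor around the trial
    storage trajectory; the corridor width is halved once the trajectory stops
    changing. Stage benefit: price[t] * release (crude energy-value proxy)."""
    T = len(inflow)
    traj = np.asarray(trial, dtype=float)            # end-of-stage storages
    offsets = np.array([-1.0, 0.0, 1.0])
    for _ in range(iters):
        grid = np.clip(traj[:, None] + delta * offsets, s_min, s_max)  # (T, 3)
        f = np.full((T, 3), -np.inf)                 # best benefit to reach node
        back = np.zeros((T, 3), dtype=int)           # best predecessor index
        for t in range(T):
            prev = grid[t - 1] if t > 0 else np.array([s0])
            fprev = f[t - 1] if t > 0 else np.zeros(1)
            for j in range(3):
                rel = prev + inflow[t] - grid[t, j]  # release per predecessor
                cand = np.where(rel >= 0.0, fprev + price[t] * rel, -np.inf)
                back[t, j] = int(np.argmax(cand))
                f[t, j] = cand[back[t, j]]
        # backtrack the best corridor path and recentre the trajectory on it
        new = np.empty(T)
        j = int(np.argmax(f[-1]))
        for t in range(T - 1, -1, -1):
            new[t] = grid[t, j]
            j = back[t, j]
        if np.allclose(new, traj):
            delta *= 0.5                             # converged at this width
        traj = new
    return traj
```

With three states per stage per reservoir, memory grows linearly rather than exponentially in the number of reservoirs, which is the point of combining DDDP with dynamic allocation in the paper.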
NASA Technical Reports Server (NTRS)
Mazurkivich, Pete; Chandler, Frank; Grayson, Gary
2005-01-01
To meet the requirements for the 2nd Generation Reusable Launch Vehicle (RLV), a unique propulsion feed system concept was identified using crossfeed between the booster and orbiter stages that could reduce the Two-Stage-to-Orbit (TSTO) vehicle weight and development cost by approximately 25%. A Main Propulsion System (MPS) crossfeed water demonstration test program was configured to address all the activities required to reduce the risks for the MPS crossfeed system. A transient, one-dimensional system simulation was developed for the subscale crossfeed water flow tests. To ensure accurate representation of the crossfeed valve's dynamics in the system model, a high-fidelity, three-dimensional, computational fluid-dynamics (CFD) model was employed. The results from the CFD model were used to specify the valve's flow characteristics in the system simulation. This yielded a crossfeed system model that was anchored to the specific valve hardware and achieved good agreement with the measured test data. These results allowed the transient models to be correlated, validated, and used for full-scale mission predictions. The full-scale model simulations indicate crossfeed is viable, with the system pressure disturbances at the crossfeed transition being less than those experienced by the propulsion system during engine start and shutdown transients.
Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hales, J. D.; Tonks, M. R.; Chockalingam, K.
2015-03-01
Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
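In its standard two-scale form for thermal conductivity (a generic statement of AEH, not the BISON/MARMOT-specific equations), the temperature field is expanded in the ratio ε of micro- to macro-scale, and the homogenized conductivity follows from a periodic cell problem:

```latex
% two-scale ansatz; y = x/\varepsilon is the fast microstructural coordinate
T^{\varepsilon}(x) = T_0(x) + \varepsilon\, T_1(x,y) + \varepsilon^{2}\, T_2(x,y)

% periodic cell problem for the correctors \chi^{j} on the unit cell Y
\nabla_{y} \cdot \bigl( k(y)\,\bigl( e_{j} + \nabla_{y}\chi^{j} \bigr) \bigr) = 0

% homogenized (engineering-scale) conductivity tensor
k^{H}_{ij} = \frac{1}{|Y|} \int_{Y} k(y)\,\Bigl( \delta_{ij}
             + \frac{\partial \chi^{j}}{\partial y_{i}} \Bigr)\, dy
```

Solving the cell problem on the evolving microstructure and feeding k^H back to the engineering-scale solver is the information hand-off between the two codes.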
Rast, Luke I; Rouzine, Igor M; Rozhnova, Ganna; Bishop, Lisa; Weinberger, Ariel D; Weinberger, Leor S
2016-05-01
The rapid evolution of RNA-encoded viruses such as HIV presents a major barrier to infectious disease control using conventional pharmaceuticals and vaccines. Previously, it was proposed that defective interfering particles could be developed to indefinitely control the HIV/AIDS pandemic; in individual patients, these engineered molecular parasites were further predicted to be refractory to HIV's mutational escape (i.e., be 'resistance-proof'). However, an outstanding question has been whether these engineered interfering particles, termed Therapeutic Interfering Particles (TIPs), would remain resistance-proof at the population scale, where TIP-resistant HIV mutants may transmit more efficiently by reaching higher viral loads in the TIP-treated subpopulation. Here, we develop a multi-scale model to test whether TIPs will maintain indefinite control of HIV at the population scale, as HIV ('unilaterally') evolves toward TIP resistance by limiting the production of viral proteins available for TIPs to parasitize. Model results capture the existence of two intrinsic evolutionary tradeoffs that collectively prevent the spread of TIP-resistant HIV mutants in a population. First, despite their increased transmission rates in TIP-treated sub-populations, unilateral TIP-resistant mutants are shown to have reduced transmission rates in TIP-untreated sub-populations. Second, these TIP-resistant mutants are shown to have reduced growth rates (i.e., replicative fitness) in both TIP-treated and TIP-untreated individuals. As a result of these tradeoffs, the model finds that TIP-susceptible HIV strains continually outcompete TIP-resistant HIV mutants at both patient and population scales when TIPs are engineered to express >3-fold more genomic RNA than HIV expresses. Thus, the results provide design constraints for engineering population-scale therapies that may be refractory to the acquisition of antiviral resistance.
Derivation of martian surface slope characteristics from directional thermal infrared radiometry
NASA Astrophysics Data System (ADS)
Bandfield, Joshua L.; Edwards, Christopher S.
2008-01-01
Directional thermal infrared measurements of the martian surface are one of a variety of methods that may be used to characterize surface roughness and slopes at scales smaller than can be obtained by orbital imagery. Thermal Emission Spectrometer (TES) emission phase function (EPF) observations show distinct apparent temperature variations with azimuth and emission angle that are consistent with the presence of warm, sunlit and cool, shaded slopes at typically ˜0.1 m scales. A surface model of a Gaussian distribution of azimuth-independent slopes (described by θ-bar) is combined with a thermal model to predict surface temperature from each viewing angle and azimuth of the TES EPF observation. The models can be used to predict surface slopes using the difference in measured apparent temperature from two separate 60-70° emission-angle observations taken ˜180° apart in azimuth. Most martian surfaces are consistent with low to moderate slope distributions. The slope distributions display distinct correlations with latitude, longitude, and albedo. Exceptionally smooth surfaces are located at lower latitudes in both the southern highlands as well as in high-albedo dusty terrains. High slopes are associated with southern high-latitude patterned ground and north polar sand dunes. There is little apparent correlation between high-resolution imagery and the derived θ-bar, with exceptions such as duneforms. This method can be used to characterize potential landing sites by assuming fractal scaling behavior to meter scales. More precisely targeted thermal infrared observations from other spacecraft instruments are capable of significantly reducing uncertainty as well as reducing measurement spot size from tens of kilometers to sub-kilometer scales.
Critical Casimir force scaling functions of the two-dimensional Ising model at finite aspect ratios
NASA Astrophysics Data System (ADS)
Hobrecht, Hendrik; Hucht, Alfred
2017-02-01
We present a systematic method to calculate the universal scaling functions for the critical Casimir force and the corresponding potential of the two-dimensional Ising model with various boundary conditions. To this end, we start with the dimer representation of the corresponding partition function Z on an L × M square lattice, wrapped around a torus with aspect ratio ρ = L/M. By assuming periodic boundary conditions and translational invariance in at least one direction, we systematically reduce the problem to a 2 × 2 transfer matrix representation. For the torus we first reproduce the results by Kaufman and then give a detailed calculation of the scaling functions. Afterwards we present the calculation for the cylinder with open boundary conditions. All scaling functions are given in the form of combinations of infinite products and integrals. Our results reproduce the known scaling functions in the thin-film limit ρ → 0. Additionally, for the cylinder at criticality our results confirm the predictions from conformal field theory.
NASA Astrophysics Data System (ADS)
O'Donnell, F. C.; Flatley, W. T.; Masek Lopez, S.; Fulé, P. Z.; Springer, A. E.
2017-12-01
Climate change and fire suppression are interacting to reduce forest health, drive high-intensity wildfires, and potentially reduce water quantity and quality in high-elevation forests of the southwestern US. Forest restoration, including thinning and prescribed fire, is a management approach that reduces fire risk. It may also improve forest health by increasing soil moisture through the combined effects of increased snow pack and reduced evapotranspiration (ET), though the relative importance of these mechanisms is unknown. It is also unclear how small-scale changes in the hydrologic cycle will scale up to influence watershed dynamics. We conducted field and modeling studies to investigate these issues. We measured snow depth, snow water equivalent (SWE), and soil moisture at co-located points in paired restoration-control plots near Flagstaff, AZ. Soil moisture was consistently higher in restored plots across all seasons. Snow depth and SWE were significantly higher in restored plots immediately after large snow events, with no difference one week after snowfall, suggesting that restoration leads to both increased accumulation and sublimation. At the point scale, there was a small (ρ = 0.28) but significant correlation between fall-to-spring soil moisture increase and peak SWE during the winter. Consistent with previous studies, soil drying due to ET was more rapid in recently restored sites than controls, but there was no difference 10 years after restoration. In addition to the small role played by snow and ET, we also observed more rapid soil moisture loss in the 1-2 days following rain or rapid snowmelt in control than in restoration plots. We hypothesize that this is due to a loss of macropores when woody plants are replaced by herbaceous vegetation and warrants further study.
To investigate watershed-scale dynamics, we combined spatially-explicit vegetation and fire modeling with statistical water and sediment yield models for a large forested landscape on the Kaibab Plateau, AZ. Our results predicted that climate-induced vegetation changes will result in annual runoff declines of 2%-10% in the next century, but that restoration reversed these declines. We also predict that restoration treatments will protect water quality by reducing the incidence of high severity fire and the associated erosion.
Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Purba, Victor; Jafarpour, Saber
Next-generation power networks will contain large numbers of grid-connected inverters satisfying a significant fraction of system load. Since each inverter model has a relatively large number of dynamic states, it is impractical to analyze complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, an LCL filter at the point of common coupling, and a control architecture that includes a current controller, a power controller, and a phase-locked loop (PLL) for grid synchronization. We outline a structure-preserving reduced-order inverter model with lumped parameters for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, a current controller, and a PLL. We show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit, and this equivalent model has the same number of dynamical states as any individual inverter in the system. Numerical simulations validate the reduced-order model.
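The aggregation idea can be illustrated with the filter alone: when N identical inverters are paralleled, inductive branches combine as L/N and capacitors add, so the aggregate keeps the LCL structure with rescaled parameters. A sketch with hypothetical component values (not taken from the paper):

```python
import numpy as np

def aggregate_lcl(n, L1, C, L2):
    """Lumped LCL filter for n identical parallel inverters.

    Parallel inductive branches combine as L/n; parallel capacitors add.
    The aggregate retains the LCL structure (hence 'structure-preserving').
    """
    return L1 / n, C * n, L2 / n

# Check: the aggregate's admittance equals n times one inverter's admittance.
w = 2 * np.pi * 60.0               # 60 Hz grid frequency
L1, C, L2 = 2e-3, 10e-6, 1e-3      # hypothetical per-inverter filter values
n = 5
La, Ca, Lb = aggregate_lcl(n, L1, C, L2)
y_one = 1.0 / (1j * w * (L1 + L2))   # series-branch admittance (shunt C ignored for brevity)
y_agg = 1.0 / (1j * w * (La + Lb))
```

Because every parameter scales with the rating, the aggregate behaves as a single inverter of n times the power, with no growth in state dimension.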
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
The current reduced-order thermal model for cryogenic propellant tanks is based on correlations built for flat plates collected in the 1950s. The use of these correlations suffers from: inaccurate geometry representation; inaccurate gravity orientation; ambiguous length scale; and lack of detailed validation. The work presented under this task uses the first-principles based Computational Fluid Dynamics (CFD) technique to compute heat transfer from the tank wall to the cryogenic fluids, and extracts and correlates the equivalent heat transfer coefficient to support the reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present predictions and experimental data has been found for flows in laminar as well as turbulent regimes. The convective heat transfer between the tank wall and cryogenic propellant, and that between the tank wall and ullage gas, were then simulated. The results showed that commonly used heat transfer correlations for either a vertical or horizontal plate over-predict the heat transfer rate for the cryogenic tank, in some cases by as much as one order of magnitude. A characteristic length scale has been defined that can correlate all heat transfer coefficients for different fill levels into a single curve. This curve can be used for the reduced-order heat transfer model analysis.
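For reference, the flat-plate correlations being critiqued take the form Nu = C Ra^n. A sketch of the classical laminar vertical-plate version, Nu = 0.59 Ra^(1/4), with illustrative air-like properties rather than cryogenic-tank data:

```python
def h_vertical_plate(Ra, k, L):
    """Heat transfer coefficient from the classical laminar vertical-plate
    natural-convection correlation Nu = 0.59 * Ra**0.25
    (valid roughly for 1e4 < Ra < 1e9).

    Ra: Rayleigh number based on plate height L
    k:  fluid thermal conductivity (W/m-K)
    L:  characteristic length (m) -- the ambiguity the abstract points out
    """
    Nu = 0.59 * Ra ** 0.25
    return Nu * k / L

# Illustrative values only (not data from the study):
h = h_vertical_plate(Ra=1e8, k=0.026, L=0.5)   # air-like fluid, 0.5 m plate
```

Note how directly h depends on the chosen L; the abstract's point is that for a tank no single flat-plate length scale is obviously correct, motivating the CFD-derived characteristic length.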
NASA Astrophysics Data System (ADS)
Mclaughlin, D. L.; Jones, C. N.; Evenson, G. R.; Golden, H. E.; Lane, C.; Alexander, L. C.; Lang, M.
2017-12-01
Combined geospatial and modeling approaches are required to fully enumerate wetland hydrologic connectivity and downstream effects. Here, we utilized both geospatial analysis and hydrologic modeling to explore drivers and consequences of modified surface water connectivity in the Delmarva Peninsula, with particular focus on increased connectivity via pervasive wetland ditching. Our geospatial analysis quantified both historical and contemporary wetland storage capacity across the region, and suggests that over 70% of historical storage capacity has been lost due to this ditching. Building upon this analysis, we applied a catchment-scale model to simulate implications of reduced storage capacity on catchment-scale hydrology. In short, increased connectivity (and concomitantly reduced wetland water storage capacity) decreases catchment inundation extent and spatial heterogeneity, shortens cumulative residence times, and increases downstream flow variation with evident effects on peak and baseflow dynamics. As such, alterations in connectivity have implications for hydrologically mediated functions in catchments (e.g., nutrient removal) and downstream systems (e.g., maintenance of flow for aquatic habitat). Our work elucidates such consequences in Delmarva Peninsula while also providing new tools for broad application to target wetland restoration and conservation. Views expressed are those of the authors and do not necessarily reflect policies of the US EPA or US FWS.
CFD Extraction of Heat Transfer Coefficient in Cryogenic Propellant Tanks
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
The current reduced-order thermal model for cryogenic propellant tanks is based on correlations built for flat plates collected in the 1950s. The use of these correlations suffers from inaccurate geometry representation, inaccurate gravity orientation, ambiguous length scale, and lack of detailed validation. This study uses a first-principles based CFD methodology to compute heat transfer from the tank wall to the cryogenic fluids and extracts and correlates the equivalent heat transfer coefficient to support the reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present predictions and experimental data has been found for flows in laminar as well as turbulent regimes. The convective heat transfer between the tank wall and cryogenic propellant, and that between the tank wall and ullage gas, were then simulated. The results showed that the commonly used heat transfer correlations for either a vertical or horizontal plate over-predict the heat transfer rate for the cryogenic tank, in some cases by as much as one order of magnitude. A characteristic length scale has been defined that can correlate all heat transfer coefficients for different fill levels into a single curve. This curve can be used for the reduced-order heat transfer model analysis.
Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng
2018-05-03
Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is complex because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R² = 0.82 and a root mean square error (RMSE) of 4.6 μg/m³. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R² increasing by 6%. The performance of LUR/BME is better than that of OK/BME. The LUR/BME model is the most accurate fine-spatial-scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.
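The hybrid idea, regression on land-use covariates plus spatial interpolation of the residuals, can be sketched with ordinary least squares and inverse-distance weighting standing in for the BME step (the data, covariates, and monitor locations below are synthetic, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 1, n)])  # land-use covariates
coords = rng.uniform(0, 10, (n, 2))                                            # monitor locations
y = X @ np.array([40.0, 8.0, -5.0]) + rng.normal(0, 2, n)                      # synthetic PM2.5 values

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # LUR step: regression on covariates
resid = y - X @ beta                            # space-time residuals

def idw(pt, coords, values, p=2):
    """Inverse-distance-weighted residual at pt (a crude stand-in for BME)."""
    d = np.linalg.norm(coords - pt, axis=1)
    if np.any(d < 1e-12):
        return values[np.argmin(d)]
    w = 1.0 / d ** p
    return np.sum(w * values) / np.sum(w)

# Hybrid prediction at a new site = regression part + interpolated residual
x_new, pt_new = np.array([1.0, 0.5, 0.5]), np.array([5.0, 5.0])
pred = x_new @ beta + idw(pt_new, coords, resid)
```

The BME step proper additionally carries "soft" (uncertain) data through the interpolation, which is what the abstract credits for the further error reduction.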
Isotropic model for cluster growth on a regular lattice
NASA Astrophysics Data System (ADS)
Yates, Christian A.; Baker, Ruth E.
2013-08-01
There exists a plethora of mathematical models for cluster growth and/or aggregation on regular lattices. Almost all suffer from inherent anisotropy caused by the regular lattice upon which they are grown. We analyze the little-known model for stochastic cluster growth on a regular lattice first introduced by Ferreira Jr. and Alves [J. Stat. Mech. (2006) P11007], which produces circular clusters with no discernible anisotropy. We demonstrate that even in the noise-reduced limit the clusters remain circular. We adapt the model by introducing a specific rearrangement algorithm so that, rather than adding elements to the cluster from the outside (corresponding to apical growth), our model uses mitosis-like cell splitting events to increase the cluster size. We analyze the surface scaling properties of our model and compare it to the behavior of more traditional models. In “1+1” dimensions we discover and explore a new, nonmonotonic surface thickness scaling relationship which differs significantly from the Family-Vicsek scaling relationship. This suggests that, for models whose clusters do not grow through particle additions which are solely dependent on surface considerations, the traditional classification into “universality classes” may not be appropriate.
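The Family-Vicsek growth regime referenced above (interface width W growing as t^β) can be demonstrated with the simplest deposition model, random deposition, for which β = 1/2 exactly. This is an illustrative baseline, not the Ferreira-Alves cluster model:

```python
import numpy as np

def surface_width(L, deposits, rng):
    """Interface width W after randomly depositing `deposits` particles
    per site on a 1-D lattice of L sites (random deposition model)."""
    cols = rng.integers(0, L, size=deposits * L)   # each particle picks a column
    h = np.bincount(cols, minlength=L)             # column heights
    return h.std()                                 # width = std of heights

rng = np.random.default_rng(1)
L = 10000
w1 = surface_width(L, 100, rng)
w2 = surface_width(L, 400, rng)    # 4x the deposition time
ratio = w2 / w1                    # growth regime: W ~ t**0.5, so expect ~2
```

Random deposition has no lateral correlation, so its width never saturates; models with surface relaxation saturate at W ~ L^α, giving the full Family-Vicsek collapse W(L, t) = L^α f(t/L^z) against which the paper's nonmonotonic relationship is contrasted.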
Detached-Eddy Simulation Based on the v2-f Model
NASA Technical Reports Server (NTRS)
Jee, Sol Keun; Shariff, Karim
2012-01-01
Detached eddy simulation (DES) based on the v2-f RANS model is proposed. This RANS model incorporates the anisotropy of near-wall turbulence, which is absent in other RANS models commonly used in the DES community. In LES mode, the proposed DES formulation reduces to a transport equation for the subgrid-scale kinetic energy. The constant C_DES required by this model was calibrated by simulating isotropic turbulence. In the final paper, DES simulations of canonical separated flows will be presented.
Path model of antenatal stress and depressive symptoms among Chinese primipara in late pregnancy.
Li, Yingtao; Zeng, Yingchun; Zhu, Wei; Cui, Ying; Li, Jie
2016-07-21
Antenatal maternal mental health problems have numerous consequences for the well-being of both mother and child. This study aimed to test and construct a pertinent model of antenatal depressive symptoms within the conceptual framework of a stress process model. This study utilized a cross-sectional design. Participants were adult women (18 years or older) having a healthy pregnancy, in their third trimester (mean gestation 34.71 weeks). Depressive and anxiety symptoms were measured by Zung's Self-rating Depressive and Anxiety Scale; stress was measured by the Pregnancy-related Pressure Scale; and social support and coping strategies were measured by the Social Support Rating Scale and Simplified Coping Style Questionnaire, respectively. Path analysis was applied to examine the hypothesized causal paths between study variables. A total of 292 subjects were enrolled. The final testing model showed good fit, with normed χ² = 32.317, p = 0.061, CFI = 0.961, TLI = 0.917, IFI = 0.964, NFI = 0.900, RMSEA = 0.042. This path model supported the proposed model within the theoretical framework of the stress process model. Pregnancy-related stress, financial strain, and active coping have both direct and indirect effects on depressive symptoms. Psychological preparedness for delivery, social support, and anxiety levels have direct effects on antenatal depressive symptoms. Good preparedness for delivery could reduce depressive symptoms, while higher levels of anxiety could significantly increase depressive symptoms. Additionally, there were indirect effects of miscarriage history, irregular menstruation, partner relationship, and passive coping on depressive symptoms. The empirical support from this study has enriched theories on the determinants of depressive symptoms among Chinese primipara, and could facilitate the formulation of appropriate interventions for reducing antenatal depressive symptoms and enhancing the mental health of pregnant women.
Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.
2010-11-30
The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions is necessary to reduce or eliminate human impacts to dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulations, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Jackson, Karen E.; Polanco, Michael A.; Littell, Justin D.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber (DEA) under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. The presence of this energy absorbing device reduced the peak impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article developed in parallel with the test program. In preparation for the full-scale crash test, a series of sub-scale and MD-500 mass simulator tests were conducted to evaluate the impact performance of various components and subsystems, including new crush tubes and the DEA blocks. Parameters defined for the system-integrated finite element model were determined from these tests. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the full-scale crash test without the DEA. This combination of heuristic and quantitative methods identified modeling deficiencies, evaluated parameter importance, and proposed required model changes. The multidimensional calibration techniques presented here are particularly effective in identifying model adequacy.
Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and copilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. One lesson learned was that this approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and pretest predictions. Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, potentially reducing overall development costs.
Bridging the scales in a eulerian air quality model to assess megacity export of pollution
NASA Astrophysics Data System (ADS)
Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.
2013-08-01
In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large- and small-scale models. However, those nested configurations cannot give account of the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting, or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists in the introduction of local zooms in a single chemistry-transport simulation. It allows bridging online the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for apprehending the hot topic of megacities within their continental environment.
Scaling relations for a functionally two-dimensional plant: Chamaesyce setiloba (Euphorbiaceae).
Koontz, Terri L; Petroff, Alexander; West, Geoffrey B; Brown, James H
2009-05-01
Many characteristics of plants and animals scale with body size as described by allometric equations of the form Y = βM^α, where Y is an attribute of the organism, β is a coefficient that varies with attribute, M is a measure of organism size, and α is another constant, the scaling exponent. In current models, the frequently observed quarter-power scaling exponents are hypothesized to be due to fractal-like structures. However, not all plants or animals conform to the assumptions of these models. Therefore, they might be expected to have different scaling relations. We studied one such plant, Chamaesyce setiloba, a prostrate annual herb that grows to functionally fill a two-dimensional space. Number of leaves scaled slightly less than isometrically with total aboveground plant mass (α ≈ 0.9) and substantially less than isometrically with total dry stem mass (α = 0.82), showing reduced allocation to leaf as opposed to stem tissue with increasing plant size. Additionally, scalings of the lengths and radii of parent and daughter branches differed from those predicted for three-dimensional trees and shrubs. Unlike plants with typical three-dimensional architectures, C. setiloba has distinctive scaling relations associated with its particular prostrate herbaceous growth form.
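Allometric exponents such as the α = 0.82 reported above are conventionally estimated by linear regression on log-transformed data, since log Y = log β + α log M. A sketch on synthetic measurements (the exponent is taken from the abstract; the data and β value are not):

```python
import numpy as np

# Recover allometric parameters from Y = beta * M**alpha via log-log regression.
rng = np.random.default_rng(2)
alpha_true, beta_true = 0.82, 3.5                 # hypothetical "true" values
M = rng.uniform(0.1, 10.0, 200)                   # plant sizes (arbitrary units)
Y = beta_true * M ** alpha_true * np.exp(rng.normal(0, 0.05, 200))  # multiplicative noise

# Fit log Y = alpha * log M + log beta
slope, intercept = np.polyfit(np.log(M), np.log(Y), 1)
alpha_hat, beta_hat = slope, np.exp(intercept)
```

Multiplicative (lognormal) error is the usual assumption here, which is why the regression is done in log space rather than by fitting the power law directly.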
Weis, Karen L; Lederman, Regina P; Walker, Katherine C; Chan, Wenyaw
To determine the efficacy of the Mentors Offering Maternal Support (MOMS) program to reduce pregnancy-specific anxiety and depression and build self-esteem and resilience in military women. Randomized controlled trial with repeated measures. Large military community in Texas. Pregnant women (N = 246) in a military sample defined as active duty or spouse of military personnel. Participants were randomized in the first trimester to the MOMS program or normal prenatal care. Participants attended eight 1-hour sessions every other week during the first, second, and third trimesters of pregnancy. Pregnancy-specific anxiety, depression, self-esteem, and resilience were measured in each trimester. Linear mixed models were used to compare the two-group difference in slope for prenatal anxiety, depression, self-esteem, and resilience. The Prenatal Self-Evaluation Questionnaire was used to measure perinatal anxiety. Rates of prenatal anxiety on the Identification With a Motherhood Role (p = .049) scale and the Preparation for Labor (p = .017) scale were significantly reduced for participants in MOMS. Nulliparous participants showed significantly lower anxiety on the Acceptance of Pregnancy scale and significantly greater anxiety on the Preparation for Labor scale. Single participants had significantly greater anxiety on the Well-Being of Self and Baby in Labor scale, and participants with deployed husbands had significantly greater anxiety on the Identification With a Motherhood Role scale. Participation in the MOMS program reduced pregnancy-specific prenatal anxiety for the dimensions of Identification With a Motherhood Role and Preparation for Labor. Both dimensions of anxiety were previously found to be significantly associated with preterm birth and low birth weight. Military leaders have recognized the urgent need to support military families. Copyright © 2017 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. 
All rights reserved.
Olson, Mitchell R; Sale, Tom C
2015-01-01
Soil remediation is often inhibited by subsurface heterogeneity, which constrains contaminant/reagent contact. Use of soil mixing techniques for reagent delivery provides a means to overcome contaminant/reagent contact limitations. Furthermore, soil mixing reduces the permeability of treated soils, thus extending the time for reactions to proceed. This paper describes research conducted to evaluate implications of soil mixing on remediation of non-aqueous phase liquid (NAPL) source zones. The research consisted of column studies and subsequent modeling of field-scale systems. For the column studies, clean influent water was flushed through columns containing homogenized soils, granular zero valent iron (ZVI), and trichloroethene (TCE) NAPL. Within the columns, NAPL depletion occurred due to dissolution, followed by either column-effluent discharge or ZVI-mediated degradation. Complete removal of TCE NAPL from the columns occurred in 6-8 pore volumes of flow. However, most of the TCE (>96%) was discharged in the column effluent; less than 4% of the TCE was degraded. The low fraction of TCE degraded is attributed to the short hydraulic residence time (<4 days) in the columns. Subsequently, modeling was conducted to scale up the column results. By scaling up to field-relevant system sizes (>10 m) and reducing permeability by one or more orders of magnitude, the residence time could be greatly extended, potentially for periods of years to decades. Model output indicates that the fraction of TCE degraded can be increased to >99.9%, given typical post-mixing soil permeability values. These results suggest that remediation performance can be greatly enhanced by combining contaminant degradation with an extended residence time. Copyright © 2015 Elsevier B.V. All rights reserved.
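The residence-time argument can be made quantitative with a Darcy travel time τ = Ln/(Ki) and first-order decay of the dissolved contaminant. The decay rate, porosity, gradient, and conductivity values below are illustrative assumptions, not quantities measured in the column study:

```python
import math

def fraction_degraded(L, K, n=0.3, i=0.01, half_life_days=30.0):
    """Fraction of dissolved TCE degraded while crossing a treated zone
    of length L (m) with hydraulic conductivity K (m/day), porosity n,
    and gradient i, assuming first-order decay (illustrative rate)."""
    tau = L * n / (K * i)                 # advective residence time, days
    lam = math.log(2) / half_life_days    # first-order rate constant
    return 1.0 - math.exp(-lam * tau)

f_unmixed = fraction_degraded(L=10.0, K=10.0)   # permeable soil: tau = 30 days
f_mixed = fraction_degraded(L=10.0, K=0.1)      # mixing cuts K ~100x: tau = 3000 days
```

With the residence time stretched to years, essentially all of the mass flux is degraded in transit, which is the mechanism behind the >99.9% model result in the abstract.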
Developing and validating a measure of community capacity: Why volunteers make the best neighbours.
Lovell, Sarah A; Gray, Andrew R; Boucher, Sara E
2015-05-01
Social support and community connectedness are key determinants of both mental and physical wellbeing. While social capital has been used to indicate the instrumental value of these social relationships, its broad and often competing definitions have hindered practical applications of the concept. Within the health promotion field, the related concept of community capacity, the ability of a group to identify and act on problems, has gained prominence (Labonte and Laverack, 2001). The goal of this study was to develop and validate a scale measuring community capacity, including exploring its associations with socio-demographic and civic behaviour variables among the residents of four small (populations 1500-2000) high-deprivation towns in southern New Zealand. The full (41-item) scale was found to have strong internal consistency (Cronbach's alpha = 0.89), but a process of reducing the scale resulted in a shorter 26-item instrument with similar internal consistency (alpha = 0.88). Subscales of the reduced instrument displayed at least marginally acceptable levels of internal consistency (0.62-0.77). Using linear regression models, differences in community capacity scores were found for selected criteria, namely time spent living in the location, local voting, and volunteering behaviour, although the first of these was no longer statistically significant in an adjusted model with potential confounders including age, sex, ethnicity, education, marital status, employment, household income, and religious beliefs. This provides support for the scale's concurrent validity. Differences were present between the four towns in unadjusted models and remained statistically significant in adjusted models (including variables mentioned above) suggesting, crucially, that even when such factors are accounted for, perceptions of one's community may still depend on place. Copyright © 2014. Published by Elsevier Ltd.
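Cronbach's alpha, the internal-consistency statistic reported above, is computed from the item variances and the variance of the total score. A sketch on synthetic 26-item data (not the study's survey responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 26-item scale: a shared factor plus item noise gives
# high internal consistency, comparable to the 0.88 reported.
rng = np.random.default_rng(3)
common = rng.normal(0, 1, (300, 1))                 # shared latent factor
items = common + rng.normal(0, 1, (300, 26))        # 300 respondents, 26 items
alpha = cronbach_alpha(items)
```

Alpha rises with both the number of items and the average inter-item correlation, which is why dropping 15 items here cost almost no internal consistency: the retained items were the more strongly intercorrelated ones.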
Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.
2010-01-01
Pilot points for parameter estimation were creatively used to address heterogeneity at both the well field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers, with one set of inner pilot points (totaling 158) having high spatial density to represent hydraulic conductivity at the site, while a second set of outer points (totaling 36) of lower spatial density was used to represent hydraulic conductivity further from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from the outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale, where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.
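The dual inner-outer parameterization can be sketched numerically. The toy example below uses hypothetical coordinates and values, with simple inverse-distance weighting standing in for whatever interpolator the actual model used; the point is only to show inner pilot points acting as log-transformed multipliers on a field interpolated from the outer points:

```python
import numpy as np

def idw_interpolate(pts, vals, grid, power=2.0):
    """Inverse-distance-weighted interpolation of point values onto grid nodes."""
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero exactly at a pilot point
    w = 1.0 / d**power
    return (w * vals).sum(axis=1) / w.sum(axis=1)

# Outer points: coarse log10(K) field; inner points: log10 multipliers near the site
outer_xy   = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
outer_logK = np.array([1.0, 1.0, 2.0, 2.0])   # hypothetical log10 conductivities
inner_xy   = np.array([[4., 4.], [6., 6.]])
inner_mult = np.array([0.5, -0.5])            # hypothetical log10 multipliers

grid = np.array([[5., 5.], [1., 9.]])         # two model nodes for illustration
logK = idw_interpolate(outer_xy, outer_logK, grid)
logK += idw_interpolate(inner_xy, inner_mult, grid)  # adding logs = multiplying K
K = 10**logK
```

Because the multipliers act in log space, the inner points refine the field smoothly without discontinuities at the boundary between the two point sets.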
AgMIP Coordinated Global and Regional Assessments for 1.5°C and 2.0°C
NASA Astrophysics Data System (ADS)
Rosenzweig, C.
2017-12-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) has developed novel methods for Coordinated Global and Regional Assessments (CGRA) of agriculture and food security in a changing world. The present study performs a proof-of-concept of the CGRA to demonstrate advantages and challenges of the framework. This effort responds to the request by the UNFCCC for the implications of limiting global temperature increases to 1.5°C and 2.0°C above pre-industrial conditions. The protocols for the 1.5°C/2.0°C assessment establish explicit and testable linkages across disciplines and scales, connecting outputs and inputs from the Shared Socio-economic Pathways (SSPs), Representative Agricultural Pathways (RAPs), HAPPI and CMIP5 ensemble scenarios, global gridded crop models, global agricultural economic models, site-based crop models, and within-country regional economic models. CGRA results show mixed areas of positive and negative simulated yield changes at the global scale, with declines in some breadbasket regions leading to overall declines in productivity at both 1.5°C and 2.0°C. These projected global yield changes resulted in increases in prices of major commodities in a global economic model. Simulations for 1.5°C and 2.0°C using site-based crop models had mixed results depending on region and crop, but for the most part with more negative effects on productivity at 2.0°C than at 1.5°C. In conjunction with price changes from the global economic models, these productivity declines generally resulted in small positive effects on regional farm livelihoods, suggesting that farming systems should continue to be viable under high-mitigation scenarios.
CGRA protocols focus on how mitigation actions and effects differ across scales, with main mechanisms studied in the integrated assessment models being policies and technologies that reduce direct non-CO2 emissions from agriculture, reduce CO2 emissions from land use change and forest sink enhancement, and utilize biomass for energy production. At regional scales, increasing soil organic carbon (SOC) is of active interest.
Tae, Young Sook; Heitkemper, Margaret; Kim, Mi Yea
2012-01-01
To test a hypothetical model of depression in Korean women with breast cancer and to test the mediating effects of self-esteem and hope. Cross-sectional design. Participants were recruited from three general hospitals and one cancer hospital in Busan, South Korea. 214 Korean women diagnosed with breast cancer (stages I-III). All participants completed questionnaires (e.g., Zung Self-Rating Depression scale, Herth Hope Scale, Rosenberg Self-Esteem Scale, Health Self-Rating Scale in Health and Activity survey, Kang's Family Support Scale). Based on the literature, Mplus, version 3.0, was used to determine the best depression model with path analysis. Depression, self-esteem, hope, perceived health status, religious beliefs, family support, economic status, and fatigue. Self-esteem was directly affected by perceived health status, religious beliefs, family support, economic status, and fatigue. Hope was directly affected by family support, self-esteem, and how patients perceived their health status. Depression was directly affected by self-esteem and hope. The path analysis model explained 31% of the variance in depression in Korean women with breast cancer. A model of depression in Korean women with breast cancer was developed, and self-esteem and hope were mediating factors of depression. Self-esteem and hope must be considered when developing services to reduce depression in Korean women with breast cancer.
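The mediation logic that the path analysis formalizes (e.g., family support acting on depression through self-esteem) can be illustrated with the classic product-of-coefficients estimate of an indirect effect. The sketch below uses synthetic data and hypothetical effect sizes, not the study's measurements or its Mplus model:

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 500
support = rng.standard_normal(n)                           # e.g. family support
esteem  = 0.6 * support + 0.5 * rng.standard_normal(n)     # mediator
depress = -0.8 * esteem + 0.5 * rng.standard_normal(n)     # outcome

a = ols(support, esteem)[1]                                # support -> self-esteem
b = ols(np.column_stack([support, esteem]), depress)[2]    # esteem -> depression
indirect = a * b   # ≈ 0.6 * (-0.8) ≈ -0.48 for these synthetic effect sizes
```

A negative indirect effect here mirrors the abstract's finding: higher support raises self-esteem, which in turn lowers depression.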
NASA Astrophysics Data System (ADS)
Dipu, Sudhakar; Quaas, Johannes; Wolke, Ralf; Stoll, Jens; Mühlbauer, Andreas; Sourdeval, Odran; Salzmann, Marc; Heinold, Bernd; Tegen, Ina
2017-06-01
The regional atmospheric model Consortium for Small-scale Modeling (COSMO) coupled to the Multi-Scale Chemistry Aerosol Transport model (MUSCAT) is extended in this work to represent aerosol-cloud interactions. Previously, only one-way interactions (scavenging of aerosol and in-cloud chemistry) and aerosol-radiation interactions were included in this model. The new version allows for a microphysical aerosol effect on clouds. For this, we use the optional two-moment cloud microphysical scheme in COSMO and the online-computed aerosol information for cloud condensation nuclei concentrations (Cccn), replacing the constant Cccn profile. In the radiation scheme, we have implemented a droplet-size-dependent cloud optical depth, allowing now for aerosol-cloud-radiation interactions. To evaluate the models with satellite data, the Cloud Feedback Model Intercomparison Project Observation Simulator Package (COSP) has been implemented. A case study has been carried out to understand the effects of the modifications, where the modified modeling system is applied over the European domain with a horizontal resolution of 0.25° × 0.25°. To reduce the complexity in aerosol-cloud interactions, only warm-phase clouds are considered. We found that the online-coupled aerosol introduces significant changes for some cloud microphysical properties. The cloud effective radius shows an increase of 9.5 %, and the cloud droplet number concentration is reduced by 21.5 %.
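The droplet-size-dependent cloud optical depth mentioned above is commonly written as tau = 3·LWP / (2·rho_w·r_eff). The sketch below uses this standard warm-cloud relation, which is not necessarily the exact form implemented in the COSMO radiation scheme:

```python
RHO_W = 1000.0  # liquid water density, kg m^-3

def cloud_optical_depth(lwp_kg_m2: float, r_eff_m: float) -> float:
    """tau = 3*LWP / (2*rho_w*r_eff): standard warm-cloud optical depth relation."""
    return 3.0 * lwp_kg_m2 / (2.0 * RHO_W * r_eff_m)

# LWP of 100 g/m^2 with a 10-micron effective radius gives tau = 15
print(round(cloud_optical_depth(0.1, 10e-6), 6))  # → 15.0
```

The relation makes the aerosol-cloud-radiation coupling explicit: at fixed liquid water path, more CCN means smaller droplets (smaller r_eff) and hence optically thicker clouds.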
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Xiaojuan, E-mail: xjlian2005@gmail.com; Cartoixà, Xavier; Miranda, Enrique
2014-06-28
We depart from first-principles simulations of electron transport along paths of oxygen vacancies in HfO{sub 2} to reformulate the Quantum Point Contact (QPC) model in terms of a bundle of such vacancy paths. By doing this, the number of model parameters is reduced and a much clearer link between the microscopic structure of the conductive filament (CF) and its electrical properties can be provided. The new multi-scale QPC model is applied to two different HfO{sub 2}-based devices operated in the unipolar and bipolar resistive switching (RS) modes. Extraction of the QPC model parameters from a statistically significant number of CFs allows revealing significant structural differences in the CF of these two types of devices and RS modes.
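As a rough illustration of the bundle picture, in the linear (low-bias) regime the conductance of N identical transmitting paths simply scales the conductance quantum by the number of paths and their transmission. This sketch deliberately omits the barrier-shape parameters of the full QPC model; it only shows how the path count enters:

```python
G0 = 7.748091729e-5  # conductance quantum 2e^2/h, in siemens (CODATA value)

def bundle_conductance(n_paths: int, transmission: float) -> float:
    """Linear-regime conductance of a bundle of identical vacancy paths:
    G = N * G0 * T (a sketch of the multi-path picture, not the full QPC model)."""
    return n_paths * G0 * transmission

# Hypothetical filament: 5 paths at transmission 0.4 ≈ 155 microsiemens
g = bundle_conductance(5, 0.4)
```

In the full model, the transmission itself is bias- and barrier-dependent, which is where the extracted structural parameters of the CF enter.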
NASA Astrophysics Data System (ADS)
Woolf, Dominic; Lehmann, Johannes
2014-05-01
With CO2 emissions still tracking the upper bounds of projected emissions scenarios, it is becoming increasingly urgent to reduce net greenhouse gas (GHG) emissions, and increasingly likely that restricting future atmospheric GHG concentrations to within safe limits will require an eventual transition towards net negative GHG emissions. Few measures capable of providing negative emissions at a globally-significant scale are currently known. Two that are most often considered include carbon sequestration in biomass and soil, and biomass energy with carbon capture and storage (BECCS). In common with these two approaches, biochar also relies on the use of photosynthetically-bound carbon in biomass. But, because biomass and land are limited, it is critical that these resources are efficiently allocated between biomass/soil sequestration, bioenergy, BECCS, biochar, and other competing uses such as food, fiber and biodiversity. In many situations, biochar can offer advantages that may make it the preferred use of a limited biomass supply. These advantages include that: 1) Biochar can provide valuable benefits to agriculture by improving soil fertility and crop production, and reducing fertilizer and irrigation requirements. 2) Biochar is significantly more stable than biomass or other forms of soil carbon, thus lowering the risk of future losses compared to sequestration in biomass or soil organic carbon. 3) Gases and volatiles produced by pyrolysis can be combusted for energy (which may offset fossil fuel emissions). 4) Biochar can further lower GHG emissions by reducing nitrous oxide emissions from soil and by enhancing net primary production. Determining the optimal use of biomass requires that we are able to model not only the climate-change mitigation impact of each option, but also their economic and wider environmental impacts.
Thus, what is required is a systems modelling approach that integrates components representing soil biogeochemistry, hydrology, crop production, land use, thermochemical conversion (to both biochar and energy products), climate, economics, and also the interactions between these components. Early efforts to model the life-cycle impacts of biochar systems have typically used simple empirical estimates of the strength of various feedback mechanisms, such as the impact of biochar on crop-growth, soil GHG fluxes, and native soil organic carbon. However, an environmental management perspective demands consideration of impacts over a longer time-scale and in broader agroecological situations than can be reliably extrapolated from simple empirical relationships derived from trials and experiments of inevitably limited scope and duration. Therefore, reliable quantification of long-term and large-scale impacts demands an understanding of the fundamental underlying mechanisms. Here, a systems-modelling approach that incorporates mechanistic assumptions will be described, and used to examine how uncertainties in the biogeochemical processes which drive the biochar-plant-soil interactions (particularly those responsible for priming, crop-growth and soil GHG emissions) translate into sensitivities of large scale and long-term impacts. This approach elucidates the aspects of process-level biochar biogeochemistry most critical to determining the large-scale GHG and economic impacts, and thus provides a useful guide to future model-led research.
Acoustic Performance of Drive Rig Mufflers for Model Scale Engine Testing
NASA Technical Reports Server (NTRS)
Stephens, David B.
2013-01-01
Aircraft engine component testing at the NASA Glenn Research Center (GRC) includes acoustic testing of scale model fans and propellers in the 9- by 15-Foot Low Speed Wind Tunnel (LSWT). This testing utilizes air driven turbines to deliver power to the article being studied. These air turbines exhaust directly downstream of the model in the wind tunnel test section and have been found to produce significant unwanted noise that reduces the quality of the acoustic measurements of the engine model being tested. This report describes an acoustic test of a muffler designed to mitigate the extraneous turbine noise. The muffler was found to provide acoustic attenuation of at least 8 dB between 700 Hz and 20 kHz, which significantly improves the quality of acoustic measurements in the facility.
Modeling and Design of a Full-Scale Rotor Blade with Embedded Piezocomposite Actuators
NASA Astrophysics Data System (ADS)
Kovalovs, A.; Barkanov, E.; Ruchevskis, S.; Wesolowski, M.
2017-05-01
An optimization methodology for the design of a full-scale rotor blade with an active twist in order to enhance its ability to reduce vibrations and noise is presented. It is based on a 3D finite-element model, the planning of experiments, and the response surface technique to obtain high piezoelectric actuation forces and displacements with a minimum actuator weight and energy applied. To investigate an active twist of the helicopter rotor blade, a structural static analysis using a 3D finite-element model was carried out. Optimum results were obtained at two possible applications of macrofiber composite actuators. The torsion angle found from the finite-element simulation of helicopter rotor blades was successfully validated by its experimental values, which confirmed the modeling accuracy.
High Temperature Oxidation and Electrochemical Studies Related to Hot Corrosion
1992-05-01
sulfidation. In sulfidation, NaCl reacts with sulfur found in the fuel to form Na2SO4. The sodium sulfate reacts with the protective oxide scale resulting...fluxing or acid-base reaction model. In sulfidation, Bornstein explains that the oxide scales are insoluble in stoichiometric sodium sulfate, but due to...oxygen partial pressures an electron hopping mechanism dominates. Reduced cerium ions and Ce3+-oxygen vacancy associates generate these conducting
Initial conditions and modeling for simulations of shock driven turbulent material mixing
Grinstein, Fernando F.
2016-11-17
Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond complex multi-scale resolution issues of shocks and variable-density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock-interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS approaches, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.
The design of dapog rice seeder model for laboratory scale
NASA Astrophysics Data System (ADS)
Purba, UI; Rizaldi, T.; Sumono; Sigalingging, R.
2018-02-01
The dapog system is a method of seeding rice in a special nursery tray. Rice seeding with the dapog system can produce seedlings in the form of higher-quality, uniform seedling rolls. This study aims to reduce the cost of building a large-scale apparatus by designing a small-scale model that can also be used for instruction in the laboratory. The parameters observed were the uniformity of soil, seed, and fertilizer distribution; the losses of soil, seed, and fertilizer; the effective capacity of the apparatus; and the power requirement. The results showed high uniformity of soil, seed, and fertilizer: 92.8%, 1-3 seeds/cm2, and 82%, respectively. The scattered material for soil, seed, and fertilizer was 6.23%, 2.7%, and 2.23%, respectively. The effective capacity of the apparatus was 360 boxes/hour, with a power requirement of 237.5 kWh.
NASA Astrophysics Data System (ADS)
Duran, P.; Holloway, T.; Brinkman, G.; Denholm, P.; Littlefield, C. M.
2011-12-01
Solar photovoltaics (PV) are an attractive technology because they can be locally deployed and tend to yield high production during periods of peak electric demand. These characteristics can reduce the need for conventional large-scale electricity generation, thereby reducing emissions of criteria air pollutants (CAPs) and improving ambient air quality with regard to such pollutants as nitrogen oxides, sulfur oxides and fine particulates. Such effects depend on the local climate, time-of-day emissions, available solar resources, the structure of the electric grid, and existing electricity production among other factors. This study examines the air quality impacts of distributed PV across the United States Eastern Interconnection. In order to accurately model the air quality impact of distributed PV in space and time, we used the National Renewable Energy Lab's (NREL) Regional Energy Deployment System (ReEDS) model to form three unique PV penetration scenarios in which new PV construction is distributed spatially based upon economic drivers and natural solar resources. Those scenarios are 2006 Eastern Interconnection business as usual, 10% PV penetration, and 20% PV penetration. With the GridView (ABB, Inc) dispatch model, we used historical load data from 2006 to model electricity production and distribution for each of the three scenarios. Solar PV electric output was estimated using historical weather data from 2006. To bridge the gap between dispatch and air quality modeling, we will create emission profiles for electricity generating units (EGUs) in the Eastern Interconnection from historical Continuous Emissions Monitoring System (CEMS) data. Via those emissions profiles, we will create hourly emission data for EGUs in the Eastern Interconnect for each scenario during 2006. Those data will be incorporated in the Community Multi-scale Air Quality (CMAQ) model using the Sparse Matrix Operator Kernel Emissions (SMOKE) model. 
Initial results indicate that PV penetration significantly reduces conventional peak electricity production and that, due to reduced emissions during periods of extremely active photochemistry, air quality could see benefits.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2017-08-05
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing dataset from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
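Why averaging subcolumn runs beats running once on averaged forcing comes down to nonlinearity. A toy column "model" with a precipitation threshold (purely hypothetical numbers, not SCAM5 physics) makes the point:

```python
import numpy as np

def precip(forcing):
    """Toy nonlinear column response: rain only above a moisture-forcing threshold."""
    return np.maximum(forcing - 1.0, 0.0)

# Strong forcing in one subcolumn only (e.g., a front clipping part of the domain)
subcolumn_forcing = np.array([0.0, 0.5, 2.5])

mean_first  = precip(subcolumn_forcing.mean())    # domain-mean forcing: misses the event
run_then_avg = precip(subcolumn_forcing).mean()   # per-subcolumn runs, then average

print(mean_first, run_then_avg)  # → 0.0 0.5
```

The domain-mean forcing falls below the threshold and produces no rain at all, while the subcolumn ensemble recovers the domain-average precipitation, which is the essence of the improvement the abstract reports.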
Quantifying the relative contribution of climate and human impacts on streamflow at seasonal scale
NASA Astrophysics Data System (ADS)
Xin, Z.; Zhang, L.; Li, Y.; Zhang, C.
2017-12-01
Both climate change and human activities have induced changes in hydrology. Quantifying their impacts on streamflow is a challenge, especially at the seasonal scale, due to the seasonality of both climate and human impacts, i.e., water use for irrigation and water storage and release due to reservoir operation. In this study, the decomposition method based on the Budyko hypothesis is extended to the seasonal scale and used to quantify the climate and human impacts on annual and seasonal streamflow changes. The results are further compared with and verified against those simulated by the hydrological abcd model. Data are split into two periods (1953-1974 and 1975-2005) to quantify the change. Three seasons (wet, dry, and irrigation) are defined by introducing the monthly aridity index. In general, the results showed satisfactory agreement between the Budyko decomposition method and the abcd model. Both climate change and human activities were found to induce a decrease in streamflow at the annual scale, with 67% of the change contributed by human activities. At the seasonal scale, the human-induced contribution to the reduced streamflow was 64% and 73% for the dry and wet seasons, respectively; in the irrigation season, the impact of human activities on reducing streamflow was more pronounced (180%), since climate contributes to increased streamflow there. In addition, the quantification results were analyzed for each month in the wet season to reveal the effects of intense precipitation and reservoir operation rules during the flood season.
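A minimal sketch of a Budyko-based decomposition, using Fu's one-parameter curve and hypothetical two-period data (the paper's actual catchment values are not reproduced here): the parameter w is calibrated on period 1, the curve then predicts the climate-only streamflow change, and the human contribution is the remainder:

```python
def fu_runoff(P, PET, w):
    """Fu's Budyko curve: mean annual runoff Q = P - E."""
    E = P * (1 + PET / P - (1 + (PET / P)**w)**(1 / w))
    return P - E

P1, PET1, Q1_obs = 600.0, 900.0, 180.0   # hypothetical pre-change values (mm/yr)
P2, PET2, Q2_obs = 560.0, 950.0, 120.0   # hypothetical post-change values (mm/yr)

# Calibrate w so the curve reproduces the period-1 observed runoff (bisection)
lo, hi = 1.01, 5.0
for _ in range(60):
    w = 0.5 * (lo + hi)
    if fu_runoff(P1, PET1, w) > Q1_obs:
        lo = w   # larger w -> more evaporation -> less runoff
    else:
        hi = w

dQ_total   = Q2_obs - Q1_obs                               # observed change
dQ_climate = fu_runoff(P2, PET2, w) - fu_runoff(P1, PET1, w)  # climate-only change
dQ_human   = dQ_total - dQ_climate                         # residual: human impact
human_share = dQ_human / dQ_total
```

The seasonal extension in the abstract applies the same bookkeeping season by season, which is how a human share above 100% (as in the irrigation season) can arise when climate pushes streamflow the other way.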
Field Scale Optimization for Long-Term Sustainability of Best Management Practices in Watersheds
NASA Astrophysics Data System (ADS)
Samuels, A.; Babbar-Sebens, M.
2012-12-01
Agricultural and urban land use changes have led to disruption of natural hydrologic processes and impairment of streams and rivers. Multiple previous studies have evaluated Best Management Practices (BMPs) as means for restoring existing hydrologic conditions and reducing impairment of water resources. However, planning of these practices has relied on watershed-scale hydrologic models for identifying locations and types of practices at scales much coarser than the actual field scale, where landowners have to plan, design, and implement the practices. Field-scale hydrologic modeling provides a means for identifying relationships between BMP type, spatial location, and the interaction between BMPs at a finer farm/field scale that is usually more relevant to the decision maker (i.e., the landowner). This study focuses on the development of a simulation-optimization approach for field-scale planning of BMPs in the School Branch stream system of Eagle Creek Watershed, Indiana, USA. The Agricultural Policy Environmental Extender (APEX) tool is used as the field-scale hydrologic model, and a multi-objective optimization algorithm is used to search for optimal alternatives. Multiple climate scenarios downscaled to the watershed scale are used to test the long-term performance of these alternatives under extreme weather conditions. The effectiveness of these BMPs under multiple weather conditions is included within the simulation-optimization approach as a criterion to assist landowners in identifying sustainable designs of practices. The results from these scenarios will further enable efficient BMP planning for current and future usage.
Modeling the effects of LID practices on streams health at watershed scale
NASA Astrophysics Data System (ADS)
Shannak, S.; Jaber, F. H.
2013-12-01
Increasing impervious cover due to urbanization will lead to an increase in runoff volumes and, eventually, increased flooding. Stream channels adjust by widening and eroding the stream bank, which impacts downstream property negatively (Chin and Gregory, 2001). Also, urban runoff drains into sediment bank areas known as riparian zones and constricts stream channels (Walsh, 2009). Both physical and chemical factors associated with urbanization, such as high peak flows and low water quality, further stress aquatic life and contribute to the overall biological condition of urban streams (Maxted et al., 1995). While LID practices have been studied in the literature for stormwater management, they have not been studied with respect to reducing potential impacts on stream health. To evaluate the performance and effectiveness of LID practices at a watershed scale, a sustainable detention pond, bioretention, and permeable pavement will be modeled. These measures affect storm peak flows and base flow patterns over long periods, and there is a need to characterize their effect on stream bank and bed erosion and on aquatic life. These measures will create a linkage between urban watershed development and stream conditions, specifically biological health. The first phase of this study is to design and construct LID practices at the Texas A&M AgriLife Research and Extension Center-Dallas, TX, to collect field data about the performance of these practices on a smaller scale. The second phase consists of simulating the performance of LID practices at a watershed scale. This simulation presents a long-term model (23 years) using SWAT to evaluate the potential impacts of these practices on stream bank and bed erosion and on aquatic life in the Blunn Watershed located in Austin, TX.
Sub-daily time step model simulations will be developed to simulate the effectiveness of the three LID practices with respect to reducing potential erosion from stream beds and banks, by studying the annual average excess shear, and reducing potential impacts on aquatic life, by studying rapid changes and variation in flow regimes in urban streams. This study will contribute to developing a methodology that evaluates the impact of hydrological changes that occur due to urban development on aquatic life and on stream bank and bed erosion. This is an ongoing research project, and results will be shared and discussed at the conference.
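The excess-shear criterion mentioned for stream bed and bank erosion is often computed from the depth-slope product. A minimal sketch with hypothetical channel values, using the wide-channel approximation rather than the SWAT implementation:

```python
GAMMA_W = 9810.0  # specific weight of water, N/m^3

def boundary_shear(depth_m: float, slope: float) -> float:
    """Mean boundary shear stress tau = gamma * R * S (wide channel: R ~ depth)."""
    return GAMMA_W * depth_m * slope

def excess_shear(depth_m: float, slope: float, tau_critical: float) -> float:
    """Erosion potential: shear above the critical threshold, floored at zero."""
    return max(boundary_shear(depth_m, slope) - tau_critical, 0.0)

# 0.5 m deep flow on a 0.1% slope against a 3 Pa critical shear stress
print(round(excess_shear(0.5, 0.001, 3.0), 3))  # → 1.905
```

Averaging this quantity over each year of a sub-daily simulation gives the annual average excess shear the abstract proposes as the erosion metric: flows below the critical threshold contribute nothing, so flashier urban hydrographs raise the average.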
Network bandwidth utilization forecast model on high bandwidth networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Sim, Alex
With the increasing number of geographically distributed scientific collaborations and the scale of data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
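The STL + ARIMA pipeline can be approximated in a few lines: remove a seasonal component, fit an autoregressive model to the remainder, and forecast one step ahead. The sketch below is a NumPy stand-in for the statsmodels tooling the authors likely used, applied to a synthetic utilization series:

```python
import numpy as np

def forecast_next(series: np.ndarray, period: int) -> float:
    """Seasonal-decomposition + AR(1) sketch of an STL/ARIMA-style pipeline:
    subtract per-phase seasonal means, fit AR(1) on the remainder, forecast one step."""
    phases = np.arange(len(series)) % period
    seasonal = np.array([series[phases == p].mean() for p in range(period)])
    resid = series - seasonal[phases]
    x, y = resid[:-1], resid[1:]
    phi = (x @ y) / (x @ x) if (x @ x) > 0 else 0.0   # lag-1 AR coefficient
    next_phase = len(series) % period
    return seasonal[next_phase] + phi * resid[-1]

# A purely periodic utilization cycle (period 4) is forecast exactly
utilization = np.tile([10.0, 40.0, 30.0, 20.0], 6)
print(forecast_next(utilization, period=4))  # → 10.0
```

Real SNMP traces add a trend and noise on top of the diurnal cycle, which is what the full STL decomposition and ARIMA orders handle.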
Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale
NASA Astrophysics Data System (ADS)
Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.
2005-12-01
Testing geodynamical models is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed, and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modeling results at upper-crustal, lithospheric, and upper-mantle scales using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scales (e.g., the Gulf of Corinth), the resolution of the numerical models is usually sufficient to catch the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and the P and T axes of the focal mechanisms can be compared with the local orientation of the major component of the stress tensor. At the lithospheric scale, the resolution of the models no longer permits constraining them by direct observations (i.e., structural data from the field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of exhumation rates for ancient orogens. Topography may also help, but on continents it mainly depends on erosion laws that are complicated to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic "data". The major problem to overcome now, at lithospheric and upper-mantle scales, is that the so-called "data" actually result from inverse models of the real data, and those inverse models are based on synthetic models. Post-processing P- and S-wave velocities is not sufficient to make testable predictions at the upper-mantle scale. Instead, direct wave-propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not.
In the longer term, we may be able to use these synthetic models to reduce the residuals in the inversion of elastic wave arrival times.
Cost-effectiveness of scaling up voluntary counselling and testing in West-Java, Indonesia.
Tromp, Noor; Siregar, Adiatma; Leuwol, Barnabas; Komarudin, Dindin; van der Ven, Andre; van Crevel, Reinout; Baltussen, Rob
2013-01-01
To evaluate the cost-effectiveness of scaling up community-based VCT in West-Java. The Asian epidemic model (AEM) and resource needs model (RNM) were used to calculate incremental costs per HIV infection averted and per disability-adjusted life year (DALY) saved. Locally monitored demographic, epidemiological, behavioral, and cost data were used as model input. Scaling up community-based VCT in West-Java will reduce the overall population prevalence by 36% in 2030 and costs US$248 per HIV infection averted and US$9.17 per DALY saved. Cost-effectiveness estimates were most sensitive to the impact of VCT on condom use and to the population size of clients of female sex workers (FSWs), but were overall robust. The total costs for scaling up community-based VCT range between US$1.3 and 3.8 million per year and require the number of VCT-integrated clinics at public community health centers to increase from 73 in 2010 to 594 in 2030. Scaling up community-based VCT seems both an effective and a cost-effective intervention. However, in order to prioritize VCT in HIV/AIDS control in West-Java, issues of budget availability and organizational capacity should be addressed.
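The headline figures are incremental cost-effectiveness ratios (ICERs): incremental cost divided by incremental health effect. A minimal sketch with hypothetical inputs chosen only to be consistent in form with the study's outputs, not its actual cost and DALY totals:

```python
def icer(delta_cost: float, delta_effect: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per unit of health gained."""
    return delta_cost / delta_effect

# Hypothetical: $2.2M extra annual cost averting 240,000 DALYs
extra_cost_usd = 2_200_000.0
dalys_averted = 240_000.0
print(round(icer(extra_cost_usd, dalys_averted), 2))  # → 9.17
```

Comparing such a ratio against a willingness-to-pay threshold (e.g., a multiple of GDP per capita) is how "cost-effective" is judged in studies of this kind.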
Development of a Dynamically Scaled Generic Transport Model Testbed for Flight Research Experiments
NASA Technical Reports Server (NTRS)
Jordan, Thomas; Langford, William; Belcastro, Christine; Foster, John; Shah, Gautam; Howland, Gregory; Kidd, Reggie
2004-01-01
This paper details the design and development of the Airborne Subscale Transport Aircraft Research (AirSTAR) testbed at NASA Langley Research Center (LaRC). The aircraft is a 5.5% dynamically scaled, remotely piloted, twin-turbine, swept wing, Generic Transport Model (GTM) which will be used to provide an experimental flight test capability for research experiments pertaining to dynamics modeling and control beyond the normal flight envelope. The unique design challenges arising from the dimensional, weight, dynamic (inertial), and actuator scaling requirements necessitated by the research community are described along with the specific telemetry and control issues associated with a remotely piloted subscale research aircraft. Development of the necessary operational infrastructure, including operational and safety procedures, test site identification, and research pilots is also discussed. The GTM is a unique vehicle that provides significant research capacity due to its scaling, data gathering, and control characteristics. By combining data from this testbed with full-scale flight and accident data, wind tunnel data, and simulation results, NASA will advance and validate control upset prevention and recovery technologies for transport aircraft, thereby reducing vehicle loss-of-control accidents resulting from adverse and upset conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors.
This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete v. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.
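The projection idea underlying such ROMs can be illustrated in a few lines: build a proper orthogonal decomposition (POD) basis from state snapshots via the SVD and project states onto it. The synthetic snapshot data below is an assumption for illustration; this is the generic POD step, not the GNAT pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is the full-order state at one time instant.
n, m = 200, 30                       # full dimension, number of snapshots
modes = rng.standard_normal((n, 3))  # hidden low-rank structure (assumed)
S = modes @ rng.standard_normal((3, m))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(S, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))    # numerical rank of the snapshot set
Phi = U[:, :k]                       # reduced basis, n x k

# Project a full state onto the basis and reconstruct it.
x = S[:, 0]
x_rom = Phi @ (Phi.T @ x)
print(k, np.linalg.norm(x - x_rom))  # small rank, tiny reconstruction error
```

In a real ROM the governing equations themselves are projected onto span(Phi), reducing an n-dimensional system to a k-dimensional one.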
J. Greg Jones; Woodam Chung; Carl Seielstad; Janet Sullivan; Kurt Krueger
2010-01-01
There is a recognized need to apply and maintain fuel treatments to reduce catastrophic wildland fires. A number of models and decision support systems have been developed for addressing different aspects of fuel treatments while considering other important resource management issues and constraints. Although these models address diverse aspects of the fuel treatment-...
Coarse-grained description of cosmic structure from Szekeres models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sussman, Roberto A.; Gaspar, I. Delgado; Hidalgo, Juan Carlos, E-mail: sussman@nucleares.unam.mx, E-mail: ismael.delgadog@uaem.edu.mx, E-mail: hidalgo@fis.unam.mx
2016-03-01
We show that the full dynamical freedom of the well known Szekeres models allows for the description of elaborated 3-dimensional networks of cold dark matter structures (over-densities and/or density voids) undergoing ''pancake'' collapse. By reducing Einstein's field equations to a set of evolution equations, which themselves reduce in the linear limit to evolution equations for linear perturbations, we determine the dynamics of such structures, with the spatial comoving location of each structure uniquely specified by standard early Universe initial conditions. By means of a representative example we examine in detail the density contrast, the Hubble flow and peculiar velocities of structures that evolved, from linear initial data at the last scattering surface, to fully non-linear 10–20 Mpc scale configurations today. To motivate further research, we provide a qualitative discussion on the connection of Szekeres models with linear perturbations and the pancake collapse of the Zeldovich approximation. This type of structure modelling provides a coarse-grained, but fully relativistic, non-linear and non-perturbative description of evolving large scale cosmic structures before their virialisation, and as such it has an enormous potential for applications in cosmological research.
Tawhai, Merryn H; Bates, Jason H T
2011-05-01
Multi-scale modeling of biological systems has recently become fashionable due to the growing power of digital computers as well as to the growing realization that integrative systems behavior is as important to life as is the genome. While it is true that the behavior of a living organism must ultimately be traceable to all its components and their myriad interactions, attempting to codify this in its entirety in a model misses the insights gained from understanding how collections of system components at one level of scale conspire to produce qualitatively different behavior at higher levels. The essence of multi-scale modeling thus lies not in the inclusion of every conceivable biological detail, but rather in the judicious selection of emergent phenomena appropriate to the level of scale being modeled. These principles are exemplified in recent computational models of the lung. Airways responsiveness, for example, is an organ-level manifestation of events that begin at the molecular level within airway smooth muscle cells, yet it is not necessary to invoke all these molecular events to accurately describe the contraction dynamics of a cell, nor is it necessary to invoke all phenomena observable at the level of the cell to account for the changes in overall lung function that occur following methacholine challenge. Similarly, the regulation of pulmonary vascular tone has complex origins within the individual smooth muscle cells that line the blood vessels but, again, many of the fine details of cell behavior average out at the level of the organ to produce an effect on pulmonary vascular pressure that can be described in much simpler terms. The art of multi-scale lung modeling thus reduces not to being limitlessly inclusive, but rather to knowing what biological details to leave out.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
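The core of the sketching step can be sketched in a few lines: compress a tall observation operator with a random matrix and solve the much smaller least-squares problem. The toy forward operator, dimensions, and noise-free data below are assumptions for illustration, not the RGA/PCGA implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined inverse problem: many observations, few parameters.
n_obs, n_par = 20_000, 10
G = rng.standard_normal((n_obs, n_par))      # toy forward operator
m_true = rng.standard_normal(n_par)
d = G @ m_true                               # noise-free observations

# Gaussian "sketching" matrix compresses the observation space.
n_sketch = 200
Sk = rng.standard_normal((n_sketch, n_obs)) / np.sqrt(n_sketch)

# Solve the sketched least-squares problem instead of the full one.
m_est, *_ = np.linalg.lstsq(Sk @ G, Sk @ d, rcond=None)
print(np.linalg.norm(m_est - m_true))        # small: information is preserved
```

Because the sketched system has only n_sketch rows, its cost scales with the information content of the data rather than with the raw number of observations.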
Some consequences of shear on galactic dynamos with helicity fluxes
NASA Astrophysics Data System (ADS)
Zhou, Hongzhe; Blackman, Eric G.
2017-08-01
Galactic dynamo models sustained by supernova (SN) driven turbulence and differential rotation have revealed that the sustenance of large-scale fields requires a flux of small-scale magnetic helicity to be viable. Here we generalize a minimalist analytic version of such galactic dynamos to explore some heretofore unincluded contributions from shear on the total turbulent energy and turbulent correlation time, with the helicity fluxes maintained by either winds, diffusion or magnetic buoyancy. We construct an analytic framework for modelling the turbulent energy and correlation time as a function of SN rate and shear. We compare our prescription with previous approaches that include only rotation. The solutions depend separately on the rotation period and the eddy turnover time and not just on their ratio (the Rossby number). We consider models in which these two time-scales are allowed to be independent and also a case in which they are mutually dependent on radius when a radial-dependent SN rate model is invoked. For the case of a fixed rotation period (or a fixed radius), we show that the influence of shear is dramatic for low Rossby numbers, reducing the correlation time of the turbulence, which, in turn, strongly reduces the saturation value of the dynamo compared to the case when the shear is ignored. We also show that even in the absence of winds or diffusive fluxes, magnetic buoyancy may be able to sustain sufficient helicity fluxes to avoid quenching.
Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldhaber, Steve; Holland, Marika
The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.
NASA Technical Reports Server (NTRS)
Falarski, M. D.; Aoyagi, K.; Koenig, D. G.
1973-01-01
The upper-surface blown (USB) flap as a powered-lift concept has evolved because of the potential acoustic shielding provided when turbofan engines are installed on a wing upper surface. The results from a wind tunnel investigation of a large-scale USB model powered by two JT15D-1 turbofan engines are presented. The effects of coanda flap extent and deflection, forward speed, and exhaust nozzle configuration were investigated. To determine the wing shielding, the acoustics of a single engine nacelle removed from the model were also measured. Effective shielding occurred in the aft underwing quadrant. In the forward quadrant the shielding of the high frequency noise was counteracted by an increase in the lower frequency wing-exhaust interaction noise. The fuselage provided shielding of the opposite engine noise such that the difference between single and double engine operation was 1.5 PNdB under the wing. The effects of coanda flap deflection and extent, angle of attack, and forward speed were small. Forward speed reduced the perceived noise level (PNL) by reducing the wing-exhaust interaction noise.
Predator prey oscillations in a simple cascade model of drift wave turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berionni, V.; Guercan, Oe. D.
2011-11-15
A reduced three-shell limit of a simple cascade model of drift wave turbulence, which emphasizes nonlocal interactions with a large scale mode, is considered. It is shown to describe both the well known predator-prey dynamics between the drift waves and zonal flows and to reduce to the standard three wave interaction equations. Here, this model is considered as a dynamical system whose characteristics are investigated. The analytical solutions for the purely nonlinear limit are given in terms of the Jacobi elliptic functions. An approximate analytical solution involving Jacobi elliptic functions and exponential growth is computed using scale separation for the case of unstable solutions that are observed when the energy injection rate is high. The fixed points of the system are determined, and the behavior around these fixed points is studied. The system is shown to display periodic solutions corresponding to limit cycle oscillations, apparently chaotic phase space orbits, as well as unstable solutions that grow slowly while oscillating rapidly. The period doubling route to transition to chaos is examined.
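The predator-prey character of the drift-wave/zonal-flow coupling can be caricatured by a generic Lotka-Volterra-type system integrated with a fourth-order Runge-Kutta scheme. The right-hand side and coefficients below are illustrative stand-ins, not the paper's three-shell cascade model:

```python
import numpy as np

# Generic predator-prey caricature: drift-wave intensity N ("prey") feeds
# zonal-flow energy Z ("predator"). Coefficients are arbitrary assumptions.
def rhs(state, gamma=1.0, alpha=1.0, mu=0.5, beta=1.0):
    N, Z = state
    dN = gamma * N - alpha * N * Z      # waves grow, drained by zonal flows
    dZ = beta * N * Z - mu * Z          # flows fed by waves, linearly damped
    return np.array([dN, dZ])

def rk4(f, y0, dt, steps):
    """Classical fourth-order Runge-Kutta integrator."""
    y = np.array(y0, float)
    out = [y.copy()]
    for _ in range(steps):
        k1 = f(y); k2 = f(y + dt/2*k1); k3 = f(y + dt/2*k2); k4 = f(y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        out.append(y.copy())
    return np.array(out)

traj = rk4(rhs, [0.1, 0.1], dt=0.01, steps=5000)
print(traj.min(), traj.max())  # bounded, positive: periodic limit-cycle-like orbit
```

The closed, bounded orbit is the analogue of the limit-cycle oscillations described in the abstract; the actual three-shell model adds a third degree of freedom and energy injection.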
A Generalized Simple Formulation of Convective Adjustment ...
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated by using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S. as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
NASA Astrophysics Data System (ADS)
Rao, M.; George, L. A.
2012-12-01
Nitrogen dioxide (NO2), an atmospheric pollutant generated primarily by anthropogenic combustion processes, is typically found at higher concentrations in urban areas compared to non-urbanized environments. Elevated NO2 levels have multiple ecosystem effects at different spatial scales. At the local scale, elevated levels affect human health directly and through the formation of secondary pollutants such as ozone and aerosols; at the regional scale, secondary pollutants such as nitric acid and organic nitrates have deleterious effects on non-urbanized areas; and, at the global scale, nitrogen oxide emissions significantly alter the natural biogeochemical nitrogen cycle. As cities globally become ever larger sources of nitrogen oxide emissions, it is important to assess possible mitigation strategies to reduce the impact of emissions locally, regionally and globally. In this study, we build a national land-use regression (LUR) model to compare the impacts of deciduous and evergreen trees on urban NO2 levels in the United States. We use the EPA monitoring network values of NO2 levels for 2006, the 2006 NLCD tree canopy data for deciduous and evergreen canopies, and the US Census Bureau's TIGER shapefiles for roads, railroads, impervious area and population density as proxies for the NO2 sources of on-road traffic, railroad traffic, and off-road and area sources, respectively. Our preliminary LUR model corroborates previous LUR studies showing that the presence of trees is associated with reduced urban NO2 levels. Additionally, our model indicates that deciduous and evergreen trees reduce NO2 to different extents, and that the amount of NO2 reduced varies seasonally. The model indicates that every square kilometer of deciduous canopy within a 2 km buffer is associated with a reduction in ambient NO2 levels of 0.64 ppb in summer and 0.46 ppb in winter.
Similarly, every square kilometer of evergreen tree canopy within a 2 km buffer is associated with a reduction in ambient NO2 by 0.53 ppb in summer and 0.84 ppb in winter. Thus, the model indicates that deciduous trees are associated with a 30% smaller reduction in NO2 in winter as compared to summer, while evergreens are associated with a 60% increase in the reduction of NO2 in winter, for every square kilometer of deciduous or evergreen canopy within a 2 km buffer. Leaf- and local canopy-level studies have shown that trees are a sink for urban NO2 through deposition as well as stomatal and cuticular uptake. The winter time versus summer time effects suggest that leaf-level deposition may not be the only uptake mechanism and points to the need for a more holistic analysis of tree and canopy-level deposition for urban air pollution models. Since deposition velocities for NO2 vary by tree species, the reduction may also vary by species. These findings have implications for designing cities to reduce the impact of air pollution.
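A land-use regression of this kind reduces to an ordinary least-squares fit of monitored NO2 against buffer-level predictors. The sketch below uses synthetic sites with an assumed "true" canopy slope that echoes the study's summer deciduous estimate; it is not the authors' national model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monitoring sites: predictors summarized within a 2 km buffer.
n = 300
canopy = rng.uniform(0, 5, n)        # km^2 of tree canopy in the buffer
roads = rng.uniform(0, 20, n)        # km of major roads in the buffer (proxy source)

# Assumed relationship: roads add NO2, canopy removes it (-0.64 ppb per km^2
# echoes the reported summer deciduous estimate); noise is illustrative.
no2 = 8.0 + 0.5 * roads - 0.64 * canopy + rng.normal(0, 0.3, n)

# Ordinary least-squares fit of the land-use regression.
X = np.column_stack([np.ones(n), roads, canopy])
coef, *_ = np.linalg.lstsq(X, no2, rcond=None)
print(coef)  # ~[8.0, 0.5, -0.64]; the canopy slope is the per-km^2 NO2 reduction
```

Seasonal slopes like those in the abstract come from fitting the same model separately to summer and winter observations.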
Eddebbarh, A.-A.; Zyvoloski, G.A.; Robinson, B.A.; Kwicklis, E.M.; Reimus, P.W.; Arnold, B.W.; Corbet, T.; Kuzio, S.P.; Faunt, C.
2003-01-01
The US Department of Energy is pursuing Yucca Mountain, Nevada, for the development of a geologic repository for the disposal of spent nuclear fuel and high-level radioactive waste, if the repository is able to meet applicable radiation protection standards established by the US Nuclear Regulatory Commission and the US Environmental Protection Agency (EPA). Effective performance of such a repository would rely on a number of natural and engineered barriers to isolate radioactive waste from the accessible environment. Groundwater beneath Yucca Mountain is the primary medium through which most radionuclides might move away from the potential repository. The saturated zone (SZ) system is expected to act as a natural barrier to this possible movement of radionuclides both by delaying their transport and by reducing their concentration before they reach the accessible environment. Information obtained from Yucca Mountain Site Characterization Project activities is used to estimate groundwater flow rates through the site-scale SZ flow and transport model area and to constrain general conceptual models of groundwater flow in the site-scale area. The site-scale conceptual model is a synthesis of what is known about flow and transport processes at the scale required for total system performance assessment of the site. This knowledge builds on and is consistent with knowledge that has accumulated at the regional scale but is more detailed because more data are available at the site-scale level. The mathematical basis of the site-scale model and the associated numerical approaches are designed to assist in quantifying the uncertainty in the permeability of rocks in the geologic framework model and to represent accurately the flow and transport processes included in the site-scale conceptual model. 
Confidence in the results of the mathematical model was obtained by comparing calculated to observed hydraulic heads, estimated to measured permeabilities, and lateral flow rates calculated by the site-scale model to those calculated by the regional-scale flow model. In addition, it was confirmed that the flow paths leaving the region of the potential repository are consistent with those inferred from gradients of measured head and those independently inferred from water-chemistry data. The general approach of the site-scale SZ flow and transport model analysis is to calculate unit breakthrough curves for radionuclides at the interface between the SZ and the biosphere using the three-dimensional site-scale SZ flow and transport model. Uncertainties are explicitly incorporated into the site-scale SZ flow and transport abstractions through key parameters and conceptual models. © 2002 Elsevier Science B.V. All rights reserved.
Pore-scale and continuum simulations of solute transport micromodel benchmark experiments
Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...
2014-06-18
Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments and varied a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. Furthermore, the PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion.
The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.
NASA Astrophysics Data System (ADS)
Petrie, Ruth Elizabeth; Bannister, Ross Noel; Priestley Cullen, Michael John
2017-12-01
In developing methods for convective-scale data assimilation (DA), it is necessary to consider the full range of motions governed by the compressible Navier-Stokes equations (including non-hydrostatic and ageostrophic flow). These equations describe motion on a wide range of timescales with non-linear coupling. For the purpose of developing new DA techniques that suit the convective-scale problem, it is helpful to use so-called toy models that are easy to run and contain the same types of motion as the full equation set. Such a model needs to permit hydrostatic and geostrophic balance at large scales but allow imbalance at small scales, and in particular, it needs to exhibit intermittent convection-like behaviour. Existing toy models are not always sufficient for investigating these issues. A simplified system of intermediate complexity derived from the Euler equations is presented, which supports dispersive gravity and acoustic modes. In this system, the separation of timescales can be greatly reduced by changing the physical parameters. Unlike in existing toy models, this allows the acoustic modes to be treated explicitly and hence inexpensively. In addition, the non-linear coupling induced by the equation of state is simplified. This means that the gravity and acoustic modes are less coupled than in conventional models. A vertical slice formulation is used which contains only dry dynamics. The model is shown to give physically reasonable results, and convective behaviour is generated by localised compressible effects. This model provides an affordable and flexible framework within which some of the complex issues of convective-scale DA can later be investigated. The model is called the ABC model after the three tunable parameters introduced: A (the pure gravity wave frequency), B (the modulation of the divergent term in the continuity equation), and C (defining the compressibility).
Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiguang Ju; Frederick Dryer
2009-02-07
Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out, including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H2/CO/CO2/O2/diluent mixtures, revision of the H2/O2 kinetic model to improve flame speed prediction capabilities, and development of a multi-time-scale algorithm to improve computational efficiency in reacting flow simulations.
Investigation of approximate models of experimental temperature characteristics of machines
NASA Astrophysics Data System (ADS)
Parfenov, I. V.; Polyakov, A. N.
2018-05-01
This work is devoted to the investigation of various approaches to approximating experimental data and creating simulation mathematical models of thermal processes in machines, with the aim of reducing the duration of field tests and the thermal error of machining. The main research methods used in this work are: full-scale thermal testing of machines; approximation of the experimental temperature characteristics of machine tools by polynomial models using various approaches; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their time derivatives up to third order. As a result of the research performed, rational methods and the type, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
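The core procedure (approximating a measured temperature characteristic with a polynomial and differentiating it up to third order) can be sketched with a synthetic exponential heating curve standing in for real test data; the curve parameters are illustrative assumptions:

```python
import numpy as np

# Synthetic heating curve: temperature rise approaching steady state,
# standing in for a full-scale thermal test record of a machine tool.
t = np.linspace(0, 6, 60)                 # time, hours
temp = 20.0 * (1 - np.exp(-t / 2.0))      # degrees C above ambient (assumed)

# Polynomial approximation of the temperature characteristic...
p = np.polyfit(t, temp, deg=4)

# ...and its derivatives up to third order, as in the study.
d1, d2, d3 = (np.polyder(p, k) for k in (1, 2, 3))

resid = temp - np.polyval(p, t)
print(np.max(np.abs(resid)), np.polyval(d1, 0.0))  # small residual; initial rate near 10 C/h
```

Comparing residuals and derivative quality across polynomial degrees is one way to choose the model complexity the abstract refers to.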
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pang, Yuan-Ping, E-mail: pang@mayo.edu
Highlights: • 1–4 interaction scaling factors are used to adjust conformational energy. • This article reports the effects of these factors on protein conformations. • Reducing these factors changes a helix to a strand in molecular dynamics simulation. • Increasing these factors causes the reverse conformational change. • These factors control the conformational equilibrium between helix and strand. - Abstract: 1–4 interaction scaling factors are used in AMBER forcefields to reduce the exaggeration of short-range repulsion caused by the 6–12 Lennard-Jones potential and a nonpolarizable charge model and to obtain better agreements of small-molecule conformational energies with experimental data. However, the effects of these scaling factors on protein secondary structure conformations have not been investigated until now. This article reports the finding that the 1–4 interactions among the protein backbone atoms separated by three consecutive covalent bonds are more repulsive in the α-helix conformation than in two β-strand conformations. Therefore, the 1–4 interaction scaling factors of protein backbone torsions ϕ and ψ control the conformational equilibrium between α-helix and β-strand. Molecular dynamics simulations confirm that reducing the ϕ and ψ scaling factors readily converts the α-helix conformation of AcO-(AAQAA)3-NH2 to a β-strand conformation, and the reverse occurs when these scaling factors are increased. These results suggest that the ϕ and ψ scaling factors can be used to generate the α-helix or β-strand conformation in situ and to control the propensities of a forcefield for adopting secondary structure elements.
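How a 1-4 scaling factor enters a pairwise nonbonded energy can be sketched as follows. The atom parameters are arbitrary, and the SCNB/SCEE divisors (2.0 and 1.2) are common AMBER-style defaults used here purely for illustration, not values taken from this article:

```python
# Minimal sketch of AMBER-style 1-4 scaling: the Lennard-Jones and Coulomb
# terms for atoms separated by exactly three bonds are divided by SCNB/SCEE.
def lj(r, eps, sigma):
    """6-12 Lennard-Jones potential (kcal/mol), distances in Angstrom."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def coulomb(r, qi, qj, ke=332.0636):
    """Coulomb energy in kcal/mol with charges in e, ke in kcal*A/(mol*e^2)."""
    return ke * qi * qj / r

def pair_energy(r, eps, sigma, qi, qj, one_four=False, scnb=2.0, scee=1.2):
    e_vdw, e_elec = lj(r, eps, sigma), coulomb(r, qi, qj)
    if one_four:                      # damp the exaggerated short-range terms
        e_vdw, e_elec = e_vdw / scnb, e_elec / scee
    return e_vdw + e_elec

# Arbitrary illustrative parameters for one atom pair.
full = pair_energy(3.0, 0.1, 3.4, 0.3, -0.3)
scaled = pair_energy(3.0, 0.1, 3.4, 0.3, -0.3, one_four=True)
print(full, scaled)   # the scaled 1-4 pair interaction is weaker in magnitude
```

Changing scnb/scee for backbone 1-4 pairs shifts the relative energies of helix and strand conformations, which is the equilibrium the article manipulates.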
Scaling the low-shear pulsatile TORVAD for pediatric heart failure
Gohean, Jeffrey R.; Larson, Erik R.; Hsi, Brian H.; Kurusz, Mark; Smalling, Richard W.; Longoria, Raul G.
2016-01-01
This article provides an overview of the design challenges associated with scaling the low-shear pulsatile TORVAD ventricular assist device (VAD) for treating pediatric heart failure. A cardiovascular system model was used to determine that a 15 ml stroke volume device with a maximum flow rate of 4 L/min can provide full support to pediatric patients with body surface areas between 0.6 and 1.5 m2. Low shear stress in the blood is preserved as the device is scaled down and remains at least two orders of magnitude less than continuous flow VADs. A new magnetic linkage coupling the rotor and piston has been optimized using a finite element model (FEM), resulting in increased heat transfer to the blood while reducing the overall size of TORVAD. Motor FEM has also been used to reduce motor size and improve motor efficiency and heat transfer. FEM analysis predicts no more than 1°C temperature rise on any blood or tissue contacting surface of the device. The iterative computational approach established provides a methodology for developing a TORVAD platform technology with various device sizes for supporting the circulation of infants to adults. PMID:27832001
Techno-economic and life-cycle assessment of an attached growth algal biorefinery.
Barlow, Jay; Sims, Ronald C; Quinn, Jason C
2016-11-01
This study examined the sustainability of generating renewable diesel via hydrothermal liquefaction (HTL) of biomass from a rotating algal biofilm reactor. Pilot-scale growth studies and laboratory-scale HTL experiments were used to validate an engineering system model. The engineering system model served as the foundation to evaluate the economic feasibility and environmental impact of the system at full scale. Techno-economic results indicate that biomass feedstock costs dominated the minimum fuel selling price (MFSP), with a base case of $104.31 per gallon. Life-cycle assessment results show a base-case global warming potential (GWP) of 80 g CO2-e MJ(-1) and net energy ratio (NER) of 1.65 based on a well-to-product system boundary. Optimization of the system reduces MFSP, GWP, and NER to $11.90 gal(-1), -44 g CO2-e MJ(-1), and 0.33, respectively. The systems-level impacts of integrating algae cultivation with wastewater treatment were found to significantly reduce environmental impact. Sensitivity analysis showed that algal productivity most significantly affected fuel selling price, emphasizing the importance of optimizing biomass productivity. Copyright © 2016 Elsevier Ltd. All rights reserved.
The SRB heat shield: Aeroelastic stability during reentry
NASA Technical Reports Server (NTRS)
Ventres, C. S.; Dowell, E. H.
1977-01-01
Wind tunnel tests of a 3% scale model of the aft portion of the SRB equipped with partially scaled heat shields were conducted for the purpose of measuring fluctuating pressure levels in the aft skirt region. During these tests, the heat shields were observed to oscillate violently, the oscillations in some instances causing the heat shields to fail. High speed films taken during the tests reveal a regular pattern of waves in the fabric starting near the flow stagnation point and progressing around both sides of the annulus. The amplitude of the waves was too great, and their pattern too regular, for them to be attributed to the fluctuating pressure levels measured during the tests. The cause of the oscillations observed in the model heat shields, and whether or not similar oscillations will occur in the full scale SRB heat shield during reentry were investigated. Suggestions for modifying the heat shield so as to avoid the oscillations are provided, and recommendations are made for a program of vibration and wind tunnel tests of reduced-scale aeroelastic models of the heat shield.
Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi
2014-03-01
We propose a hierarchical reduction scheme to cope with coupled rate equations that describe the dynamics of multi-time-scale photosynthetic reactions. To numerically solve nonlinear dynamical equations containing a wide temporal range of rate constants, we first study a prototypical three-variable model. Using a separation of the time scale of rate constants combined with identified slow variables as (quasi-)conserved quantities in the fast process, we achieve a coarse-graining of the dynamical equations reduced to those at a slower time scale. By iteratively employing this reduction method, the coarse-graining of broadly multi-scale dynamical equations can be performed in a hierarchical manner. We then apply this scheme to the reaction dynamics analysis of a simplified model for an illuminated photosystem II, which involves many processes of electron and excitation-energy transfers with a wide range of rate constants. We thus confirm a good agreement between the coarse-grained and fully (finely) integrated results for the population dynamics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
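The timescale-separation idea described above can be illustrated on a toy two-step chain (a minimal sketch with assumed rate constants, not the paper's photosystem model): the fast process equilibrates quickly, leaving the sum of the fast variables as a quasi-conserved slow variable whose coarse-grained dynamics reproduce the finely integrated result.

```python
def full_model(kf=100.0, ks=0.1, t_end=20.0, dt=1e-4):
    # Finely integrate the stiff chain A -> B -> C with kf >> ks,
    # which requires a very small time step for stability
    a, b, c = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        da, db, dc = -kf * a, kf * a - ks * b, ks * b
        a, b, c = a + da * dt, b + db * dt, c + dc * dt
    return c

def reduced_model(ks=0.1, t_end=20.0, dt=1e-2):
    # Coarse-grained slow dynamics: s = a + b is quasi-conserved during
    # the fast step, then decays into c at the slow rate ks; a 100x
    # larger time step suffices
    s, c = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        c += ks * s * dt
        s -= ks * s * dt
    return c
```

The reduced model tracks the fully integrated one closely while taking two orders of magnitude fewer steps, mirroring the agreement between coarse-grained and finely integrated population dynamics reported above.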
Naming games in two-dimensional and small-world-connected random geometric networks.
Lu, Qiming; Korniss, G; Szymanski, B K
2008-01-01
We investigate a prototypical agent-based model, the naming game, on two-dimensional random geometric networks. The naming game [Baronchelli, J. Stat. Mech.: Theory Exp. (2006) P06014] is a minimal model, employing local communications, that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the naming game with local broadcasts on random geometric graphs serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially embedded autonomous agents. Among the relevant observables capturing the temporal properties of the agreement process, we investigate the cluster-size distribution and the distribution of the agreement times, both exhibiting dynamic scaling. We also present results for the case when a small density of long-range communication links is added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement. We construct a finite-size scaling analysis for the agreement times in this case.
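The minimal naming-game dynamics can be sketched as follows. This is a mean-field toy on a complete graph rather than the paper's random geometric graph, with pairwise communication instead of local broadcast; all parameter values are illustrative.

```python
import random

def naming_game(n_agents=50, max_steps=100_000, seed=1):
    # Each agent keeps a set of candidate names (integers here)
    random.seed(seed)
    vocab = [set() for _ in range(n_agents)]
    next_name = 0
    for step in range(max_steps):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:                 # invent a new name if empty
            vocab[speaker].add(next_name)
            next_name += 1
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:              # success: both collapse
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:                                  # failure: hearer learns word
            vocab[hearer].add(word)
        # Global agreement: every agent holds exactly one, identical name
        if all(len(v) == 1 for v in vocab) and \
           len({next(iter(v)) for v in vocab}) == 1:
            return step + 1
    return None

steps_to_consensus = naming_game()
```

On a complete graph consensus is reached quickly; the abstract's point is that embedding the same dynamics in a random geometric graph slows agreement, and sprinkling long-range links restores fast, small-world-like convergence.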
Hybrid MD-Nernst Planck Model of Alpha-hemolysin Conductance Properties
NASA Technical Reports Server (NTRS)
Cozmuta, Ioana; O'Keefer, James T.; Bose, Deepak; Stolc, Viktor
2006-01-01
Motivated by experiments in which an applied electric field translocates polynucleotides through an alpha-hemolysin protein channel, causing transient blockades of the ionic current, a hybrid simulation model is proposed to predict the conductance properties of the open channel. Time scales corresponding to ion permeation processes are reached using the Poisson-Nernst-Planck (PNP) electro-diffusion model, in which both the solvent and local ion concentrations are represented as a continuum. The diffusion coefficients of the ions (K(+) and Cl(-)) input to the PNP model are, however, calculated from all-atom molecular dynamics (MD). In the MD simulations, a reduced representation of the channel is used. The channel is solvated in a 1 M KCl solution, and an external electric field is applied. The pore-specific diffusion coefficients for both ionic species are reduced 5-7 times in comparison to bulk values. Significant statistical variations (17-45%) of the pore-ion diffusivities are observed. Within the statistics, the ionic diffusivities remain invariable for a range of externally applied voltages between 30 and 240 mV. In the 2D-PNP calculations, the pore stem is approximated by a smooth cylinder of radius approx. 9 Å with two constriction blocks where the radius is reduced to approx. 6 Å. The electrostatic potential includes the contribution from the atomistic charges. The MD-PNP model shows that the atomic charges are responsible for the rectifying behaviour and for the slight anion selectivity of the alpha-hemolysin pore. Independent of the hierarchy between the anion and cation diffusivities, the anionic contribution to the total ionic current will dominate. The predictions of the MD-PNP model are in good agreement with experimental data and give confidence in the present approach of bridging time scales by combining a microscopic and a macroscopic model.
NASA Astrophysics Data System (ADS)
van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.; van Geer, F. C.; Torfs, P. J. J. F.; de Louw, P. G. B.
2011-03-01
Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for the estimation of flow route volumes and for predictions of catchment discharge. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve, which relates spatial variation in groundwater depths to the average groundwater depth. The GDD-curve was measured for a single field site (0.009 km2) and simple process descriptions were applied to relate groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow, and groundwater flow simultaneously, with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD-curves from the hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source for nitrates) decreased with increasing scale from 76-79% at the field site to 34-61% and 25-50% for the two catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements improves simulations of nitrate loads and predictions of extreme discharges during validation periods compared to a model that was conditioned on catchment discharge only.
Modeling of the HiPco process for carbon nanotube production. II. Reactor-scale analysis
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Dateo, Christopher E.; Meyyappan, M.
2002-01-01
The high-pressure carbon monoxide (HiPco) process, developed at Rice University, has been reported to produce single-walled carbon nanotubes from gas-phase reactions of iron carbonyl in carbon monoxide at high pressures (10-100 atm). Computational modeling is used here to develop an understanding of the HiPco process. A detailed kinetic model of the HiPco process that includes precursor decomposition, metal cluster formation and growth, and carbon nanotube growth was developed in the previous article (Part I). Decomposition of precursor molecules is necessary to initiate metal cluster formation. The metal clusters serve as catalysts for carbon nanotube growth. The diameter of the metal clusters and the number of atoms in these clusters are some of the essential information for predicting carbon nanotube formation and growth, which is then modeled by the Boudouard reaction with metal catalysts. Based on the detailed model simulations, a reduced kinetic model was also developed in Part I for use in reactor-scale flowfield calculations. Here this reduced kinetic model is integrated with a two-dimensional axisymmetric reactor flow model to predict reactor performance. Carbon nanotube growth is examined with respect to several process variables (peripheral jet temperature, reactor pressure, and Fe(CO)5 concentration) with the use of the axisymmetric model, and the computed results are compared with existing experimental data. The model yields most of the qualitative trends observed in the experiments and helps in understanding the fundamental processes in HiPco carbon nanotube production.
Taraphdar, S.; Mukhopadhyay, P.; Leung, L. Ruby; ...
2016-12-05
The prediction skill for tropical synoptic-scale transients (SSTR), such as monsoon lows and depressions, during the boreal summers of 2007–2009 is assessed using high-resolution ECMWF and NCEP TIGGE forecast data. By analyzing 246 forecasts for lead times up to 10 days, it is found that the models have good skill in forecasting the planetary-scale means, but the skill for SSTR remains poor, with no skill beyond 2 days for the global tropics and the Indian region. Consistent forecast skills among precipitation, velocity potential, and vorticity provide evidence that convection is the primary process responsible for precipitation. The poor skill for SSTR can be attributed to the larger random error in the models, as they fail to predict the locations and timings of SSTR. Strong correlation between the random error and synoptic precipitation suggests that the former starts to develop in regions of convection. As the NCEP model has larger biases in synoptic-scale precipitation, it tends to generate more random error, which ultimately reduces the prediction skill for synoptic systems in that model. Finally, the larger biases in NCEP may be attributed to the model moist physics and/or its coarser horizontal resolution compared to ECMWF.
Podolak, Charles J.
2013-01-01
An ensemble of rule-based models was constructed to assess possible future braided river planform configurations for the Toklat River in Denali National Park and Preserve, Alaska. This approach combined an analysis of large-scale influences on stability with several reduced-complexity models to produce the predictions at a practical level for managers concerned about the persistence of bank erosion while acknowledging the great uncertainty in any landscape prediction. First, a model of confluence angles reproduced observed angles of a major confluence, but showed limited susceptibility to a major rearrangement of the channel planform downstream. Second, a probabilistic map of channel locations was created with a two-parameter channel avulsion model. The predicted channel belt location was concentrated in the same area as the current channel belt. Finally, a suite of valley-scale channel and braid plain characteristics were extracted from a light detection and ranging (LiDAR)-derived surface. The characteristics demonstrated large-scale stabilizing topographic influences on channel planform. The combination of independent analyses increased confidence in the conclusion that the Toklat River braided planform is a dynamically stable system due to large and persistent valley-scale influences, and that a range of avulsive perturbations are likely to result in a relatively unchanged planform configuration in the short term.
Inference of scale-free networks from gene expression time series.
Daisuke, Tominaga; Horton, Paul
2006-04-01
Quantitative time-series observation of gene expression is becoming possible, for example by cell array technology. However, there are no practical methods with which to infer network structures using only observed time-series data. As most computational models of biological networks for continuous time-series data have a high degree of freedom, it is almost impossible to infer the correct structures. On the other hand, it has been reported that some kinds of biological networks, such as gene networks and metabolic pathways, may have scale-free properties. We hypothesize that the architecture of inferred biological network models can be restricted to scale-free networks. We developed an inference algorithm for biological networks using only time-series data by introducing such a restriction. We adopt the S-system as the network model, and a distributed genetic algorithm to optimize models to fit their simulated results to observed time-series data. We have tested our algorithm on a case study (simulated data). We compared optimization under no restriction, which allows for a fully connected network, with optimization under the restriction that the total number of links must equal that expected of a scale-free network. The restriction reduced both false-positive and false-negative estimation of the links and also the differences between model simulations and the given time-series data.
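The S-system formalism adopted above models each species' rate as a difference of two power-law terms, dX_i/dt = α_i ∏_j X_j^g_ij − β_i ∏_j X_j^h_ij. A minimal sketch with a two-variable toy network (illustrative parameters, not an inferred model from the paper):

```python
import numpy as np

def s_system_step(x, alpha, g, beta, h, dt):
    # One Euler step of an S-system:
    # dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij
    production = alpha * np.prod(x ** g, axis=1)
    degradation = beta * np.prod(x ** h, axis=1)
    return x + dt * (production - degradation)

# Toy network: X2 represses X1 (negative exponent), X1 activates X2,
# and both degrade first-order; parameters are purely illustrative
alpha = np.array([1.0, 1.0])
beta = np.array([1.0, 1.0])
g = np.array([[0.0, -0.5], [0.5, 0.0]])   # kinetic orders, production
h = np.array([[1.0, 0.0], [0.0, 1.0]])    # kinetic orders, degradation

x = np.array([0.5, 1.5])
for _ in range(5000):
    x = s_system_step(x, alpha, g, beta, h, 1e-3)
# x relaxes toward the steady state where production balances degradation
```

In the inference setting described above, the genetic algorithm searches over the exponent matrices g and h; the scale-free restriction caps how many nonzero entries (links) they may contain.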
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images in multi-scale. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, modeling interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method in renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
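A minimal sketch of a Potts-style pixel Hamiltonian on a 4-connected grid. The coupling choice below (Gaussian in the intensity difference between neighbors) is a common convention and an assumption here, not necessarily the paper's; it illustrates why such an energy favors segment boundaries that follow true intensity boundaries.

```python
import numpy as np

def potts_energy(labels, image, sigma=0.1):
    # Potts Hamiltonian H = -sum_<ij> J_ij * delta(s_i, s_j) over
    # horizontal and vertical neighbor pairs, with couplings J_ij that
    # decay with the intensity difference between the two pixels
    h, w = image.shape
    e = 0.0
    for dy, dx in ((0, 1), (1, 0)):
        a, b = image[:h - dy, :w - dx], image[dy:, dx:]
        j = np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))
        same = labels[:h - dy, :w - dx] == labels[dy:, dx:]
        e -= float(np.sum(j * same))
    return e

# Two-region test image: left half dark, right half bright
img = np.zeros((4, 4))
img[:, 2:] = 1.0
true_split = (img > 0.5).astype(int)     # labels follow the real boundary
one_segment = np.zeros((4, 4), dtype=int)
wrong_split = np.zeros((4, 4), dtype=int)
wrong_split[2:, :] = 1                   # cuts through uniform regions
```

Cutting along the true boundary costs almost nothing (the cross-boundary couplings are exponentially weak), while cutting through a uniform region removes strong couplings and raises the energy, which is what lets iterative minimization recover the natural segments.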
NASA Astrophysics Data System (ADS)
Mendoza, D. L.; Lin, J. C.; Mitchell, L.; Gurney, K. R.; Patarasuk, R.; Mallia, D. V.; Fasoli, B.; Bares, R.; Catharine, D.; O'Keeffe, D.; Song, Y.; Huang, J.; Horel, J.; Crosman, E.; Hoch, S.; Ehleringer, J. R.
2016-12-01
We address the need for robust, highly resolved emissions and trace gas concentration data required for planning purposes and policy development aimed at managing pollutant sources. Adverse health effects resulting from urban pollution exposure are the result of proximity to emission sources and atmospheric mixing, necessitating models with high spatial and temporal resolution. As urban emission sources co-emit carbon dioxide (CO2) and criteria air pollutants (CAPs), efforts to reduce specific pollutants would synergistically reduce others. We present a contemporary (2010-2015) emissions inventory and modeled CO2 and carbon monoxide (CO) concentrations for Salt Lake County, Utah. We compare emissions transported by a dispersion model against stationary measurement data and present a systematic quantification of uncertainties. The emissions inventory for CO2 is based on the Hestia emissions data inventory that resolves emissions at hourly, building and road-link resolutions, as well as on an hourly gridded scale. The emissions were scaled using annual Energy Information Administration (EIA) fuel consumption data. We derived a CO emissions inventory using methods similar to Hestia, downscaling total county emissions from the 2011 Environmental Protection Agency's (EPA) National Emissions Inventory (NEI). The gridded CO emissions were compared against the Hestia CO2 gridded data to characterize spatial similarities and differences between them. Correlations were calculated at multiple scales of aggregation. The Stochastic Time-Inverted Lagrangian Transport (STILT) dispersion model was used to transport emissions and estimate pollutant concentrations at an hourly resolution. Modeled results were compared against stationary measurements in the Salt Lake County area. This comparison highlights spatial locations and hours of high variability and uncertainty.
Sensitivity to biological fluxes as well as to specific economic sectors was tested by varying their contributions to modeled concentrations and calibrating their emissions.
Predicting Fecal Indicator Bacteria Fate and Removal in Urban Stormwater at the Watershed Scale
NASA Astrophysics Data System (ADS)
Wolfand, J.; Hogue, T. S.; Luthy, R. G.
2016-12-01
Urban stormwater is a major cause of water quality impairment, resulting in surface waters that fail to meet water quality standards and support their designated uses. Of the many stormwater pollutants, fecal indicator bacteria are particularly important to track because they are directly linked to pathogens which jeopardize public health; yet, their fate and transport in urban stormwater is poorly understood. Monitoring fecal bacteria in stormwater is possible, but due to the high variability of fecal indicators both spatially and temporally, single grab or composite samples do not fully capture fecal indicator loading. Models have been developed to predict fecal indicator bacteria at the watershed scale, but they are often limited to agricultural areas, or areas that receive frequent rainfall. Further, it is unclear whether best management practices (BMPs), such as bioretention or engineered wetlands, are able to reduce bacteria to meet water quality standards at watershed outlets. This research seeks to develop a model to predict fecal indicator bacteria in urban stormwater in a semi-arid climate at the watershed scale. Using the highly developed Ballona Creek watershed (89 mi2) located in Los Angeles County as a case study, several existing mechanistic models are coupled with a hydrologic model to predict fecal indicator concentrations (E. coli, enterococci, fecal coliform, and total coliform) at the outfall of Ballona Creek watershed, Santa Monica Bay. The hydrologic model was developed using InfoSWMM Sustain, calibrated for flow from WY 1998-2006 (NSE = 0.94; R2 = 0.95), and validated from WY 2007-2015 (NSE = 0.93; R2 = 0.95). The developed coupled model is being used to predict fecal indicator fate and transport and evaluate how BMPs can be optimized to reduce fecal indicator loading to surface waters and recreational beaches.
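The NSE and R2 values quoted for the hydrologic calibration above are standard goodness-of-fit metrics. A minimal Nash-Sutcliffe efficiency implementation (a generic sketch, not code from the study):

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    # 1.0 is a perfect fit; 0.0 means the model is no better than
    # simply predicting the observed mean.
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

flows_obs = [1.0, 2.0, 3.0, 4.0]   # illustrative observed flows
```

Values above roughly 0.9, as reported for both the calibration and validation periods, indicate the simulated hydrograph tracks the observations closely.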
Prospects for mirage mediation
NASA Astrophysics Data System (ADS)
Pierce, Aaron; Thaler, Jesse
2006-09-01
Mirage mediation reduces the fine-tuning in the minimal supersymmetric standard model by dynamically arranging a cancellation between anomaly-mediated and modulus-mediated supersymmetry breaking. We explore the conditions under which a mirage "messenger scale" is generated near the weak scale and the little hierarchy problem is solved. We do this by explicitly including the dynamics of the SUSY-breaking sector needed to cancel the cosmological constant. The most plausible scenario for generating a low mirage scale does not readily admit an extra-dimensional interpretation. We also review the possibilities for solving the μ/Bμ problem in such theories, a potential hidden source of fine-tuning.
Assessing Land Management Change Effects on Forest Carbon and Emissions Under Changing Climate
NASA Astrophysics Data System (ADS)
Law, B. E.
2014-12-01
There has been limited focus on fine-scale land management change effects on forest carbon under future environmental conditions (climate, nitrogen deposition, increased atmospheric CO2). Forest management decisions are often made at the landscape to regional levels before analyses have been conducted to determine the potential outcomes and effectiveness of such actions. Scientists need to evaluate plausible land management actions in a timely manner to help shape policy and strategic land management. Issues of interest include species-level adaptation to climate, resilience and vulnerability to mortality within forested landscapes and regions. Efforts are underway to improve land system model simulation of future mortality related to climate, and to develop and evaluate plausible land management options that could help mitigate or avoid future die-offs. Vulnerability to drought-related mortality varies among species and with tree size or age. Predictors of species ability to survive in specific environments are still not resolved. A challenge is limited observations for fine-scale (e.g. 4 km2) modeling, particularly physiological parameters. Uncertainties are primarily associated with future land management and policy decisions. They include the interface with economic factors and with other ecosystem services (biodiversity, water availability, wildlife habitat). The outcomes of future management scenarios should be compared with business-as-usual management under the same environmental conditions to determine the effects of management changes on forest carbon and net emissions to the atmosphere. For example, in the western U.S., land system modeling and life cycle assessment of several management options to reduce impacts of fire reduced long-term forest carbon gain and increased carbon emissions compared with business-as-usual management under future environmental conditions. 
The enhanced net carbon uptake with climate and reduced fire emissions after thinning did not compensate for the increased wood removals over 90 years, leading to reduced net biome production. Analysis of land management change scenarios at fine scales is needed, and should consider other ecological values in addition to carbon.
Examples of data assimilation in mesoscale models
NASA Technical Reports Server (NTRS)
Carr, Fred; Zack, John; Schmidt, Jerry; Snook, John; Benjamin, Stan; Stauffer, David
1993-01-01
The keynote address concerned the problem of physical initialization of mesoscale models. The classic purpose of physical or diabatic initialization is to reduce or eliminate the spin-up error caused by the lack, at the initial time, of the fully developed vertical circulations required to support regions of large rainfall rates. However, even if a model has no spin-up problem, imposition of observed moisture and heating-rate information during assimilation can improve quantitative precipitation forecasts, especially early in the forecast. The two key issues in physical initialization are the choice of assimilation technique and the sources of hydrologic/hydrometeor data. Another example of data assimilation in mesoscale models was presented in a series of meso-beta-scale model experiments with an 11 km version of the MASS model designed to investigate the sensitivity of convective initiation, forced by thermally direct circulations resulting from differential surface heating, to four-dimensional assimilation of surface and radar data. The results of these simulations underscore the need to accurately initialize and simulate grid- and sub-grid-scale clouds in meso-beta-scale models. The status of the application of the CSU-RAMS mesoscale model by the NOAA Forecast Systems Lab for producing real-time forecasts with 10-60 km mesh resolutions over (4000 km)^2 domains for use by the aviation community was reported. Either MAPS or LAPS model data are used to initialize the RAMS model on a 12-h cycle. The use of the MAPS (Mesoscale Analysis and Prediction System) model was discussed, as was meso-beta-scale data assimilation using a triply nested nonhydrostatic version of the MM5 model.
NASA Technical Reports Server (NTRS)
Hinson, William F.; Lee, John B.
1959-01-01
As a continuation of an investigation of the release characteristics of an MB-1 rocket carried internally by the Convair F-106A airplane, six missile-bay baffle configurations and a rocket end plate have been investigated in the 27- by 27-inch preflight jet of the NASA Wallops Station. The MB-1 rocket used had retractable fins and was ejected from a missile bay modified by the addition of six different baffle configurations. For some tests a rocket end plate was added to the model. Dynamically scaled models (0.04956 scale) were tested at a simulated altitude of 22,450 feet and Mach numbers of 0.86, 1.59, and 1.98, and at a simulated altitude of 29,450 feet and a Mach number of 1.98. The results of this investigation indicate that the missile-bay baffle configurations and the rocket end plate may be used to reduce the positive pitch amplitude of the MB-1 rocket after release. The initial negative pitching velocity applied to the MB-1 rocket might then be reduced in order to maintain a near-level-flight attitude after release. As the fuselage angle of attack is increased, the negative pitch amplitude of the rocket is decreased.
McCurry, Daniel L; Ishida, Kenneth P; Oelker, Gregg L; Mitch, William A
2017-08-01
UV-based advanced oxidation processes (AOPs) effectively degrade N-nitrosodimethylamine (NDMA) passing through reverse osmosis (RO) units within advanced treatment trains for the potable reuse of municipal wastewater. However, certain utilities have observed the re-formation of NDMA after the AOP from reactions between residual chloramines and NDMA precursors in the AOP product water. Using kinetic modeling and bench-scale RO experiments, we demonstrate that the low pH in the RO permeate (∼5.5), coupled with the effective rejection of NH4+, promotes conversion of the residual monochloramine (NH2Cl) in the permeate to dichloramine (NHCl2) via the reaction 2 NH2Cl + H+ ↔ NHCl2 + NH4+. Dichloramine is the chloramine species known to react with NDMA precursors to form NDMA. After UV/AOP, utilities generally use lime or other techniques to increase the pH of the finished water to prevent distribution-system corrosion. Modeling indicated that, while the increase in pH halts dichloramine formation, it converts amine-based NDMA precursors to their more reactive, neutral forms. With modeling, and experiments at both bench and field scale, we demonstrate that reducing the time interval between RO treatment and final pH adjustment can significantly reduce NDMA re-formation by minimizing the amount of dichloramine formed prior to reaching the final target pH.
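The pH dependence follows directly from the equilibrium expression for 2 NH2Cl + H+ ↔ NHCl2 + NH4+. A sketch of the implied speciation trend, with placeholder values for the equilibrium constant and ammonium concentration (assumptions for illustration, not values from the study):

```python
def dichloramine_ratio(pH, nh4=1e-4, K=1e6):
    # Ratio [NHCl2] / [NH2Cl]^2 implied by the equilibrium
    #   2 NH2Cl + H+ <-> NHCl2 + NH4+,
    #   K = [NHCl2][NH4+] / ([NH2Cl]^2 [H+])
    # K (dimensional) and nh4 are illustrative placeholders.
    h_plus = 10.0 ** (-pH)
    return K * h_plus / nh4
```

Because the ratio scales linearly with [H+], dropping from a finished-water pH near 8.5 to the RO-permeate pH of ~5.5 increases the equilibrium dichloramine fraction by three orders of magnitude, which is the mechanism the abstract identifies for post-RO NDMA re-formation.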
A multi-scale model for geared transmission aero-thermodynamics
NASA Astrophysics Data System (ADS)
McIntyre, Sean M.
A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction, and free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales, and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and its subsequent cooling through the rest of rotation, (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem. These physics modules are coupled algorithmically.
Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data. To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made. These were shown to be effective.
A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed, and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially and fully shrouded configurations were performed to address the paucity of available validation data. This measurement program provided windage loss data for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.
Zhuang, Kai; Izallalen, Mounir; Mouser, Paula; Richter, Hanno; Risso, Carla; Mahadevan, Radhakrishnan; Lovley, Derek R
2011-01-01
The advent of rapid complete genome sequencing, and the potential to capture this information in genome-scale metabolic models, provide the possibility of comprehensively modeling microbial community interactions. For example, Rhodoferax and Geobacter species are acetate-oxidizing Fe(III)-reducers that compete in anoxic subsurface environments and this competition may have an influence on the in situ bioremediation of uranium-contaminated groundwater. Therefore, genome-scale models of Geobacter sulfurreducens and Rhodoferax ferrireducens were used to evaluate how Geobacter and Rhodoferax species might compete under diverse conditions found in a uranium-contaminated aquifer in Rifle, CO. The model predicted that at the low rates of acetate flux expected under natural conditions at the site, Rhodoferax will outcompete Geobacter as long as sufficient ammonium is available. The model also predicted that when high concentrations of acetate are added during in situ bioremediation, Geobacter species would predominate, consistent with field-scale observations. This can be attributed to the higher expected growth yields of Rhodoferax and the ability of Geobacter to fix nitrogen. The modeling predicted relative proportions of Geobacter and Rhodoferax in geochemically distinct zones of the Rifle site that were comparable to those that were previously documented with molecular techniques. The model also predicted that under nitrogen fixation, higher carbon and electron fluxes would be diverted toward respiration rather than biomass formation in Geobacter, providing a potential explanation for enhanced in situ U(VI) reduction in low-ammonium zones. These results show that genome-scale modeling can be a useful tool for predicting microbial interactions in subsurface environments and shows promise for designing bioremediation strategies. PMID:20668487
Using Ecosystem Experiments to Improve Vegetation Models
Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...
2015-05-21
Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.
Carbon dynamics and land-use choices: building a regional-scale multidisciplinary model
Kerr, Suzi; Liu, Shu-Guang; Pfaff, Alexander S.P.; Hughes, R. Flint
2003-01-01
Policy enabling tropical forests to approach their potential contribution to global-climate-change mitigation requires forecasts of land use and carbon storage on a large scale over long periods. In this paper, we present an integrated modeling methodology that addresses these needs. We model the dynamics of the human land-use system and of C pools contained in each ecosystem, as well as their interactions. The model is national scale, and is currently applied in a preliminary way to Costa Rica using data spanning a period of over 50 years. It combines an ecological process model, parameterized using field and other data, with an economic model, estimated using historical data to ensure a close link to actual behavior. These two models are linked so that ecological conditions affect land-use choices and vice versa. The integrated model predicts land use and its consequences for C storage for policy scenarios. These predictions can be used to create baselines, reward sequestration, and estimate the value in both environmental and economic terms of including C sequestration in tropical forests as part of the efforts to mitigate global climate change. The model can also be used to assess the benefits from costly activities to increase accuracy and thus reduce errors and their societal costs.
NASA Astrophysics Data System (ADS)
Nunes, Ana
2015-04-01
Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, which is centered on a regional modeling system, consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep-convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually produce improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions, and providing accurate hydrometeorological variables to higher resolution geomorphological models. Better representation of deep-convection from intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and estimation of terrain stability over complex topography.
The reconstruction of past extreme events also helps the development of a system for decision-making, regarding natural and social disasters, and reducing impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40- and 25-km of the regional climate model.
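The idea of nudging only the large scales, as in the scale-selective bias correction described above, can be sketched in one dimension: relax the model field toward a driving field only at wavenumbers below a cutoff, leaving the internally developed small scales untouched. The cutoff wavenumber and relaxation coefficient below are arbitrary illustrative choices, not values from the regional modeling system.

```python
import numpy as np

# Schematic 1-D illustration of scale-selective nudging: only wavenumbers
# k <= k_cut are relaxed toward the driving field; smaller scales evolve freely.

def scale_selective_nudge(model, driving, k_cut=4, alpha=0.5):
    m_hat = np.fft.rfft(model)
    d_hat = np.fft.rfft(driving)
    low = np.arange(m_hat.size) <= k_cut
    # Relax the large-scale (low-wavenumber) components toward the driving field
    m_hat[low] = (1 - alpha) * m_hat[low] + alpha * d_hat[low]
    return np.fft.irfft(m_hat, n=model.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driving = np.cos(x)                              # large-scale "analysis" field
model = 0.5 * np.cos(x) + 0.2 * np.sin(20 * x)   # drifted large scale + small-scale detail
nudged = scale_selective_nudge(model, driving, k_cut=4, alpha=1.0)
```

With full relaxation (alpha = 1) the wavenumber-1 component of the nudged field matches the driving field exactly, while the wavenumber-20 detail is preserved from the model.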
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks?
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
Penn, Colin A.; Bearup, Lindsay A.; Maxwell, Reed M.; Clow, David W.
2016-01-01
The effects of mountain pine beetle (MPB)-induced tree mortality on a headwater hydrologic system were investigated using an integrated physical modeling framework with a high-resolution computational grid. Simulations of MPB-affected and unaffected conditions, each with identical atmospheric forcing for a normal water year, were compared at multiple scales to evaluate the effects of scale on MPB-affected hydrologic systems. Individual locations within the larger model were shown to maintain hillslope-scale processes affecting snowpack dynamics, total evapotranspiration, and soil moisture that are comparable to several field-based studies and previous modeling work. Hillslope-scale analyses also highlight the influence of compensating changes in evapotranspiration and snow processes. Reduced transpiration in the Grey Phase of MPB-induced tree mortality was offset by increased late-summer evaporation, while overall snowpack dynamics were more dependent on elevation effects than MPB-induced tree mortality. At the watershed scale, unaffected areas obscured the magnitude of MPB effects. Annual water yield from the watershed increased during Grey Phase simulations by 11 percent; a difference that would be difficult to diagnose with long-term gage observations that are complicated by inter-annual climate variability. The effects on hydrology observed and simulated at the hillslope scale can be further damped at the watershed scale, which spans more life zones and a broader range of landscape properties. These scaling effects may change under extreme conditions, e.g., increased total MPB-affected area or a water year with above average snowpack.
Monodisperse granular flows in viscous dispersions in a centrifugal acceleration field
NASA Astrophysics Data System (ADS)
Cabrera, Miguel Angel; Wu, Wei
2016-04-01
Granular flows are encountered in geophysical flows and innumerable industrial applications with particulate materials. When mixed with a fluid, a complex network of interactions between the particle- and fluid-phase develops, resulting in a compound material with a yet unclear physical behaviour. In the study of granular suspensions mixed with a viscous dispersion, the scaling of the stress-strain characteristics of the fluid phase needs to account for the level of inertia developed in experiments. However, the required model dimensions and amount of material become a main limitation for their study. In recent years, centrifuge modelling has been presented as an alternative for the study of particle-fluid flows in a reduced scaled model in an augmented acceleration field. By formulating simple scaling principles proportional to the equivalent acceleration Ng in the model, the resultant flows share many similarities with field events. In this work we study the scaling principles of the fluid phase and its effects on the flow of granular suspensions. We focus on the dense flow of a monodisperse granular suspension mixed with a viscous fluid phase, flowing down an inclined plane and being driven by a centrifugal acceleration field. The scaled model allows the continuous monitoring of the flow heights, velocity fields, basal pressure and mass flow rates at different Ng levels. The experiments successfully identify the effects of scaling the plastic viscosity of the fluid phase and its relation with the deposition of particles over the inclined plane, and support a discussion of the suitability of simulating particle-fluid flows in a centrifugal acceleration field.
2014-01-01
Background: The sore throat pain model has been conducted by different clinical investigators to demonstrate the efficacy of acute analgesic drugs in single-dose randomized clinical trials. The model used here was designed to study the multiple-dose safety and efficacy of lozenges containing flurbiprofen at 8.75 mg. Methods: Adults (n = 198) with moderate or severe acute sore throat and findings of pharyngitis on a Tonsillo-Pharyngitis Assessment (TPA) were randomly assigned to use either flurbiprofen 8.75 mg lozenges (n = 101) or matching placebo lozenges (n = 97) under double-blind conditions. Patients sucked one lozenge every three to six hours as needed, up to five lozenges per day, and rated symptoms on 100-mm scales: the Sore Throat Pain Intensity Scale (STPIS), the Difficulty Swallowing Scale (DSS), and the Swollen Throat Scale (SwoTS). Results: Reductions in pain (lasting for three hours) and in difficulty swallowing and throat swelling (for four hours) were observed after a single dose of the flurbiprofen 8.75 mg lozenge (P <0.05 compared with placebo). After using multiple doses over 24 hours, flurbiprofen-treated patients experienced a 59% greater reduction in throat pain, 45% less difficulty swallowing, and 44% less throat swelling than placebo-treated patients (all P <0.01). There were no serious adverse events. Conclusions: Utilizing the sore throat pain model with multiple doses over 24 hours, flurbiprofen 8.75 mg lozenges were shown to be an effective, well-tolerated treatment for sore throat pain. Other pharmacologic actions (reduced difficulty swallowing and reduced throat swelling) and overall patient satisfaction from the flurbiprofen lozenges were also demonstrated in this multiple-dose implementation of the sore throat pain model. Trial registration: This trial was registered with ClinicalTrials.gov, registration number: NCT01048866, registration date: January 13, 2010. PMID:24988909
Numerical renormalization group method for entanglement negativity at finite temperature
NASA Astrophysics Data System (ADS)
Shim, Jeongmin; Sim, H.-S.; Lee, Seung-Sup B.
2018-04-01
We develop a numerical method to compute the negativity, an entanglement measure for mixed states, between the impurity and the bath in quantum impurity systems at finite temperature. We construct a thermal density matrix by using the numerical renormalization group (NRG), and evaluate the negativity by implementing the NRG approximation that reduces computational cost exponentially. We apply the method to the single-impurity Kondo model and the single-impurity Anderson model. In the Kondo model, the negativity exhibits a power-law scaling at temperature much lower than the Kondo temperature and a sudden death at high temperature. In the Anderson model, the charge fluctuation of the impurity contributes to the negativity even at zero temperature when the on-site Coulomb repulsion of the impurity is finite, while at low temperature the negativity between the impurity spin and the bath exhibits the same power-law scaling behavior as in the Kondo model.
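The entanglement measure computed above has a compact textbook definition: take the partial transpose of the density matrix over one subsystem and sum the magnitudes of the negative eigenvalues. The sketch below shows this for a two-qubit mixed state; it illustrates the definition only, not the NRG construction of the thermal density matrix used in the paper.

```python
import numpy as np

# Negativity of a two-qubit state: partial transpose on qubit B,
# then sum the magnitudes of the negative eigenvalues.

def negativity(rho):
    """Negativity of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)      # indices (a, b, a', b')
    r_pt = r.transpose(0, 3, 2, 1)   # swap the two B indices: partial transpose
    eig = np.linalg.eigvalsh(r_pt.reshape(4, 4))
    return float(-eig[eig < 0].sum())

# Maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2)
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)

# Separable product state |00><00|
rho_prod = np.zeros((4, 4))
rho_prod[0, 0] = 1.0
```

For the Bell state the partial transpose has eigenvalues {1/2, 1/2, 1/2, -1/2}, giving negativity 1/2, while any separable state has a positive partial transpose and negativity 0.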
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heo, Yeonsook; Augenbroe, Godfried; Graziano, Diane
2015-05-01
The increasing interest in retrofitting of existing buildings is motivated by the need to make a major contribution to enhancing building energy efficiency and reducing energy consumption and CO2 emission by the built environment. This paper examines the relevance of calibration in model-based analysis to support decision-making for energy and carbon efficiency retrofits of individual buildings and portfolios of buildings. The authors formulate a set of real retrofit decision-making situations and evaluate the role of calibration by using a case study that compares predictions and decisions from an uncalibrated model with those of a calibrated model. The case study illustrates both the mechanics and outcomes of a practical alternative to the expert- and time-intense application of dynamic energy simulation models for large-scale retrofit decision-making under uncertainty.
Zhang, Ling Yu; Liu, Zhao Gang
2017-12-01
Based on data collected from 108 permanent plots of the forest resources survey in Maoershan Experimental Forest Farm during 2004-2016, this study investigated the spatial distribution of recruitment trees in natural secondary forest by global Poisson regression and geographically weighted Poisson regression (GWPR) with four bandwidths of 2.5, 5, 10 and 15 km. The simulation effects of the five regressions and the factors influencing recruitment trees in stands were analyzed, and the spatial autocorrelation of the regression residuals was described at global and local levels using Moran's I. The results showed that the spatial distribution of the number of recruitment trees in natural secondary forest was significantly influenced by stand and topographic factors, especially average DBH. The GWPR model at small scale (2.5 km) had high model-fitting accuracy, generated a large range of model parameter estimates, and captured the localized spatial distribution effect of the model parameters. The GWPR models at small scale (2.5 and 5 km) produced a small range of model residuals, and the stability of the model was improved. The global spatial autocorrelation of the GWPR model residuals at the small scale (2.5 km) was the lowest, and the local spatial autocorrelation was significantly reduced, forming an ideal spatial distribution pattern of small clusters with different observations. The local model at small scale (2.5 km) was much better than the global model in simulating the spatial distribution of recruitment tree numbers.
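The residual-autocorrelation check described above uses the global Moran's I statistic, I = (n/S0) · Σᵢⱼ wᵢⱼ zᵢ zⱼ / Σᵢ zᵢ², where z are deviations from the mean and S0 is the sum of the spatial weights. The sketch below uses a toy binary adjacency on a line of sites; the study itself would use distance-based weights between the 108 plots.

```python
# Global Moran's I on a toy spatial layout. values[i] is the observation at
# site i; weights[i][j] is the spatial weight between sites i and j.

def morans_i(values, weights):
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    s0 = sum(weights[i][j] for i in range(n) for j in range(n))
    num = sum(weights[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    den = sum(zi * zi for zi in z)
    return (n / s0) * (num / den)

# Binary rook adjacency for 4 sites in a row
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

alternating = [1, -1, 1, -1]   # perfectly dispersed pattern: negative I
clustered = [1, 1, -1, -1]     # spatially clustered pattern: positive I
```

A value near zero indicates no spatial autocorrelation in the residuals, which is the "ideal" pattern the abstract reports for the 2.5 km bandwidth.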
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Purba, Victor; Jafarpour, Saber
Given that next-generation infrastructures will contain large numbers of grid-connected inverters and these interfaces will be satisfying a growing fraction of system load, it is imperative to analyze the impacts of power electronics on such systems. However, since each inverter model has a relatively large number of dynamic states, it would be impractical to execute complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. That is, we show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as an individual inverter in the paralleled system. Numerical simulations validate the reduced-order models.
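The aggregation idea above can be sketched with the filter elements alone: when components scale linearly with power rating, N identical parallel inverters behave like one aggregate unit whose series elements are divided by N and whose shunt capacitance is multiplied by N. The component values below are illustrative placeholders, not parameters from the paper, and this sketch covers only the passive filter, not the controllers or PLL.

```python
import math

# Aggregate filter parameters for n identical parallel-connected inverters,
# assuming linear scaling of components with power rating (illustrative values).

def aggregate(params, n):
    return {
        "L": params["L"] / n,   # series inductances in parallel
        "R": params["R"] / n,   # series resistances in parallel
        "C": params["C"] * n,   # shunt capacitances in parallel
    }

def series_admittance(params, omega):
    """Admittance of the series R-L branch at angular frequency omega."""
    return 1.0 / (params["R"] + 1j * omega * params["L"])

one = {"L": 1e-3, "R": 0.1, "C": 1e-5}
agg = aggregate(one, n=10)
omega = 2 * math.pi * 60.0

# Ten parallel branches present exactly the admittance of the aggregate branch
y_parallel = 10 * series_admittance(one, omega)
y_aggregate = series_admittance(agg, omega)
```

The equality of the two admittances is the passive-circuit face of the structure-preservation claim: the aggregate model has the same form (and state count) as a single inverter.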
Komini Babu, S.; Chung, H. T.; Wu, G.; ...
2014-08-18
This paper reports the development of a model for simulating polymer electrolyte fuel cells (PEFCs) with non-precious metal catalyst (NPMC) cathodes. NPMCs present an opportunity to dramatically reduce the cost of PEFC electrodes by removing the costly Pt catalyst. To address the significant transport losses in thick NPMC cathodes (ca. >60 µm), we developed a hierarchical electrode model that resolves the unique structure of the NPMCs we studied. A unique feature of the approach is the integration of the model with morphology data extracted from nano-scale resolution X-ray computed tomography (nano-CT) imaging of the electrodes. A notable finding is the impact of the liquid water accumulation in the electrode and the significant performance improvement possible if electrode flooding is mitigated.
NASA Technical Reports Server (NTRS)
Jones, Gregory; Balakrishna, Sundareswara; DeMoss, Joshua; Goodliff, Scott; Bailey, Matthew
2015-01-01
Pressure fluctuations have been measured over the course of several tests in the National Transonic Facility to study unsteady phenomena both with and without the influence of a model. Broadband spectral analysis will be used to characterize the length scales of the tunnel. Special attention will be given to the large-scale, low frequency data that influences the Mach number and force and moment variability. This paper will also discuss the significance of the vorticity and sound fields that can be related to the Common Research Model and will also highlight the comparisons to an empty tunnel configuration. The effectiveness of vortex generators placed at the interface of the test section and wind tunnel diffuser showed promise in reducing the empty tunnel unsteadiness; however, the vortex generators were ineffective in the presence of a model.
Dash, Ranjan K; Li, Yanjun; Kim, Jaeyeon; Beard, Daniel A; Saidel, Gerald M; Cabrera, Marco E
2008-09-09
Control mechanisms of cellular metabolism and energetics in skeletal muscle that may become evident in response to physiological stresses such as reduction in blood flow and oxygen supply to mitochondria can be quantitatively understood using a multi-scale computational model. The analysis of dynamic responses from such a model can provide insights into mechanisms of metabolic regulation that may not be evident from experimental studies. For this purpose, a physiologically-based, multi-scale computational model of skeletal muscle cellular metabolism and energetics was developed to describe dynamic responses of key chemical species and reaction fluxes to muscle ischemia. The model, which incorporates key transport and metabolic processes and subcellular compartmentalization, is based on dynamic mass balances of 30 chemical species in both capillary blood and tissue cell (cytosol and mitochondria) domains. The reaction fluxes in cytosol and mitochondria are expressed in terms of a general phenomenological Michaelis-Menten equation involving the compartmentalized energy controller ratios ATP/ADP and NADH/NAD(+). The unknown transport and reaction parameters in the model are estimated simultaneously by minimizing the differences between available in vivo experimental data on muscle ischemia and corresponding model outputs, coupled with resting linear flux balance constraints, using a robust, nonlinear, constraint-based, reduced gradient optimization algorithm. With the optimal parameter values, the model is able to simulate dynamic responses to reduced blood flow and oxygen supply to mitochondria associated with muscle ischemia of several key metabolite concentrations and metabolic fluxes in the subcellular cytosolic and mitochondrial compartments, some that can be measured and others that cannot be measured with current experimental techniques.
The model can be applied to test complex hypotheses involving dynamic regulation of cellular metabolism and energetics in skeletal muscle during physiological stresses such as ischemia, hypoxia, and exercise.
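The phenomenological flux form mentioned above can be illustrated as a Michaelis-Menten term in the substrate multiplied by a saturating term in an energy controller ratio (ATP/ADP or NADH/NAD+). The parameter values and the exact functional form below are placeholders for illustration, not the calibrated model of the paper.

```python
# Illustrative phenomenological reaction flux: substrate saturation term
# times an energy-controller gating term. All parameters are placeholders.

def reaction_flux(substrate, controller_ratio, v_max=1.0, km=0.5, k_ctrl=1.0):
    mm_term = substrate / (km + substrate)                       # substrate saturation
    ctrl_term = controller_ratio / (k_ctrl + controller_ratio)   # energy-state gating
    return v_max * mm_term * ctrl_term

# A lower controller ratio down-regulates the flux it gates, which is how
# this form couples cellular energy state to metabolic reaction rates.
baseline = reaction_flux(substrate=1.0, controller_ratio=2.0)
perturbed = reaction_flux(substrate=1.0, controller_ratio=0.5)
```

Because both factors saturate, the flux stays bounded by v_max while remaining sensitive to changes in either the substrate pool or the controller ratio, which is the behavior needed to simulate stress responses such as ischemia.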
Map scale effects on estimating the number of undiscovered mineral deposits
Singer, D.A.; Menzie, W.D.
2008-01-01
Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked in that some frequency distributions can be generated by processes randomly in space whereas others are generated by processes suggesting clustering in space. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and associated inclusions of non-permissive or covered geological settings. More generalized map scales are more likely to cause inclusion of geologic settings that are not really permissive for the deposit type, or that include unreported cover over permissive areas, resulting in the appearance of deposit clustering. Thus, overly generalized map scales can cause deposits to appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on estimates of numbers of deposits: the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed because the number of inflated zeros would decrease with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut permissive area size of a porphyry copper tract to 29% and a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are indicated by significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of number of undiscovered deposits.
Exploration enterprises benefit from reduced areas requiring detailed and expensive exploration, and land-use planners benefit from reduced areas of concern. © 2008 International Association for Mathematical Geology.
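The zero-inflated Poisson model proposed above has a simple probability mass function: with probability π a tract yields a structural zero (a non-permissive setting swept in by a generalized map), otherwise counts follow Poisson(λ). The parameter values below are illustrative only.

```python
from math import exp, factorial

# Zero-inflated Poisson pmf: P(0) = pi + (1 - pi) * e^{-lam};
# P(k) = (1 - pi) * lam^k * e^{-lam} / k!  for k >= 1.

def zip_pmf(k, lam, pi):
    poisson = exp(-lam) * lam ** k / factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

lam, pi = 2.0, 0.4
# A generalized map (pi = 0.4) inflates the zero class relative to plain Poisson
p0_zip = zip_pmf(0, lam, pi)
p0_poisson = zip_pmf(0, lam, 0.0)
total = sum(zip_pmf(k, lam, pi) for k in range(50))
```

More detailed mapping corresponds to shrinking π toward zero, at which point the excess zeros (and the apparent clustering they induce) disappear and the model reduces to an ordinary Poisson distribution.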
Stormwater pollution in suburban ecosystems: the role of residential rooftop connectivity
NASA Astrophysics Data System (ADS)
Miles, B.; Band, L. E.
2013-12-01
Stormwater pollution has been recognized as a major concern of urban sustainability. Understanding interactions between urban landcover and stormwater pollution can be advanced through the development of spatially explicit ecohydrology models that simulate fine-scale residential stormwater management; this requires high-resolution LIDAR and landcover data, as well as field observation at the household scale. The objective of my research is to improve understanding of how parcel-scale heterogeneity of impervious and pervious surfaces affects stormwater volume. In support of this objective, I present results from work to: (1) perform field observation of existing patterns of residential rooftop connectivity to nearby impervious surfaces; (2) modify the Regional Hydro-Ecological Simulation System (RHESSys) to explicitly represent non-topographic surface flow routing of rooftops; and (3) develop RHESSys models for urban-suburban headwater watersheds in Baltimore, MD (as part of the Baltimore Ecosystem Study (BES) NSF Long-Term Ecological Research (LTER) site) and Durham, NC (as part of the NSF Urban Long-Term Research Area (ULTRA) program). I use these models to simulate stormwater volume resulting from both baseline residential rooftop impervious connectivity and for disconnection scenarios (e.g. roof drainage to lawn v. engineered rain garden, upslope v. riparian). This research will help to improve representation of fine-scale surface flow features in urban ecohydrology modeling while informing policy decisions over how best to implement parcel-scale retrofits in existing neighborhoods to reduce stormwater pollution at the watershed scale.
Flexible twist for pitch control in a high altitude long endurance aircraft with nonlinear response
NASA Astrophysics Data System (ADS)
Bond, Vanessa L.
Information dominance is the key motivator for employing high-altitude long-endurance (HALE) aircraft to provide continuous coverage in theaters of operation. A joined-wing configuration of such a craft gives the advantage of a platform for higher-resolution sensors. Design challenges emerge with the structural flexibility that arises from a long-endurance aircraft design. The goal of this research was to demonstrate that scaling the nonlinear response of a full-scale finite element model was possible if the model was aeroelastically and "nonlinearly" scaled. The research within this dissertation showed that using the first three modes and the first buckling modes was not sufficient for proper scaling. In addition to analytical scaling, several experiments were conducted to understand and overcome design challenges of HALE aircraft. One such challenge is countered by eliminating pitch control surfaces and replacing them with an aft-wing twist concept. This design option was physically realized through wind tunnel measurement of forces, moments and pressures on a subscale experimental model. This design and experiment demonstrated that pitch control with aft-wing twist is feasible. Another challenge is predicting the nonlinear response of long-endurance aircraft. This was addressed by experimental validation of modeling nonlinear response on a subscale experimental model. It is important to be able to scale nonlinear behavior in this type of craft due to its highly flexible nature. The validation accomplished during this experiment on a subscale model will reduce technical risk for full-scale development of such pioneering craft. It is also important to experimentally reproduce the air loads following the wing as it deforms. Nonlinearities can be attributed to these follower forces that might otherwise be overlooked. This was found to be a significant influence in HALE aircraft, including in the case study of the FEM and experimental models herein.
Computational and experimental study of airflow around a fan powered UVGI lamp
NASA Astrophysics Data System (ADS)
Kaligotla, Srikar; Tavakoli, Behtash; Glauser, Mark; Ahmadi, Goodarz
2011-11-01
The quality of the indoor air environment is very important for improving the health of occupants and reducing personal exposure to hazardous pollutants. An effective way of controlling air quality is to eliminate airborne bacteria and viruses or to reduce their emissions. Ultraviolet Germicidal Irradiation (UVGI) lamps can effectively reduce these bio-contaminants in an indoor environment, but the efficiency of these systems depends on the airflow in and around the device. UVGI lamps would not be as effective in stagnant environments as they are when moving air brings the bio-contaminants into their irradiation region. Introducing a fan into the UVGI system would augment the system's kill rate. Airflows in ventilated spaces are quite complex due to the vast range of length and velocity scales. The purpose of this research is to study these complex airflows using CFD techniques and to validate the computational model against airflow measurements around the device obtained with Particle Image Velocimetry (PIV). The experimental results, including mean velocities, length scales and RMS values of fluctuating velocities, are used in the CFD validation. Comparisons of these data at different locations around the device with the CFD model predictions were performed, and good agreement was observed.
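The validation metrics named above (mean velocities and RMS values of fluctuating velocities) reduce to simple statistics over the PIV samples at each measurement point. A minimal sketch, with hypothetical velocity data rather than the study's measurements:

```python
import numpy as np

def piv_validation_stats(u_samples):
    """Mean and RMS fluctuating velocity from a time series of PIV
    velocity samples at one measurement point (hypothetical data)."""
    u = np.asarray(u_samples, dtype=float)
    u_mean = u.mean()
    u_rms = np.sqrt(np.mean((u - u_mean) ** 2))  # RMS of fluctuations about the mean
    return u_mean, u_rms

# Hypothetical streamwise velocity samples (m/s) near the lamp outlet
u_mean, u_rms = piv_validation_stats([1.9, 2.1, 2.0, 2.2, 1.8])
```

The same statistics, evaluated at matching locations in the CFD solution, give the quantities being compared in the validation.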
Fine scale vegetation classification and fuel load mapping for prescribed burning
Andrew D. Bailey; Robert Mickler
2007-01-01
Fire managers in the Coastal Plain of the Southeastern United States use prescribed burning as a tool to reduce fuel loads in a variety of vegetation types, many of which have elevated fuel loads due to a history of fire suppression. While standardized fuel models are useful in prescribed burn planning, those models do not quantify site-specific fuel loads that reflect...
NASA Astrophysics Data System (ADS)
Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.
2016-12-01
Visualizing and measuring flow in shale matrix using in situ synchrotron X-ray microtomography
NASA Astrophysics Data System (ADS)
Kohli, A. H.; Kiss, A. M.; Kovscek, A. R.; Bargar, J.
2017-12-01
Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. 
Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data are used to develop a framework to upscale and benchmark pore-scale models.
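The pulse-decay permeability measurement described above can be sketched numerically: the differential pressure across the sample decays approximately exponentially, and the fitted decay rate is inverted for permeability. This is the standard Brace-type analysis, not the authors' code; the two-reservoir geometry and all values below are assumptions:

```python
import numpy as np

def pulse_decay_permeability(t, dp, mu, beta, L, A, V_up, V_down):
    """Estimate permeability (m^2) from a pulse-decay test.
    Fits the exponential decay rate alpha of the differential pressure dp(t),
    then inverts alpha = k*A/(mu*beta*L) * (1/V_up + 1/V_down).
    mu: gas viscosity, beta: compressibility, L/A: sample length/area,
    V_up/V_down: reservoir volumes. SI units; values hypothetical."""
    alpha = -np.polyfit(t, np.log(dp), 1)[0]  # decay rate from slope of ln(dp)
    return alpha * mu * beta * L / (A * (1.0 / V_up + 1.0 / V_down))
```

With measured pressure traces at several pore pressures, repeating this fit quantifies the effective-stress dependence described in the abstract.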
Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)
Shea, Tracey L; Tennant, Alan; Pallant, Julie F
2009-01-01
Background: There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods: The DASS-21 was administered to 420 participants, with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results: To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion: The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study. PMID:19426512
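For readers unfamiliar with the Rasch partial credit model fitted here, the category probabilities for a polytomous item take a simple closed form in the person location and the item's step difficulties. A minimal sketch with illustrative values (not DASS-21 estimates):

```python
import math

def pcm_probabilities(theta, deltas):
    """Category probabilities for one polytomous item under the Rasch
    partial credit model. theta: person location; deltas: step difficulties
    (illustrative values, not estimates from the DASS-21 study)."""
    # Unnormalized weight for category x is exp(sum_{j<=x} (theta - delta_j));
    # the empty sum for category 0 gives weight 1.
    weights = [1.0]
    s = 0.0
    for d in deltas:
        s += theta - d
        weights.append(math.exp(s))
    total = sum(weights)
    return [w / total for w in weights]
```

Software such as RUMM2020 estimates the step difficulties and person locations from the data and then assesses fit of the observed responses to these model probabilities.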
Micro-CT Pore Scale Study Of Flow In Porous Media: Effect Of Voxel Resolution
NASA Astrophysics Data System (ADS)
Shah, S.; Gray, F.; Crawshaw, J.; Boek, E.
2014-12-01
In the last few years, pore scale studies have become the key to understanding the complex fluid flow processes in the fields of groundwater remediation, hydrocarbon recovery and environmental issues related to carbon storage and capture. A pore scale study is often comprised of two key procedures: 3D pore scale imaging and numerical modelling techniques. The essence of a pore scale study is to test the physics implemented in a model of complicated fluid flow processes at one scale (microscopic) and then apply the model to solve the problems associated with water resources and oil recovery at other scales (macroscopic and field). However, the process of up-scaling from the pore scale to the macroscopic scale has encountered many challenges due to both pore scale imaging and modelling techniques. Due to the technical limitations in the imaging method, there is always a compromise between the spatial (voxel) resolution and the physical volume of the sample (field of view, FOV) to be scanned by the imaging methods, specifically X-ray micro-CT (XMT) in our case. In this study, a careful analysis was done to understand the effect of voxel size, using XMT to image the 3D pore space of a variety of porous media from sandstones to carbonates scanned at different voxel resolutions (4.5 μm, 6.2 μm, 8.3 μm and 10.2 μm) while keeping the scanned FOV constant for all the samples. We systematically segment the micro-CT images into three phases, the macro-pore phase, an intermediate phase (unresolved micro-pores + grains) and the grain phase, and then study the effect of voxel size on the structure of the macro-pore and intermediate phases and on the fluid flow properties using lattice-Boltzmann (LB) and pore network (PN) modelling methods. We have also applied a numerical coarsening algorithm (up-scaling method) to reduce the computational power and time required to accurately predict the flow properties using the LB and PN methods.
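The numerical coarsening step mentioned above can be illustrated by block-averaging a segmented binary image into pore fractions per coarse voxel. This is a generic sketch of such an up-scaling operation, assuming simple factor-of-two block averaging rather than the authors' specific algorithm:

```python
import numpy as np

def coarsen_binary_image(pore, factor=2):
    """Numerically coarsen a 3D binary pore(1)/grain(0) image by averaging
    factor^3 voxel blocks into a pore fraction per coarse voxel.
    A sketch of a coarsening step, not the study's algorithm."""
    nz, ny, nx = pore.shape
    f = factor
    # Trim to a shape divisible by the factor, then reshape into blocks
    blocks = pore[: nz - nz % f, : ny - ny % f, : nx - nx % f].reshape(
        nz // f, f, ny // f, f, nx // f, f
    )
    return blocks.mean(axis=(1, 3, 5))  # pore fraction of each coarse voxel
```

A useful sanity check on any such scheme is that total porosity is preserved: the mean of the coarse image equals the mean of the fine image when the shape is divisible by the factor.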
Improved simulation of tropospheric ozone by a global-multi-regional two-way coupling model system
NASA Astrophysics Data System (ADS)
Yan, Yingying; Lin, Jintai; Chen, Jinxuan; Hu, Lu
2016-02-01
Small-scale nonlinear chemical and physical processes over pollution source regions affect the tropospheric ozone (O3), but these processes are not captured by current global chemical transport models (CTMs) and chemistry-climate models that are limited by coarse horizontal resolutions (100-500 km, typically 200 km). These models tend to contain large (and mostly positive) tropospheric O3 biases in the Northern Hemisphere. Here we use the recently built two-way coupling system of the GEOS-Chem CTM to simulate the regional and global tropospheric O3 in 2009. The system couples the global model (at 2.5° long. × 2° lat.) and its three nested models (at 0.667° long. × 0.5° lat.) covering Asia, North America and Europe, respectively. Specifically, the nested models take lateral boundary conditions (LBCs) from the global model, better capture small-scale processes and feed back to modify the global model simulation within the nested domains, with a subsequent effect on their LBCs. Compared to the global model alone, the two-way coupled system better simulates the tropospheric O3 both within and outside the nested domains, as found by evaluation against a suite of ground (1420 sites from the World Data Centre for Greenhouse Gases (WDCGG), the United States National Oceanic and Atmospheric Administration (NOAA) Earth System Research Laboratory Global Monitoring Division (GMD), the Chemical Coordination Centre of European Monitoring and Evaluation Programme (EMEP), and the United States Environmental Protection Agency Air Quality System (AQS)), aircraft (the High-performance Instrumented Airborne Platform for Environmental Research (HIAPER) Pole-to-Pole Observations (HIPPO) and Measurement of Ozone and Water Vapor by Airbus In-Service Aircraft (MOZAIC)) and satellite measurements (two Ozone Monitoring Instrument (OMI) products).
The two-way coupled simulation enhances the correlation in day-to-day variation of afternoon mean surface O3 with the ground measurements from 0.53 to 0.68, and it reduces the mean model bias from 10.8 to 6.7 ppb. Regionally, the coupled system reduces the bias by 4.6 ppb over Europe, 3.9 ppb over North America and 3.1 ppb over other regions. The two-way coupling brings O3 vertical profiles much closer to the HIPPO (for remote areas) and MOZAIC (for polluted regions) data, reducing the tropospheric (0-9 km) mean bias by 3-10 ppb at most MOZAIC sites and by 5.3 ppb for HIPPO profiles. The two-way coupled simulation also reduces the global tropospheric column ozone by 3.0 DU (9.5 %, annual mean), bringing them closer to the OMI data in all seasons. Additionally, the two-way coupled simulation also reduces the global tropospheric mean hydroxyl radical by 5 % with improved estimates of methyl chloroform and methane lifetimes. Simulation improvements are more significant in the Northern Hemisphere, and are mainly driven by improved representation of spatial inhomogeneity in chemistry/emissions. Within the nested domains, the two-way coupled simulation reduces surface ozone biases relative to typical GEOS-Chem one-way nested simulations, due to much improved LBCs. The bias reduction is 1-7 times the bias reduction from the global to the one-way nested simulation. Improving model representations of small-scale processes is important for understanding the global and regional tropospheric chemistry.
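The two summary metrics quoted for the surface evaluation (mean model bias in ppb and the day-to-day correlation with ground measurements) are straightforward to compute from paired model-observation series. A minimal sketch with hypothetical values:

```python
import numpy as np

def evaluation_stats(model, obs):
    """Mean bias and Pearson correlation between simulated and observed
    surface O3, the two summary metrics quoted for the coupled system.
    Inputs are hypothetical paired afternoon-mean series (ppb)."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = (model - obs).mean()          # mean model bias (ppb)
    r = np.corrcoef(model, obs)[0, 1]    # day-to-day correlation
    return bias, r

# Hypothetical three-day series at one site
bias, r = evaluation_stats([40, 50, 60], [35, 45, 55])
```

Aggregating these statistics over the 1420 ground sites yields site-network numbers of the kind reported (e.g. bias falling from 10.8 to 6.7 ppb).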
NASA Technical Reports Server (NTRS)
Herkes, William
2000-01-01
Acoustic and propulsion performance testing of a model-scale Axisymmetric Coannular Ejector nozzle was conducted in the Boeing Low-Speed Aeroacoustic Facility. This nozzle is a plug nozzle with an ejector designed to provide aspiration of about 20% of the engine flow. A variety of mixing enhancers were designed to promote mixing of the engine and aspirated flows; these included delta tabs, tone-injection rods, and wheeler ramps. This report addresses the acoustic aspects of the testing. The spectral characteristics of the various configurations of the nozzle are examined on a model-scale basis, including identifying particular noise sources contributing to the spectra. The data are then projected to full-scale flyover conditions to evaluate the effectiveness of the nozzle, and of the various mixing enhancers, in reducing the Effective Perceived Noise Levels.
Model diagnostics in reduced-rank estimation
Chen, Kun
2016-01-01
Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they themselves could be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage scores and generalized information scores, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to an exact decomposition of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches. PMID:28003860
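As background, the classical (non-robust) reduced-rank regression fit that such diagnostics build on can be written in a few lines: an OLS fit followed by projection of the fitted values onto their top singular directions. This is a sketch of the standard estimator, not the paper's robust procedure:

```python
import numpy as np

def reduced_rank_fit(X, Y, r):
    """Rank-r reduced-rank regression coefficient matrix.
    Classical solution: OLS fit, then project the responses onto the top-r
    right singular directions of the fitted value matrix X @ B_ols.
    Sketch only; the paper's diagnostics are built on top of such fits."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # OLS coefficients
    U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P_r = Vt[:r].T @ Vt[:r]   # projector onto top-r response directions
    return B_ols @ P_r        # coefficient matrix of rank at most r
```

Outliers distort the singular directions of `X @ B_ols`, which is exactly why a few anomalies can corrupt the recovered low-rank structure, motivating the leverage- and information-score diagnostics.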
Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks
NASA Astrophysics Data System (ADS)
Leube, P.; Nowak, W.; Sanchez-Vila, X.
2013-12-01
High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. 
We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of similar magnitude to the local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
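The block-wise temporal moments (TM) used to calibrate the MRMT model are simple statistics of particle arrival times at each block interface. A minimal sketch for the first two moments, with hypothetical particle data:

```python
import numpy as np

def temporal_moments(arrival_times):
    """First two temporal moments of particle arrival times at a block
    interface: mean arrival time and central second moment (spread).
    These are the kinds of quantities matched against an MRMT model in
    block upscaling; the data here are hypothetical."""
    t = np.asarray(arrival_times, dtype=float)
    m1 = t.mean()                     # mean arrival time
    m2c = np.mean((t - m1) ** 2)      # central second moment (tailing/spread)
    return m1, m2c
```

Matching the MRMT parameters so that the coarse model reproduces these moments is what transfers the sub-block non-Fickian behavior to the block scale.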
Reduced-form air quality modeling for community-scale applications
Transportation plays an important role in modern society, but its impact on air quality has been shown to have significant adverse effects on public health. Numerous reviews (HEI, CDC, WHO) summarizing findings of hundreds of studies conducted mainly in the last decade, conclude ...
REDUCING ENERGY AND SPACE REQUIREMENTS BY ELECTROSTATIC AUGMENTATION OF A PULSE-JET FABRIC FILTER
In work performed several years ago by EPA's research lab then known as the Air and Energy Engineering Research Laboratory (EPA/AEERL), small-scale testing and modeling of electrostatically stimulated fabric filtration (ESFF) indicated that substantial performance benefits could ...
Brain Mechanisms Underlying Individual Differences in Reaction to Stress: An Animal Model
1988-10-29
Schooler, et al., 1976; Gershon & Buchsbaum, 1977; Buchsbaum, et al., 1977), personality scales of extraversion- introversion (Haier, 1984) and sensation...exploratory and learned to bar press more quickly and efficiently. Reducers with a lower inhibitory threshold learned the differential reinforcement of
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Effect of nickel on point defects diffusion in Fe – Ni alloys
Anento, Napoleon; Serra, Anna; Osetsky, Yury N.
2017-05-05
Iron-nickel alloys are promising structural materials for nuclear energy applications because of their good radiation damage tolerance and mechanical properties. Understanding experimentally observed features such as the effect of Ni content on radiation defect evolution is essential for developing predictive models of radiation damage. Recently, an atomic-scale modelling study revealed one particular mechanism of the Ni effect, related to the reduced mobility of clusters of interstitial atoms in Fe-Ni alloys. In this paper we present results of a microsecond-scale molecular dynamics study of point defect, i.e. vacancy and self-interstitial atom, diffusion in Fe-Ni alloys. It is found that the addition of Ni atoms affects diffusion processes: diffusion of vacancies is enhanced in the presence of Ni, whereas diffusion of interstitials is reduced, and these effects increase at high Ni concentration and low temperature. On this basis, the role of Ni solutes in radiation damage evolution in Fe-Ni alloys is discussed.
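A standard post-processing step in molecular dynamics studies of defect diffusion is extracting a diffusion coefficient from mean squared displacement (MSD) data via the Einstein relation, MSD = 2·d·D·t in d dimensions. The sketch below is generic MD practice, not the authors' workflow, and uses synthetic data:

```python
import numpy as np

def diffusion_coefficient(msd, t, dim=3):
    """Estimate a diffusion coefficient from mean squared displacement
    via the Einstein relation MSD = 2*dim*D*t. A generic MD
    post-processing step; data and units here are hypothetical."""
    slope = np.polyfit(t, msd, 1)[0]   # slope of the linear MSD-vs-t regime
    return slope / (2.0 * dim)
```

Comparing D extracted this way for vacancies and interstitials at several Ni concentrations and temperatures is how enhancement or reduction of defect mobility is quantified.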
NASA Technical Reports Server (NTRS)
Tolhurst, William H., Jr.; Hickey, David H.; Aoyagi, Kiyoshi
1961-01-01
Wind-tunnel tests have been conducted on a large-scale model of a swept-wing jet transport type airplane to study the factors affecting exhaust gas ingestion into the engine inlets when thrust reversal is used during ground roll. The model was equipped with four small jet engines mounted in nacelles beneath the wing. The tests included studies of both cascade and target type reversers. The data obtained included the free-stream velocity at the occurrence of exhaust gas ingestion in the outboard engine and the increment of drag due to thrust reversal for various modifications of thrust reverser configuration. Motion picture films of smoke flow studies were also obtained to supplement the data. The results show that the free-stream velocity at which ingestion occurred in the outboard engines could be reduced considerably, by simple modifications to the reversers, without reducing the effective drag due to reversed thrust.
NASA Astrophysics Data System (ADS)
Cassani, Mary Kay Kuhr
The objective of this study was to evaluate the effect of two pedagogical models used in general education science on non-majors' science teaching self-efficacy. Science teaching self-efficacy can be influenced by inquiry and cooperative learning, through cognitive mechanisms described by Bandura (1997). The Student Centered Activities for Large Enrollment Undergraduate Programs (SCALE-UP) model of inquiry and cooperative learning incorporates cooperative learning and inquiry-guided learning in large enrollment combined lecture-laboratory classes (Oliver-Hoyo & Beichner, 2004). SCALE-UP was adopted by a small but rapidly growing public university in the southeastern United States in three undergraduate, general education science courses for non-science majors in the Fall 2006 and Spring 2007 semesters. Students in these courses were compared with students in three other general education science courses for non-science majors taught with the standard teaching model at the host university. The standard model combines lecture and laboratory in the same course, with smaller enrollments and utilizes cooperative learning. Science teaching self-efficacy was measured using the Science Teaching Efficacy Belief Instrument - B (STEBI-B; Bleicher, 2004). A science teaching self-efficacy score was computed from the Personal Science Teaching Efficacy (PTSE) factor of the instrument. Using non-parametric statistics, no significant difference was found between teaching models, between genders, within models, among instructors, or among courses. The number of previous science courses was significantly correlated with PTSE score. Student responses to open-ended questions indicated that students felt the larger enrollment in the SCALE-UP room reduced individual teacher attention but that the large round SCALE-UP tables promoted group interaction. 
Students responded positively to cooperative and hands-on activities, and would encourage inclusion of more such activities in all of the courses. The large enrollment SCALE-UP model as implemented at the host university did not increase science teaching self-efficacy of non-science majors, as hypothesized. This was likely due to limited modification of standard cooperative activities according to the inquiry-guided SCALE-UP model. It was also found that larger SCALE-UP enrollments did not decrease science teaching self-efficacy when standard cooperative activities were used in the larger class.
NASA Astrophysics Data System (ADS)
Conti, Roberto; Meli, Enrico; Pugi, Luca; Malvezzi, Monica; Bartolini, Fabio; Allotta, Benedetto; Rindi, Andrea; Toni, Paolo
2012-05-01
Scaled roller rigs used for railway applications play a fundamental role in the development of new technologies and new devices, combining the hardware in the loop (HIL) benefits with the reduction of the economic investments. The main problem of the scaled roller rig with respect to full-scale ones is the increased complexity introduced by the scaling factors. For this reason, before building the test rig, the development of a software model of the HIL system can be useful to analyse the system behaviour in different operative conditions. One has to consider the multi-body behaviour of the scaled roller rig, the controller and the model of the virtual vehicle, whose dynamics have to be reproduced on the rig. The main purpose of this work is the development of a complete model that satisfies the previous requirements and in particular the performance analysis of the controller and of the dynamical behaviour of the scaled roller rig when some disturbances are simulated with low adhesion conditions. Since the scaled roller rig will be used to simulate degraded adhesion conditions, an accurate and realistic wheel-roller contact model also has to be included in the model. The contact model consists of two parts: the contact point detection and the adhesion model. The first part is based on a numerical method described in some previous studies for the wheel-rail case and modified to simulate the three-dimensional contact between revolute surfaces (wheel-roller). The second part consists of the evaluation of the contact forces by means of the Hertz theory for the normal problem and the Kalker theory for the tangential problem. Some numerical tests were performed, in particular low adhesion conditions were simulated, and bogie hunting and dynamical imbalance of the wheelsets were introduced. The tests were devoted to verifying the robustness of the control system with respect to some of the most frequent disturbances that may influence the roller rig dynamics.
In particular, we verified that the wheelset imbalance could significantly influence system performance; to reduce the effect of this disturbance, a multistate filter was designed.
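The two-part force evaluation named above (Hertz theory for the normal problem, Kalker linear theory for the tangential problem) can be sketched in simplified form. The function names and the sample stiffness/creep coefficients below are illustrative placeholders, not values from the paper:

```python
def hertz_normal_force(delta, k_h):
    """Hertzian point contact (normal problem): F_n = k_h * delta**1.5
    for penetration delta > 0; zero once the surfaces separate."""
    return k_h * max(delta, 0.0) ** 1.5


def kalker_longitudinal_creep_force(creepage, shear_mod, a, b, c11):
    """Kalker linear theory (tangential problem, longitudinal direction):
    F_x = -G * a * b * c11 * xi, valid for small creepage xi before the
    force saturates at the adhesion limit (the degraded-adhesion regime
    studied in the paper requires a saturation law on top of this)."""
    return -shear_mod * a * b * c11 * creepage
```

In a full adhesion model the linear Kalker force would be clipped at the friction bound mu * F_n, which is where low-adhesion conditions enter.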
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
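The role the precomputed sensitivities play in the optimization can be illustrated with a single linearized update step. This is a generic Gauss-Newton-style sketch, not the actual JWST validation pipeline:

```python
import numpy as np


def parameter_update(p0, t_model, t_test, S):
    """One linearized update step: with sensitivity matrix
    S[i, j] = dT_i / dp_j evaluated at p0, approximate
    T(p) ~ T(p0) + S (p - p0) and solve the least-squares problem
    min || t_test - t_model - S dp ||^2 for the correction dp."""
    dp, *_ = np.linalg.lstsq(S, t_test - t_model, rcond=None)
    return p0 + dp
```

Iterating such steps, with parameter bounds and regularization added, is one standard route to the kind of automated search for an optimal parameter set described above.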
Model-based thermal system design optimization for the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-10-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Quantifying Biomass from Point Clouds by Connecting Representations of Ecosystem Structure
NASA Astrophysics Data System (ADS)
Hendryx, S. M.; Barron-Gafford, G.
2017-12-01
Quantifying terrestrial ecosystem biomass is an essential part of monitoring carbon stocks and fluxes within the global carbon cycle and optimizing natural resource management. Point cloud data such as from lidar and structure from motion can be effective for quantifying biomass over large areas, but significant challenges remain in developing effective models that allow for such predictions. Inference models that estimate biomass from point clouds are established in many environments, yet, are often scale-dependent, needing to be fitted and applied at the same spatial scale and grid size at which they were developed. Furthermore, training such models typically requires large in situ datasets that are often prohibitively costly or time-consuming to obtain. We present here a scale- and sensor-invariant framework for efficiently estimating biomass from point clouds. Central to this framework, we present a new algorithm, assignPointsToExistingClusters, that has been developed for finding matches between in situ data and clusters in remotely-sensed point clouds. The algorithm can be used for assessing canopy segmentation accuracy and for training and validating machine learning models for predicting biophysical variables. We demonstrate the algorithm's efficacy by using it to train a random forest model of above ground biomass in a shrubland environment in Southern Arizona. We show that by learning a nonlinear function to estimate biomass from segmented canopy features we can reduce error, especially in the presence of inaccurate clusterings, when compared to a traditional, deterministic technique to estimate biomass from remotely measured canopies. Our random forest on cluster features model extends established methods of training random forest regressions to predict biomass of subplots but requires significantly less training data and is scale invariant. 
The random forest on cluster features model reduced mean absolute error, when evaluated on all test data in leave one out cross validation, by 40.6% from deterministic mesquite allometry and 35.9% from the inferred ecosystem-state allometric function. Our framework should allow for the inference of biomass more efficiently than common subplot methods and more accurately than individual tree segmentation methods in densely vegetated environments.
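A greatly simplified sketch of the matching step is shown below, as a greedy nearest-centroid assignment; the actual assignPointsToExistingClusters algorithm may use different distance criteria and tie handling:

```python
import math


def assign_points_to_existing_clusters(points, centroids, max_dist):
    """Match each in situ point to the nearest cluster centroid within
    max_dist; points with no centroid close enough map to None. The
    resulting pairs can be used to score segmentation accuracy or to
    build (cluster features, measured biomass) training examples."""
    assignments = []
    for p in points:
        best, best_d = None, max_dist
        for i, c in enumerate(centroids):
            d = math.dist(p, c)
            if d <= best_d:
                best, best_d = i, d
        assignments.append(best)
    return assignments
```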
1987-09-01
can be reduced substantially, compared to using numerical methods to model interconnect parasitics. Although some accuracy might be lost with... conductor widths and spacings listed in Table 2.1, have been employed for simulation. In the first set of the simulations, planar dielectric inter... model, there are no restrictions on the number of dielectrics and conductors, and the shape of the conductors and the dielectric inter...
High-Speed Tests of a Model Twin-Engine Low-Wing Transport Airplane
NASA Technical Reports Server (NTRS)
Becker, John V.; Leonard, Lloyd H.
1942-01-01
Report presents the results of force tests made of a 1/8-scale model of a twin-engine low-wing transport airplane in the NACA 8-foot high-speed tunnel to investigate compressibility and interference effects at speeds up to 450 miles per hour. In addition to tests of the standard arrangement of the model, tests were made with several modifications designed to reduce the drag and to increase the critical speed.
Edgeworth streaming model for redshift space distortions
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael; Haugg, Thomas
2015-09-01
We derive the Edgeworth streaming model (ESM) for the redshift space correlation function starting from an arbitrary distribution function for biased tracers of dark matter by considering its two-point statistics and show that it reduces to the Gaussian streaming model (GSM) when neglecting non-Gaussianities. We test the accuracy of the GSM and ESM independent of perturbation theory using the Horizon Run 2 N-body halo catalog. While the monopole of the redshift space halo correlation function is well described by the GSM, higher multipoles improve upon including the leading order non-Gaussian correction in the ESM: the GSM quadrupole breaks down on scales below 30 Mpc/h whereas the ESM stays accurate to 2% within statistical errors down to 10 Mpc/h. To predict the scale-dependent functions entering the streaming model we employ convolution Lagrangian perturbation theory (CLPT) based on the dust model and local Lagrangian bias. Since dark matter halos carry an intrinsic length scale given by their Lagrangian radius, we extend CLPT to the coarse-grained dust model and consider two different smoothing approaches operating in Eulerian and Lagrangian space, respectively. The coarse graining in Eulerian space features modified fluid dynamics different from dust while the coarse graining in Lagrangian space is performed in the initial conditions with subsequent single-streaming dust dynamics, implemented by smoothing the initial power spectrum in the spirit of the truncated Zel'dovich approximation. Finally, we compare the predictions of the different coarse-grained models for the streaming model ingredients to N-body measurements and comment on the proper choice of both the tracer distribution function and the smoothing scale. Since the perturbative methods we considered are not yet accurate enough on small scales, the GSM is sufficient when applied to perturbation theory.
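For reference, the GSM that the ESM reduces to in the Gaussian limit takes the standard streaming-model form (as in, e.g., Reid & White 2011), whose ingredients are exactly the scale-dependent functions mentioned above:

```latex
1 + \xi_s(s_\perp, s_\parallel)
  = \int \mathrm{d}y \,\bigl[1 + \xi(r)\bigr]\,
    \frac{1}{\sqrt{2\pi\,\sigma_{12}^2(r,\mu)}}
    \exp\!\left\{-\frac{\bigl[s_\parallel - y - \mu\, v_{12}(r)\bigr]^2}
                       {2\,\sigma_{12}^2(r,\mu)}\right\},
\qquad r^2 = s_\perp^2 + y^2, \quad \mu = y/r,
```

with real-space correlation function \(\xi(r)\), mean pairwise infall velocity \(v_{12}(r)\), and pairwise velocity dispersion \(\sigma_{12}^2(r,\mu)\); the ESM multiplies the Gaussian kernel by an Edgeworth series in the higher velocity cumulants.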
Scaling depth-induced wave-breaking in two-dimensional spectral wave models
NASA Astrophysics Data System (ADS)
Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.
2015-03-01
Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.
Mondal, Sudip; Hegarty, Evan; Sahn, James J; Scott, Luisa L; Gökçe, Sertan Kutal; Martin, Chris; Ghorashian, Navid; Satarasinghe, Praveen Navoda; Iyer, Sangeetha; Sae-Lee, Wisath; Hodges, Timothy R; Pierce, Jonathan T; Martin, Stephen F; Ben-Yakar, Adela
2018-05-16
The nematode Caenorhabditis elegans, with tractable genetics and a well-defined nervous system, provides a unique whole-animal model system to identify novel drug targets and therapies for neurodegenerative diseases. Large-scale drug or target screens in models that recapitulate the subtle age- and cell-specific aspects of neurodegenerative diseases are limited by a technological requirement for high-throughput analysis of neuronal morphology. Recently, we developed a single-copy model of amyloid precursor protein (SC_APP) induced neurodegeneration that exhibits progressive degeneration of select cholinergic neurons. Our previous work with this model suggests that small molecule ligands of the sigma 2 receptor (σ2R), which was recently cloned and identified as transmembrane protein 97 (TMEM97), are neuroprotective. To determine structure-activity relationships for unexplored chemical space in our σ2R/Tmem97 ligand collection, we developed an in vivo high-content screening (HCS) assay to identify potential drug leads. The HCS assay uses our recently developed large-scale microfluidic immobilization chip and automated imaging platform. We discovered norbenzomorphans that reduced neurodegeneration in our C. elegans model, including two compounds that demonstrated significant neuroprotective activity at multiple doses. These findings provide further evidence that σ2R/Tmem97-binding norbenzomorphans may represent a new drug class for treating neurodegenerative diseases.
NASA Technical Reports Server (NTRS)
Axelson, John A.; Emerson, Horace F.
1949-01-01
High-speed wind-tunnel tests were conducted of two versions of a 0.17-scale model of the McDonnell XF2H-1 airplane to ascertain the high-speed stability and control characteristics and to study means for raising the high-speed buffet limit of the airplane. The results for the revised model, employing a thinner wing and tail than the original model, revealed a mild diving tendency from 0.75 to 0.80 Mach number, followed by a marked climbing tendency from 0.80 to 0.875 Mach number. The high-speed climbing tendency was caused principally by the pitching-moment characteristics of the wing. At 0.875 Mach number the results for the revised model indicated stick-fixed directional instability over a limited range of yaw angles, apparently caused by separated flow over the vertical tail. The test results indicate that the high-speed buffet limit of the airplane can probably be raised by reducing the thickness and changing the relative location of the horizontal and vertical tails, and by revising the inner portion of the wing to have a lower thickness-to-chord ratio and reduced trailing-edge angle. The addition of the wing-tip tanks to the revised model resulted in a forward shift in the neutral point below 0.82 Mach number.
Regional Air Quality Model Application of the Aqueous-Phase ...
In most ecosystems, atmospheric deposition is the primary input of mercury. The total wet deposition of mercury in atmospheric chemistry models is sensitive to parameterization of the aqueous-phase reduction of divalent oxidized mercury (Hg2+). However, most atmospheric chemistry models use a parameterization of the aqueous-phase reduction of Hg2+ that has been shown to be unlikely under normal ambient conditions or use a non-mechanistic value derived to optimize wet deposition results. Recent laboratory experiments have shown that Hg2+ can be photochemically reduced to elemental mercury (Hg) in the aqueous-phase by dissolved organic matter, and a mechanism and rate for Hg2+ photochemical reduction by dicarboxylic acids (DCA) have been proposed. The DCA mechanism has been applied here for the first time in a regional-scale model. The HO2-Hg2+ reduction mechanism, the proposed DCA reduction mechanism, and no aqueous-phase reduction (NAR) of Hg2+ are evaluated against weekly wet deposition totals, concentrations and precipitation observations from the Mercury Deposition Network (MDN) using the Community Multiscale Air Quality (CMAQ) model version 4.7.1. Regional scale simulations of mercury wet deposition using a DCA reduction mechanism evaluated well against observations, and reduced the bias in model evaluation by at least 13% over the other schemes evaluated, although summertime deposition estimates were still biased by −31.4% against observations. The use of t
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmosphere Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
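The subcolumn strategy described here amounts to running the SCM once per grid column of the forcing and averaging afterwards, rather than once on the domain-mean forcing. A minimal sketch follows; run_scm is a stand-in for a real SCAM5 integration:

```python
import numpy as np


def simulate_with_gridded_forcing(run_scm, subcolumn_forcings):
    """Run the single-column model on each subcolumn's forcing and
    average the resulting fields over the domain. Because the model is
    nonlinear, this generally differs from a single run driven by the
    domain-mean forcing, which is the point of the gridded approach."""
    return np.mean([run_scm(f) for f in subcolumn_forcings], axis=0)
```

For a toy nonlinear "model" such as rectification, averaging after the run preserves signal that the domain-mean forcing would destroy.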
NASA Technical Reports Server (NTRS)
Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong;
2012-01-01
One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation with intensified rain rates and decreased rain frequency, decreased tropospheric latent heating, suppressed cloud top height and increased outgoing longwave radiation, enhanced clear-sky shortwave TOA flux but reduced all-sky shortwave TOA flux in deep convective regimes; but such covariability becomes less notable over the northern counterpart of the region where low-level stratus are found. Using CO as a proxy of biomass burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated and similar covariability was observed. Model simulations with NCAR CAM5 were found to show effects similar to the observations in the spatio-temporal patterns. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5
Analyzing and leveraging self-similarity for variable resolution atmospheric models
NASA Astrophysics Data System (ADS)
O'Brien, Travis; Collins, William
2015-04-01
Variable resolution modeling techniques are rapidly becoming a popular strategy for achieving high resolution in a global atmospheric model without the computational cost of global high resolution. However, recent studies have demonstrated a variety of resolution-dependent, and seemingly artificial, features. We argue that the scaling properties of the atmosphere are key to understanding how the statistics of an atmospheric model should change with resolution. We provide two such examples. In the first example we show that the scaling properties of the cloud number distribution define how the ratio of resolved to unresolved clouds should increase with resolution. We show that the loss of resolved clouds, in the high resolution region of variable resolution simulations, with the Community Atmosphere Model version 4 (CAM4) is an artifact of the model's treatment of condensed water (this artifact is significantly reduced in CAM5). In the second example we show that the scaling properties of the horizontal velocity field, combined with the incompressibility assumption, necessarily result in an intensification of vertical mass flux as resolution increases. We show that such an increase is present in a wide variety of models, including CAM and the regional climate models of the ENSEMBLES intercomparison. We present theoretical arguments linking this increase to the intensification of precipitation with increasing resolution.
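The cloud-number argument in the first example can be made concrete with a toy power-law size distribution; the exponent and areas below are purely illustrative:

```python
def resolved_cloud_fraction(a_min, a_resolved, alpha):
    """If cloud areas follow a power law N(>A) = (A / a_min)**(-alpha)
    for A >= a_min, return the fraction of all clouds whose area exceeds
    the smallest resolvable area a_resolved. As the grid refines
    (a_resolved shrinks), this fraction grows, which is how the scaling
    properties dictate the resolved-to-unresolved ratio."""
    return (a_resolved / a_min) ** (-alpha)
```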
Boundary-Layer-Ingesting Inlet Flow Control
NASA Technical Reports Server (NTRS)
Owens, Lewis R.; Allan, Brian G.; Gorton, Susan A.
2006-01-01
This paper gives an overview of a research study conducted in support of the small-scale demonstration of an active flow control system for a boundary-layer-ingesting (BLI) inlet. The effectiveness of active flow control in reducing engine inlet circumferential distortion was assessed using a 2.5% scale model of a 35% boundary-layer-ingesting flush-mounted, offset, diffusing inlet. This experiment was conducted in the NASA Langley 0.3-meter Transonic Cryogenic Tunnel at flight Mach numbers with a model inlet specifically designed for this type of testing. High mass flow actuators controlled the flow through distributed control jets providing the active flow control. A vortex generator point design configuration was also tested for comparison purposes and to provide a means to examine a hybrid vortex generator and control jets configuration. Measurements were made of the onset boundary layer, the duct surface static pressures, and the mass flow through the duct and the actuators. The distortion and pressure recovery were determined by 40 total pressure measurements on 8 rake arms each separated by 45 degrees and were located at the aerodynamic interface plane. The test matrix was limited to a maximum free-stream Mach number of 0.85 with scaled mass flows through the inlet for that condition. The data show that the flow control jets alone can reduce circumferential distortion (DPCP(sub avg)) from 0.055 to about 0.015 using about 2.5% of inlet mass flow. The vortex generators also reduced the circumferential distortion from 0.055 to 0.010 near the inlet mass flow design point. Lower inlet mass flow settings with the vortex generator configuration produced higher distortion levels that were reduced to acceptable levels using a hybrid vortex generator/control jets configuration that required less than 1% of the inlet mass flow.
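The circumferential distortion descriptor quoted here can be sketched in simplified form. This is not the full SAE ARP1420 procedure; the sketch treats each ring of probes independently and defines the low sector simply as the below-average probes:

```python
def dpcp_avg(rings):
    """Simplified circumferential-distortion sketch: for each ring of
    total-pressure probes, the intensity is
    (ring mean - mean of below-average probes) / ring mean,
    and DPCP_avg averages the per-ring intensities over the face."""
    intensities = []
    for probes in rings:
        pav = sum(probes) / len(probes)
        low = [p for p in probes if p < pav]
        pav_low = sum(low) / len(low) if low else pav
        intensities.append((pav - pav_low) / pav)
    return sum(intensities) / len(intensities)
```

In the experiment above the probes form 5 rings of 8 (one probe per rake arm); a uniform ring yields zero distortion, and a depressed sector raises the descriptor toward the 0.055 level quoted for the uncontrolled inlet.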
NASA Astrophysics Data System (ADS)
Hirt, Christian; Reußner, Elisabeth; Rexer, Moritz; Kuhn, Michael
2016-09-01
Over the past years, spectral techniques have become a standard to model Earth's global gravity field to 10 km scales, with the EGM2008 geopotential model being a prominent example. For some geophysical applications of EGM2008, particularly Bouguer gravity computation with spectral techniques, a topographic potential model of adequate resolution is required. However, current topographic potential models have not yet been successfully validated to degree 2160, and notable discrepancies between spectral modeling and Newtonian (numerical) integration well beyond the 10 mGal level have been reported. Here we accurately compute and validate gravity implied by a degree 2160 model of Earth's topographic masses. Our experiments are based on two key strategies, both of which require advanced computational resources. First, we construct a spectrally complete model of the gravity field which is generated by the degree 2160 Earth topography model. This involves expansion of the topographic potential to the 15th integer power of the topography and modeling of short-scale gravity signals to ultrahigh degree of 21,600, translating into unprecedented fine scales of 1 km. Second, we apply Newtonian integration in the space domain with high spatial resolution to reduce discretization errors. Our numerical study demonstrates excellent agreement (8 μGal RMS) between gravity from both forward modeling techniques and provides insight into the convergence process associated with spectral modeling of gravity signals at very short scales (few km). As key conclusion, our work successfully validates the spectral domain forward modeling technique for degree 2160 topography and increases the confidence in new high-resolution global Bouguer gravity maps.
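The expansion into integer topography powers referred to above is conventionally written (following, e.g., Rummel et al. 1988) as spherical-harmonic coefficients of the topographic potential:

```latex
\bar{V}_{nm} \;=\; \frac{3\,\rho_{\mathrm{topo}}}{\bar{\rho}\,(2n+1)\,(n+3)}
\sum_{k=1}^{k_{\max}} \binom{n+3}{k}\,
\overline{\left(\frac{H}{R}\right)^{k}}_{nm},
```

where \(\overline{(H/R)^k}_{nm}\) are the harmonic coefficients of the k-th integer power of the topography \(H\), \(R\) is the reference radius, \(\rho_{\mathrm{topo}}\) the topographic mass density, and \(\bar{\rho}\) Earth's mean density; in this study the series is carried to \(k_{\max} = 15\).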
Zhou, Yanling; Li, Guannan; Li, Dan; Cui, Hongmei; Ning, Yuping
2018-05-01
The long-term effects of dose reduction of atypical antipsychotics on cognitive function and symptomatology in stable patients with schizophrenia remain unclear. We sought to determine the change in cognitive function and symptomatology after reducing risperidone or olanzapine dosage in stable schizophrenic patients. Seventy-five stabilized schizophrenic patients prescribed risperidone (≥4 mg/day) or olanzapine (≥10 mg/day) were randomly divided into a dose-reduction group (n=37) and a maintenance group (n=38). For the dose-reduction group, the dose of antipsychotics was reduced by 50%; for the maintenance group, the dose remained unchanged throughout the whole study. The Positive and Negative Syndrome Scale, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects, and Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) Consensus Cognitive Battery were measured at baseline, 12, 28, and 52 weeks. Linear mixed models were performed to compare the Positive and Negative Syndrome Scale, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects and MATRICS Consensus Cognitive Battery scores between groups. The linear mixed model showed significant time by group interactions on the Positive and Negative Syndrome Scale negative symptoms, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects, speed of processing, attention/vigilance, working memory and total score of MATRICS Consensus Cognitive Battery (all p<0.05). Post hoc analyses showed significant improvement in Positive and Negative Syndrome Scale negative subscale, Negative Symptom Assessment-16, Rating Scale for Extrapyramidal Side Effects, speed of processing, working memory and total score of MATRICS Consensus Cognitive Battery for the dose reduction group compared with those for the maintenance group (all p<0.05).
This study indicated that a risperidone or olanzapine dose reduction of 50% may not lead to more severe symptomatology but can improve speed of processing, working memory and negative symptoms in patients with stabilized schizophrenia.
The Impact and Cost of Scaling up Midwifery and Obstetrics in 58 Low- and Middle-Income Countries
Bartlett, Linda; Weissman, Eva; Gubin, Rehana; Patton-Molitors, Rachel; Friberg, Ingrid K.
2014-01-01
Background and Methods: To guide achievement of the Millennium Development Goals, we used the Lives Saved Tool to provide a novel simulation of potential maternal, fetal, and newborn lives and costs saved by scaling up midwifery and obstetrics services, including family planning, in 58 low- and middle-income countries. Typical midwifery and obstetrics interventions were scaled to either 60% of the national population (modest coverage) or 99% (universal coverage). Findings: Under even a modest scale-up, midwifery services including family planning reduce maternal, fetal, and neonatal deaths by 34%. Increasing midwifery alone or integrated with obstetrics is more cost-effective than scaling up obstetrics alone; when family planning was included, the midwifery model was almost twice as cost-effective as the obstetrics model, at $2,200 versus $4,200 per death averted. The most effective strategy was the most comprehensive: increasing midwives, obstetricians, and family planning could prevent 69% of total deaths under universal scale-up, yielding a cost per death prevented of just $2,100. Within this analysis, the interventions which midwifery and obstetrics are poised to deliver most effectively are different, with midwifery benefits delivered across the continuum of pre-pregnancy, prenatal, labor and delivery, and postpartum-postnatal care, and obstetrics benefits focused mostly on delivery. Including family planning within each scope of practice reduced the number of likely births, and thus deaths, and increased the cost-effectiveness of the entire package (e.g., a 52% reduction in deaths with midwifery and obstetrics increased to 69% when family planning was added; cost decreased from $4,000 to $2,100 per death averted). Conclusions: This analysis suggests that scaling up midwifery and obstetrics could bring many countries closer to achieving mortality reductions.
Midwives alone can achieve remarkable mortality reductions, particularly when they also provide family planning services; the greatest return on investment, however, occurs with the scale-up of midwives and obstetricians together. PMID:24941336
Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Shahab, Azin
In a Doubly-Fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter, which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) and Optimal Pulse-Width Modulation (OPWM) schemes for the power electronic converter are developed in PSCAD/EMTDC. As computer simulation using the detailed models tends to be computationally intensive, time-consuming, and sometimes impractical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model simulation.
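The contrast between a switching-level (detailed) model and an average-value model can be sketched for a single SPWM converter leg. The functions and all parameters below are illustrative assumptions, not taken from the paper:

```python
import math

def spwm_switched_voltage(t, m, f1, fc, vdc):
    """Switching-level (detailed) model of one converter leg: a sinusoidal
    reference (modulation index m, fundamental frequency f1) is compared
    against a triangular carrier of frequency fc spanning [-1, 1]."""
    ref = m * math.sin(2 * math.pi * f1 * t)
    x = (t * fc) % 1.0                       # position within the carrier period
    carrier = 4 * x - 1 if x < 0.5 else 3 - 4 * x
    return vdc / 2 if ref >= carrier else -vdc / 2

def average_value_voltage(t, m, f1, vdc):
    """Average-value model: the carrier-period mean of the switched waveform,
    i.e. the fundamental component with the switching ripple removed."""
    return m * vdc / 2 * math.sin(2 * math.pi * f1 * t)
```

Because the average-value model contains no switching events, a simulator can take time steps far larger than the carrier period instead of resolving every commutation, which is the source of the execution-time savings the abstract reports.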
Deep water characteristics and circulation in the South China Sea
NASA Astrophysics Data System (ADS)
Wang, Aimei; Du, Yan; Peng, Shiqiu; Liu, Kexiu; Huang, Rui Xin
2018-04-01
This study investigates the deep circulation in the South China Sea (SCS) using oceanographic observations combined with results from a bottom layer reduced gravity model. The SCS water below 2000 m is quite different from that in the adjacent Pacific Ocean and is characterized by low dissolved oxygen (DO), high temperature, and low salinity. The horizontal distribution of deep water properties indicates a basin-scale cyclonic circulation driven by the Luzon overflow. The results of the bottom layer reduced gravity model are consistent with the existence of this cyclonic circulation in the deep SCS. The circulation is stronger along the northern and western boundaries. After overflowing the sill of the Luzon Strait, the deep water moves broadly southwestward, constrained by the 3500 m isobath. The broadening of the southward flow is induced by the downwelling velocity in the interior of the deep basin. The main deep circulation bifurcates into two branches downstream of the Zhongsha Islands: the southward branch continues flowing along the 3500 m isobath, and the eastward branch forms a sub-basin-scale cyclonic circulation around the seamounts in the central deep SCS. The return flow along the eastern boundary is fairly weak. Numerical experiments with the bottom layer reduced gravity model reveal the important roles of topography, bottom friction, and the upwelling/downwelling pattern in controlling the spatial structure of the circulation, particularly the strong deep western boundary current.
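A bottom layer reduced gravity model of this general kind can be sketched as a linearized 1.5-layer system: the abyssal layer moves beneath a resting upper ocean under Coriolis force, reduced-gravity pressure gradients, and linear bottom friction, forced by a localized mass source standing in for the Luzon overflow. The grid, parameter values, and source placement below are hypothetical, not the authors' configuration:

```python
# Linearized 1.5-layer (bottom layer reduced gravity) model on a closed basin.
f = 5.0e-5     # Coriolis parameter (1/s)
gp = 1.0e-3    # reduced gravity g' (m/s^2)
H = 500.0      # mean abyssal-layer thickness (m)
r = 1.0e-5     # linear bottom-friction coefficient (1/s)
dx = 100.0e3   # grid spacing (m)
dt = 600.0     # time step (s)
n = 12         # grid points per side

h = [[0.0] * n for _ in range(n)]  # layer-thickness anomaly (m)
u = [[0.0] * n for _ in range(n)]  # eastward velocity (m/s)
v = [[0.0] * n for _ in range(n)]  # northward velocity (m/s)

# Localized downward mass source (m/s) mimicking the Luzon overflow.
w = [[0.0] * n for _ in range(n)]
w[2][n - 3] = 1.0e-5

def step(h, u, v):
    """One forward-Euler step; velocities vanish on the closed boundaries."""
    hn = [row[:] for row in h]
    un = [row[:] for row in u]
    vn = [row[:] for row in v]
    for j in range(1, n - 1):
        for i in range(1, n - 1):
            dhdx = (h[j][i + 1] - h[j][i - 1]) / (2 * dx)
            dhdy = (h[j + 1][i] - h[j - 1][i]) / (2 * dx)
            un[j][i] = u[j][i] + dt * (f * v[j][i] - gp * dhdx - r * u[j][i])
            vn[j][i] = v[j][i] + dt * (-f * u[j][i] - gp * dhdy - r * v[j][i])
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * dx)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * dx)
            hn[j][i] = h[j][i] + dt * (-H * (dudx + dvdy) + w[j][i])
    return hn, un, vn

for _ in range(200):
    h, u, v = step(h, u, v)
```

This sketch only assembles the ingredients the abstract names (overflow source, reduced gravity, bottom friction, rotation); reproducing the basin-scale cyclonic circulation and its boundary intensification additionally requires realistic topography and much longer integrations.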
NASA Astrophysics Data System (ADS)
Huang, Chunlin; Chen, Weijin; Wang, Weizhen; Gu, Juan
2017-04-01
Uncertainties in model parameters can easily cause systematic differences between model states and observations from the ground or satellites, which significantly affect the accuracy of soil moisture estimation in data assimilation systems. In this paper, a novel soil moisture assimilation scheme is developed to simultaneously assimilate AMSR-E brightness temperature (TB) and MODIS Land Surface Temperature (LST), correcting model bias by simultaneously updating model states and parameters with a dual ensemble Kalman filter (DEnKS). The Common Land Model (CoLM) and a Q-h Radiative Transfer Model (RTM) are adopted as the model operator and observation operator, respectively. The assimilation experiment is conducted in Naqu, on the Tibetan Plateau, from May 31 to September 27, 2011. Compared with in-situ measurements, the accuracy of soil moisture estimation is substantially improved across a variety of scales. The soil temperature updated by assimilating MODIS LST, used as input to the RTM, reduces the differences between the simulated and observed brightness temperatures to a certain degree, which helps to improve the estimation of soil moisture and model parameters. The updated parameters differ markedly from the default values and effectively reduce the state bias of CoLM. Results demonstrate the potential of assimilating both microwave TB and MODIS LST to improve the estimation of soil moisture and related parameters. Furthermore, this study also indicates that the developed scheme is an effective soil moisture downscaling approach for coarse-scale microwave TB.
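The dual update at the heart of such a scheme, in which states and parameters are corrected separately against the same observation, can be illustrated with a scalar toy model. The system, noise levels, and ensemble size below are invented for illustration and are unrelated to CoLM or the Q-h RTM:

```python
import math
import random

random.seed(42)

def enkf_update(ens, y, hx, r):
    """One stochastic EnKF analysis step: update ensemble `ens` given
    observation `y`, predicted observations `hx`, and obs-error variance `r`."""
    n = len(ens)
    me, mh = sum(ens) / n, sum(hx) / n
    c_eh = sum((e - me) * (hv - mh) for e, hv in zip(ens, hx)) / (n - 1)
    v_h = sum((hv - mh) ** 2 for hv in hx) / (n - 1)
    k = c_eh / (v_h + r)  # Kalman gain from ensemble covariances
    return [e + k * (y + random.gauss(0, math.sqrt(r)) - hv)
            for e, hv in zip(ens, hx)]

# Toy system standing in for CoLM + RTM: x_{t+1} = a*x_t + forcing,
# observed directly; `a` plays the role of an uncertain model parameter.
a_true, r_obs, nens = 0.9, 0.01, 40
x_true = 1.0
a_ens = [random.gauss(0.5, 0.1) for _ in range(nens)]  # biased parameter prior
x_ens = [random.gauss(1.0, 0.1) for _ in range(nens)]

for t in range(60):
    forcing = math.sin(0.3 * t)
    x_true = a_true * x_true + forcing
    y = x_true + random.gauss(0, math.sqrt(r_obs))
    # forecast: each member propagates with its own parameter (plus model noise)
    x_ens = [a * x + forcing + random.gauss(0, 0.02) for a, x in zip(a_ens, x_ens)]
    # dual update: parameters first, then states, against the same observation
    a_ens = enkf_update(a_ens, y, x_ens, r_obs)
    a_ens = [a + random.gauss(0, 0.01) for a in a_ens]  # small kernel perturbation
    x_ens = enkf_update(x_ens, y, x_ens, r_obs)

a_mean = sum(a_ens) / nens
```

Over the assimilation window the parameter ensemble drifts from its biased prior toward the true value because the same innovation that corrects the state is projected onto the parameters through their cross-covariance with the predicted observations, which is the mechanism by which the paper's scheme reduces the state bias of CoLM.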