De Vilmorin, Philippe; Slocum, Ashley; Jaber, Tareq; Schaefer, Oliver; Ruppach, Horst; Genest, Paul
2015-01-01
This article describes a four-virus panel validation of EMD Millipore's (Bedford, MA) small virus-retentive filter, Viresolve® Pro, using TrueSpike™ viruses for a Biogen Idec process intermediate. The study was performed at Charles River Labs in King of Prussia, PA. Greater than 900 L/m² filter throughput was achieved with the approximately 8 g/L monoclonal antibody feed. No viruses were detected in any filtrate samples. Virus log reduction values ranged from ≥3.66 to ≥5.60. The use of TrueSpike™ at Charles River Labs allowed Biogen Idec to achieve a more representative scaled-down model and potentially reduce the cost of its virus filtration step and the overall cost of goods. The body of data presented here is an example of the benefits of following the guidance of PDA Technical Report 47, The Preparation of Virus Spikes Used for Viral Clearance Studies. The safety of biopharmaceuticals is assured through the use of multiple steps in the purification process that are capable of virus clearance, including filtration with virus-retentive filters. The amount of virus present at downstream stages of the process is expected to be, and typically is, low. The viral clearance capability of the filtration step is assessed in a validation study. The study utilizes a small-scale version of the manufacturing-size filter, and a large, known amount of virus is added to the feed prior to filtration. Viral assay before and after filtration allows the virus log reduction value to be quantified. The representativeness of the small-scale model is supported by comparing large-scale filter performance to small-scale filter performance. The large-scale and small-scale filtration runs are performed using the same operating conditions. If the filter performance at both scales is comparable, it supports the applicability of the virus log reduction value obtained with the small-scale filter to the large-scale manufacturing process.
However, the virus preparation used to spike the feed material often contains impurities that degrade virus filter performance in the small-scale model. The added impurities from the virus spike, which are not present at manufacturing scale, compromise the scale-down model and call into question the direct applicability of the virus clearance results. Another consequence of decreased filter performance due to virus spike impurities is unnecessary over-sizing of the manufacturing system to match the low filter capacity observed in the scale-down model. This article describes how improvements in mammalian virus spike purity ensure the validity of the log reduction value obtained with the scale-down model and support economically optimized filter usage. © PDA, Inc. 2015.
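The log reduction value quantified by the assay can be sketched as follows; the titers, volumes, and detection-limit handling below are illustrative placeholders, not values from the Biogen Idec study:

```python
import math

def log_reduction_value(load_titer, load_volume, filtrate_titer, filtrate_volume):
    """LRV = log10(total virus in the load / total virus in the filtrate).

    Titers in infectious units per mL, volumes in mL.
    """
    load_total = load_titer * load_volume
    filtrate_total = filtrate_titer * filtrate_volume
    return math.log10(load_total / filtrate_total)

# When no virus is detected in the filtrate, the filtrate titer is replaced
# by the assay detection limit and the result is reported as a minimum
# (">=") LRV, which is why the values above carry a >= sign.
lrv = log_reduction_value(1e7, 100.0, 1e2, 100.0)  # detection-limit titer
```

Because the filtrate term is an upper bound rather than a measurement, a cleaner spike (more virus per unit impurity) directly raises the demonstrable LRV.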
A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets
NASA Astrophysics Data System (ADS)
Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.
2009-12-01
The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement at high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model that is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluation of the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone.
Bayesian inversion is then applied to assign scaling factors that align the surface fluxes with the CO2 time series. Our project demonstrates how bottom-up and top-down techniques can be reconciled to arrive at a more robust and balanced spatial carbon budget. We will show how to evaluate existing flux products through regionally representative atmospheric observations, i.e. how well the underlying model assumptions represent processes on the regional scale. Adapting process model parameterization sets for, e.g., sub-regions, disturbance regimes, or land cover classes, in order to optimize the agreement between surface fluxes and atmospheric observations, can lead to improved understanding of the underlying flux mechanisms and reduces uncertainties in the regional carbon budgets.
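The Bayesian inversion step that assigns the scaling factors can be sketched as a linear-Gaussian update; the matrices below are hypothetical stand-ins for the WRF-STILT transport operator and the ORCA2 error covariances, not the project's actual setup:

```python
import numpy as np

def bayesian_scaling_factors(H, y, beta_prior, Q, R):
    """Posterior mean of flux scaling factors in a linear Bayesian inversion.

    H          : Jacobian mapping scaling factors to CO2 concentrations
                 (n_obs x n_beta), i.e. the transport operator applied to
                 the bottom-up flux fields.
    y          : observed-minus-background CO2 concentrations (n_obs,).
    beta_prior : prior mean of the scaling factors (n_beta,).
    Q, R       : prior and model-data-mismatch covariance matrices.
    """
    Qinv = np.linalg.inv(Q)
    Rinv = np.linalg.inv(R)
    A = H.T @ Rinv @ H + Qinv              # posterior precision matrix
    b = H.T @ Rinv @ y + Qinv @ beta_prior
    return np.linalg.solve(A, b)
```

The posterior balances the atmospheric constraint against the prior; with equal weights the estimate falls halfway between the prior scaling and the value the observations alone would imply.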
Macro Scale Independently Homogenized Subcells for Modeling Braided Composites
NASA Technical Reports Server (NTRS)
Blinzler, Brina J.; Goldberg, Robert K.; Binienda, Wieslaw K.
2012-01-01
An analytical method has been developed to analyze the impact response of triaxially braided carbon fiber composites, including the penetration velocity and impact damage patterns. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. Currently, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. To determine the stiffness and strength properties required for the constitutive model, a top-down approach for determining the strength properties is merged with a bottom-up approach for determining the stiffness properties. The top-down portion uses global strengths obtained from macro-scale coupon level testing to characterize the material strengths for each subcell. The bottom-up portion uses micro-scale fiber and matrix stiffness properties to characterize the material stiffness for each subcell. Simulations of quasi-static coupon level tests for several representative composites are conducted along with impact simulations.
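The bottom-up stiffness characterization rests on micromechanics; a minimal rule-of-mixtures sketch for a single unidirectional ply (with illustrative carbon/epoxy constants, not values from the paper) looks like:

```python
def unidirectional_moduli(E_fiber, E_matrix, fiber_volume_fraction):
    """Rule-of-mixtures estimates for a unidirectional ply.

    E1: longitudinal modulus (Voigt bound, fiber and matrix in parallel).
    E2: transverse modulus (Reuss bound, fiber and matrix in series).
    """
    vf = fiber_volume_fraction
    E1 = E_fiber * vf + E_matrix * (1.0 - vf)
    E2 = 1.0 / (vf / E_fiber + (1.0 - vf) / E_matrix)
    return E1, E2

# Illustrative carbon/epoxy values in GPa; 60% fiber volume fraction.
E1, E2 = unidirectional_moduli(230.0, 3.5, 0.60)
```

Each braid-direction subcell can then be stacked as a laminate from such ply estimates, while the strengths are calibrated top-down from coupon tests rather than predicted from the constituents.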
Crash Testing and Simulation of a Cessna 172 Aircraft: Pitch Down Impact Onto Soft Soil
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Jackson, Karen E.
2016-01-01
During the summer of 2015, NASA Langley Research Center conducted three full-scale crash tests of Cessna 172 (C-172) aircraft at the NASA Langley Landing and Impact Research (LandIR) Facility. The first test represented a flare-to-stall emergency or hard landing onto a rigid surface. The second test, which is the focus of this paper, represented a controlled-flight-into-terrain (CFIT) with a nose-down pitch attitude of the aircraft, which impacted onto soft soil. The third test, also conducted onto soil, represented a CFIT with a nose-up pitch attitude of the aircraft, which resulted in a tail strike condition. These three crash tests were performed for the purpose of evaluating the performance of Emergency Locator Transmitters (ELTs) and to generate impact test data for model validation. LS-DYNA finite element models were generated to simulate the three test conditions. This paper describes the model development and presents test-analysis comparisons of acceleration and velocity time-histories, as well as a comparison of the time sequence of events for Test 2 onto soft soil.
Scaling properties of European research units
Jamtveit, Bjørn; Jettestuen, Espen; Mathiesen, Joachim
2009-01-01
A quantitative characterization of the scale-dependent features of research units may provide important insight into how such units are organized and how they grow. The relative importance of top-down versus bottom-up controls on their growth may be revealed by their scaling properties. Here we show that the number of support staff in Scandinavian research units, ranging in size from 20 to 7,800 staff members, is related to the number of academic staff by a power law. The scaling exponent of ≈1.30 is broadly consistent with a simple hierarchical model of the university organization. Similar scaling behavior between small and large research units with a wide range of ambitions and strategies argues against top-down control of the growth. Top-down effects, and externally imposed effects from changing political environments, can be observed as fluctuations around the main trend. The observed scaling law implies that cost-benefit arguments for merging research institutions into larger and larger units may have limited validity unless the productivity per academic staff and/or the quality of the products are considerably higher in larger institutions. Despite the hierarchical structure of most large-scale research units in Europe, the network structures represented by the academic component of such units are strongly antihierarchical and suboptimal for efficient communication within individual units. PMID:19625626
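The power-law relationship between support and academic staff can be recovered by ordinary least squares in log-log space; the data below are synthetic, generated from an exact exponent of 1.30 rather than taken from the Scandinavian dataset:

```python
import numpy as np

def fit_power_law(academic_staff, support_staff):
    """Fit support = a * academic**b by least squares in log-log space."""
    log_a = np.log(np.asarray(academic_staff, dtype=float))
    log_s = np.log(np.asarray(support_staff, dtype=float))
    b, log_prefactor = np.polyfit(log_a, log_s, 1)  # slope = exponent
    return np.exp(log_prefactor), b

# Synthetic unit sizes spanning the 20-7,800 range quoted in the abstract.
academic = np.array([20.0, 100.0, 500.0, 2000.0, 7800.0])
support = 0.5 * academic**1.30
prefactor, exponent = fit_power_law(academic, support)
```

An exponent above 1 is the quantitative signature of the claim: support staff grow superlinearly, so doubling a unit's academic staff more than doubles its administration.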
Pezzulo, Giovanni; Levin, Michael
2016-11-01
It is widely assumed in developmental biology and bioengineering that optimal understanding and control of complex living systems follows from models of molecular events. The success of reductionism has overshadowed attempts at top-down models and control policies in biological systems. However, other fields, including physics, engineering and neuroscience, have successfully used the explanations and models at higher levels of organization, including least-action principles in physics and control-theoretic models in computational neuroscience. Exploiting the dynamic regulation of pattern formation in embryogenesis and regeneration requires new approaches to understand how cells cooperate towards large-scale anatomical goal states. Here, we argue that top-down models of pattern homeostasis serve as proof of principle for extending the current paradigm beyond emergence and molecule-level rules. We define top-down control in a biological context, discuss the examples of how cognitive neuroscience and physics exploit these strategies, and illustrate areas in which they may offer significant advantages as complements to the mainstream paradigm. By targeting system controls at multiple levels of organization and demystifying goal-directed (cybernetic) processes, top-down strategies represent a roadmap for using the deep insights of other fields for transformative advances in regenerative medicine and systems bioengineering. © 2016 The Author(s).
NASA Technical Reports Server (NTRS)
Ott, L.; Putman, B.; Collatz, J.; Gregg, W.
2012-01-01
Column CO2 observations from current and future remote sensing missions represent a major advancement in our understanding of the carbon cycle and are expected to help constrain source and sink distributions. However, data assimilation and inversion methods are challenged by the difference in scale between models and observations. OCO-2 footprints represent an area of several square kilometers, while NASA's future ASCENDS lidar mission is likely to have an even smaller footprint. In contrast, the resolution of models used in global inversions is typically hundreds of kilometers, and grid cells often cover combinations of land, ocean, and coastal areas, as well as areas of significant topographic, land cover, and population density variation. To improve understanding of the scales of atmospheric CO2 variability and the representativeness of satellite observations, we will present results from a global, 10-km simulation of meteorology and atmospheric CO2 distributions performed using NASA's GEOS-5 general circulation model. This resolution, typical of mesoscale atmospheric models, represents an order-of-magnitude increase over typical global simulations of atmospheric composition, allowing new insight into small-scale CO2 variations across a wide range of surface flux and meteorological conditions. The simulation includes high-resolution flux datasets provided by NASA's Carbon Monitoring System Flux Pilot Project at half-degree resolution that have been down-scaled to 10 km using remote sensing datasets. Probability distribution functions are calculated over larger areas more typical of global models (100-400 km) to characterize subgrid-scale variability in these models. Particular emphasis is placed on coastal regions and regions containing megacities and fires to evaluate the ability of coarse-resolution models to represent these small-scale features.
Additionally, model output is sampled using averaging kernels characteristic of the OCO-2 and ASCENDS measurement concepts to create realistic pseudo-datasets. Pseudo-data are averaged over coarse model grid cell areas to better understand the ability of measurements to characterize CO2 distributions and spatial gradients on both short (daily to weekly) and long (monthly to seasonal) time scales.
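The subgrid-scale variability statistics can be sketched by aggregating a high-resolution field into coarse cells; the toy 20x20 "CO2 field" below is a random stand-in for the 10-km GEOS-5 output, aggregated into cells ten points wide:

```python
import numpy as np

def subgrid_statistics(field, block):
    """Mean and standard deviation of a high-resolution 2-D field within
    each coarse grid cell of size block x block; block must divide both
    axes evenly."""
    ny, nx = field.shape
    blocks = field.reshape(ny // block, block, nx // block, block)
    mean = blocks.mean(axis=(1, 3))  # coarse-cell means
    std = blocks.std(axis=(1, 3))    # subgrid variability per coarse cell
    return mean, std

# Toy fine-scale CO2 field (ppm) aggregated to 2x2 "model" cells.
rng = np.random.default_rng(0)
fine = 400.0 + rng.normal(0.0, 1.0, size=(20, 20))
mean, std = subgrid_statistics(fine, 10)
```

The per-cell standard deviation is exactly the quantity a coarse inversion grid cannot see, and which the probability distribution functions in the study quantify.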
Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony
2012-08-17
A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (kLa) as the criterion for the scale-down process, the scaled-down model can be "tuned" to match the kLa of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among the 2-L, 20-L, and 200-L scales. An empirical correlation for kLa has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
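The kLa-matching idea can be sketched with a power-law correlation inverted for impeller speed; the prefactor and exponents below are placeholders written directly in impeller speed for illustration, since the paper's empirical correlation is fitted to vessel-specific data:

```python
def kla_correlation(impeller_speed, gas_velocity, k=0.01, a=2.0, b=0.5):
    """Illustrative power-law correlation: kLa = k * N**a * vs**b.

    N is impeller speed (1/s), vs superficial gas velocity (m/s). The
    constants k, a, b are hypothetical; in practice they are regressed
    from vessel-specific oxygen transfer measurements.
    """
    return k * impeller_speed**a * gas_velocity**b

def speed_to_match_kla(target_kla, gas_velocity, k=0.01, a=2.0, b=0.5):
    """Invert the correlation for the impeller speed that hits target kLa."""
    return (target_kla / (k * gas_velocity**b)) ** (1.0 / a)

# Tune the bench-scale rig to the kLa measured on a larger target vessel.
N = speed_to_match_kla(0.04, 0.01)
```

Because kLa is the single matching criterion, the same inversion lets the 2-L rig emulate a 20-L, 200-L, or extrapolated manufacturing-scale target by changing only the speed setpoint.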
Simulating the Impact Response of Three Full-Scale Crash Tests of Cessna 172 Aircraft
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.; Littell, Justin D.; Annett, Martin S.; Stimson, Chad M.
2017-01-01
During the summer of 2015, a series of three full-scale crash tests of Cessna 172 aircraft was performed at the Landing and Impact Research Facility located at NASA Langley Research Center. The first test (Test 1) represented a flare-to-stall emergency or hard landing onto a rigid surface. The second test (Test 2) represented a controlled-flight-into-terrain (CFIT) with a nose-down pitch attitude of the aircraft, which impacted onto soft soil. The third test (Test 3) also represented a CFIT with a nose-up pitch attitude of the aircraft, which resulted in a tail strike condition. Test 3 was also conducted onto soft soil. These crash tests were performed for the purpose of evaluating the performance of Emergency Locator Transmitters and to generate impact test data for model calibration. Finite element models were generated and impact analyses were conducted to simulate the three impact conditions using the commercial nonlinear, transient dynamic finite element code LS-DYNA®. The objective of this paper is to summarize test-analysis results for the three full-scale crash tests.
Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.
2010-01-01
A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process, e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher-level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate if strong cross-scale relationships predominate. Here, we propose an 'across-scale approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.
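The cellular-automaton style of bottom-up analysis mentioned above can be sketched minimally; the colonization and mortality probabilities below are illustrative, not taken from the cited grassland study:

```python
import numpy as np

def step(grid, rng, colonize_p=0.3, mortality_p=0.05):
    """One update of a minimal grassland cellular automaton.

    Empty cells (0) are colonized with probability proportional to the
    number of occupied von Neumann neighbours; occupied cells (1) die with
    a fixed probability. The grid wraps around (periodic boundaries).
    """
    neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                  np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    colonized = (grid == 0) & (rng.random(grid.shape) < colonize_p * neighbours / 4)
    dies = (grid == 1) & (rng.random(grid.shape) < mortality_p)
    new = grid.copy()
    new[colonized] = 1
    new[dies] = 0
    return new

rng = np.random.default_rng(1)
grid = (rng.random((50, 50)) < 0.2).astype(int)  # 20% initial cover
for _ in range(100):
    grid = step(grid, rng)
cover = grid.mean()  # emergent community-level property
```

The point of the exercise is exactly the one made in the abstract: the community-level cover trajectory is not specified anywhere in the rules; it emerges from local interactions of the basic components.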
ERIC Educational Resources Information Center
Al-Mohtadi, Reham Mohammad; Al-Msubheen, Moonerh Mheel
2017-01-01
This study aims to identify the influence of a religious awareness program in scaling down death anxiety among a sample of 50 students (30 males and 20 females) at the late childhood stage. The sample was distributed randomly into 25 students representing the main group and 25 students as the experimental group. Religious Awareness…
NASA Technical Reports Server (NTRS)
Mata, C. T.; Rakov, V. A.; Mata, A. G.
2010-01-01
A new comprehensive lightning instrumentation system has been designed for Launch Complex 39B (LC39B) at the Kennedy Space Center, Florida. This new instrumentation system includes the synchronized recording of six high-speed video cameras; currents through the nine downconductors of the new lightning protection system for LC39B; four dH/dt, 3-axis measurement stations; and five dE/dt stations composed of two antennas each. A 20:1 scaled-down model of the new Lightning Protection System (LPS) of LC39B was built at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, FL. This scaled-down lightning protection system was instrumented with the transient recorders, digitizers, and sensors to be used in the final instrumentation installation at LC39B. The instrumentation used at the ICLRT is also a scaled-down version of the LC39B instrumentation. The scaled-down LPS was subjected to seven direct lightning strikes and six nearby flashes (four triggered and two natural) in 2010. The following measurements were acquired at the ICLRT: currents through the nine downconductors; two dH/dt, 3-axis stations, one at the center of the LPS (underneath the catenary wires) and another 40 meters south of the center of the LPS; ten dE/dt stations, nine on the perimeter of the LPS and one at the center of the LPS (underneath the catenary wire system); and the incident current. Data from representative events are presented and analyzed in this paper.
Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity
NASA Astrophysics Data System (ADS)
Luce, C. H.; Lute, A.
2017-12-01
Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. 
Overall, the split sample validations confirm transferability of the relationships in space and time contingent upon full representation of validation conditions in the calibration data set. The ability of the top-down space-for-time models to predict in new time periods and locations lends confidence to their application for assessments and for improving finer time scale models.
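The Nash-Sutcliffe Efficiency used to score these validations is straightforward to compute; a minimal sketch with synthetic numbers:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations.

    NSE = 1 for a perfect model; NSE = 0 for a model no better than
    predicting the observed mean; negative values are worse than the mean.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst

obs = np.array([10.0, 20.0, 30.0, 40.0])
assert nash_sutcliffe(obs, obs) == 1.0                      # perfect model
assert nash_sutcliffe(obs, np.full(4, obs.mean())) == 0.0   # mean benchmark
```

Against this benchmark the reported ranges (0.71-0.90 for April 1 SWE, 0.64-0.81 for SRT) mean the regressions explain most, but not all, of the variance the observed mean leaves unexplained.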
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
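The dimensionality-scaling step at the heart of reduced order modeling is often a proper orthogonal decomposition of snapshot data; the sketch below is a generic POD, not the paper's specific coupled-physics construction:

```python
import numpy as np

def build_reduced_basis(snapshots, rank):
    """Proper orthogonal decomposition: SVD of a snapshot matrix whose
    columns are full-model outputs for sampled inputs; the leading left
    singular vectors capture the dominant input-output relationships."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]

def project(basis, full_vector):
    """Reduced coordinates of a full-order vector."""
    return basis.T @ full_vector

def reconstruct(basis, reduced_vector):
    """Lift reduced coordinates back to the full-order space."""
    return basis @ reduced_vector

# Snapshots that live exactly in a 2-dimensional subspace of R^100.
rng = np.random.default_rng(2)
modes = rng.normal(size=(100, 2))
snapshots = modes @ rng.normal(size=(2, 30))
basis = build_reduced_basis(snapshots, rank=2)

x = modes @ np.array([1.0, -2.0])          # a new full-order state
x_hat = reconstruct(basis, project(basis, x))
error = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

Once the basis is built offline from expensive full-model runs, repeated evaluations operate only on the low-rank coordinates, which is what makes the reduced model cheap enough for routine analysis.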
Modeling stream temperature in the Anthropocene: An earth system modeling approach
Li, Hong-Yi; Leung, L. Ruby; Tesfa, Teklu; ...
2015-10-29
A new large-scale stream temperature model has been developed within the Community Earth System Model (CESM) framework. The model is coupled with the Model for Scale Adaptive River Transport (MOSART), which represents river routing, and a water management model (WM), which represents the effects of reservoir operations and water withdrawals on flow regulation. The coupled models allow the impacts of reservoir operations and withdrawals on stream temperature to be explicitly represented in a physically based and consistent way. The models have been applied to the contiguous United States driven by observed meteorological forcing. It is shown that the model is capable of reproducing stream temperature spatiotemporal variation satisfactorily by comparison against observations from over 320 USGS stations. Including water management in the models improves the agreement between simulated and observed streamflow at a large number of stream gauge stations. Both climate and water management are found to have important influences on the spatiotemporal patterns of stream temperature. More interestingly, it is quantitatively estimated that reservoir operation could cool stream temperature in the summer low-flow season (August-October) by as much as 1-2 °C over many places, as water management generally mitigates low flow, which has important implications for aquatic ecosystems. Finally, the sensitivity of the simulated stream temperature to input data and to the reservoir operation rules used in the WM model motivates future work to address some limitations in the current modeling framework.
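The low-flow cooling mechanism can be illustrated with a flow-weighted mixing estimate; the discharges and temperatures below are hypothetical, and the sketch ignores the surface heat exchange that the coupled CESM/MOSART model resolves:

```python
def mixed_stream_temperature(q_release, t_release, q_stream, t_stream):
    """Flow-weighted mixing temperature downstream of a reservoir release.

    q in m3/s, t in degrees C. A back-of-envelope illustration of why
    augmenting summer low flow with cooler reservoir water lowers the
    downstream temperature; not the model's energy balance.
    """
    return (q_release * t_release + q_stream * t_stream) / (q_release + q_stream)

# 20 m3/s of 15 C hypolimnetic release into 30 m3/s of 25 C summer flow.
t_mixed = mixed_stream_temperature(20.0, 15.0, 30.0, 25.0)
```

Even this crude estimate shows a multi-degree cooling of the blended flow, of the same order as the 1-2 °C August-October effect the full model attributes to reservoir operations.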
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.
1999-01-01
The results obtained with the variable-resolution stretched grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with emphasis on the regional down-scaling effects and their dependence on the stretched grid design and parameters. A variable-resolution SG-GCM and SG-DAS using a global stretched grid with fine resolution over an area of interest is a viable new approach to regional and subregional climate studies and applications. The stretched grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation, and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested grid approach is performed using the developed SG-DAS (Data Assimilation System), which incorporates the SG-GCM. The SG-DAS is run with data withheld over the area of interest. The design imitates the nested grid framework with boundary conditions provided from analyses. No boundary-condition buffer is needed in this case because of the global domain of integration used for the SG-GCM and SG-DAS. The experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5-degree (and higher) regional resolution, are briefly discussed. The major aspects of parallelization of the SG-GCM code are outlined.
The key objectives of the study are: 1) obtaining efficient down-scaling over the area of interest with fine and very fine resolution; 2) providing consistent interactions between regional and global scales, including consistent representation of regional energy and water balances; and 3) providing high computational efficiency for future SG-GCM and SG-DAS versions using parallel codes.
A Testbed for Model Development
NASA Astrophysics Data System (ADS)
Berry, J. A.; Van der Tol, C.; Kornfeld, A.
2014-12-01
Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and that has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange, and stomatal behavior at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.
Layered intrusion formation by top down thermal migration zone refining (Invited)
NASA Astrophysics Data System (ADS)
Lundstrom, C.
2009-12-01
The formation of layered mafic intrusions by crystallization from cooling magmas represents the textbook example of igneous differentiation, often attributed to fractional crystallization through gravitational settling. Yet in detail, such interpretations have significant problems such that it remains unclear how these important features form. Put in the Earth perspective that no km scale blob of >50% melt has ever been imaged geophysically and that geochronological studies repeatedly indicate age inconsistencies with “big tank” magma chambers, it may be questioned if km scale magma chambers have ever existed. I will present the case for forming layered intrusions by a top down process whereby arriving basaltic magma reaches a permeability barrier, begins to underplate and forms the intrusion incrementally by sill injection with the body growing downward at ~1 mm/yr rate or less. A temperature gradient zone occurs in the overlying previously emplaced sills, leading to chemical components migrating by diffusion. As long as the rate of diffusion can keep up with rate of sill addition, the body will differentiate along a path similar to a liquid line of descent. In this talk, I will integrate data from 3 areas: 1) laboratory experiments examining the behavior of partially molten silicates in a temperature gradient (thermal migration); 2) numerical modeling of the moving temperature gradient zone process using IRIDIUM (Boudreau, 2003); 3) measurements of Fe isotope ratios and geochronology from the Sonju Lake Intrusion in the Duluth Complex. This model provides the ability to form km scale intrusions by a seismically invisible means, can explain million year offsets in chronology, and has implications for reef development and PGE concentration. 
Most importantly, this model of top down layered intrusion formation, following a similar recent proposal for granitoid formation (Lundstrom, 2009), represents a testable hypothesis: because temperature gradient driven diffusion leads to the prediction of heavy isotope ratios near the top of the intrusion and light ratios near the bottom of the intrusion, analyses of Fe, Mg and Si isotopes provide an important new tool for examining igneous differentiation.
Design of scaled down structural models
NASA Technical Reports Server (NTRS)
Simitses, George J.
1994-01-01
In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.
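The similarity argument above can be made concrete with a small numerical sketch. This is illustrative only, assuming complete geometric similarity and an identical material at both scales; it uses the classical simply supported plate buckling formula, and every dimension, property, and scale factor below is hypothetical rather than taken from the study.

```python
import math

def plate_buckling_load(E, nu, t, b, k=4.0):
    """Critical uniaxial buckling load per unit width of a simply
    supported rectangular plate: N_cr = k * pi^2 * D / b^2, where
    D = E t^3 / (12 (1 - nu^2)) is the flexural rigidity."""
    D = E * t**3 / (12.0 * (1.0 - nu**2))
    return k * math.pi**2 * D / b**2

# Hypothetical prototype plate (aluminium-like material)
E, nu = 70e9, 0.33
t_p, b_p = 0.004, 1.0            # thickness and width [m]

# Geometrically similar model: every length scaled by lam
lam = 0.25
N_model = plate_buckling_load(E, nu, lam * t_p, lam * b_p)

# Similarity condition: N_cr ~ t^3 / b^2, so N_cr scales as lam.
# The prototype load is therefore predicted from model data alone:
N_prototype_predicted = N_model / lam
```

Evaluating the formula directly at prototype scale reproduces the extrapolated value, which is the essence of using model test data to predict prototype behavior; distorted models break this exact correspondence and require the partial similarity conditions mentioned above.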
Solid object visualization of 3D ultrasound data
NASA Astrophysics Data System (ADS)
Nelson, Thomas R.; Bailey, Michael J.
2000-04-01
Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment, which produces solid prototype models of computer-generated structures, is directly applicable to visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved, and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster coordinate format to a set of polygons representing an iso-surface, and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation, the model could be touched and viewed. A "3D visualization hardcopy device" has advantages for conveying spatial relations compared to visualization using computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimensions of the original object or scaled up (or down) to better match the viewer's reference frame. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.
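The voxel-to-object pipeline described above (threshold segmentation, then scaling to physical dimensions) can be sketched in a few lines. This is a simplified illustration, not the authors' code: a real 3DUS pipeline would extract the iso-surface with an algorithm such as marching cubes before export to the RP machine, and the volume, threshold, and voxel size below are invented.

```python
def segment(volume, threshold):
    """Binary threshold segmentation of a voxel grid (nested lists):
    voxels at or above the threshold belong to the feature of interest."""
    return [[[1 if v >= threshold else 0 for v in row] for row in plane]
            for plane in volume]

def physical_extent(voxel_counts, voxel_size_mm, scale=1.0):
    """Physical dimensions of the fabricated object; 'scale' enlarges or
    shrinks the model relative to the original anatomy, preserving the
    scaling information carried through the pipeline."""
    return tuple(n * voxel_size_mm * scale for n in voxel_counts)

# Tiny 2x2x2 example volume of echo intensities
vol = [[[10, 200], [30, 180]],
       [[220, 40], [190, 25]]]
mask = segment(vol, threshold=100)
dims = physical_extent((2, 2, 2), voxel_size_mm=0.5, scale=2.0)  # 2x enlarged
```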
Multi-scale hydrometeorological observation and modelling for flash flood understanding
NASA Astrophysics Data System (ADS)
Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.
2014-09-01
This paper presents a coupled observation and modelling strategy aiming at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km². The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing the runoff generation and its concentration can be tackled; (2) the small to medium catchment scale (1-100 km²), where the impact of the network structure and of the spatial variability of rainfall, landscape and initial soil moisture can be quantified; (3) the larger scale (100-1000 km²), where the river routing and flooding processes become important. These observations are part of the HyMeX (HYdrological cycle in the Mediterranean EXperiment) enhanced observation period (EOP), which will last 4 years (2012-2015). In terms of hydrological modelling, the objective is to set up regional-scale models, while addressing small and generally ungauged catchments, which represent the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined and the models are used as "hypothesis testing" tools by coupling model development with data analyses in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental set-up and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes on various scales.
Climate-mediated changes in marine ecosystem regulation during El Niño.
Lindegren, Martin; Checkley, David M; Koslow, Julian A; Goericke, Ralf; Ohman, Mark D
2018-02-01
The degree to which ecosystems are regulated through bottom-up, top-down, or direct physical processes represents a long-standing issue in ecology, with important consequences for resource management and conservation. In marine ecosystems, the role of bottom-up and top-down forcing has been shown to vary over spatio-temporal scales, often linked to highly variable and heterogeneously distributed environmental conditions. Ecosystem dynamics in the Northeast Pacific have been suggested to be predominately bottom-up regulated. However, it remains unknown to what extent top-down regulation occurs, or whether the relative importance of bottom-up and top-down forcing may shift in response to climate change. In this study, we investigate the effects and relative importance of bottom-up, top-down, and physical forcing during changing climate conditions on ecosystem regulation in the Southern California Current System (SCCS) using a generalized food web model. This statistical approach is based on nonlinear threshold models and a long-term data set (~60 years) covering multiple trophic levels from phytoplankton to predatory fish. We found bottom-up control to be the primary mode of ecosystem regulation. However, our results also demonstrate an alternative mode of regulation represented by interacting bottom-up and top-down forcing, analogous to wasp-waist dynamics, but occurring across multiple trophic levels and only during periods of reduced bottom-up forcing (i.e., weak upwelling, low nutrient concentrations, and primary production). The shifts in ecosystem regulation are caused by changes in ocean-atmosphere forcing and triggered by highly variable climate conditions associated with El Niño. 
Furthermore, we show that biota respond differently to major El Niño events during positive or negative phases of the Pacific Decadal Oscillation (PDO), as well as highlight potential concerns for marine and fisheries management by demonstrating increased sensitivity of pelagic fish to exploitation during El Niño. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Baker, I. T.; Prihodko, L.; Vivoni, E. R.; Denning, A. S.
2017-12-01
Arid and semiarid regions represent a large fraction of global land, with attendant importance of surface energy and trace gas flux to global totals. These regions are characterized by strong seasonality, especially in precipitation, that defines the level of ecosystem stress. Individual plants have been observed to respond non-linearly to increasing soil moisture stress, where plant function is generally maintained as soils dry down to a threshold at which rapid closure of stomates occurs. Incorporating this nonlinear mechanism into landscape-scale models can result in unrealistic binary "on-off" behavior that is especially problematic in arid landscapes. Consequently, models have "relaxed" their simulation of soil moisture stress on evapotranspiration (ET). Unfortunately, these relaxations are not physically based, but are imposed upon model physics as a means to force a more realistic response. Previously, we introduced a new method to represent soil moisture regulation of ET, whereby the landscape is partitioned into "BINS" of soil moisture wetness, each associated with a fractional area of the landscape or grid cell. A physically and observationally based nonlinear soil moisture stress function is applied, but when convolved with the relative area distribution represented by wetness BINS, the system has the emergent property of "smoothing" the landscape-scale response without the need for non-physical impositions on model physics. In this research we confront BINS simulations of Bowen ratio, soil moisture variability, and trace gas flux with soil moisture and eddy covariance observations taken at the Jornada LTER dryland site in southern New Mexico. We calculate the mean annual wetting cycle and associated variability about the mean state, and evaluate model performance against this variability and time series of land surface fluxes from the highly instrumented Tromble Weir watershed.
The BINS simulations capture the relatively rapid reaction to wetting events and more prolonged response to drying cycles, as opposed to binary behavior in the control.
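A minimal sketch of the BINS idea follows. The stress function and bin values here are hypothetical stand-ins for the paper's physically based formulation; the point is the emergent behavior: each bin responds sharply to drying, but the area-weighted sum over the wetness distribution varies smoothly, avoiding the binary "on-off" response.

```python
def stress_factor(w, w_crit=0.2, p=8):
    """Sharp, threshold-like soil moisture stress on ET: near 1 for wet
    soil, dropping rapidly below w_crit (illustrative functional form)."""
    return w**p / (w**p + w_crit**p)

def landscape_et_fraction(bins):
    """Area-weighted ET regulation for a grid cell partitioned into
    wetness BINS, given as (wetness, area_fraction) pairs."""
    return sum(area * stress_factor(w) for w, area in bins)

# Hypothetical grid cell: dry patches coexist with wet ones
bins = [(0.05, 0.2), (0.15, 0.3), (0.30, 0.3), (0.60, 0.2)]
et_frac = landscape_et_fraction(bins)   # smooth value between the extremes
```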
Resolving dispersion and induction components for polarisable molecular simulations of ionic liquids
NASA Astrophysics Data System (ADS)
Pádua, Agílio A. H.
2017-05-01
One important development in interaction potential models, or atomistic force fields, for molecular simulation is the inclusion of explicit polarisation, which represents the induction effects of charged or polar molecules on polarisable electron clouds. Polarisation can be included through fluctuating charges, induced multipoles, or Drude dipoles. This work uses Drude dipoles and is focused on room-temperature ionic liquids, for which fixed-charge models predict too slow dynamics. The aim of this study is to devise a strategy to adapt existing non-polarisable force fields upon addition of polarisation, because induction was already contained to an extent, implicitly, due to parametrisation against empirical data. Therefore, a fraction of the van der Waals interaction energy should be subtracted so that the Lennard-Jones terms only account for dispersion and the Drude dipoles for induction. Symmetry-adapted perturbation theory is used to resolve the dispersion and induction terms in dimers and to calculate scaling factors to reduce the Lennard-Jones terms from the non-polarisable model. Simply adding Drude dipoles to an existing fixed-charge model already improves the prediction of transport properties, increasing diffusion coefficients, and lowering the viscosity. Scaling down the Lennard-Jones terms leads to still faster dynamics and densities that match experiment extremely well. The concept developed here improves the overall prediction of density and transport properties and can be adapted to other models and systems. In terms of microscopic structure of the ionic liquids, the inclusion of polarisation and the down-scaling of Lennard-Jones terms affect only slightly the ordering of the first shell of counterions, leading to small decreases in coordination numbers. 
Remarkably, the effect of polarisation is major beyond first neighbours, significantly weakening spatial correlations, a structural effect that is certainly related to the faster dynamics of polarisable models.
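The down-scaling of Lennard-Jones terms can be illustrated with a short sketch. The scaling factor below is a made-up number standing in for a SAPT-derived value, and scaling the well depth uniformly is a simplification of the force-field adjustment described above; the parameters are typical magnitudes, not taken from the paper.

```python
def lj_energy(r, epsilon, sigma, k_disp=1.0):
    """Lennard-Jones pair energy with the well depth scaled by k_disp,
    so that the LJ term accounts only for dispersion once induction is
    handled explicitly by Drude dipoles."""
    sr6 = (sigma / r)**6
    return 4.0 * k_disp * epsilon * (sr6**2 - sr6)

eps, sig = 0.5, 0.34          # kJ/mol, nm (invented, typical magnitudes)
r_min = 2**(1/6) * sig        # LJ minimum location, unchanged by k_disp
full = lj_energy(r_min, eps, sig)           # well depth -epsilon
scaled = lj_energy(r_min, eps, sig, 0.8)    # 20% shallower well
```

Uniformly scaling the well depth leaves the position of the energy minimum unchanged while reducing its depth, which is the intended effect: the fixed-charge model's implicit induction is removed from the van der Waals term.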
The importance of structural anisotropy in computational models of traumatic brain injury.
Carlsen, Rika W; Daphalapurkar, Nitin P
2015-01-01
Understanding the mechanisms of injury might prove useful in assisting the development of methods for the management and mitigation of traumatic brain injury (TBI). Computational head models can provide valuable insight into the multi-length-scale complexity associated with the primary nature of diffuse axonal injury. It involves understanding how the trauma to the head (at the centimeter length scale) translates to the white-matter tissue (at the millimeter length scale), and even further down to the axonal-length scale, where physical injury to axons (e.g., axon separation) may occur. However, to accurately represent the development of TBI, the biofidelity of these computational models is of utmost importance. There has been a focused effort to improve the biofidelity of computational models by including more sophisticated material definitions and implementing physiologically relevant measures of injury. This paper summarizes recent computational studies that have incorporated structural anisotropy in both the material definition of the white matter and the injury criterion as a means to improve the predictive capabilities of computational models for TBI. We discuss the role of structural anisotropy on both the mechanical response of the brain tissue and on the development of injury. We also outline future directions in the computational modeling of TBI.
Multiscale pore structure and constitutive models of fine-grained rocks
NASA Astrophysics Data System (ADS)
Heath, J. E.; Dewers, T. A.; Shields, E. A.; Yoon, H.; Milliken, K. L.
2017-12-01
A foundational concept of continuum poromechanics is the representative elementary volume or REV: an amount of material large enough that pore- or grain-scale fluctuations in relevant properties are dissipated to a definable mean, but smaller than length scales of heterogeneity. We determine 2D-equivalent representative elementary areas (REAs) of pore areal fraction of three major types of mudrocks by applying multi-beam scanning electron microscopy (mSEM) to obtain terapixel image mosaics. Image analysis obtains pore areal fraction and pore size and shape as a function of progressively larger measurement areas. Using backscattering imaging and mSEM data, pores are identified by the components within which they occur, such as in organics or the clastic matrix. We correlate pore areal fraction with nano-indentation, micropillar compression, and axisymmetric testing at multiple length scales on a terrigenous-argillaceous mudrock sample. The combined data set is used to: investigate representative elementary volumes (and areas for the 2D images); determine if scale separation occurs; and determine if transport and mechanical properties at a given length scale can be statistically defined. Clear scale separation occurs between REAs and observable heterogeneity in two of the samples. A highly-laminated sample exhibits fine-scale heterogeneity and an overlapping in scales, in which case typical continuum assumptions on statistical variability may break down. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
NASA Astrophysics Data System (ADS)
Shi, Ao; Lu, Bo; Yang, Dangguo; Wang, Xiansheng; Wu, Junqiang; Zhou, Fangqi
2018-05-01
Coupling between aero-acoustic noise and structural vibration under high-speed open-cavity flow-induced oscillation may bring about severe random vibration of the structure, and can even drive the structure to fatigue failure, which threatens flight safety. Vibro-acoustic experiments on scaled-down models are an effective means of clarifying the effects of high-intensity cavity noise on structural vibration. Therefore, for vibro-acoustic experiments on cavities in a wind tunnel, taking a typical elastic cavity as the research object, dimensional analysis and the finite element method were adopted to establish similitude relations for the structural inherent characteristics and dynamics of a distorted model, and the proposed similitude relations were verified by experiments and numerical simulation. The research shows that, based on analysis of the scaled-down model, the established similitude relations can accurately reproduce the structural dynamic characteristics of the actual model, which provides theoretical guidance for the structural design and vibro-acoustic experiments of scaled-down elastic cavity models.
Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs
ERIC Educational Resources Information Center
Hung, David; Lee, Shu-Shing
2015-01-01
Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…
Mejias, Jorge F; Murray, John D; Kennedy, Henry; Wang, Xiao-Jing
2016-11-01
Interactions between top-down and bottom-up processes in the cerebral cortex hold the key to understanding attentional processes, predictive coding, executive control, and a gamut of other brain functions. However, the underlying circuit mechanism remains poorly understood and represents a major challenge in neuroscience. We approached this problem using a large-scale computational model of the primate cortex constrained by new directed and weighted connectivity data. In our model, the interplay between feedforward and feedback signaling depends on the cortical laminar structure and involves complex dynamics across multiple (intralaminar, interlaminar, interareal, and whole cortex) scales. The model was tested by reproducing, as well as providing insights into, a wide range of neurophysiological findings about frequency-dependent interactions between visual cortical areas, including the observation that feedforward pathways are associated with enhanced gamma (30 to 70 Hz) oscillations, whereas feedback projections selectively modulate alpha/low-beta (8 to 15 Hz) oscillations. Furthermore, the model reproduces a functional hierarchy based on frequency-dependent Granger causality analysis of interareal signaling, as reported in recent monkey and human experiments, and suggests a mechanism for the observed context-dependent hierarchy dynamics. Together, this work highlights the necessity of multiscale approaches and provides a modeling platform for studies of large-scale brain circuit dynamics and functions.
Towards end-to-end models for investigating the effects of climate and fishing in marine ecosystems
NASA Astrophysics Data System (ADS)
Travers, M.; Shin, Y.-J.; Jennings, S.; Cury, P.
2007-12-01
End-to-end models that represent ecosystem components from primary producers to top predators, linked through trophic interactions and affected by the abiotic environment, are expected to provide valuable tools for assessing the effects of climate change and fishing on ecosystem dynamics. Here, we review the main process-based approaches used for marine ecosystem modelling, focusing on the extent of the food web modelled, the forcing factors considered, the trophic processes represented, as well as the potential use and further development of the models. We consider models of a subset of the food web, models which represent the first attempts to couple low and high trophic levels, integrated models of the whole ecosystem, and size spectrum models. Comparisons within and among these groups of models highlight the preferential use of functional groups at low trophic levels and species at higher trophic levels and the different ways in which the models account for abiotic processes. The model comparisons also highlight the importance of choosing an appropriate spatial dimension for representing organism dynamics. Many of the reviewed models could be extended by adding components and by ensuring that the full life cycles of species components are represented, but end-to-end models should provide full coverage of ecosystem components, the integration of physical and biological processes at different scales and two-way interactions between ecosystem components. We suggest that this is best achieved by coupling models, but there are very few existing cases where the coupling supports true two-way interaction. The advantages of coupling models are that the extent of discretization and representation can be targeted to the part of the food web being considered, making their development time- and cost-effective. Processes such as predation can be coupled to allow the propagation of forcing factors effects up and down the food web. 
However, there needs to be a stronger focus on enabling two-way interaction, carefully selecting the key functional groups and species, reconciling different time and space scales and the methods of converting between energy, nutrients and mass.
NASA Astrophysics Data System (ADS)
Offner, Avshalom; Ramon, Guy Z.
2016-11-01
Thermoacoustic phenomena - conversion of heat to acoustic oscillations - may be harnessed for the construction of reliable, practically maintenance-free engines and heat pumps. Specifically, miniaturization of thermoacoustic devices holds great promise for cooling of micro-electronic components. However, as device size is pushed down to the micrometer scale, it is expected that non-negligible slip effects will exist at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design process of micro-scale thermoacoustic devices that will operate under ultrasonic frequencies. Stability curves for engines - representing the onset of self-sustained oscillations - were calculated with both no-slip and slip boundary conditions, revealing improvement in the performance of engines with slip at the resonance frequency range applicable for micro-scale devices. Maximum achievable temperature difference curves for thermoacoustic heat pumps were calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garraffo, Cecilia; Drake, Jeremy J.; Cohen, Ofer
Rotation evolution of late-type stars is dominated by magnetic braking and the underlying factors that control this angular momentum loss are important for the study of stellar spin-down. In this work, we study angular momentum loss as a function of two different aspects of magnetic activity using a calibrated Alfvén wave-driven magnetohydrodynamic wind model: the strengths of magnetic spots and their distribution in latitude. By driving the model using solar and modified solar surface magnetograms, we show that the topology of the field arising from the net interaction of both small-scale and large-scale field is important for spin-down rates and that angular momentum loss is not a simple function of large scale magnetic field strength. We find that changing the latitude of magnetic spots can modify mass and angular momentum loss rates by a factor of two. The general effect that causes these differences is the closing down of large-scale open field at mid- and high-latitudes by the addition of the small-scale field. These effects might give rise to modulation of mass and angular momentum loss through stellar cycles, and present a problem for ab initio attempts to predict stellar spin-down based on wind models. For all the magnetogram cases considered here, from dipoles to various spotted distributions, we find that angular momentum loss is dominated by the mass loss at mid-latitudes. The spin-down torque applied by magnetized winds therefore acts at specific latitudes and is not evenly distributed over the stellar surface, though this aspect is unlikely to be important for understanding spin-down and surface flows on stars.
Strength and Microstructure of Ceramics
1989-11-01
…processing defects (pores or inclusions), etc. Theoretically, flaws have been represented as scaled-down versions of large cracks, so that the… …no spurious reflections, confirming that the defects were not microtwins. From the TEM evidence, along with corresponding observations of fault… …interfaces can be viewed as high-energy planar defects. As such, they represent favored sites for microcrack…
A new paradigm for predicting zonal-mean climate and climate change
NASA Astrophysics Data System (ADS)
Armour, K.; Roe, G.; Donohoe, A.; Siler, N.; Markle, B. R.; Liu, X.; Feldl, N.; Battisti, D. S.; Frierson, D. M.
2016-12-01
How will the pole-to-equator temperature gradient, or large-scale patterns of precipitation, change under global warming? Answering such questions typically involves numerical simulations with comprehensive general circulation models (GCMs) that represent the complexities of climate forcing, radiative feedbacks, and atmosphere and ocean dynamics. Yet, our understanding of these predictions hinges on our ability to explain them through the lens of simple models and physical theories. Here we present evidence that zonal-mean climate, and its changes, can be understood in terms of a moist energy balance model that represents atmospheric heat transport as a simple diffusion of latent and sensible heat (as a down-gradient transport of moist static energy, with a diffusivity coefficient that is nearly constant with latitude). We show that the theoretical underpinnings of this model derive from the principle of maximum entropy production; that its predictions are empirically supported by atmospheric reanalyses; and that it successfully predicts the behavior of a hierarchy of climate models - from a gray radiation aquaplanet moist GCM, to comprehensive GCMs participating in CMIP5. As an example of the power of this paradigm, we show that, given only patterns of local radiative feedbacks and climate forcing, the moist energy balance model accurately predicts the evolution of zonal-mean temperature and atmospheric heat transport as simulated by the CMIP5 ensemble. These results suggest that, despite all of its dynamical complexity, the atmosphere essentially responds to energy imbalances by simply diffusing latent and sensible heat down-gradient; this principle appears to explain zonal-mean climate and its changes under global warming.
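The core mechanism invoked above, down-gradient diffusion of latent and sensible heat with a diffusivity that is nearly constant with latitude, can be sketched as a one-dimensional explicit update. The grid, diffusivity, and time step below are arbitrary illustrative values (chosen to satisfy the explicit stability limit D*dt/dx^2 <= 1/2), not the paper's configuration.

```python
def diffuse_mse(h, D, dx, dt):
    """One explicit step of down-gradient diffusion of moist static
    energy h on a 1-D latitude grid with no-flux (polar) boundaries.
    Total energy is conserved; gradients are relaxed."""
    n = len(h)
    out = h[:]
    for i in range(n):
        left = h[i - 1] if i > 0 else h[i]       # mirror: no flux at pole
        right = h[i + 1] if i < n - 1 else h[i]
        out[i] = h[i] + D * dt / dx**2 * (left - 2.0 * h[i] + right)
    return out

# Equator-to-pole contrast in moist static energy (arbitrary units)
h = [10.0, 0.0, 0.0, 0.0]
h1 = diffuse_mse(h, D=1.0, dx=1.0, dt=0.25)
```

Repeated application relaxes the pole-to-equator contrast while conserving the integrated energy, the aggregate behavior the abstract attributes to the atmosphere's response to energy imbalances.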
NASA Astrophysics Data System (ADS)
Hoffman, F. M.; Kumar, J.; Maddalena, D. M.; Langford, Z.; Hargrove, W. W.
2014-12-01
Disparate in situ and remote sensing time series data are being collected to understand the structure and function of ecosystems and how they may be affected by climate change. However, resource and logistical constraints limit the frequency and extent of observations, particularly in the harsh environments of the arctic and the tropics, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent variability at desired scales. These regions host large areas of potentially vulnerable ecosystems that are poorly represented in Earth system models (ESMs), motivating two new field campaigns, called Next Generation Ecosystem Experiments (NGEE) for the Arctic and Tropics, funded by the U.S. Department of Energy. Multivariate Spatio-Temporal Clustering (MSTC) provides a quantitative methodology for stratifying sampling domains, informing site selection, and determining the representativeness of measurement sites and networks. We applied MSTC to down-scaled general circulation model results and data for the State of Alaska at a 4 km2 resolution to define maps of ecoregions for the present (2000-2009) and future (2090-2099), showing how combinations of 37 bioclimatic characteristics are distributed and how they may shift in the future. Optimal representative sampling locations were identified on present and future ecoregion maps, and representativeness maps for candidate sampling locations were produced. We also applied MSTC to remotely sensed LiDAR measurements and multi-spectral imagery from the WorldView-2 satellite at a resolution of about 5 m2 within the Barrow Environmental Observatory (BEO) in Alaska. At this resolution, polygonal ground features—such as centers, edges, rims, and troughs—can be distinguished. 
Using these remote sensing data, we up-scaled vegetation distribution data collected on these polygonal ground features to a large area of the BEO to provide distributions of plant functional types that can be used to parameterize ESMs. In addition, we applied MSTC to 4 km2 global bioclimate data to define global ecoregions and understand the representativeness of CTFS-ForestGEO, Fluxnet, and RAINFOR sampling networks. These maps identify tropical forests underrepresented in existing observations of individual and combined networks.
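The core of MSTC is a k-means-style partition of grid cells in a standardized multivariate climate space, with representativeness measured as distance in that same space. The sketch below illustrates the idea on synthetic data (a 500-cell, 3-variable stand-in for the 4 km², 37-variable grids); the variable counts, cluster number, and data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for gridded bioclimatic variables: 500 grid cells,
# 3 variables, two broad "climate regimes"
X = rng.normal(size=(500, 3))
X[:250] += 3.0

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable

def kmeans(Z, k, iters=50, seed=1):
    """Basic k-means in standardized climate space."""
    r = np.random.default_rng(seed)
    centers = Z[r.choice(len(Z), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((Z[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(Z, k=4)           # the "ecoregions"

# Representativeness of a candidate sampling site: its distance to every
# grid cell in the standardized climate space (smaller = more similar)
site = Z[0]
representativeness = np.linalg.norm(Z - site, axis=1)
```

Mapping `representativeness` back onto the grid yields exactly the kind of representativeness map described above for candidate sampling locations.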
The impact of ARM on climate modeling
Randall, David A.; Del Genio, Anthony D.; Donner, Lee J.; ...
2016-07-15
Climate models are among humanity’s most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of Earth down to 100 km or smaller and implicitly include the effects of processes on even smaller scales down to a micron or so. In addition, the atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heath, Garvin A.
The overall objective of the Research Partnership to Secure Energy for America (RPSEA)-funded research project is to develop independent estimates of methane emissions using top-down and bottom-up measurement approaches and then to compare the estimates, including consideration of uncertainty. Such approaches will be applied at two scales: basin and facility. At facility scale, multiple methods will be used to measure methane emissions of the whole facility (controlled dual tracer and single tracer releases, aircraft-based mass balance and Gaussian back-trajectory), which are considered top-down approaches. The bottom-up approach will sum emissions from identified point sources measured using appropriate source-level measurement techniques (e.g., high-flow meters). At basin scale, the top-down estimate will come from boundary layer airborne measurements upwind and downwind of the basin, using a regional mass balance model plus approaches to separate atmospheric methane emissions attributed to the oil and gas sector. The bottom-up estimate will result from statistical modeling (also known as scaling up) of measurements made at selected facilities, with gaps filled through measurements and other estimates based on other studies. The relative comparison of the bottom-up and top-down estimates made at both scales will help improve understanding of the accuracy of the tested measurement and modeling approaches. The subject of this CRADA is NREL's contribution to the overall project. This project resulted from winning a competitive solicitation no. RPSEA RFP2012UN001, proposal no. 12122-95, which is the basis for the overall project. This Joint Work Statement (JWS) details the contributions of NREL and Colorado School of Mines (CSM) in performance of the CRADA effort.
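The basin-scale top-down piece of the workflow reduces, in its simplest form, to a downwind-minus-upwind mass balance across a vertical "screen": emissions ≈ wind speed times the methane enhancement integrated over the screen. The numbers below (wind, plume shape, boundary-layer depth) are invented for illustration and are not project data.

```python
import numpy as np

dy, dz = 1000.0, 100.0                    # grid spacing on the screen [m]
y = np.arange(0.0, 40e3 + dy, dy)         # crosswind distance [m]
n_z = 15                                  # vertical levels through the PBL
u = 5.0                                   # wind normal to the screen [m/s]

# Assumed Gaussian methane enhancement [kg/m3], well mixed vertically
dC = 6.0e-9 * np.exp(-((y - 20e3) / 8e3) ** 2)

# emission ~ integral of u * dC over the downwind screen (Riemann sum)
emission = u * dC.sum() * dy * n_z * dz   # kg/s
print("basin emission estimate: %.2f kg/s" % emission)
```

Comparing this number against a scaled-up sum of facility-level measurements is precisely the top-down versus bottom-up test the project describes.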
Indirect nitrous oxide emissions from streams within the US Corn Belt scale with stream order
Turner, Peter A.; Griffis, Timothy J.; Lee, Xuhui; Baker, John M.; Venterea, Rodney T.; Wood, Jeffrey D.
2015-01-01
N2O is an important greenhouse gas and the primary stratospheric ozone depleting substance. Its deleterious effects on the environment have prompted appeals to regulate emissions from agriculture, which represents the primary anthropogenic source in the global N2O budget. Successful implementation of mitigation strategies requires robust bottom-up inventories that are based on emission factors (EFs), simulation models, or a combination of the two. Top-down emission estimates, based on tall-tower and aircraft observations, indicate that bottom-up inventories severely underestimate regional and continental scale N2O emissions, implying that EFs may be biased low. Here, we measured N2O emissions from streams within the US Corn Belt using a chamber-based approach and analyzed the data as a function of Strahler stream order (S). N2O fluxes from headwater streams often exceeded 29 nmol N2O-N m−2⋅s−1 and decreased exponentially as a function of S. This relation was used to scale up riverine emissions and to assess the differences between bottom-up and top-down emission inventories at the local to regional scale. We found that the Intergovernmental Panel on Climate Change (IPCC) indirect EF for rivers (EF5r) is underestimated up to ninefold in southern Minnesota, which translates to a total tier 1 agricultural underestimation of N2O emissions by 40%. We show that accounting for zero-order streams as potential N2O hotspots can more than double the agricultural budget. Applying the same analysis to the US Corn Belt demonstrates that the IPCC EF5r underestimation explains the large differences observed between top-down and bottom-up emission estimates. PMID:26216994
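The scale-up step can be sketched as fitting the exponential decay F(S) = a·exp(−b·S) to chamber fluxes and weighting by stream surface area per order. The flux, width, and length values below are invented for illustration; they are not the paper's measurements.

```python
import numpy as np

# Invented chamber fluxes by Strahler order (nmol N2O-N m-2 s-1)
S = np.array([1, 2, 3, 4, 5])
F = np.array([25.0, 11.0, 5.2, 2.4, 1.1])

b, log_a = np.polyfit(S, np.log(F), 1)     # fit ln F = ln a + b*S
a = np.exp(log_a)

# Scale up: flux(S) times assumed stream surface area per order
width = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # channel width [m]
length = np.array([5e5, 2.5e5, 1.2e5, 6e4, 3e4])    # channel length per order [m]
total = (a * np.exp(b * S) * width * length).sum()   # nmol N2O-N / s
print("F(S) ~ %.1f * exp(%.2f S); riverine total %.2e nmol/s" % (a, b, total))
```

Because the fitted flux decays exponentially with order while low-order streams dominate the network length, the headwaters control the riverine total, which is why omitting zero-order streams biases the inventory low.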
Universal scaling in the branching of the tree of life.
Herrada, E Alejandro; Tessone, Claudio J; Klemm, Konstantin; Eguíluz, Víctor M; Hernández-García, Emilio; Duarte, Carlos M
2008-07-23
Understanding the patterns and processes of diversification of life in the planet is a key challenge of science. The Tree of Life represents such diversification processes through the evolutionary relationships among the different taxa, and can be extended down to intra-specific relationships. Here we examine the topological properties of a large set of interspecific and intraspecific phylogenies and show that the branching patterns follow allometric rules conserved across the different levels in the Tree of Life, all significantly departing from those expected from the standard null models. The finding of non-random universal patterns of phylogenetic differentiation suggests that similar evolutionary forces drive diversification across the broad range of scales, from macro-evolutionary to micro-evolutionary processes, shaping the diversity of life on the planet.
NASA Astrophysics Data System (ADS)
Youssof, M.; Thybo, H.; Artemieva, I. M.; Levander, A.
2015-06-01
We present a 3D high-resolution seismic model of the southern African cratonic region from teleseismic tomographic inversion of the P- and S-body wave dataset recorded by the Southern African Seismic Experiment (SASE). Utilizing 3D sensitivity kernels, we invert traveltime residuals of teleseismic body waves to calculate velocity anomalies in the upper mantle down to a 700 km depth with respect to the ak135 reference model. Various resolution tests allow evaluation of the extent of smearing effects and help define the optimum inversion parameters (i.e., damping and smoothness) for regularizing the inversion calculations. The fast lithospheric keels of the Kaapvaal and Zimbabwe cratons reach depths of 300-350 km and 200-250 km, respectively. The paleo-orogenic Limpopo Belt is represented by negative velocity perturbations down to a depth of ˜ 250 km, implying the presence of chemically fertile material with anomalously low wave speeds. The Bushveld Complex has low velocity down to ˜ 150 km, which is attributed to chemical modification of the cratonic mantle. In the present model, the finite-frequency sensitivity kernels make it possible to resolve relatively small-scale anomalies, such as the Colesberg Magnetic Lineament in the suture zone between the eastern and western blocks of the Kaapvaal Craton, and a small northern block of the Kaapvaal Craton, located between the Limpopo Belt and the Bushveld Complex.
Quark contact interactions at the LHC
NASA Astrophysics Data System (ADS)
Bazzocchi, F.; De Sanctis, U.; Fabbrichesi, M.; Tonero, A.
2012-06-01
Quark contact interactions are an important signal of new physics. We introduce a model in which the presence of a symmetry protects these new interactions from giving large corrections in flavor-changing processes at low energies. This minimal model provides the basic set of operators which must be considered to contribute to the high-energy processes. To discuss their experimental signature in jet pairs produced in proton-proton collisions, we simplify the number of possible operators down to two. We show (for a representative integrated luminosity of 200 pb−1 at √s = 7 TeV) how the presence of two operators significantly modifies the bound on the characteristic energy scale of the contact interactions, which is obtained by keeping a single operator.
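For orientation, the single-operator benchmark conventionally used in dijet contact-interaction searches is the flavor-universal left-left isoscalar operator (written here in the standard convention; this is our notation, not a formula quoted from the paper):

```latex
\mathcal{L}_{qq} \;=\; \frac{2\pi\,\eta}{\Lambda^{2}}
\left(\bar{q}_{L}\gamma^{\mu}q_{L}\right)
\left(\bar{q}_{L}\gamma_{\mu}q_{L}\right),
\qquad \eta = \pm 1 ,
```

where Λ is the characteristic energy scale of the contact interaction and η fixes the sign of its interference with QCD. The paper's point is that retaining a second independent operator of this type changes the dijet limit on Λ relative to the usual single-operator analysis.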
Advanced core-analyses for subsurface characterization
NASA Astrophysics Data System (ADS)
Pini, R.
2017-12-01
The heterogeneity of geological formations varies over a wide range of length scales and represents a major challenge for predicting the movement of fluids in the subsurface. Although they are inherently limited in the accessible length-scale, laboratory measurements on reservoir core samples still represent the only way to make direct observations on key transport properties. Yet, properties derived on these samples are of limited use and should be regarded as sample-specific (or `pseudos'), if the presence of sub-core scale heterogeneities is not accounted for in data processing and interpretation. The advent of imaging technology has significantly reshaped the landscape of so-called Special Core Analysis (SCAL) by providing unprecedented insight on rock structure and processes down to the scale of a single pore throat (i.e. the scale at which all reservoir processes operate). Accordingly, improved laboratory workflows are needed that make use of such wealth of information by e.g., referring to the internal structure of the sample and in-situ observations, to obtain accurate parameterisation of both rock- and flow-properties that can be used to populate numerical models. We report here on the development of such workflow for the study of solute mixing and dispersion during single- and multi-phase flows in heterogeneous porous systems through a unique combination of two complementary imaging techniques, namely X-ray Computed Tomography (CT) and Positron Emission Tomography (PET). The experimental protocol is applied to both synthetic and natural porous media, and it integrates (i) macroscopic observations (tracer effluent curves), (ii) sub-core scale parameterisation of rock heterogeneities (e.g., porosity, permeability and capillary pressure), and direct 3D observation of (iii) fluid saturation distribution and (iv) the dynamic spreading of the solute plumes. 
Suitable mathematical models are applied to reproduce experimental observations, including both 1D and 3D numerical schemes populated with the parameterisation above. While it validates the core-flooding experiments themselves, the calibrated mathematical model represents a key element for extending them to conditions prevalent in the subsurface, which would be otherwise not attainable in the laboratory.
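As a sketch of item (i), the macroscopic tracer effluent curve is commonly interpreted with a one-dimensional advection-dispersion solution; its dispersion coefficient is the parameter that the sub-core-scale heterogeneity data then refine. The analytical step-injection solution below uses invented core-flood parameters.

```python
import numpy as np
from math import erfc

# Invented core-flood parameters (assumptions for illustration)
L = 0.1        # core length [m]
v = 1.0e-5     # interstitial velocity [m/s]
D = 2.0e-8     # longitudinal dispersion coefficient [m2/s]

def effluent(times):
    """1D advection-dispersion solution at x = L for a step tracer
    injection (semi-infinite medium, leading term only)."""
    return np.array([0.5 * erfc((L - v * t) / (2.0 * np.sqrt(D * t)))
                     for t in times])

t = np.linspace(1.0, 3.0e4, 200)
c = effluent(t)
# breakthrough (c = 0.5) occurs near one pore volume, t = L/v = 1.0e4 s
```

Fitting D (or a spatially variable analogue in the 3D schemes mentioned above) to the measured curve is what distinguishes a genuine transport property from a sample-specific "pseudo".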
NASA Astrophysics Data System (ADS)
Syvitski, J. P.; Arango, H.; Harris, C. K.; Meiburg, E. H.; Jenkins, C. J.; Auad, G.; Hutton, E.; Kniskern, T. A.; Radhakrishnan, S.
2016-12-01
A loosely coupled numerical workflow is developed to address land-sea pathways for sediment routing from terrestrial and coastal sources, across the continental shelf and ultimately down the continental slope canyon system of the northern Gulf of Mexico (GOM). Model simulations represent a range of environmental conditions that might lead to the generation of turbidity currents. The workflow comprises: 1) A simulator for the water and sediment discharged from rivers into the GOM with WMBsedv2, calibrated using USGS and USACE gauged river data; 2) Domain grids and bathymetry (ETOPO2) for the ocean models and realistic seabed sediment texture grids (dbSEABED) for the sediment transport models; 3) A spectral wave action simulator (10 km resolution) (WaveWatch III) driven by GFDL - GFS winds; 4) A simulator for ocean dynamics (ROMS) forced with ECMWF ERA winds; 5) A simulator for seafloor resuspension and transport (CSTMS); 6) Simulators (HurriSlip) of seafloor failure and flow ignition locations for boundary input to a turbidity current model; and 7) A RANS turbidity current model (TURBINS) to route sediment flows down GOM canyons, providing estimates of bottom shear stresses. TURBINS was developed first as a DNS model and then converted to an LES model in which a dynamic turbulence closure scheme was employed. As in most DNS-to-LES model comparisons (here performed by the UCSB team), turbulence scaling allowed for higher-Re applications, but the model was still not capable of simulating field-scale (GOM continental canyon) environments. The LES model was next converted to a non-hydrostatic RANS model capable of field-scale applications, but only with a daisy-chain approach of multiple model runs along the simulated canyon floor. These model adaptations allowed the workflow to be tested for the year 1-Oct-2007 to 30-Sep-2008, which included two hurricanes (Ike and Gustav) in the domain.
The RANS-TURBINS employed further boundary simplifications on both sediment erosion and deposition in line with the ocean model ROMS-CSTMS.
Reverse engineering systems models of regulation: discovery, prediction and mechanisms.
Ashworth, Justin; Wurtmann, Elisabeth J; Baliga, Nitin S
2012-08-01
Biological systems can now be understood in comprehensive and quantitative detail using systems biology approaches. Putative genome-scale models can be built rapidly based upon biological inventories and strategic system-wide molecular measurements. Current models combine statistical associations, causative abstractions, and known molecular mechanisms to explain and predict quantitative and complex phenotypes. This top-down 'reverse engineering' approach generates useful organism-scale models despite noise and incompleteness in data and knowledge. Here we review and discuss the reverse engineering of biological systems using top-down data-driven approaches, in order to improve discovery, hypothesis generation, and the inference of biological properties. Copyright © 2011 Elsevier Ltd. All rights reserved.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model-based algorithms are proposed that effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from the MBF suggests domain adaptation, that is, changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model-based algorithms in reproducing images with a higher structural similarity index. PMID:28686634
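A minimal stand-in for the model-selection idea: fit one-dimensional Gaussian mixtures by EM and let an information criterion (here BIC, as a simplified proxy for the paper's modified Bayes factor) decide whether the data warrant adding a component. The data, initialization, and regularization floors are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 1D "local feature" data with two underlying populations
data = np.concatenate([rng.normal(0.0, 1.0, 300),
                       rng.normal(6.0, 1.0, 300)])

def gmm_loglik(x, k, iters=200):
    """Plain EM for a 1D Gaussian mixture with quantile-based init;
    returns the final log-likelihood."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        tot = np.maximum(pdf.sum(1, keepdims=True), 1e-300)
        r = pdf / tot                      # responsibilities (E-step)
        nk = np.maximum(r.sum(0), 1e-12)   # M-step updates below
        w = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(0) / nk, 1e-3)
    return np.log(tot).sum()

def bic(x, k):
    n_params = 3 * k - 1                   # weights + means + variances
    return n_params * np.log(len(x)) - 2 * gmm_loglik(x, k)

# Grow the model while the criterion improves -- a simplified stand-in
# for domain adaptation driven by the modified Bayes factor
scores = {k: bic(data, k) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)
print("selected number of components:", best_k)
```

In the paper's setting the same decision (detecting novel local features and changing the component count) is made with the MBF rather than BIC, but the mechanics of growing the mixture are analogous.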
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Kelley, Henry L.
2000-01-01
Two large-scale, two-dimensional helicopter tail boom models were used to determine the effects of passive venting on boom down loads and side forces in hovering crosswind conditions. The models were oval shaped and trapezoidal shaped. Completely porous and solid configurations, partial venting in various symmetric and asymmetric configurations, and strakes were tested. Calculations were made to evaluate the trends of venting and strakes on power required when applied to a UH-60 class helicopter. Compared with the UH-60 baseline, passive venting reduced side force but increased down load at flow conditions representing right sideward flight. Selective asymmetric venting resulted in side force benefits close to the fully porous case. Calculated trends on the effects of venting on power required indicated that the high asymmetric oval configuration was the most effective venting configuration for side force reduction, and the high asymmetric with a single strake was the most effective for overall power reduction. Also, curves of side force versus flow angle were noticeably smoother for the vented configurations compared with the solid baseline configuration; this indicated a potential for smoother flight in low-speed crosswind conditions.
Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa
2015-01-01
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study where a cell culture unit operation in bioreactors using one-sided pH control and their satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least square, orthogonal partial least square, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at 3-L scale. By reducing the initial sparge rate in 3-L bioreactor, process performance and product quality data moved closer to that of large scale. © 2015 American Institute of Chemical Engineers.
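The multivariate comparison can be sketched as projecting per-run process "fingerprints" into a common principal-component space and comparing group centroids: the scale whose centroid sits closer to the large-scale centroid is the better scale-down model. The synthetic data below, including an assumed pCO2/pH offset for the 3-L runs, merely illustrate the mechanics; none of these numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_vars = 8                                # e.g. titer, pCO2, pH, metabolites...
base = rng.normal(size=n_vars)            # "true" large-scale fingerprint

large = base + 0.2 * rng.normal(size=(6, n_vars))    # 15,000-L runs
flask = base + 0.25 * rng.normal(size=(6, n_vars))   # shake-flask satellites
offset = np.zeros(n_vars)
offset[:2] = 2.0                          # assumed pCO2/pH shift at 3-L scale
bioreactor_3L = base + offset + 0.2 * rng.normal(size=(6, n_vars))

X = np.vstack([large, flask, bioreactor_3L])
Xc = X - X.mean(axis=0)                   # mean-center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]                 # scores on first two PCs

c_large = scores[:6].mean(axis=0)
c_flask = scores[6:12].mean(axis=0)
c_3L = scores[12:].mean(axis=0)
d_flask = np.linalg.norm(c_flask - c_large)
d_3L = np.linalg.norm(c_3L - c_large)
print("centroid distance to 15,000-L: flask %.2f, 3-L %.2f" % (d_flask, d_3L))
```

Inspecting the PC loadings of the discriminating direction is then what points to the offending variables (here, by construction, the first two), mirroring how the study's analysis implicated pCO2 and pH.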
Prospects for improving the representation of coastal and shelf seas in global ocean models
NASA Astrophysics Data System (ADS)
Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard
2017-02-01
Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, hazards they pose and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ˜ 8 % of regions < 500 m deep, but this increases to ˜ 70 % for a 1/72° model, so resolving scales globally requires substantially finer resolution than the current state of the art. We quantify the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic nucleus for a European model of the ocean (NEMO) simulations; the latter includes tides and a k-ɛ vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. 
The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of multiscale modelling options (e.g. finite element, finite volume or a two-way nesting approach), we consider a simple scale analysis and a conceptual grid refining approach. We put this analysis in the context of evolving computer systems, discussing model turnaround time, scalability and resource costs. Using a simple cost model compared to a reference configuration (taken to be a 1/4° global model in 2011) and the increasing performance of the UK Research Councils' computer facility, we estimate an unstructured mesh multiscale approach, resolving process scales down to 1.5 km, would use a comparable share of the computer resource by 2021, the two-way nested multiscale approach by 2022, and a 1/72° global model by 2026. However, we also note that a 1/12° global model would not have a comparable computational cost to a 1° global model in 2017 until 2027. Hence, we conclude that for computationally expensive models (e.g. for oceanographic research or operational oceanography), resolving scales to ˜ 1.5 km would be routinely practical in about a decade given substantial effort on numerical and computational development. For complex Earth system models, this extends to about 2 decades, suggesting the focus here needs to be on improved process parameterisation to meet these challenges.
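The resource argument can be reproduced to order of magnitude with a toy cost model: computational cost grows roughly with the cube of the horizontal refinement factor (two spatial dimensions plus the correspondingly shorter time step), while machine capacity grows by a constant factor per year. The ~1.8×/year growth factor below is an assumption chosen to roughly reproduce the dates quoted in the abstract, not a figure taken from the paper.

```python
from math import log

def years_until_affordable(refinement, growth_per_year=1.8):
    """Years for machine capacity to absorb a horizontal resolution
    refinement, assuming cost ~ refinement**3 and constant annual growth."""
    return log(refinement ** 3) / log(growth_per_year)

# 1/72 deg model vs the 1/4 deg 2011 reference: refinement factor of 18
print("1/72 deg affordable around %.0f" % (2011 + years_until_affordable(72 / 4)))
```

With these assumed numbers the estimate lands in the mid-2020s, consistent with the paper's 2026 figure; the cubic cost law is what makes each further refinement so expensive.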
Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon
2018-07-01
We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Mimoza: web-based semantic zooming and navigation in metabolic networks.
Zhukova, Anna; Sherman, David J
2015-02-26
The complexity of genome-scale metabolic models makes them quite difficult for human users to read, since they contain thousands of reactions that must be included for accurate computer simulation. Interestingly, hidden similarities between groups of reactions can be discovered, and generalized to reveal higher-level patterns. The web-based navigation system Mimoza allows a human expert to explore metabolic network models in a semantically zoomable manner: the most general view represents the compartments of the model; the next view shows the generalized versions of reactions and metabolites in each compartment; and the most detailed view represents the initial network with the generalization-based layout (where similar metabolites and reactions are placed next to each other). It allows a human expert to grasp the general structure of the network and analyze it in a top-down manner. Mimoza can be installed standalone, used on-line at http://mimoza.bordeaux.inria.fr/, or installed in a Galaxy server for use in workflows. Mimoza views can be embedded in web pages, or downloaded as COMBINE archives.
Brunner, Matthias; Braun, Philipp; Doppler, Philipp; Posch, Christoph; Behrens, Dirk; Herwig, Christoph; Fricke, Jens
2017-07-01
Due to long mixing times and base addition at the top of the vessel, pH inhomogeneities are likely to occur during large-scale mammalian processes. The goal of this study was to set up a scale-down model of a 10-12 m3 stirred-tank bioreactor and to investigate the effect of pH perturbations on CHO cell physiology and process performance. Short-term changes in extracellular pH are hypothesized to affect intracellular pH and thus cell physiology. Therefore, batch fermentations including pH shifts to 9.0 and 7.8 are conducted in regular one-compartment systems. Short-term adaptation measurements show an immediate increase in intracellular pH due to elevated extracellular pH. With this basis of fundamental knowledge, a two-compartment system is established that is capable of simulating defined pH inhomogeneities. In contrast to state-of-the-art literature, the scale-down model includes parameters (e.g., volume of the inhomogeneous zone) as they might occur during large-scale processes. pH inhomogeneity studies in the two-compartment system are performed with simulation of temporary pH zones of pH 9.0. The specific growth rate, especially during the exponential growth phase, is strongly affected, resulting in a decreased maximum viable cell density and final product titer. The gathered results indicate that even short-term exposure of cells to elevated pH values during large-scale processes can affect cell physiology and overall process performance. In particular, it could be shown for the first time that pH perturbations, which might occur during the early process phase, have to be considered in scale-down models of mammalian processes. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
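The exposure that a two-compartment system imposes follows from simple residence-time bookkeeping: zone volume over circulation rate gives the time per pass at elevated pH, and the volume fraction gives the long-run fraction of time spent there. The volumes and flow rate below are illustrative assumptions, not the study's settings.

```python
# Illustrative two-compartment sizing (assumed values)
V_main = 10.0   # L, well-mixed compartment held near pH 7
V_zone = 1.0    # L, inhomogeneous zone held at pH 9
Q = 0.5         # L/min, circulation between the compartments

residence_per_pass = V_zone / Q                 # min at pH 9 per pass
fraction_high_pH = V_zone / (V_main + V_zone)   # long-run fraction of time
cycle_time = (V_main + V_zone) / Q              # min per full loop
print(residence_per_pass, fraction_high_pH, cycle_time)
```

Matching these three quantities to the mixing time and base-addition zone of the 10-12 m3 vessel is what makes the scale-down model representative, rather than an arbitrary stress test.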
Scaling dimensions in spectroscopy of soil and vegetation
NASA Astrophysics Data System (ADS)
Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.
2007-05-01
The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. 
Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. New spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.
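The linear spectral mixing model reviewed above can be sketched in a few lines. The sketch below assumes a two-endmember mixture, for which the least-squares fraction has a closed form; the endmember spectra and band count are illustrative, not taken from the paper.

```python
# Minimal sketch of the linear spectral mixing model with two endmembers.
# A mixed pixel is modeled as p = f*e1 + (1-f)*e2; the fraction f is
# recovered by closed-form least squares. Spectra below are made up.

def unmix_two(pixel, e1, e2):
    """Estimate the fraction f of endmember e1 in a mixed pixel."""
    diff = [a - b for a, b in zip(e1, e2)]       # e1 - e2
    resid = [p - b for p, b in zip(pixel, e2)]   # p - e2
    num = sum(d * r for d, r in zip(diff, resid))
    den = sum(d * d for d in diff)
    return num / den

# Illustrative 4-band endmember spectra: bare soil vs. vegetation
soil = [0.30, 0.35, 0.40, 0.45]
veg = [0.05, 0.08, 0.04, 0.50]

f_true = 0.3
mixed = [f_true * s + (1 - f_true) * v for s, v in zip(soil, veg)]
print(unmix_two(mixed, soil, veg))  # recovers the 0.3 soil fraction
```

Real unmixing problems use many endmembers with sum-to-one and non-negativity constraints, which require a constrained least-squares solver rather than this closed form.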
Pantea, Michael P.; Blome, Charles D.; Clark, Allan K.
2014-01-01
A three-dimensional model of the Camp Stanley Storage Activity area defines and illustrates the surface and subsurface hydrostratigraphic architecture of the military base and adjacent areas to the south and west using EarthVision software. The Camp Stanley model contains 11 hydrostratigraphic units, in descending order: 1 model layer representing the Edwards aquifer; 1 model layer representing the upper Trinity aquifer; 6 model layers representing the informal hydrostratigraphic units that make up the upper part of the middle Trinity aquifer; and 3 model layers representing, respectively, the Bexar, the Cow Creek, and the top of the Hammett units of the lower part of the middle Trinity aquifer. The Camp Stanley three-dimensional model includes 14 fault structures that generally trend northeast/southwest. The top of the Hammett hydrostratigraphic unit was used to propagate and validate all fault structures and to confirm most of the drill-hole data. Differences between modeled and previously mapped surface geology reflect interpretation of fault relations at depth, fault relations to hydrostratigraphic contacts, and simplification of the surface digital elevation model to fit the scale of the model, as well as changes based on recently obtained drill-hole data and field reconnaissance done during construction of the model. The three-dimensional modeling process revealed previously undetected horst and graben structures in the northeastern and southern parts of the study area. This is atypical, as most faults in the area are en echelon faults that step down southeastward toward the Gulf Coast. The graben structures may increase the potential for controlling or altering local groundwater flow.
Inverse modeling has been used extensively on the global scale to produce top-down estimates of emissions for chemicals such as CO and CH4. Regional scale air quality studies could also benefit from inverse modeling as a tool to evaluate current emission inventories; however, ...
Hyperscaling breakdown and Ising spin glasses: The Binder cumulant
NASA Astrophysics Data System (ADS)
Lundow, P. H.; Campbell, I. A.
2018-02-01
Among the Renormalization Group Theory scaling rules relating critical exponents, there are hyperscaling rules involving the dimension of the system. It is well known that in Ising models hyperscaling breaks down above the upper critical dimension. It was shown by Schwartz (1991) that the standard Josephson hyperscaling rule can also break down in Ising systems with quenched random interactions. A related Renormalization Group Theory hyperscaling rule links the critical exponents for the normalized Binder cumulant and the correlation length in the thermodynamic limit. An appropriate scaling approach for analyzing measurements from criticality to infinite temperature is first outlined. Numerical data on the scaling of the normalized correlation length and the normalized Binder cumulant are shown for the canonical Ising ferromagnet model in dimension three where hyperscaling holds, for the Ising ferromagnet in dimension five (so above the upper critical dimension) where hyperscaling breaks down, and then for Ising spin glass models in dimension three where the quenched interactions are random. For the Ising spin glasses there is a breakdown of the normalized Binder cumulant hyperscaling relation in the thermodynamic limit regime, with a return to size independent Binder cumulant values in the finite-size scaling regime around the critical region.
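The normalized Binder cumulant at the center of this analysis is a simple moment ratio of the order parameter. The sketch below shows its standard definition applied to a list of per-configuration magnetizations; the sample values are illustrative only.

```python
# Sketch: Binder cumulant U = 1 - <m^4> / (3 <m^2>^2), computed from
# per-configuration magnetizations m. For a Gaussian-distributed order
# parameter <m^4> = 3<m^2>^2 so U -> 0; for a sharply peaked (ordered)
# distribution with |m| = const, U -> 2/3.

def binder_cumulant(ms):
    n = len(ms)
    m2 = sum(m * m for m in ms) / n
    m4 = sum(m ** 4 for m in ms) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# Ordered limit: every configuration has |m| = 1
print(binder_cumulant([1.0, -1.0, 1.0, 1.0]))  # 2/3
```

In finite-size scaling studies, U is measured at a range of system sizes and temperatures; curves for different sizes cross near the critical point.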
Model unification and scale-adaptivity in the Eddy-Diffusivity Mass-Flux (EDMF) approach
NASA Astrophysics Data System (ADS)
Neggers, R.; Siebesma, P.
2011-12-01
It has long been understood that the turbulent-convective transport of heat, moisture and momentum plays an important role in the dynamics and climate of the earth's atmosphere. Accordingly, the representation of these processes in General Circulation Models (GCMs) has always been an active research field. Turbulence and convection act on temporal and spatial scales that are unresolved by most present-day GCMs, and have to be represented through parametric relations. Over the years a variety of schemes has been successfully developed. Although differing widely in their details, only two basic transport models form the basis of most of these schemes. The first is the diffusive transport model, which can only act down-gradient. An example is the turbulent mixing at small scales. The second is the advective transport model, which can act both down-gradient and counter-gradient. A good example is the transport of heat and moisture by convective updrafts that overshoot into stable layers of air. In practice, diffusive models often make use of a K-profile method or a prognostic TKE budget, while advective models make use of a rising (and entraining) plume budget. While most transport schemes classically apply either the diffusive model or the advective model, the relatively recently introduced Eddy-Diffusivity Mass-Flux (EDMF) approach aims to combine both techniques. By applying advection and diffusion simultaneously, one can make use of the benefits of both approaches. Since its emergence about a decade ago, the EDMF approach has been successfully applied in both research and operational circulation models. This presentation is dedicated to the EDMF framework. Apart from a short introduction to the EDMF concept and a short overview of its current implementations, our main goal is to elaborate on the opportunities EDMF brings in addressing some long-standing problems in the parameterization of turbulent-convective transport. 
The first problem is the need for a unified approach in the parameterization of distinct transport regimes. The main objections to a separate representation of regimes are i) artificially discrete regime transitions, and ii) superfluous and opaque coding. For a unified approach we need to establish what complexity is sufficient to achieve general applicability. We argue that adding only a little complexity already enables the standard EDMF framework to represent multiple boundary-layer transport regimes and smooth transitions between them. The second long-standing problem is that ever increasing computational capacity and speed have led to increasingly fine discretizations in GCMs, which requires scale-adaptivity in a sub-grid transport model. It is argued that a flexible partitioning between advection and diffusion within EDMF, as well as the potential to introduce stochastic elements in the advective part of EDMF, creates opportunities to introduce such adaptivity. In the final part of the presentation we will attempt to give an overview of currently ongoing developments of the EDMF framework, concerning both model formulation and evaluation of key assumptions against observational datasets and large-eddy simulation results.
Probing Planckian Corrections at the Horizon Scale with LISA Binaries
NASA Astrophysics Data System (ADS)
Maselli, Andrea; Pani, Paolo; Cardoso, Vitor; Abdelsalhin, Tiziano; Gualtieri, Leonardo; Ferrari, Valeria
2018-02-01
Several quantum-gravity models of compact objects predict microscopic or even Planckian corrections at the horizon scale. We explore the possibility of measuring two model-independent, smoking-gun effects of these corrections in the gravitational waveform of a compact binary, namely, the absence of tidal heating and the presence of tidal deformability. For events detectable by the future space-based interferometer LISA, we show that the effect of tidal heating dominates and allows one to constrain putative corrections down to the Planck scale. The measurement of the tidal Love numbers with LISA is more challenging but, in optimistic scenarios, it allows us to constrain the compactness of a supermassive exotic compact object down to the Planck scale. Our analysis suggests that highly spinning, supermassive binaries at 1-20 Gpc provide unparalleled tests of quantum-gravity effects at the horizon scale.
Bottom-up and top-down computations in word- and face-selective cortex
Kay, Kendrick N; Yeatman, Jason D
2017-01-01
The ability to read a page of text or recognize a person's face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses. DOI: http://dx.doi.org/10.7554/eLife.22341.001 PMID:28226243
Critical Slowing Down Governs the Transition to Neuron Spiking
Meisel, Christian; Klaus, Andreas; Kuehn, Christian; Plenz, Dietmar
2015-01-01
Many complex systems have been found to exhibit critical transitions, or so-called tipping points, which are sudden changes to a qualitatively different system state. These changes can profoundly impact the functioning of a system ranging from controlled state switching to a catastrophic breakdown; signals that predict critical transitions are therefore highly desirable. To this end, research efforts have focused on utilizing qualitative changes in markers related to a system’s tendency to recover more slowly from a perturbation the closer it gets to the transition—a phenomenon called critical slowing down. The recently studied scaling of critical slowing down offers a refined path to understand critical transitions: to identify the transition mechanism and improve transition prediction using scaling laws. Here, we outline and apply this strategy for the first time in a real-world system by studying the transition to spiking in neurons of the mammalian cortex. The dynamical system approach has identified two robust mechanisms for the transition from subthreshold activity to spiking, saddle-node and Hopf bifurcation. Although theory provides precise predictions on signatures of critical slowing down near the bifurcation to spiking, quantitative experimental evidence has been lacking. Using whole-cell patch-clamp recordings from pyramidal neurons and fast-spiking interneurons, we show that 1) the transition to spiking dynamically corresponds to a critical transition exhibiting slowing down, 2) the scaling laws suggest a saddle-node bifurcation governing slowing down, and 3) these precise scaling laws can be used to predict the bifurcation point from a limited window of observation. To our knowledge this is the first report of scaling laws of critical slowing down in an experiment. 
They present a missing link for a broad class of neuroscience modeling and suggest improved estimation of tipping points by incorporating scaling laws of critical slowing down as a strategy applicable to other complex systems. PMID:25706912
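The saddle-node scaling law invoked above can be reproduced numerically from the bifurcation's normal form. The sketch below is a toy illustration (not the authors' analysis): it integrates dx/dt = a + x², perturbs the stable fixed point, and confirms that the recovery time grows as |a|^(-1/2) as the bifurcation at a = 0 is approached.

```python
import math

# Sketch: recovery-time scaling near a saddle-node bifurcation.
# Normal form dx/dt = a + x^2 with a < 0: the stable fixed point is
# x* = -sqrt(-a) and small perturbations decay at rate 2*sqrt(-a),
# so the recovery time scales as |a|^(-1/2) -- critical slowing down.

def recovery_time(a, eps=1e-3, dt=1e-3):
    """Time for a perturbation of size eps to decay to eps/e (Euler)."""
    xstar = -math.sqrt(-a)
    x, t = xstar + eps, 0.0
    while x - xstar > eps / math.e:
        x += dt * (a + x * x)
        t += dt
    return t

t1 = recovery_time(-0.01)  # linear decay rate 0.2 -> time ~ 5
t2 = recovery_time(-0.04)  # linear decay rate 0.4 -> time ~ 2.5
print(t1 / t2)  # ratio ~ 2, matching tau ~ |a|^(-1/2)
```

Quadrupling |a| doubles the distance to the bifurcation's square-root scale, and the measured recovery-time ratio of about 2 is exactly the |a|^(-1/2) law that the abstract exploits to predict the tipping point.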
NASA Astrophysics Data System (ADS)
Subin, Z. M.; Sulman, B. N.; Malyshev, S.; Shevliakova, E.
2013-12-01
Soil moisture is a crucial control on surface energy fluxes, vegetation properties, and soil carbon cycling. Its interactions with ecosystem processes are highly nonlinear across a large range, as both drought stress and anoxia can impede vegetation and microbial growth. Earth System Models (ESMs) generally only represent an average soil-moisture state in grid cells at scales of 50-200 km, and as a result are not able to adequately represent the effects of subgrid heterogeneity in soil moisture, especially in regions with large wetland areas. We addressed this deficiency by developing the first ESM-coupled subgrid hillslope-hydrological model, TiHy (Tiled-hillslope Hydrology), embedded within the Geophysical Fluid Dynamics Laboratory (GFDL) land model. In each grid cell, one or more representative hillslope geometries are discretized into land model tiles along an upland-to-lowland gradient. These geometries represent ~1 km hillslope-scale hydrological features and allow for flexible representation of hillslope profile and plan shapes, in addition to variation of subsurface properties among or within hillslopes. Each tile (which may represent ~100 m along the hillslope) has its own surface fluxes, vegetation state, and vertically-resolved state variables for soil physics and biogeochemistry. Resolution of water state in deep layers (~200 m) down to bedrock allows for physical integration of groundwater transport with unsaturated overlying dynamics. Multiple tiles can also co-exist at the same vertical position along the hillslope, allowing the simulation of ecosystem heterogeneity due to disturbance. The hydrological model is coupled to the vertically-resolved Carbon, Organisms, Respiration, and Protection in the Soil Environment (CORPSE) model, which captures non-linearity resulting from interactions between vertically-heterogeneous soil carbon and water profiles. We present comparisons of simulated water table depth to observations. 
We examine sensitivities to alternative parameterizations of hillslope geometry, macroporosity, and surface runoff / inundation, and to the choice of global topographic dataset and groundwater hydraulic conductivity distribution. Simulated groundwater dynamics among hillslopes tend to cluster into three regimes of wet and well-drained, wet but poorly-drained, and dry. In the base model configuration, near-surface gridcell-mean water tables exist in an excessively large area compared to observations, including large areas of the Eastern U.S. and Northern Europe. However, in better-drained areas, the decrease in water table depth along the hillslope gradient allows for realistic increases in ecosystem water availability and soil carbon downslope. The inclusion of subgrid hydrology can increase the equilibrium 0-2 m global soil carbon stock by a large factor, due to the nonlinear effect of anoxia. We conclude that this innovative modeling framework allows for the inclusion of hillslope-scale processes and the potential for wetland dynamics in an ESM without need for a high-resolution 3-dimensional groundwater model. Future work will include investigating the potential for future changes in land carbon fluxes caused by the effects of changing hydrological regime, particularly in peatland-rich areas poorly treated by current ESMs.
Valkenburg, Abraham J; Boerlage, Anneke A; Ista, Erwin; Duivenvoorden, Hugo J; Tibboel, Dick; van Dijk, Monique
2011-09-01
Many pediatric intensive care units use the COMFORT-Behavior scale (COMFORT-B) to assess pain in 0- to 3-year-old children. The objective of this study was to determine whether this scale is also valid for the assessment of pain in 0- to 3-year-old children with Down syndrome. These children often undergo cardiac or intestinal surgery early in life and therefore admission to a pediatric intensive care unit. Seventy-six patients with Down syndrome were included and 466 without Down syndrome. Pain was regularly assessed with the COMFORT-B scale and the pain Numeric Rating Scale (NRS). For either group, confirmatory factor analyses revealed a 1-factor model. Internal consistency between COMFORT-B items was good (Cronbach's α=0.84-0.87). Cutoff values for the COMFORT-B set at 17 or higher discriminated between pain (NRS pain of 4 or higher) and no pain (NRS pain below 4) in both groups. We concluded that the COMFORT-B scale is also valid for 0- to 3-year-old children with Down syndrome. This makes it even more useful in the pediatric intensive care unit setting, doing away with the need to apply another instrument for those children younger than 3. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
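The internal-consistency statistic reported above (Cronbach's α = 0.84–0.87) has a short closed form. The sketch below shows the standard computation on made-up item scores; the data are illustrative, not from the study.

```python
# Sketch: Cronbach's alpha for internal consistency between scale items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The item scores below are invented for illustration.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k per-item score lists, one score per subject."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three identical items -> perfectly consistent scale, alpha = 1
item = [2, 4, 3, 5, 1]
print(cronbach_alpha([item, item, item]))  # 1.0
```

Values around 0.84–0.87, as in the study, indicate strong but not redundant agreement among the COMFORT-B items.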
NASA Astrophysics Data System (ADS)
Du, Patrick Y.; Zhou, Qi-Bin
This paper presents an analysis of lightning-induced magnetic fields in a building. The building of concern is protected by the lightning protection system with an insulated down conductor. In this paper a system model for metallic structure of the building is constructed first using the circuit approach. The circuit model of the insulated down conductor is discussed extensively, and explicit expressions of the circuit parameters are presented. The system model was verified experimentally in the laboratory. The modeling approach is applied to analyze the impulse magnetic fields in a full-scale building during a direct lightning strike. It is found that the impulse magnetic field is significantly high near the down conductor. The field is attenuated if the down conductor is moved to a column in the building. The field can be reduced further if the down conductor is housed in an earthed metal pipe. Recommendations for protecting critical equipment against lightning-induced magnetic fields are also provided in the paper.
ERIC Educational Resources Information Center
Couzens, Donna; Cuskelly, Monica; Haynes, Michele
2011-01-01
Growth models for subtests of the Stanford-Binet Intelligence Scale, 4th edition (R. L. Thorndike, E. P. Hagen, & J. M. Sattler, 1986a, 1986b) were developed for individuals with Down syndrome. Models were based on the assessments of 208 individuals who participated in longitudinal and cross-sectional research between 1987 and 2004. Variation…
Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming
2015-01-01
High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.
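Matching volumetric sparge rates across scales, as described above, means holding gas flow per unit working volume (VVM) constant. The sketch below illustrates the arithmetic; the VVM set point and working volumes are assumptions for illustration, not values from the study.

```python
# Sketch: scale-down by constant VVM (gas volume per liquid volume per
# minute). Holding VVM fixed across scales matches volumetric sparge
# rates. The VVM value below is illustrative, not from the study.

def sparge_flow(vvm, working_volume_l):
    """Gas flow (L/min) that keeps VVM constant at a given scale."""
    return vvm * working_volume_l

vvm = 0.02  # illustrative set point
for name, vol in [("ambr15", 0.015), ("bench 5 L", 5.0),
                  ("manufacturing 15,000 L", 15000.0)]:
    print(f"{name}: {sparge_flow(vvm, vol):g} L/min")
```

Because dissolved-CO2 stripping depends on gas throughput per unit broth volume, constant VVM is one common way a mini-bioreactor can reproduce the manufacturing-scale pCO2 profile.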
Biased Competition in Visual Processing Hierarchies: A Learning Approach Using Multiple Cues.
Gepperth, Alexander R T; Rebhan, Sven; Hasler, Stephan; Fritsch, Jannik
2011-03-01
In this contribution, we present a large-scale hierarchical system for object detection fusing bottom-up (signal-driven) processing results with top-down (model or task-driven) attentional modulation. Specifically, we focus on the question of how the autonomous learning of invariant models can be embedded into a performing system and how such models can be used to define object-specific attentional modulation signals. Our system implements bi-directional data flow in a processing hierarchy. The bottom-up data flow proceeds from a preprocessing level to the hypothesis level where object hypotheses created by exhaustive object detection algorithms are represented in a roughly retinotopic way. A competitive selection mechanism is used to determine the most confident hypotheses, which are used on the system level to train multimodal models that link object identity to invariant hypothesis properties. The top-down data flow originates at the system level, where the trained multimodal models are used to obtain space- and feature-based attentional modulation signals, providing biases for the competitive selection process at the hypothesis level. This results in object-specific hypothesis facilitation/suppression in certain image regions which we show to be applicable to different object detection mechanisms. In order to demonstrate the benefits of this approach, we apply the system to the detection of cars in a variety of challenging traffic videos. Evaluating our approach on a publicly available dataset containing approximately 3,500 annotated video images from more than 1 h of driving, we can show strong increases in performance and generalization when compared to object detection in isolation. Furthermore, we compare our results to a late hypothesis rejection approach, showing that early coupling of top-down and bottom-up information is a favorable approach especially when processing resources are constrained.
Session on techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin
1993-01-01
The session on techniques and resources for storm-scale numerical weather prediction is reviewed. The recommendations of this group are broken down into three areas: modeling and prediction, data requirements in support of modeling and prediction, and data management. For each area, the current status and the modeling and technological recommendations are addressed.
Aeroelastic Scaling of a Joined Wing Aircraft Concept
2010-01-11
…waxed and then peel ply is laid down; next the layers of fabric are laid down (outermost first) with an outer layer of light glass scrim used as the… A parametric model is developed using Phoenix Integration's Model Center Software (MC). This model includes the vortex lattice software AVL that… piece of real-time footage taken from the on-board, gimbaled camera. [2009 Progress Report, Figure 35: initial autonomous flight]
Wildhaber, Mark L.; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.; Dey, Rima
2017-01-01
We present a hierarchical series of spatially decreasing and temporally increasing models to evaluate the uncertainty in the atmosphere-ocean global climate model (AOGCM) and the regional climate model (RCM) relative to the uncertainty in the somatic growth of the endangered pallid sturgeon (Scaphirhynchus albus). For effects on fish populations of riverine ecosystems, climate output simulated by coarse-resolution AOGCMs and RCMs must be downscaled to basins to river hydrology to population response. One needs to transfer the information from these climate simulations down to the individual scale in a way that minimizes extrapolation and can account for spatio-temporal variability in the intervening stages. The goal is a framework to determine whether, given uncertainties in the climate models and the biological response, meaningful inference can still be made. The non-linear downscaling of climate information to the river scale requires that one realistically account for spatial and temporal variability across scale. Our downscaling procedure includes the use of fixed/calibrated hydrological flow and temperature models coupled with a stochastically parameterized sturgeon bioenergetics model. We show that, although there is a large amount of uncertainty associated with both the climate model output and the fish growth process, one can establish significant differences in fish growth distributions between models, and between future and current climates for a given model.
Nunez, Paul L.; Srinivasan, Ramesh
2013-01-01
The brain is treated as a nested hierarchical complex system with substantial interactions across spatial scales. Local networks are pictured as embedded within global fields of synaptic action and action potentials. Global fields may act top-down on multiple networks, acting to bind remote networks. Because of scale-dependent properties, experimental electrophysiology requires both local and global models that match observational scales. Multiple local alpha rhythms are embedded in a global alpha rhythm. Global models are outlined in which cm-scale dynamic behaviors result largely from propagation delays in cortico-cortical axons and cortical background excitation level, controlled by neuromodulators on long time scales. The idealized global models ignore the bottom-up influences of local networks on global fields so as to employ relatively simple mathematics. The resulting models are transparently related to several EEG and steady state visually evoked potentials correlated with cognitive states, including estimates of neocortical coherence structure, traveling waves, and standing waves. The global models suggest that global oscillatory behavior of self-sustained (limit-cycle) modes lower than about 20 Hz may easily occur in neocortical/white matter systems provided: Background cortical excitability is sufficiently high; the strength of long cortico-cortical axon systems is sufficiently high; and the bottom-up influence of local networks on the global dynamic field is sufficiently weak. The global models provide "entry points" to more detailed studies of global top-down influences, including binding of weakly connected networks, modulation of gamma oscillations by theta or alpha rhythms, and the effects of white matter deficits. PMID:24505628
Generation of animation sequences of three dimensional models
NASA Technical Reports Server (NTRS)
Poi, Sharon (Inventor); Bell, Brad N. (Inventor)
1990-01-01
The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
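The transform propagation described above, where motion introduced at one model is transmitted down through the hierarchy of models, is the scene-graph pattern. The sketch below is a minimal 2D illustration of that idea (the patent describes 3D models; names and the matrix layout are our own assumptions).

```python
import math

# Sketch of hierarchical transform propagation: each node stores a local
# translation/rotation/scale; world transforms are composed parent-to-child
# down the hierarchy, so moving a parent moves all of its descendants.
# 2D homogeneous (3x3) matrices are used for brevity.

def local_matrix(tx, ty, theta, s):
    c, si = s * math.cos(theta), s * math.sin(theta)
    return [[c, -si, tx], [si, c, ty], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

class Node:
    def __init__(self, name, tx=0.0, ty=0.0, theta=0.0, scale=1.0):
        self.name, self.children = name, []
        self.local = local_matrix(tx, ty, theta, scale)

    def world_positions(self, parent_world=None, out=None):
        """Propagate transforms down the hierarchy; return each node's origin."""
        if parent_world is None:
            parent_world = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
        if out is None:
            out = {}
        world = matmul(parent_world, self.local)
        out[self.name] = (world[0][2], world[1][2])
        for child in self.children:
            child.world_positions(world, out)
        return out

# Rotating the "arm" 90 degrees carries the attached "hand" with it
arm = Node("arm", theta=math.pi / 2)
hand = Node("hand", tx=1.0)  # offset 1 unit along the arm's local x-axis
arm.children.append(hand)
pos = arm.world_positions()
print(pos["hand"])  # approximately (0, 1)
```

Joint movement limitations, as in the patent, would be enforced by clamping each node's local rotation and translation before composing the matrices.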
Theory of quasi-spherical accretion in X-ray pulsars
NASA Astrophysics Data System (ADS)
Shakura, N.; Postnov, K.; Kochetkova, A.; Hjalmarsdotter, L.
2012-02-01
A theoretical model for quasi-spherical subsonic accretion on to slowly rotating magnetized neutron stars is constructed. In this model, the accreting matter subsonically settles down on to the rotating magnetosphere forming an extended quasi-static shell. This shell mediates the angular momentum removal from the rotating neutron star magnetosphere during spin-down episodes by large-scale convective motions. The accretion rate through the shell is determined by the ability of the plasma to enter the magnetosphere. The settling regime of accretion can be realized for moderate accretion rates ? g s^-1. At higher accretion rates, a free-fall gap above the neutron star magnetosphere appears due to rapid Compton cooling, and accretion becomes highly non-stationary. From observations of the spin-up/spin-down rates (the angular rotation frequency derivative ?, and ? near the torque reversal) of X-ray pulsars with known orbital periods, it is possible to determine the main dimensionless parameters of the model, as well as to estimate the magnetic field of the neutron star. We illustrate the model by determining these parameters for three wind-fed X-ray pulsars GX 301-2, Vela X-1 and GX 1+4. The model explains both the spin-up/spin-down of the pulsar frequency on large time-scales and the irregular short-term frequency fluctuations, which can correlate or anticorrelate with the X-ray flux fluctuations in different systems. It is shown that in real pulsars an almost iso-angular-momentum rotation law with ω ∝ 1/R^2, due to strongly anisotropic radial turbulent motions sustained by large-scale convection, is preferred.
Durbin, Kenneth R.; Tran, John C.; Zamdborg, Leonid; Sweet, Steve M. M.; Catherman, Adam D.; Lee, Ji Eun; Li, Mingxi; Kellie, John F.; Kelleher, Neil L.
2011-01-01
Applying high-throughput Top-Down MS to an entire proteome requires a yet-to-be-established model for data processing. Since Top-Down is becoming possible on a large scale, we report our latest software pipeline dedicated to capturing the full value of intact protein data in an automated fashion. For intact mass detection, we combine algorithms for processing MS1 data from both isotopically resolved (FT) and charge-state resolved (ion trap) LC-MS data, which are then linked to their fragment ions for database searching using ProSight. Automated determination of human keratin and tubulin isoforms is one result. Optimized for the intricacies of whole proteins, new software modules visualize proteome-scale data based on the LC retention time and intensity of intact masses and enable selective detection of PTMs to automatically screen for acetylation, phosphorylation, and methylation. Software functionality was demonstrated using comparative LC-MS data from yeast strains in addition to human cells undergoing chemical stress. We present these advances as a key aspect of realizing Top-Down MS on a proteomic scale. PMID:20848673
Digital terrain modeling and industrial surface metrology: Converging realms
Pike, R.J.
2001-01-01
Digital terrain modeling has a micro- and nanoscale counterpart in surface metrology, the numerical characterization of industrial surfaces. Instrumentation in semiconductor manufacturing and other high-technology fields can now contour surface irregularities down to the atomic scale. Surface metrology has been revolutionized by its ability to manipulate square-grid height matrices that are analogous to the digital elevation models (DEMs) used in physical geography. Because the shaping of industrial surfaces is a spatial process, the same concepts of analytical cartography that represent ground-surface form in geography evolved independently in metrology: the surface topography of manufactured components, exemplified here by automobile-engine cylinders, is routinely modeled by variogram analysis, relief shading, and most other techniques of parameterization and visualization familiar to geography. This article introduces industrial surface metrology, examines the field in the context of terrain modeling and geomorphology, notes their similarities and differences, and raises theoretical issues to be addressed in progressing toward a unified practice of surface morphometry.
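Variogram analysis, named above as a technique shared by metrology and terrain modeling, is easy to sketch on a height profile. A minimal empirical semivariogram in Python; the synthetic "machined surface" profile is illustrative, not data from the article:

```python
import numpy as np

def semivariogram_1d(z, max_lag):
    """Empirical semivariogram of a height profile along one axis:
    gamma(h) = 0.5 * mean((z[i+h] - z[i])^2) for each lag h."""
    z = np.asarray(z, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = z[h:] - z[:-h]          # all point pairs separated by lag h
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

# Synthetic surface profile: smooth waviness plus fine roughness
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 500)
profile = np.sin(x) + 0.1 * rng.standard_normal(500)
gamma = semivariogram_1d(profile, max_lag=50)
# Semivariance grows with lag for a spatially correlated surface
assert gamma[0] < gamma[-1]
```

The same function applies unchanged to a DEM column or a stylus-profilometer trace, which is exactly the convergence the article describes.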
Structural Similitude and Scaling Laws
NASA Technical Reports Server (NTRS)
Simitses, George J.
1998-01-01
Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis and numerous and careful experimental evaluations of components and prototypes, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult, time-consuming, and absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum.
Full-scale large-component testing is necessary in other industries as well: shipbuilding, automobile, and railway car construction all rely heavily on testing. Regardless of the application, a scaled-down (by a large factor) model (scale model) which closely represents the structural behavior of the full-scale system (prototype) can prove to be an extremely beneficial tool. This possible development must be based on the existence of certain structural parameters that control the behavior of a structural system when acted upon by static and/or dynamic loads. If such structural parameters exist, a scaled-down replica can be built, which will duplicate the response of the full-scale system. The two systems are then said to be structurally similar. The term, then, that best describes this similarity is structural similitude. Similarity of systems requires that the relevant system parameters be identical and that these systems be governed by a unique set of characteristic equations. Thus, if a relation or equation of variables is written for a system, it is valid for all systems which are similar to it. Each variable in a model is proportional to the corresponding variable of the prototype. This ratio, which plays an essential role in predicting the relationship between the model and its prototype, is called the scale factor.
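A concrete instance of structural similitude from dimensional analysis; this is a textbook elastic-beam example, not the specific scaling laws of this paper. With model and prototype made of the same material and a geometric scale factor lam, the remaining scale factors follow automatically:

```python
def beam_similitude(lam):
    """Replica scaling for a slender elastic beam, same material in model
    and prototype, geometric scale lam = L_model / L_prototype.
    Classical results of dimensional analysis:
      natural frequency   f_m = f_p / lam      (f ~ 1/L for fixed E, rho)
      force (equal stress) F_m = F_p * lam**2   (sigma = F/A, A ~ L^2)
      deflection under that scaled load: d_m = d_p * lam  (d ~ F L^3 / EI)
    Returns multiplicative model/prototype scale factors."""
    return {
        "length": lam,
        "frequency": 1.0 / lam,
        "force": lam ** 2,
        "deflection": lam,
        "stress": 1.0,   # equal stresses by construction
    }

# A 1:10 scale model vibrates 10x faster than the prototype
factors = beam_similitude(0.1)
assert factors["frequency"] == 10.0
```

Once such factors are tabulated, one test on the replica predicts the prototype response, which is the economic argument the passage makes.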
Validation of a 30 m resolution flood hazard model of the conterminous United States
NASA Astrophysics Data System (ADS)
Wing, Oliver E. J.; Bates, Paul D.; Sampson, Christopher C.; Smith, Andrew M.; Johnson, Kris A.; Erickson, Tyler A.
2017-09-01
This paper reports the development of a ˜30 m resolution two-dimensional hydrodynamic model of the conterminous U.S. using only publicly available data. The model employs a highly efficient numerical solution of the local inertial form of the shallow water equations which simulates fluvial flooding in catchments down to 50 km2 and pluvial flooding in all catchments. Importantly, we use the U.S. Geological Survey (USGS) National Elevation Dataset to determine topography; the U.S. Army Corps of Engineers National Levee Dataset to explicitly represent known flood defenses; and global regionalized flood frequency analysis to characterize return period flows and rainfalls. We validate these simulations against the complete catalogue of Federal Emergency Management Agency (FEMA) Special Flood Hazard Area (SFHA) maps and detailed local hydraulic models developed by the USGS. Where the FEMA SFHAs are based on high-quality local models, the continental-scale model attains a hit rate of 86%. This correspondence improves in temperate areas and for basins above 400 km2. Against the higher quality USGS data, the average hit rate reaches 92% for the 1 in 100 year flood, and 90% for all flood return periods. Given typical hydraulic modeling uncertainties in the FEMA maps and USGS model outputs (e.g., errors in estimating return period flows), it is probable that the continental-scale model can replicate both to within error. The results show that continental-scale models may now offer sufficient rigor to inform some decision-making needs with dramatically lower cost and greater coverage than approaches based on a patchwork of local studies.
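The hit rates reported above compare binary flood extents cell by cell. A sketch assuming the common definition, hits divided by benchmark wet cells; the paper's full benchmarking procedure is more involved:

```python
import numpy as np

def hit_rate(benchmark_wet, model_wet):
    """Binary-pattern fit statistic for flood-extent validation:
    H = (cells wet in both maps) / (cells wet in the benchmark).
    H = 1 means the model captures every benchmark wet cell."""
    b = np.asarray(benchmark_wet, dtype=bool)
    m = np.asarray(model_wet, dtype=bool)
    hits = np.sum(b & m)
    misses = np.sum(b & ~m)
    return hits / (hits + misses)

# Toy 1D "flood maps": benchmark has 4 wet cells, model captures 3 of them
bench = [1, 1, 1, 1, 0, 0]
model = [1, 1, 1, 0, 1, 0]
assert abs(hit_rate(bench, model) - 0.75) < 1e-12
```

Note that H says nothing about overprediction (the model wet cell outside the benchmark above), which is why such studies usually report additional scores alongside it.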
Trickle-Down Preferences: Preferential Conformity to High Status Peers in Fashion Choices.
Galak, Jeff; Gray, Kurt; Elbert, Igor; Strohminger, Nina
2016-01-01
How much do our choices represent stable inner preferences versus social conformity? We examine conformity and consistency in sartorial choices surrounding a common life event of new norm exposure: relocation. A large-scale dataset of individual purchases of women's shoes (16,236 transactions) across five years and 2,007 women reveals a balance of conformity and consistency, moderated by changes in location socioeconomic status. Women conform to new local norms (i.e., average heel size) when moving to relatively higher status locations, but mostly ignore new local norms when moving to relatively lower status locations. In short, at periods of transition, it is the fashion norms of the rich that trickle down to consumers. These analyses provide the first naturalistic large-scale demonstration of the tension between psychological conformity and consistency, with real decisions in a highly visible context.
Genome-scale analysis of aberrant DNA methylation in colorectal cancer
Hinoue, Toshinori; Weisenberger, Daniel J.; Lange, Christopher P.E.; Shen, Hui; Byun, Hyang-Min; Van Den Berg, David; Malik, Simeen; Pan, Fei; Noushmehr, Houtan; van Dijk, Cornelis M.; Tollenaar, Rob A.E.M.; Laird, Peter W.
2012-01-01
Colorectal cancer (CRC) is a heterogeneous disease in which unique subtypes are characterized by distinct genetic and epigenetic alterations. Here we performed comprehensive genome-scale DNA methylation profiling of 125 colorectal tumors and 29 adjacent normal tissues. We identified four DNA methylation–based subgroups of CRC using model-based cluster analyses. Each subtype shows characteristic genetic and clinical features, indicating that they represent biologically distinct subgroups. A CIMP-high (CIMP-H) subgroup, which exhibits an exceptionally high frequency of cancer-specific DNA hypermethylation, is strongly associated with MLH1 DNA hypermethylation and the BRAFV600E mutation. A CIMP-low (CIMP-L) subgroup is enriched for KRAS mutations and characterized by DNA hypermethylation of a subset of CIMP-H-associated markers rather than a unique group of CpG islands. Non-CIMP tumors are separated into two distinct clusters. One non-CIMP subgroup is distinguished by a significantly higher frequency of TP53 mutations and frequent occurrence in the distal colon, while the tumors that belong to the fourth group exhibit a low frequency of both cancer-specific DNA hypermethylation and gene mutations and are significantly enriched for rectal tumors. Furthermore, we identified 112 genes that were down-regulated more than twofold in CIMP-H tumors together with promoter DNA hypermethylation. These represent ∼7% of genes that acquired promoter DNA methylation in CIMP-H tumors. Intriguingly, 48/112 genes were also transcriptionally down-regulated in non-CIMP subgroups, but this was not attributable to promoter DNA hypermethylation. Together, we identified four distinct DNA methylation subgroups of CRC and provided novel insight regarding the role of CIMP-specific DNA hypermethylation in gene silencing. PMID:21659424
NASA Astrophysics Data System (ADS)
Miller, J. B.; Jacobson, A. R.; Bruhwiler, L.; Michalak, A.; Hayes, D. J.; Vargas, R.
2017-12-01
In just ten years since publication of the original State of the Carbon Cycle Report in 2007, global CO2 concentrations have risen by more than 22 ppm to 405 ppm. This represents 18% of the increase over preindustrial levels of 280 ppm. This increase is being driven unequivocally by fossil fuel combustion, with North American emissions comprising roughly 20% of the global total over the past decade. At the global scale, we know by comparing well-known fossil fuel inventories and rates of atmospheric CO2 increase that about half of all emissions are absorbed at Earth's surface. For North America, however, we cannot apply a simple mass balance to determine sources and sinks. Instead, contributions from ecosystems must be estimated using top-down and bottom-up methods. SOCCR-2 estimates North American net CO2 uptake from ecosystems as 577 ± 433 TgC/yr using bottom-up (inventory) methods and 634 ± 288 TgC/yr from top-down atmospheric inversions. Although the global terrestrial carbon sink is not precisely known, these values represent possibly 30% of the global values. As with the net sink estimates reported in SOCCR, these new top-down and bottom-up estimates are statistically consistent with one another. However, the uncertainties on each of these estimates are now substantially smaller, giving us more confidence about where the truth lies. Atmospheric inversions also yield estimates of interannual variations (IAV) in CO2 and CH4 fluxes. Our syntheses suggest that the IAV of ecosystem CO2 fluxes is of order 100 TgC/yr, mainly originating in the conterminous US, with lower variability in boreal and arctic regions. Moreover, this variability is much larger than for the inventory-based fluxes reported by the US to the UNFCCC. Unlike CO2, bottom-up CH4 emissions are larger than those derived from large-scale atmospheric data, with the continental discrepancy resulting primarily from differences in arctic and boreal regions.
In addition to the current state of the science, we will also discuss the primary sources of uncertainty and how existing and emerging measurement and modeling technologies can address them.
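The headline percentages above can be checked with back-of-envelope arithmetic; the ppm-to-PgC conversion factor used below is a widely used approximation, not a number from this abstract:

```python
# Back-of-envelope check of the quoted CO2 numbers.
PREINDUSTRIAL_PPM = 280.0
CURRENT_PPM = 405.0
DECADAL_RISE_PPM = 22.0          # rise since the 2007 report

total_increase = CURRENT_PPM - PREINDUSTRIAL_PPM      # 125 ppm
fraction = DECADAL_RISE_PPM / total_increase          # ~0.176, i.e. "18%"
assert abs(fraction - 0.176) < 0.005

# Common conversion (assumption): 1 ppm CO2 ~ 2.124 PgC in the atmosphere
PGC_PER_PPM = 2.124
rise_pgc = DECADAL_RISE_PPM * PGC_PER_PPM             # ~46.7 PgC added
```

The same conversion is what lets inventory-based fluxes in TgC/yr be compared directly against atmospheric growth rates measured in ppm/yr.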
Xu, Ping; Clark, Colleen; Ryder, Todd; Sparks, Colleen; Zhou, Jiping; Wang, Michelle; Russell, Reb; Scott, Charo
2017-03-01
Demand for the development of biological therapies is rapidly increasing, as is the drive to reduce time to patient. In order to speed up development, the disposable Automated Microscale Bioreactor (Ambr 250) system is increasingly gaining interest due to its advantages, including highly automated control, high throughput capacity, and short turnaround time. Traditional early-stage upstream process development conducted in 2-5 L bench-top bioreactors requires a large footprint and high running costs. The establishment of the Ambr 250 as a scale-down model leads to many benefits in process development. In this study, a comprehensive characterization of the mass transfer coefficient (kLa) in the Ambr 250 was conducted to define optimal operational conditions. Scale-down approaches, including dimensionless volumetric flow rate (vvm), power per unit volume (P/V) and kLa, have been evaluated using different cell lines. This study demonstrates that the Ambr 250 generated comparable profiles of cell growth and protein production, as seen at 5-L and 1000-L bioreactor scales, when using kLa as a scale-down parameter. In addition to mimicking processes at large scales, the suitability of the Ambr 250 as a tool for clone selection, which is traditionally conducted in bench-top bioreactors, was investigated. Data show that cell growth, productivity, metabolite profiles, and product qualities of material generated using the Ambr 250 were comparable to those from 5-L bioreactors. Therefore, the Ambr 250 can be used for clone selection and process development as a replacement for traditional bench-top bioreactors, minimizing resource utilization during the early stages of development in the biopharmaceutical industry. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:478-489, 2017. © 2017 American Institute of Chemical Engineers.
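Matching kLa across scales, as described above, typically rests on an empirical power-law correlation. A sketch using a van't Riet-type form with placeholder coefficients; the study's fitted Ambr 250 coefficients are not given here:

```python
def kla(p_per_v, vs, a=0.002, alpha=0.7, beta=0.2):
    """Generic van't Riet-type correlation kLa = a * (P/V)^alpha * vs^beta.
    P/V in W/m^3, superficial gas velocity vs in m/s. The coefficients
    are illustrative placeholders, not fitted Ambr 250 values."""
    return a * p_per_v ** alpha * vs ** beta

def match_vs(target_kla, p_per_v, a=0.002, alpha=0.7, beta=0.2):
    """Superficial gas velocity needed at the small scale to reproduce
    a large-scale kLa at a given small-scale P/V (invert the power law)."""
    return (target_kla / (a * p_per_v ** alpha)) ** (1.0 / beta)

# Large scale (e.g. 1000 L): P/V = 50 W/m^3, vs = 0.005 m/s
big = kla(50.0, 0.005)
# Small scale runs at a different P/V; solve for the vs that matches kLa
vs_small = match_vs(big, 80.0)
assert abs(kla(80.0, vs_small) - big) < 1e-9
```

This is the basic logic of a kLa-based scale-down: hold the correlated kLa constant while the other operating variables (vvm, P/V) are allowed to differ between scales.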
NASA Astrophysics Data System (ADS)
Wolff, C.; Verschuren, D.; Van Daele, M. E.; Waldmann, N.; Meyer, I.; Lane, C. S.; Van der Meeren, T.; Ombori, T.; Kasanzu, C.; Olago, D.
2017-12-01
Sediments on the bottom of Lake Challa, a 92-m deep crater lake on the border of Kenya and Tanzania near Mt. Kilimanjaro, contain a uniquely long and continuous record of past climate and environmental change in easternmost equatorial Africa. Supported in part by the International Continental Scientific Drilling Programme (ICDP), the DeepCHALLA project has now recovered this sediment record down to 214.8 m below the lake floor, with 100% recovery of the uppermost 121.3 m (the last 160 kyr BP) and ca. 85% recovery of the older part of the sequence, down to the lowermost distinct reflector identified in seismic stratigraphy. This acoustic basement represents a ca. 2-m-thick layer of coarsely laminated, diatom-rich organic mud mixed with volcanic sand and silt deposited 250 kyr ago, overlying an estimated 20-30 m of unsampled lacustrine deposits representing the earliest phase of lake development. Down-hole logging produced profiles of in-situ sediment composition that confer an absolute depth scale to both the recovered cores and the seismic stratigraphy. An estimated 74% of the recovered sequence is finely laminated (varved), and continuously so over the upper 72.3 m (the last 90 kyr). All other sections display at least cm-scale lamination, demonstrating persistence of a tranquil, profundal depositional environment throughout lake history. The sequence is interrupted only by 32 visible tephra layers 2 to 9 mm thick; and by several dozen fine-grained turbidites up to 108 cm thick, most of which are clearly bracketed between a non-erosive base and a diatom-laden cap. Tie points between sediment markers and the corresponding seismic reflectors support a preliminary age model inferring a near-constant rate of sediment accumulation over at least the last glacial cycle (140 kyr BP to present).
This great time span combined with the exquisite temporal resolution of the Lake Challa sediments provides great opportunities to study past tropical climate dynamics at both short (inter-annual to decadal) and long (glacial-interglacial) time scales; and to assess the multi-faceted impact of this climate change on the region's freshwater resources, the functioning of terrestrial ecosystems, and the history of the African landscape in which modern humans (our species, Homo sapiens) originally evolved and have lived ever since.
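A "near-constant rate of sediment accumulation" translates into a simple piecewise-linear age-depth model between dated tie points. A sketch using only the depth/age anchors quoted in the abstract; the project's actual age model rests on many more markers:

```python
import bisect

def age_model(depth_m, tie_points):
    """Piecewise-linear age-depth model from (depth, age) tie points,
    the simplest reading of a near-constant accumulation rate.
    Depths in m below lake floor, ages in kyr BP."""
    depths = [d for d, _ in tie_points]
    ages = [a for _, a in tie_points]
    i = bisect.bisect_right(depths, depth_m) - 1     # segment containing depth
    i = max(0, min(i, len(depths) - 2))              # clamp to valid segments
    d0, d1 = depths[i], depths[i + 1]
    a0, a1 = ages[i], ages[i + 1]
    return a0 + (depth_m - d0) * (a1 - a0) / (d1 - d0)

# Anchors from the abstract (illustrative only): lake floor = 0 kyr,
# 121.3 m = 160 kyr BP, 214.8 m (acoustic basement) = 250 kyr BP
ties = [(0.0, 0.0), (121.3, 160.0), (214.8, 250.0)]
# A mid-core depth interpolates linearly between the anchors
assert abs(age_model(60.65, ties) - 80.0) < 1e-9
```

The varve counts and tephra layers mentioned above would supply the extra tie points that turn this linear sketch into a properly constrained chronology.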
Top down and bottom up engineering of bone.
Knothe Tate, Melissa L
2011-01-11
The goal of this retrospective article is to place the body of my lab's multiscale mechanobiology work in context of top-down and bottom-up engineering of bone. We have used biosystems engineering, computational modeling and novel experimental approaches to understand bone physiology, in health and disease, and across time (in utero, postnatal growth, maturity, aging and death, as well as evolution) and length scales (a single bone like a femur, m; a sample of bone tissue, mm-cm; a cell and its local environment, μm; down to the length scale of the cell's own skeleton, the cytoskeleton, nm). First we introduce the concept of flow in bone and the three calibers of porosity through which fluid flows. Then we describe, in the context of organ-tissue, tissue-cell and cell-molecule length scales, both multiscale computational models and experimental methods to predict flow in bone and to understand the flow of fluid as a means to deliver chemical and mechanical cues in bone. Addressing a number of studies in the context of multiple length and time scales, the importance of appropriate boundary conditions, site specific material parameters, permeability measures and even micro-nanoanatomically correct geometries are discussed in context of model predictions and their value for understanding multiscale mechanobiology of bone. Insights from these multiscale computational modeling and experimental methods are providing us with a means to predict, engineer and manufacture bone tissue in the laboratory and in the human body. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, Peter J.; Feddema, Johannes J.; Bonan, Gordon B.
To assess the climate impacts of historical and projected land cover change and land use in the Community Climate System Model (CCSM4) we have developed new time series of transient Community Land Model (CLM4) Plant Functional Type (PFT) parameters and wood harvest parameters. The new parameters capture the dynamics of the Coupled Model Inter-comparison Project phase 5 (CMIP5) land cover change and wood harvest trajectories for the historical period from 1850 to 2005, and for the four Representative Concentration Pathways (RCP) periods from 2006 to 2100. Analysis of the biogeochemical impacts of land cover change in CCSM4 with the parameters found that the model produced a historical cumulative land use flux of 148.4 PgC from 1850 to 2005, which was in good agreement with other global estimates of around 156 PgC for the same period. The biogeophysical impact of applying only the transient land cover change parameters in CCSM4 was a cooling of the near-surface atmosphere over land by -0.1 °C, through increased surface albedo and reduced shortwave radiation absorption. When combined with other transient climate forcings, the higher albedo from land cover change was overwhelmed at global scales by decreases in snow albedo from black carbon deposition and from high-latitude warming. At regional scales, however, the land cover change forcing persisted, resulting in reduced warming, with the biggest impacts in eastern North America. The future CCSM4 RCP simulations showed that the CLM4 transient PFT and wood harvest parameters could be used to represent a wide range of human land cover change and land use scenarios. These simulations ranged from the RCP 4.5 reforestation scenario, which was able to draw down 82.6 PgC from the atmosphere, to the RCP 8.5 wide-scale deforestation scenario, which released 171.6 PgC to the atmosphere.
Hayes, Spencer J; Dutoy, Chris A; Elliott, Digby; Gowen, Emma; Bennett, Simon J
2016-01-01
Learning a novel movement requires a new set of kinematics to be represented by the sensorimotor system. This is often accomplished through imitation learning where lower-level sensorimotor processes are suggested to represent the biological motion kinematics associated with an observed movement. Top-down factors have the potential to influence this process based on the social context, attention and salience, and the goal of the movement. In order to further examine the potential interaction between lower-level and top-down processes in imitation learning, the aim of this study was to systematically control the mediating effects during an imitation of biological motion protocol. In this protocol, we used non-human agent models that displayed different novel atypical biological motion kinematics, as well as a control model that displayed constant velocity. Importantly the three models had the same movement amplitude and movement time. Also, the motion kinematics were displayed in the presence, or absence, of end-state-targets. Kinematic analyses showed atypical biological motion kinematics were imitated, and that this performance was different from the constant velocity control condition. Although the imitation of atypical biological motion kinematics was not modulated by the end-state-targets, movement time was more accurate in the absence, compared to the presence, of an end-state-target. The fact that end-state targets modulated movement time accuracy, but not biological motion kinematics, indicates imitation learning involves top-down attentional, and lower-level sensorimotor systems, which operate as complementary processes mediated by the environmental context. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lovette, J. P.; Duncan, J. M.; Band, L. E.
2016-12-01
Watershed management requires information on the hydrologic impacts of local to regional land use, land cover and infrastructure conditions. Management of runoff volumes, storm flows, and water quality can benefit from large scale, "top-down" screening tools, using readily available information, as well as more detailed, "bottom-up" process-based models that explicitly track local runoff production and routing from sources to receiving water bodies. Regional scale data, available nationwide through the NHD+, and top-down models based on aggregated catchment information provide useful tools for estimating regional patterns of peak flows, volumes and nutrient loads at the catchment level. Management impacts can be estimated with these models, but have limited ability to resolve impacts beyond simple changes to land cover proportions. Alternatively, distributed process-based models provide more flexibility in modeling management impacts by resolving spatial patterns of nutrient source, runoff generation, and uptake. This bottom-up approach can incorporate explicit patterns of land cover, drainage connectivity, and vegetation extent, but are typically applied over smaller areas. Here, we first model peak flood flows and nitrogen loads across North Carolina's 70,000 NHD+ catchments using USGS regional streamflow regression equations and the SPARROW model. We also estimate management impact by altering aggregated sources in each of these models. To address the missing spatial implications of the top-down approach, we further explore the demand for riparian buffers as a management strategy, simulating the accumulation of nutrient sources along flow paths and the potential mitigation of these sources through forested buffers. We use the Regional Hydro-Ecological Simulation System (RHESSys) to model changes across several basins in North Carolina's Piedmont and Blue Ridge regions, ranging in size from 15 - 1,130 km2. 
The two approaches provide a complementary set of tools for large area screening, followed by smaller, more process based assessment and design tools.
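The "top-down" screening layer described above rests on regional regression equations that are typically power laws in drainage area. A sketch with hypothetical coefficients; real USGS equations are fitted per region and return period and may use additional covariates:

```python
def peak_flow(area_km2, a=2.5, b=0.75):
    """Top-down screening estimate of a flood quantile from a regional
    power-law regression Q = a * A^b (Q in m^3/s, A in km^2).
    The coefficients a and b here are hypothetical placeholders."""
    return a * area_km2 ** b

# With b < 1, peak flow grows sublinearly with drainage area:
# doubling the catchment raises the estimate by 2^0.75, about 68%
q1, q2 = peak_flow(100.0), peak_flow(200.0)
assert abs(q2 / q1 - 2 ** 0.75) < 1e-12
```

Because such equations see only aggregate catchment descriptors, they cannot represent the spatial pattern of riparian buffers, which is exactly the gap the bottom-up RHESSys simulations fill.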
NASA Astrophysics Data System (ADS)
Wetzel, Andrew R.; Hopkins, Philip F.; Kim, Ji-hoon; Faucher-Giguère, Claude-André; Kereš, Dušan; Quataert, Eliot
2016-08-01
Low-mass “dwarf” galaxies represent the most significant challenges to the cold dark matter (CDM) model of cosmological structure formation. Because these faint galaxies are (best) observed within the Local Group (LG) of the Milky Way (MW) and Andromeda (M31), understanding their formation in such an environment is critical. We present first results from the Latte Project: the Milky Way on Feedback in Realistic Environments (FIRE). This simulation models the formation of an MW-mass galaxy to z = 0 within ΛCDM cosmology, including dark matter, gas, and stars at unprecedented resolution: baryon particle mass of 7070 M⊙ with gas kernel/softening that adapts down to 1 pc (with a median of 25-60 pc at z = 0). Latte was simulated using the GIZMO code with a mesh-free method for accurate hydrodynamics and the FIRE-2 model for star formation and explicit feedback within a multi-phase interstellar medium. For the first time, Latte self-consistently resolves the spatial scales corresponding to half-light radii of dwarf galaxies that form around an MW-mass host down to M⋆ ≳ 10^5 M⊙. Latte’s population of dwarf galaxies agrees with the LG across a broad range of properties: (1) distributions of stellar masses and stellar velocity dispersions (dynamical masses), including their joint relation; (2) the mass-metallicity relation; and (3) a diverse range of star formation histories, including their mass dependence. Thus, Latte produces a realistic population of dwarf galaxies at M⋆ ≳ 10^5 M⊙ that does not suffer from the “missing satellites” or “too big to fail” problems of small-scale structure formation. We conclude that baryonic physics can reconcile observed dwarf galaxies with standard ΛCDM cosmology.
DOT National Transportation Integrated Search
2016-01-01
Next Steps : Use case development : Developing representative use cases for receivers : Defining parameters for transmit application for uplink and down link : Defining and finalize propagation models to be used : Antenna Characte...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Sheng; Li, Hongyi; Huang, Maoyi
2014-07-21
Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep terrain. Its contribution to total runoff is, however, poorly represented in the current generation of land surface models. The lack of physical basis of these common parameterizations precludes a priori estimation of the stormflow (i.e., without calibration), which is a major drawback for prediction in ungauged basins, or for use in global land surface models. This paper is aimed at deriving regionalized parameterizations of the storage-discharge relationship relating to subsurface stormflow from a top-down empirical data analysis of streamflow recession curves extracted from 50 eastern United States catchments. Detailed regression analyses were performed between parameters of the empirical storage-discharge relationships and the controlling climate, soil and topographic characteristics. The regression analyses performed on empirical recession curves at catchment scale indicated that the coefficient of the power-law form storage-discharge relationship is closely related to the catchment hydrologic characteristics, which is consistent with the hydraulic theory derived mainly at the hillslope scale. As for the exponent, besides the role of field-scale soil hydraulic properties as suggested by hydraulic theory, it is found to be more strongly affected by climate (aridity) at the catchment scale. At a fundamental level these results point to the need for more detailed exploration of the co-dependence of soil, vegetation and topography with climate.
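The storage-discharge analysis above starts from streamflow recession curves. A minimal Brutsaert-Nieber-style fit of -dQ/dt = a*Q^b, demonstrated on a synthetic linear-reservoir recession with known parameters:

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q^b to a recession limb by linear regression in
    log-log space (the standard Brutsaert-Nieber analysis that underlies
    power-law storage-discharge relationships)."""
    q = np.asarray(q, dtype=float)
    dqdt = -(q[1:] - q[:-1]) / dt          # finite-difference recession rate
    qm = 0.5 * (q[1:] + q[:-1])            # midpoint discharge
    keep = dqdt > 0                        # keep strictly receding steps
    b, log_a = np.polyfit(np.log(qm[keep]), np.log(dqdt[keep]), 1)
    return np.exp(log_a), b

# Synthetic recession from a linear reservoir with a = 0.1, b = 1:
# dQ/dt = -0.1 * Q  ->  Q(t) = Q0 * exp(-0.1 t)
t = np.arange(0, 30, 0.01)
q = 5.0 * np.exp(-0.1 * t)
a_hat, b_hat = fit_recession(q, dt=0.01)
assert abs(b_hat - 1.0) < 0.01 and abs(a_hat - 0.1) < 0.01
```

Regressing the fitted a and b against climate, soil, and topographic descriptors across many catchments is then the regionalization step the paper performs.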
NASA Astrophysics Data System (ADS)
Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.
2017-12-01
Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. 
We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter is mainly caused by the large-scale transport associated with the jet stream that carries the negative biogenic CO2 signals to the northeastern coast. We apply comprehensive statistics to eliminate outliers. We generate a set of flux perturbations based on pre-calibrated flux ensemble members and apply them to the simulations.
Characterizing the EPODE logic model: unravelling the past and informing the future.
Van Koperen, T M; Jebb, S A; Summerbell, C D; Visscher, T L S; Romon, M; Borys, J M; Seidell, J C
2013-02-01
EPODE ('Ensemble Prévenons l'Obésité Des Enfants' or 'Together Let's Prevent Childhood Obesity') is a large-scale, centrally coordinated, capacity-building approach for communities to implement effective and sustainable strategies to prevent childhood obesity. Since 2004, EPODE has been implemented in over 500 communities in six countries. Although based on emergent practice and scientific knowledge, EPODE, like many community programs, lacks a logic model depicting key elements of the approach. The objective of this study is to gain insight into the dynamics and key elements of EPODE and to represent these in a schematic logic model. EPODE's process manuals and documents were collected and interviews were held with professionals involved in the planning and delivery of EPODE. Retrieved data were coded, themed and placed in a four-level logic model. With input from international experts, this model was scaled down to a concise logic model covering four critical components: political commitment, public and private partnerships, social marketing and evaluation. The EPODE logic model presented here can be used as a reference for future and follow-up research; to support future implementation of EPODE in communities; as a tool in the engagement of stakeholders; and to guide the construction of a locally tailored evaluation plan. © 2012 The Authors. obesity reviews © 2012 International Association for the Study of Obesity.
Silvestro, Paolo Cosmo; Pignatti, Stefano; Yang, Hao; Yang, Guijun; Pascucci, Simone; Castaldi, Fabio; Casa, Raffaele
2017-01-01
Process-based models can be usefully employed for the assessment of field- and regional-scale impacts of drought on crop yields. However, in many instances, especially when they are used at the regional scale, it is necessary to identify the parameters and input variables that most influence the outputs and to assess how their influence varies when climatic and environmental conditions change. In this work, two different crop models able to represent yield response to water, Aquacrop and SAFYE, were compared, with the aim of quantifying their complexity and plasticity through Global Sensitivity Analysis (GSA), using the Morris and EFAST (Extended Fourier Amplitude Sensitivity Test) techniques, for moderate to strong water-limited climate scenarios. Although the rankings of the sensitivity indices were influenced by the scenarios used, the correlation among the rankings, which was higher for SAFYE than for Aquacrop, as assessed by the top-down correlation coefficient (TDCC), revealed clear patterns. Parameters and input variables related to phenology and to water-stress physiological processes were found to be the most influential for Aquacrop. For SAFYE, it was found that water stress could be inferred indirectly from the processes regulating leaf growth, described in the original SAFY model. SAFYE has lower complexity and plasticity than Aquacrop, making it more suitable for less data-demanding regional-scale applications when the only objective is the assessment of crop yield and no detailed information is sought on the mechanisms of the stress factors limiting it.
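The Morris screening technique mentioned above can be illustrated with a minimal one-at-a-time sketch. The toy model, parameter ranges and counts below are illustrative assumptions standing in for Aquacrop/SAFYE, not values from the paper:

```python
# Minimal one-at-a-time sketch of Morris elementary-effects screening,
# the first of the two GSA techniques named above.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Stand-in response: strongly driven by x[0], weakly by x[2].
    return 3.0 * x[0] + 0.5 * x[1] ** 2 + 0.01 * x[2]

def morris_mu_star(model, k=3, n_base=50, delta=0.1):
    """Mean absolute elementary effect (mu*) per input on [0, 1]."""
    effects = np.zeros((n_base, k))
    for t in range(n_base):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # leave room for the +delta step
        base = model(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                         # perturb one input at a time
            effects[t, i] = abs(model(xp) - base) / delta
    return effects.mean(axis=0)

mu_star = morris_mu_star(toy_model)
# Ranking the inputs by mu* recovers the influence ordering x0 > x1 > x2.
```

The full Morris design uses randomized trajectories rather than independent one-at-a-time perturbations, but the ranking logic is the same.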
NASA Astrophysics Data System (ADS)
Baars, Woutijn J.; Hutchins, Nicholas; Marusic, Ivan
2017-11-01
An organization in wall-bounded turbulence is evidenced by the classification of distinctly different flow structures, including large-scale motions such as hairpin packets and very large-scale motions or superstructures. In conjunction with less organized turbulence, these flow structures all contribute to the streamwise turbulent kinetic energy
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe
2017-04-01
The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has impaired water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from the landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as identifying the appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology that utilizes hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account the dominant controls on nitrate variability (e.g., climate, soil water content). Our main objective is to seek the appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at the scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of the models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can support decision-making associated with nutrient management at the regional scale.
ICE-COLA: fast simulations for weak lensing observables
NASA Astrophysics Data System (ADS)
Izard, Albert; Fosalba, Pablo; Crocce, Martin
2018-01-01
Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.
Zhang, Lai; Cao, Yan; Hao, Xuewen; Zhang, Yongyong; Liu, Jianguo
2015-12-01
The environmental risk presented by "down-the-drain" chemicals to receiving rivers in large urban areas has received increasing attention in recent years. The Geo-referenced Regional Environmental Assessment Tool for European Rivers (GREAT-ER) is a typical river-catchment model that has been developed specifically for the risk assessment of these chemicals and applied to many European rivers. Using the new version of the model, GREAT-ER 3.0, the first completely open-source version available for worldwide application, this study represents the first application of GREAT-ER to the Wenyu River in China. Aquatic exposure simulation and environmental risk assessment of nonylphenol (NP) and its environmental precursor, nonylphenol ethoxylates (NPEOs), were conducted with the GREAT-ER model; NP is a typical endocrine-disrupting chemical (EDC), and its precursor NPEOs, as "down-the-drain" chemicals, are extensively used in China. The predicted environmental concentrations (PECs) of NP and NPEOs in the water of the Wenyu River were 538 and 4320 ng/L, respectively, at the regional scale, and 1210 and 8990 ng/L, respectively, at the local scale. The profile of the risk characterization ratio (RCR) showed that the combination of high emissions from large sewage treatment plants (STPs) with insufficient dilution in the river caused the high RCR values. The PECs of NP in the sediment were in the range of 216.8-8218.3 ng/g (dry weight), which is consistent with the available monitoring data. The study demonstrates the worldwide applicability and reliability of GREAT-ER as a river-catchment model for the risk assessment of these chemicals and also reveals the general environmental risks presented by NP and NPEOs in the Wenyu River catchment in Beijing due to their extensive use. The results suggest that specific control or treatment measures are probably warranted to reduce the discharge of these chemicals in major cities.
Trickle-Down Preferences: Preferential Conformity to High Status Peers in Fashion Choices
Galak, Jeff; Gray, Kurt; Elbert, Igor; Strohminger, Nina
2016-01-01
How much do our choices represent stable inner preferences versus social conformity? We examine conformity and consistency in sartorial choices surrounding a common life event of new norm exposure: relocation. A large-scale dataset of individual purchases of women’s shoes (16,236 transactions) across five years and 2,007 women reveals a balance of conformity and consistency, moderated by changes in location socioeconomic status. Women conform to new local norms (i.e., average heel size) when moving to relatively higher status locations, but mostly ignore new local norms when moving to relatively lower status locations. In short, at periods of transition, it is the fashion norms of the rich that trickle down to consumers. These analyses provide the first naturalistic large-scale demonstration of the tension between psychological conformity and consistency, with real decisions in a highly visible context. PMID:27144595
Non-Nuclear Validation Test Results of a Closed Brayton Cycle Test-Loop
NASA Astrophysics Data System (ADS)
Wright, Steven A.
2007-01-01
Both NASA and DOE have programs that are investigating advanced power conversion cycles for planetary surface power on the Moon or Mars, or for next-generation nuclear power plants on Earth. Although open Brayton cycles are in use for many applications (combined-cycle power plants, aircraft engines), only a few closed Brayton cycles have been tested. Experience with closed Brayton cycles coupled to nuclear reactors is even more limited, and current projections of Brayton cycle performance are based on analytic models. This report describes and compares experimental results with model predictions from a series of non-nuclear tests using a small-scale closed-loop Brayton cycle available at Sandia National Laboratories. A substantial amount of testing has been performed, and the information is being used to help validate models. In this report we summarize the results from three kinds of tests: 1) tests useful for validating the characteristic flow curves of the turbomachinery for various gases, ranging from ideal gases (Ar or Ar/He) to non-ideal gases such as CO2; 2) tests that represent shut-down transients and the decay heat removal capability of Brayton loops after reactor shutdown; and 3) tests that map a range of operating power versus shaft speed and turbine inlet temperature, useful for predicting stable operating conditions during both normal and off-normal operating behavior. These tests reveal significant interactions between the reactor and the balance of plant. Specifically, the results predict the limited speed-up behavior of the turbomachinery caused by loss of load and the conditions for stable operation, and, for direct-cooled reactors, the tests reveal that the coast-down behavior during loss-of-power events can extend for hours provided the ultimate heat sink remains available.
Prelude to rational scale-up of penicillin production: a scale-down study.
Wang, Guan; Chu, Ju; Noorman, Henk; Xia, Jianye; Tang, Wenjun; Zhuang, Yingping; Zhang, Siliang
2014-03-01
Penicillin is one of the best known pharmaceuticals and is also an important member of the β-lactam antibiotics. Over the years, ambitious yields, titers, productivities, and low costs in the production of the β-lactam antibiotics have been realized stepwise through successive rounds of strain improvement and process optimization. Penicillium chrysogenum has proven to be an ideal cell factory for the production of penicillin, and successful approaches have been exploited to elevate the production titer. However, the industrial production of penicillin faces the serious challenge that environmental gradients, caused by insufficient mixing and mass transfer limitations, exert a considerable negative impact on the ultimate productivity and yield. Scale-down studies of diverse environmental gradients have been carried out on bacteria, yeasts, and filamentous fungi, as well as on animal cells. Accordingly, a variety of scale-down devices combined with fast sampling and quenching protocols have been established to acquire true snapshots of the perturbed cellular conditions. The perturbed metabolome information stemming from scale-down studies has contributed to the comprehension of the production process and the identification of improvement approaches. However, little is known about the influence of the flow field and the mechanisms of intracellular metabolism. Consequently, it is still rather difficult to realize a fully rational scale-up. In the future, developing a computational framework to simulate the flow field of large-scale fermenters is highly recommended. Furthermore, a metabolically structured kinetic model directly related to the production of penicillin should be coupled to the fluid flow dynamics. A mathematical model combining information from both computational fluid dynamics and chemical reaction dynamics could then be established to predict detailed behavior over the entire fermentation process and thereby optimize penicillin production, subsequently benefiting other fermentation products as well.
NASA Astrophysics Data System (ADS)
Millar, D.; Ewers, B. E.; Peckham, S. D.; Mackay, D. S.; Frank, J. M.; Massman, W. J.; Reed, D. E.
2015-12-01
Mountain pine beetle (Dendroctonus ponderosae) and spruce beetle (Dendroctonus rufipennis) epidemics have led to extensive mortality in lodgepole pine (Pinus contorta) and Engelmann spruce (Picea engelmannii) forests in the Rocky Mountains of the western US. In both of these tree species, mortality results from hydraulic failure within the xylem due to blue stain fungal infection associated with beetle attack. However, the impacts of these disturbances on ecosystem-scale water fluxes can be complex, owing to their variable and transient nature. In this work, xylem scaling factors that reduce whole-tree conductance were initially incorporated into a forest ecohydrological model (TREES) to simulate the impact of beetle mortality on evapotranspiration (ET) in both pine and spruce forests. For both forests, simulated ET was compared to observed ET fluxes recorded using eddy covariance techniques. Using the xylem scaling factors, the model overestimated the impact of beetle mortality, and observed ET fluxes were approximately two-fold higher than model predictions in both forests. The discrepancy between simulated and observed ET following the onset of beetle mortality may result from spatial and temporal heterogeneity of plant communities within the footprints of the eddy covariance towers. Since simulated ET fluxes following beetle mortality in both forests accounted for only approximately 50% of those observed in the field, it is possible that newly established understory vegetation in recently killed stands plays a role in stabilizing ecosystem ET fluxes. Here, we further investigate the unaccounted-for ET fluxes by breaking the model down into multiple cohorts that represent live trees, dying trees, and the understory vegetation that establishes following tree mortality.
Coupled wake boundary layer model of windfarms
NASA Astrophysics Data System (ADS)
Stevens, Richard; Gayme, Dennice; Meneveau, Charles
2014-11-01
We present a coupled wake boundary layer (CWBL) model that describes the distribution of the power output in a windfarm. The model couples the traditional, industry-standard wake expansion/superposition approach with a top-down model for the overall windfarm boundary layer structure. Wake models capture the effect of turbine positioning, while the top-down approach represents the interaction between the windturbine wakes and the atmospheric boundary layer. Each portion of the CWBL model requires specification of a parameter that is unknown a priori. The wake model requires the wake expansion rate, whereas the top-down model requires the effective spanwise turbine spacing within which the model's momentum balance is relevant. The wake expansion rate is obtained by matching the mean velocity at the turbine from both approaches, while the effective spanwise turbine spacing is determined from the wake model. Coupling of the constitutive components of the CWBL model is achieved by iterating these parameters until convergence is reached. We show that the CWBL model predictions compare more favorably with large eddy simulation results than those made with either the wake or top-down model in isolation, and that the model can be applied successfully to the Horns Rev and Nysted windfarms. This work was supported by the 'Fellowships for Young Energy Scientists' (YES!) program of the Foundation for Fundamental Research on Matter, supported by NWO, and by NSF Grant #1243482.
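The parameter-matching loop described above can be sketched in a much-simplified form. A single Jensen wake stands in for the full wake superposition, and the top-down model is reduced to a fixed target velocity; the thrust coefficient, spacing and target value are illustrative assumptions, not the authors' implementation:

```python
# Toy sketch of the CWBL coupling idea: the wake expansion rate k_w is
# adjusted until the wake-model velocity at a turbine matches the mean
# velocity supplied by the top-down boundary-layer model.
import math

def wake_model_velocity(k_w, ct=0.75, spacing=7.0, u_inf=1.0):
    """Hub-height velocity behind one upstream turbine (Jensen model);
    spacing is in rotor diameters, velocities normalized by u_inf."""
    a = 0.5 * (1.0 - math.sqrt(1.0 - ct))             # axial induction factor
    deficit = 2.0 * a / (1.0 + 2.0 * k_w * spacing) ** 2
    return u_inf * (1.0 - deficit)

def match_expansion_rate(u_top_down, tol=1e-10, max_iter=200):
    """Bisection for the k_w at which the two model velocities agree
    (the wake-model velocity increases monotonically with k_w)."""
    lo, hi = 1e-4, 1.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if wake_model_velocity(mid) < u_top_down:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

k_w = match_expansion_rate(0.9)  # pretend the top-down model gave 0.9 u_inf
```

In the actual CWBL model the target velocity is itself recomputed from the top-down momentum balance each pass, and the effective spanwise spacing is fed back from the wake model, so the two parameters are iterated jointly until convergence.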
Anomalous polymer collapse winding angle distributions
NASA Astrophysics Data System (ADS)
Narros, A.; Owczarek, A. L.; Prellberg, T.
2018-03-01
In two dimensions polymer collapse has been shown to be complex, with multiple low-temperature states and multi-critical points. Recently, strong numerical evidence has been provided for a long-standing prediction of universal scaling of winding angle distributions, where simulations of interacting self-avoiding walks show that the winding angle distribution for N-step walks is compatible with the theoretical prediction of a Gaussian with a variance growing asymptotically as C log N. Here we extend this work by considering interacting self-avoiding trails, which are believed to be a model representative of some of the more complex behaviour. We provide robust evidence that, while the high-temperature swollen state of this model has a winding angle distribution that is also Gaussian, this breaks down at the polymer collapse point and at low temperatures. Moreover, we provide some evidence that the distributions are well modelled by stretched/compressed exponentials, in contradistinction to the behaviour found in interacting self-avoiding walks. Dedicated to Professor Stu Whittington on the occasion of his 75th birthday.
NASA Astrophysics Data System (ADS)
Rea, B.; Evans, D. J. A.; Benn, D. I.; Brennan, A. J.
2012-04-01
Networks of crevasse squeeze ridges (CSRs) preserved on the forelands of many surging glaciers attest to extensive full-depth crevassing. Full-depth connections have been inferred from turbid water upwelling in crevasses and from the formation of concertina eskers; however, it has not been clearly established whether the crevasses form from the top down or from the bottom up. A Linear Elastic Fracture Mechanics (LEFM) approach is used to determine the likely propagation direction for Mode I crevasses on seven surging glaciers. Results indicate that the high extensional surface strain rates are insufficient to promote top-down full-depth crevasses but are of sufficient magnitude to penetrate to depths of 4-12 m, explaining the extensive surface breakup accompanying glacier surges. Top-down, full-depth crevassing is only possible when the water depth approaches 97% of the crevasse depth. However, the provision of sufficient meltwater is problematic because of the aforementioned extensive shallow surface crevassing. Full-depth, bottom-up crevassing can occur provided basal water pressures exceed 80-90% of flotation, which is typical during surging; on occasion water pressures may even become artesian. Therefore CSRs, found across many surging-glacier forelands and ice margins, most likely result from the infilling of basal crevasses formed, for the most part, by bottom-up hydrofracturing. Despite the importance of crevassing for meltwater routing and calving dynamics, physically testing numerical crevassing models remains problematic due to technological limitations, changing stress regimes and the difficulties of working in crevasse zones on glaciers. Mapping of CSR spacing and matching to surface crevasse patterns can facilitate quantitative comparison between the LEFM model and observed basal crevasses, provided the ice dynamics are known. However, full-depth top-down crevasse propagation is much harder to monitor in the field, and no geomorphological record is preserved. An alternative approach is provided by geotechnical centrifuge modelling. By testing scaled models in an enhanced 'gravity' field, real-world (prototype) stress conditions can be reproduced, which is crucial for problems governed by self-weight stresses, of which glacier crevassing is one. Scaling relationships have been established for the stress intensity factor KI, which is key to determining crevasse penetration, such that KI,p = √N KI,m (p = prototype, m = model). The operating specifications of the University of Dundee geotechnical centrifuge (100g) allow the testing of scaled models equivalent to prototype glaciers of 50 m thickness in order to provide a physical test of the LEFM top-down crevassing model.
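The stress-intensity scaling quoted above follows from standard centrifuge similitude. A brief sketch, with N the centrifuge g-level and model lengths scaled by 1/N (symbols here are illustrative, not drawn from the abstract):

```latex
% Self-weight stresses are preserved in the centrifuge: lengths shrink by N
% while gravity grows by N, so
\sigma_m \;=\; \rho\,(N g)\,\frac{h_p}{N} \;=\; \rho g h_p \;=\; \sigma_p .
% For a crevasse of depth a, K_I \sim \sigma \sqrt{\pi a} and a_m = a_p/N, hence
K_{I,m} \;=\; \sigma_p \sqrt{\pi a_p / N} \;=\; \frac{K_{I,p}}{\sqrt{N}}
\quad\Longrightarrow\quad
K_{I,p} \;=\; \sqrt{N}\, K_{I,m}.
```

This is why fracture problems governed by self-weight stresses can be reproduced at reduced scale only in an enhanced-gravity field.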
The global methane budget 2000-2012
NASA Astrophysics Data System (ADS)
Saunois, Marielle; Bousquet, Philippe; Poulter, Ben; Peregon, Anna; Ciais, Philippe; Canadell, Josep G.; Dlugokencky, Edward J.; Etiope, Giuseppe; Bastviken, David; Houweling, Sander; Janssens-Maenhout, Greet; Tubiello, Francesco N.; Castaldi, Simona; Jackson, Robert B.; Alexe, Mihai; Arora, Vivek K.; Beerling, David J.; Bergamaschi, Peter; Blake, Donald R.; Brailsford, Gordon; Brovkin, Victor; Bruhwiler, Lori; Crevoisier, Cyril; Crill, Patrick; Covey, Kristofer; Curry, Charles; Frankenberg, Christian; Gedney, Nicola; Höglund-Isaksson, Lena; Ishizawa, Misa; Ito, Akihiko; Joos, Fortunat; Kim, Heon-Sook; Kleinen, Thomas; Krummel, Paul; Lamarque, Jean-François; Langenfelds, Ray; Locatelli, Robin; Machida, Toshinobu; Maksyutov, Shamil; McDonald, Kyle C.; Marshall, Julia; Melton, Joe R.; Morino, Isamu; Naik, Vaishali; O'Doherty, Simon; Parmentier, Frans-Jan W.; Patra, Prabir K.; Peng, Changhui; Peng, Shushi; Peters, Glen P.; Pison, Isabelle; Prigent, Catherine; Prinn, Ronald; Ramonet, Michel; Riley, William J.; Saito, Makoto; Santini, Monia; Schroeder, Ronny; Simpson, Isobel J.; Spahni, Renato; Steele, Paul; Takizawa, Atsushi; Thornton, Brett F.; Tian, Hanqin; Tohjima, Yasunori; Viovy, Nicolas; Voulgarakis, Apostolos; van Weele, Michiel; van der Werf, Guido R.; Weiss, Ray; Wiedinmyer, Christine; Wilton, David J.; Wiltshire, Andy; Worthy, Doug; Wunch, Debra; Xu, Xiyan; Yoshida, Yukio; Zhang, Bowen; Zhang, Zhen; Zhu, Qiuan
2016-12-01
The global methane (CH4) budget is becoming an increasingly important component for managing realistic pathways to mitigate climate change. This relevance, due to methane's shorter atmospheric lifetime and stronger warming potential than carbon dioxide, is challenged by the still unexplained changes in atmospheric CH4 over the past decade. Emissions and concentrations of CH4 continue to increase, making CH4 the second most important human-induced greenhouse gas after carbon dioxide. Two major difficulties in reducing uncertainties come from the large variety of diffusive CH4 sources that overlap geographically, and from the destruction of CH4 by the very short-lived hydroxyl radical (OH). To address these difficulties, we have established a consortium of multi-disciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate research on the methane cycle and to produce regular (˜ biennial) updates of the global methane budget. This consortium includes atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions. Following Kirschke et al. (2013), we propose here the first version of a living review paper that integrates results of top-down studies (exploiting atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up models, inventories and data-driven approaches (including process-based models for estimating land-surface emissions and atmospheric chemistry, inventories for anthropogenic emissions, and data-driven extrapolations). For the 2003-2012 decade, global methane emissions are estimated by top-down inversions at 558 Tg CH4 yr-1 (range 540-568). About 60 % of global emissions are anthropogenic (range 50-65 %). Since 2010, the bottom-up global emission inventories have been closer to the methane emissions in the most carbon-intensive Representative Concentration Pathway (RCP8.5) and higher than in all other RCP scenarios.
Bottom-up approaches suggest larger global emissions (736 Tg CH4 yr-1, range 596-884), mostly because of larger natural emissions from individual sources such as inland waters, natural wetlands and geological sources. Considering the atmospheric constraints on the top-down budget, it is likely that some of the individual emissions reported by the bottom-up approaches are overestimated, leading to global emissions that are too large. Latitudinal data from top-down emissions indicate a predominance of tropical emissions (˜ 64 % of the global budget, < 30° N) compared to mid (˜ 32 %, 30-60° N) and high northern latitudes (˜ 4 %, 60-90° N). Top-down inversions consistently infer lower emissions in China (˜ 58 Tg CH4 yr-1, range 51-72, -14 %) and higher emissions in Africa (86 Tg CH4 yr-1, range 73-108, +19 %) than the bottom-up values used as prior estimates. Overall, uncertainties for anthropogenic emissions appear smaller than those for natural sources, and the uncertainties on source categories appear larger for top-down inversions than for bottom-up inventories and models. The most important source of uncertainty in the methane budget is attributable to emissions from wetlands and other inland waters. We show that the wetland extent could contribute 30-40 % of the estimated range for wetland emissions. Other priorities for improving the methane budget include the following: (i) the development of process-based models for inland-water emissions, (ii) the intensification of methane observations at the local scale (flux measurements) to constrain bottom-up land-surface models, and at the regional scale (surface networks and satellites) to constrain top-down inversions, (iii) improvements in the estimation of atmospheric loss by OH, and (iv) improvements of the transport models integrated in top-down inversions.
The data presented here can be downloaded from the Carbon Dioxide Information Analysis Center (http://doi.org/10.3334/CDIAC/GLOBAL_METHANE_BUDGET_2016_V1.1) and the Global Carbon Project.
A Multi-Scale Perspective of the Effects of Forest Fragmentation on Birds in Eastern Forests
Frank R. Thompson; Therese M. Donovan; Richard M. DeGraff; John Faaborg; Scott K. Robinson
2002-01-01
We propose a model that considers forest fragmentation within a spatial hierarchy that includes regional or biogeographic effects, landscape-level fragmentation effects, and local habitat effects. We hypothesize that effects operate "top down" in that larger scale effects provide constraints or context for smaller scale effects. Bird species' abundance...
New Computer Simulations of Macular Neural Functioning
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.
1994-01-01
We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.
Quasi-coarse-grained dynamics: modelling of metallic materials at mesoscales
NASA Astrophysics Data System (ADS)
Dongare, Avinash M.
2014-12-01
A computationally efficient modelling method called quasi-coarse-grained dynamics (QCGD) is developed to expand the capabilities of molecular dynamics (MD) simulations to model the behaviour of metallic materials at the mesoscale. This mesoscale method is based on solving the equations of motion for a chosen set of representative atoms from an atomistic microstructure and using scaling relationships for the atomic-scale interatomic potentials in MD simulations to define the interactions between representative atoms. The scaling relationships retain the atomic-scale degrees of freedom, and therefore the energetics, of the representative atoms as would be predicted in MD simulations. The total energetics of the system is retained by scaling the energetics and the atomic-scale degrees of freedom of these representative atoms to account for the missing atoms in the microstructure. This scaling of the energetics also permits larger time steps in the QCGD simulations. The success of the QCGD method is demonstrated by the prediction of structural energetics, high-temperature thermodynamics, the deformation behaviour of interfaces, phase transformation behaviour, plastic deformation behaviour, heat generation during plastic deformation, and wave propagation behaviour, as would be predicted using MD simulations, from a reduced number of representative atoms. The reduced number of atoms and the improved time steps enable the modelling of metallic materials at the mesoscale in extreme environments.
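The energy-scaling idea at the heart of QCGD can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the actual QCGD implementation (which also rescales masses and degrees of freedom): each pairwise interaction between representative atoms is simply multiplied by the number of real atoms each representative atom stands for, so the coarse system retains the total energetics of the full system.

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair energy (illustrative potential)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def qcgd_energy(positions, n_rep, eps=1.0, sigma=1.0, cutoff=3.0):
    """Total energy of a set of representative atoms: each pair energy is
    scaled by n_rep, the number of real atoms each representative atom
    stands for, so the coarse system retains the full system's energetics."""
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                e += n_rep * lj(r, eps, sigma)
    return e
```

For a dimer at the Lennard-Jones minimum separation, the unscaled pair energy is −ε; with `n_rep = 8` the representative pair carries eight times that energy, standing in for eight real pairs.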
3D shape decomposition and comparison for gallbladder modeling
NASA Astrophysics Data System (ADS)
Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen
2011-03-01
This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation based voxel learning and classification. To better extract the shape feature, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by applying an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for the robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomaly of a gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some shown in normal shape and some in abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, X. G.; Kim, Y. S.; Choi, K. Y.
2012-07-01
A SBO (station blackout) experiment named SBO-01 was performed at the full-pressure IET (Integral Effect Test) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the transient of SBO-01 is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heatup phase. In addition, the typical phenomena in the SBO-01 test - SG dryout, natural circulation, core coolant boiling, PRZ full, core heat-up - are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with an ATLAS model which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals malfunctions of equipment: SG leakage through an SG MSSV and a measurement error in the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study the scaling distortions in the ATLAS. (authors)
Impacts devalue the potential of large-scale terrestrial CO2 removal through biomass plantations
NASA Astrophysics Data System (ADS)
Boysen, L. R.; Lucht, W.; Gerten, D.; Heck, V.
2016-09-01
Large-scale biomass plantations (BPs) are often considered a feasible and safe climate engineering proposal for extracting carbon from the atmosphere and thereby reducing global mean temperatures. However, the capacity of such terrestrial carbon dioxide removal (tCDR) strategies and their larger Earth system impacts remain to be comprehensively studied, even more so under higher carbon emissions and progressing climate change. Here, we use a spatially explicit, process-based biosphere model to systematically quantify the potentials and trade-offs of a range of BP scenarios dedicated to tCDR, representing different assumptions about which areas are convertible. Based on a moderate CO2 concentration pathway resulting in a global mean warming of 2.5 °C above the preindustrial level by the end of this century (similar to the Representative Concentration Pathway (RCP) 4.5), we assume tCDR to be implemented when a warming of 1.5 °C is reached in the year 2038. Our results show that BPs can sufficiently slow the accumulation of carbon in the atmosphere only if emissions are reduced simultaneously, as in the underlying RCP4.5 trajectory. The potential of tCDR to balance additional, unabated emissions leading towards a business-as-usual pathway akin to RCP8.5 is therefore very limited. Furthermore, in the required large-scale applications, these plantations would induce significant trade-offs with food production and biodiversity and exert impacts on forest extent, biogeochemical cycles and biogeophysical properties.
Numerical studies of fast ion slowing down rates in cool magnetized plasma using LSP
NASA Astrophysics Data System (ADS)
Evans, Eugene S.; Kolmes, Elijah; Cohen, Samuel A.; Rognlien, Tom; Cohen, Bruce; Meier, Eric; Welch, Dale R.
2016-10-01
In MFE devices, rapid transport of fusion products from the core into the scrape-off layer (SOL) could perform the dual roles of energy and ash removal. The first-orbit trajectories of most fusion products from small field-reversed configuration (FRC) devices will traverse the SOL, allowing those particles to deposit their energy in the SOL and be exhausted along the open field lines. Thus, the fast ion slowing-down time should affect the energy balance of an FRC reactor and its neutron emissions. However, the dynamics of fast ion energy loss processes under the conditions expected in the FRC SOL (with ρe <λDe) are analytically complex, and not yet fully understood. We use LSP, a 3D electromagnetic PIC code, to examine the effects of SOL density and background B-field on the slowing-down time of fast ions in a cool plasma. As we use explicit algorithms, these simulations must spatially resolve both ρe and λDe, as well as temporally resolve both Ωe and ωpe, increasing computation time. Scaling studies of the fast ion charge (Z) and background plasma density are in good agreement with unmagnetized slowing down theory. Notably, Z-scaling represents a viable way to dramatically reduce the required CPU time for each simulation. This work was supported, in part, by DOE Contract Number DE-AC02-09CH11466.
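The Z- and density-scaling that the authors test against unmagnetized slowing-down theory can be stated compactly. The sketch below encodes only the textbook Spitzer-like proportionality, with all other parameters held fixed; the function name and reference values are hypothetical, not taken from the LSP study.

```python
def slowing_down_time(tau_ref, Z, n_e, Z_ref=1.0, n_ref=1.0):
    """Unmagnetized slowing-down time proportionality: tau ~ 1 / (Z**2 * n_e).
    tau_ref is the time for a reference ion of charge Z_ref in a plasma of
    density n_ref; Z and n_e are in the same (arbitrary) units."""
    return tau_ref * (Z_ref / Z) ** 2 * (n_ref / n_e)

# Doubling the fast-ion charge cuts the slowing-down time fourfold, which
# is why Z-scaling shortens the physical time a simulation must cover.
print(slowing_down_time(1.0, Z=2.0, n_e=1.0))  # → 0.25
```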
Bottom-up synthesis of multifunctional nanoporous graphene
NASA Astrophysics Data System (ADS)
Moreno, César; Vilas-Varela, Manuel; Kretz, Bernhard; Garcia-Lekue, Aran; Costache, Marius V.; Paradinas, Markos; Panighel, Mirko; Ceballos, Gustavo; Valenzuela, Sergio O.; Peña, Diego; Mugarza, Aitor
2018-04-01
Nanosize pores can turn semimetallic graphene into a semiconductor and, from being impermeable, into the most efficient molecular-sieve membrane. However, scaling the pores down to the nanometer, while fulfilling the tight structural constraints imposed by applications, represents an enormous challenge for present top-down strategies. Here we report a bottom-up method to synthesize nanoporous graphene comprising an ordered array of pores separated by ribbons, which can be tuned down to the 1-nanometer range. The size, density, morphology, and chemical composition of the pores are defined with atomic precision by the design of the molecular precursors. Our electronic characterization further reveals a highly anisotropic electronic structure, where orthogonal one-dimensional electronic bands with an energy gap of ∼1 electron volt coexist with confined pore states, making the nanoporous graphene a highly versatile semiconductor for simultaneous sieving and electrical sensing of molecular species.
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted with a model of the thermal-hydraulic test facility ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct benchmark activities by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of the selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis through comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data when developing the APR1400 model.
Delvigne, Frank; Takors, Ralf; Mudde, Rob; van Gulik, Walter; Noorman, Henk
2017-09-01
Efficient optimization of microbial processes is a critical issue for achieving a number of sustainable development goals, considering the impact of microbial biotechnology in the agrofood, environmental, biopharmaceutical and chemical industries. Many of these applications require scale-up after proof of concept. However, the behaviour of microbial systems remains (at least partially) unpredictable when shifting from laboratory-scale to industrial conditions. Robust microbial systems are thus highly needed in this context, as is a better understanding of the interactions between fluid mechanics and cell physiology. For that purpose, a full scale-up/down computational framework is already available. This framework links computational fluid dynamics (CFD), metabolic flux analysis and agent-based modelling (ABM) for a better understanding of cell lifelines in a heterogeneous environment. Ultimately, this framework can be used for the design of scale-down simulators and/or metabolically engineered cells able to cope with the environmental fluctuations typically found in large-scale bioreactors. However, the framework still needs some refinements, such as a better integration of gas-liquid flows in CFD and accounting for intrinsic biological noise in ABM. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ansong, Charles; Wu, Si; Meng, Da
Characterization of the mature protein complement in cells is crucial for a better understanding of cellular processes on a systems-wide scale. Bottom-up proteomic approaches often lead to loss of critical information about an endogenous protein's actual state due to post-translational modifications (PTMs) and other processes. Top-down approaches that involve analysis of the intact protein can address this concern but present significant analytical challenges related to the separation quality needed, measurement sensitivity, and speed, resulting in low throughput and limited coverage. Here we used single-dimension ultra-high-pressure liquid chromatography mass spectrometry to investigate the comprehensive 'intact' proteome of the Gram-negative bacterial pathogen Salmonella Typhimurium. Top-down proteomics analysis revealed 563 unique proteins including 1665 proteoforms generated by PTMs, representing the largest microbial top-down dataset reported to date. Our analysis not only confirmed several previously recognized aspects of Salmonella biology and bacterial PTMs in general, but also revealed several novel biological insights. Of particular interest was the differential utilization of the protein S-thiolation forms S-glutathionylation and S-cysteinylation in response to infection-like conditions versus basal conditions, which was corroborated by changes in the corresponding biosynthetic pathways. This differential utilization highlights underlying metabolic mechanisms that modulate changes in cellular signaling, and represents, to our knowledge, the first report of S-cysteinylation in Gram-negative bacteria. The demonstrated utility of our simple proteome-wide intact protein level measurement strategy for gaining biological insight should promote broader adoption and applications of top-down proteomics approaches.
The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling
NASA Astrophysics Data System (ADS)
Thornes, Tobias; Duben, Peter; Palmer, Tim
2016-04-01
At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new paradigm would represent a revolution in numerical modelling that could be of great benefit to the world.
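The kind of experiment described above can be prototyped in a few lines. This sketch uses the standard single-tier Lorenz '96 system and emulates reduced-precision hardware by rounding the state to IEEE half precision (16 bits) after every step; the time step, forcing, and step count are arbitrary choices for illustration, not those of the study.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Lorenz '96 tendency: dx_k/dt = (x_{k+1} - x_{k-2}) * x_{k-1} - x_k + F,
    with cyclic indexing implemented via np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step(x, dt=0.005, dtype=np.float64):
    """One forward-Euler step with the state stored in the chosen precision,
    rounding after the update to emulate reduced-precision hardware."""
    x = x.astype(dtype)
    return (x + dtype(dt) * lorenz96_tendency(x).astype(dtype)).astype(dtype)

# Compare a double-precision trajectory with a half-precision one.
x0 = np.full(40, 8.0)
x0[0] += 0.01                      # small perturbation to trigger chaos
x64, x16 = x0.copy(), x0.copy()
for _ in range(200):
    x64 = step(x64, dtype=np.float64)
    x16 = step(x16, dtype=np.float16)
drift = float(np.max(np.abs(x64 - x16.astype(np.float64))))
```

In a scale-selective model, only the small-scale tier would be held in `float16` while the large-scale tier keeps full precision; `drift` measures how fast the rounding error grows in this toy setting.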
Probability of brittle failure
NASA Technical Reports Server (NTRS)
Kim, A.; Bosnyak, C. P.; Chudnovsky, A.
1991-01-01
A methodology was developed for collecting statistically representative data for crack initiation and arrest from a small number of test specimens. An epoxy (based on bisphenol A diglycidyl ether and polyglycol-extended diglycidyl ether, cured with diethylene triamine) is selected as a model material. A compact tension specimen with displacement-controlled loading is used to observe multiple crack initiations and arrests. The energy release rate at crack initiation is significantly higher than that at crack arrest, as has been observed elsewhere. The difference between these energy release rates is found to depend on specimen size (a scale effect) and is quantitatively related to the fracture surface morphology. The scale effect, similar to that in statistical strength theory, is usually attributed to the statistics of the defects which control the fracture process. Triangular-shaped ripples (deltoids) are formed on the fracture surface during slow subcritical crack growth, prior to the smooth mirror-like surface characteristic of fast cracks. The deltoids are complementary on the two crack faces, which excludes any inelastic deformation from consideration. The presence of defects is also suggested by the observed scale effect. However, no defects are detectable at the deltoid apexes down to the 0.1 micron level.
Turner, Richard; Joseph, Adrian; Titchener-Hooker, Nigel; Bender, Jean
2017-08-04
Cell harvesting is the separation or retention of cells and cellular debris from the supernatant containing the target molecule. Selection of the harvest method strongly depends on the type of cells, mode of bioreactor operation, process scale, and characteristics of the product and cell culture fluid. Most traditional harvesting methods use some form of filtration, centrifugation, or a combination of both for cell separation and/or retention. Filtration methods include normal-flow depth filtration and tangential-flow microfiltration. The ability to predictably scale down the selected harvest method helps to ensure successful production and is critical for conducting small-scale characterization studies to confirm parameter targets and ranges. In this chapter we describe centrifugation and depth filtration harvesting methods, share strategies for harvest optimization, present recent developments in centrifugation scale-down models, and review alternative harvesting technologies.
NASA Astrophysics Data System (ADS)
Li, Wanli; Vicente, C. L.; Xia, J. S.; Pan, W.; Tsui, D. C.; Pfeiffer, L. N.; West, K. W.
2009-05-01
The quantum Hall plateau transition was studied at temperatures down to 1 mK in a random-alloy-disordered, high-mobility two-dimensional electron gas. A perfect power-law scaling with κ = 0.42 was observed from 1.2 K down to 12 mK. This perfect scaling terminates sharply at a saturation temperature of Ts ≈ 10 mK. The saturation is identified as a finite-size effect occurring when the quantum phase coherence length (Lϕ ∝ T^(−p/2)) reaches the sample size (W) of millimetre scale. From a size-dependent study, Ts ∝ 1/W was observed and p = 2 was obtained. The exponent of the localization length, determined directly from the measured κ and p, is ν = 2.38, and the dynamic critical exponent is z = 1.
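The arithmetic connecting the measured exponents is the standard finite-size-scaling relation κ = p/(2ν); a quick check reproduces the quoted localization-length exponent from the measured values.

```python
# Finite-size scaling of the plateau transition: kappa = p / (2 * nu),
# so the localization-length exponent is nu = p / (2 * kappa).
kappa = 0.42            # measured temperature-scaling exponent
p = 2                   # phase-coherence exponent, from L_phi ~ T^(-p/2)
nu = p / (2 * kappa)    # localization-length exponent
print(round(nu, 2))     # → 2.38
```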
Multiscale Modeling of Hematologic Disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedosov, Dmitry A.; Pivkin, Igor; Pan, Wenxiao
Parasitic infectious diseases and other hereditary hematologic disorders are often associated with major changes in the shape and viscoelastic properties of red blood cells (RBCs). Such changes can disrupt blood flow and even brain perfusion, as in the case of cerebral malaria. Modeling of these hematologic disorders requires a seamless multiscale approach, where blood cells and blood flow in the entire arterial tree are represented accurately using physiologically consistent parameters. In this chapter, we present a computational methodology based on dissipative particle dynamics (DPD) which models RBCs as well as whole blood in health and disease. DPD is a Lagrangian method that can be derived from systematic coarse-graining of molecular dynamics but can scale efficiently up to small arteries and can also be used to model RBCs down to the spectrin level. To this end, we present two complementary mathematical models for RBCs and describe a systematic procedure for extracting the relevant input parameters from optical tweezers and microfluidic experiments on single RBCs. We then use these validated RBC models to predict the behavior of whole healthy blood and compare with experimental results. The same procedure is applied to modeling malaria, and results for infected single RBCs and whole blood are presented.
Scattering of magnetized electrons at the boundary of low temperature plasmas
NASA Astrophysics Data System (ADS)
Krüger, Dennis; Trieschmann, Jan; Brinkmann, Ralf Peter
2018-02-01
Magnetized technological plasmas with magnetic fields of 10-200 mT, plasma densities of 10^17-10^19 m^-3, gas pressures of less than 1 Pa, and electron energies from a few to (at most) a few hundred electron volts are characterized by electron Larmor radii r_L that are small compared to all other length scales of the system, including the spatial scale L of the magnetic field and the collisional mean free path λ. In this regime, the classical drift approximation applies. In the boundary sheath of these discharges, however, that approximation breaks down: the sheath penetration depth of electrons (a few to some ten Debye lengths λ_D, depending on the kinetic energy, and typically much smaller than the sheath thickness of tens to hundreds of λ_D) is even smaller than r_L. For a model description of the electron dynamics, an appropriate boundary condition for the plasma/sheath interface is required. To develop one, the interaction of magnetized electrons with the boundary sheath is investigated using a 3D kinetic single-electron model that sets the larger scales L and λ to infinity, i.e. neglects magnetic field gradients, the electric field in the bulk, and collisions. A detailed comparison of the interaction for a Bohm sheath (which assumes a finite Debye length) and a hard-wall model (representing the limit λ_D → 0; also called the specular reflection model) is conducted. Both models are found to be in remarkable agreement with respect to the sheath-induced drift. It is concluded that the assumption of specular reflection can be used as a valid boundary condition for more realistic kinetic models of magnetized technological plasmas.
Zhou, Haiying; Purdie, Jennifer; Wang, Tongtong; Ouyang, Anli
2010-01-01
The number of therapeutic proteins produced by cell culture in the pharmaceutical industry continues to increase. During the early stages of manufacturing process development, hundreds of clones and various cell culture conditions are evaluated to develop a robust process and to identify and select cell lines with high productivity. It is highly desirable to establish a high-throughput system to accelerate process development and reduce cost. Multiwell plates and shake flasks are widely used in the industry as scale-down models for large-scale bioreactors. However, one of the limitations of these two systems is the inability to measure and control pH in a high-throughput manner. As pH is an important process parameter for cell culture, this could limit the applications of these scale-down model vessels. An economical, rapid, and robust pH measurement method was developed at Eli Lilly and Company by employing SNARF-4F 5-(and-6)-carboxylic acid. The method demonstrated the ability to measure the pH values of cell culture samples in a high-throughput manner. Based upon the chemical equilibrium of CO2, HCO3-, and the buffer system, i.e., HEPES, we established a mathematical model to regulate pH in multiwell plates and shake flasks. The model calculates the required %CO2 from the incubator and the amount of sodium bicarbonate to be added to adjust pH to a preset value. The model was validated by experimental data, and pH was accurately regulated by this method. The feasibility of studying the pH effect on cell culture in 96-well plates and shake flasks was also demonstrated in this study. This work sheds light on mini-bioreactor scale-down model construction and paves the way for cell culture process development to improve productivity or product quality using high-throughput systems. Copyright 2009 American Institute of Chemical Engineers
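The core of such a pH model is the CO2/bicarbonate equilibrium. The sketch below is a minimal Henderson-Hasselbalch version only, ignoring the HEPES buffer that the study's full model also includes; the function name, pKa, and CO2 solubility constant are illustrative assumptions, not the published parameter values.

```python
def required_pct_co2(target_ph, hco3_mM, pKa=6.1, s=0.0307, p_atm=760.0):
    """Percent CO2 in the incubator gas needed to hold a bicarbonate-buffered
    medium at target_ph, from Henderson-Hasselbalch:
        pH = pKa + log10([HCO3-] / (s * pCO2))
    hco3_mM is the bicarbonate concentration (mM); s is the CO2 solubility
    in mM per mmHg; p_atm converts pCO2 (mmHg) to a gas-phase percentage."""
    pco2_mmHg = hco3_mM / (s * 10 ** (target_ph - pKa))
    return 100.0 * pco2_mmHg / p_atm
```

With 24 mM bicarbonate, holding pH 7.4 requires roughly 5% CO2, consistent with common incubator settings; a lower pH setpoint requires proportionally more CO2.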
Structural similitude and design of scaled down laminated models
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Rezaeepazhand, J.
1993-01-01
The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance and safety. However, the experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and identifies the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, the identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to the acceptance of a certain type of distortion from exact duplication of the prototype (partial similarity).
Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Acceptable intervals and limitations for these parameters and scaling laws are then discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, and the cylindrical bending of beam-plates.
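As a concrete instance of a response scaling law, consider the fundamental frequency of a thin plate, for which f ∝ (h/a²)·√(E/ρ). If model and prototype share a material, the prototype frequency follows from the geometric scale factors alone. The function and numbers below are an illustrative assumption, not one of the paper's worked examples:

```python
def prototype_frequency(f_model, lam_a, lam_h=None):
    """Extrapolate a measured model frequency to the prototype.
    lam_a = a_prototype / a_model (in-plane length scale),
    lam_h = h_prototype / h_model (thickness scale).
    For a thin plate of the same material, f ~ (h/a**2) * sqrt(E/rho),
    so f_prototype = f_model * lam_h / lam_a**2.
    Complete similarity is the special case lam_h == lam_a; a distinct
    lam_h is a simple example of partial similarity."""
    if lam_h is None:
        lam_h = lam_a          # complete similarity
    return f_model * lam_h / lam_a ** 2

# A 1/5-scale, completely similar model measured at 100 Hz predicts a
# prototype frequency of 100 * 5 / 25 = 20 Hz.
print(prototype_frequency(100.0, lam_a=5.0))  # → 20.0
```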
NASA Astrophysics Data System (ADS)
Rytka, C.; Lungershausen, J.; Kristiansen, P. M.; Neyer, A.
2016-06-01
Flow simulations can cut down both the cost and time of developing injection-moulded polymer parts with functional surfaces used in life science and optical applications. We simulated the polymer melt flow into 3D micro- and nanostructures with Moldflow and Comsol and compared the results to real iso- and variothermal injection moulding trials below, at and above the transition temperature of the polymer. By adjusting the heat transfer coefficient and the transition temperature in the simulation, it was possible to achieve good correlation with experimental findings at different processing conditions (mould temperature, injection velocity) for two polymers, namely polymethylmethacrylate and an amorphous polyamide. The macroscopic model can be scaled down in volume and number of elements to save computational time in microstructure simulation and, first and foremost, to enable nanostructure simulation, as long as local boundary conditions such as flow front speed are transferred correctly. The heat transfer boundary condition used in Moldflow was further evaluated in Comsol. Results showed that the heat transfer coefficient needs to be increased compared to macroscopic moulding in order to represent interfacial polymer/mould effects correctly. The transition temperature is most important in the packing phase of variothermal injection moulding.
Safak, Ilgar; List, Jeffrey; Warner, John C.; Schwab, William C.
2017-01-01
Mechanisms relating offshore geologic framework to shoreline evolution are determined through geologic investigations, oceanographic deployments, and numerical modeling. Analysis of shoreline positions from the past 50 years along Fire Island, New York, a 50 km long barrier island, demonstrates a persistent undulating shape along the western half of the island. The shelf offshore of these persistent undulations is characterized with shoreface-connected sand ridges (SFCR) of a similar alongshore length scale, leading to a hypothesis that the ridges control the shoreline shape through the modification of flow. To evaluate this, a hydrodynamic model was configured to start with the US East Coast and scale down to resolve the Fire Island nearshore. The model was validated using observations along western Fire Island and buoy data, and used to compute waves, currents and sediment fluxes. To isolate the influence of the SFCR on the generation of the persistent shoreline shape, simulations were performed with a linearized nearshore bathymetry to remove alongshore transport gradients associated with shoreline shape. The model accurately predicts the scale and variation of the alongshore transport that would generate the persistent shoreline undulations. In one location, however, the ridge crest connects to the nearshore and leads to an offshore-directed transport that produces a difference in the shoreline shape. This qualitatively supports the hypothesized effect of cross-shore fluxes on coastal evolution. Alongshore flows in the nearshore during a representative storm are driven by wave breaking, vortex force, advection and pressure gradient, all of which are affected by the SFCR.
NASA Astrophysics Data System (ADS)
Tulich, S. N.
2015-06-01
This paper describes a general method for the treatment of convective momentum transport (CMT) in large-scale dynamical solvers that use a cyclic, two-dimensional (2-D) cloud-resolving model (CRM) as a "superparameterization" of convective-system-scale processes. The approach is similar in concept to traditional parameterizations of CMT, but with the distinction that both the scalar transport and the diagnostic pressure gradient force are calculated using information provided by the 2-D CRM. No assumptions are therefore made concerning the role of convection-induced pressure gradient forces in producing up- or down-gradient CMT. The proposed method is evaluated using a new superparameterized version of the Weather Research and Forecast model (SP-WRF) that is described herein for the first time. Results show that the net effect of the formulation is to modestly reduce the overall strength of the large-scale circulation, via "cumulus friction." This statement holds true for idealized simulations of two types of mesoscale convective systems, a squall line and a tropical cyclone, in addition to real-world global simulations of seasonal (1 June to 31 August) climate. In the case of the latter, inclusion of the formulation is found to improve the depiction of key synoptic modes of tropical wave variability, in addition to some aspects of the simulated time-mean climate. The choice of CRM orientation is also found to have an important effect on the simulated time-mean climate, apparently due to changes in the explicit representation of widespread shallow convective regions.
Multiscale Modelling of the 2011 Tohoku Tsunami with Fluidity: Coastal Inundation and Run-up.
NASA Astrophysics Data System (ADS)
Hill, J.; Martin-Short, R.; Piggott, M. D.; Candy, A. S.
2014-12-01
Tsunami-induced flooding represents one of the most dangerous natural hazards to coastal communities around the world, as exemplified by the Tohoku tsunami of March 2011. In order to further understand this hazard and to design appropriate mitigation it is necessary to develop versatile, accurate software capable of simulating large-scale tsunami propagation and interaction with coastal geomorphology on a local scale. One such software package is Fluidity, an open-source, finite-element, multiscale code that is capable of solving the fully three-dimensional Navier-Stokes equations on unstructured meshes. Such meshes are significantly better at representing complex coastline shapes than structured meshes and have the advantage of allowing variation in element size across a domain. Furthermore, Fluidity incorporates a novel wetting and drying algorithm, which enables accurate, efficient simulation of tsunami run-up over complex, multiscale topography. Fluidity has previously been demonstrated to accurately simulate the 2011 Tohoku tsunami (Oishi et al. 2013), but its wetting and drying facility has not yet been tested on a geographical scale. This study makes use of Fluidity to simulate the 2011 Tohoku tsunami and its interaction with Japan's eastern shoreline, including coastal flooding. The results are validated against observations made by survey teams, aerial photographs, and previous modelling efforts in order to evaluate Fluidity's current capabilities and suggest methods of future improvement. The code is shown to perform well at simulating flooding along the topographically complex Tohoku coast of Japan, with major deviations between model and observation arising mainly from limitations imposed by bathymetry resolution, which could be improved in future.
In theory, Fluidity is capable of full multiscale tsunami modelling, thus enabling researchers to understand both wave propagation across ocean basins and flooding of coastal landscapes down to interaction with individual defence structures. This makes the code an exciting candidate for use in future studies aiming to investigate tsunami risk elsewhere in the world. Oishi, Y. et al. Three-dimensional tsunami propagation simulations using an unstructured mesh finite element model. J. Geophys. Res. [Solid Earth] 118, 2998-3018 (2013).
Scaling Relations and Self-Similarity of 3-Dimensional Reynolds-Averaged Navier-Stokes Equations.
Ercan, Ali; Kavvas, M Levent
2017-07-25
Scaling conditions to achieve self-similar solutions of the 3-Dimensional (3D) Reynolds-Averaged Navier-Stokes Equations, posed as an initial and boundary value problem, are obtained by utilizing the Lie group of point scaling transformations. By means of an open-source Navier-Stokes solver and the derived self-similarity conditions, we demonstrate self-similarity in the time variation of the flow dynamics for a rigid-lid cavity problem under both up-scaled and down-scaled domains. The strength of the proposed approach lies in its ability to consider the underlying flow dynamics not only through the governing equations but also through the initial and boundary conditions, allowing perfect self-similarity to be obtained across different time and space scales. The proposed methodology can be a valuable tool for obtaining self-similar flow dynamics at a preferred level of detail, which can be represented by initial and boundary value problems under specific assumptions.
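For intuition, the point-scaling approach looks for one-parameter transformations under which the governing equations are form-invariant. A generic sketch for the incompressible momentum equation follows; the exponent names and the restriction to the unaveraged equation are illustrative choices here, not the paper's notation. Under

```latex
\bar{t} = \lambda^{\alpha_t}\, t, \qquad
\bar{x}_i = \lambda^{\alpha_x}\, x_i, \qquad
\bar{u}_i = \lambda^{\alpha_x - \alpha_t}\, u_i, \qquad
\overline{(p/\rho)} = \lambda^{2(\alpha_x - \alpha_t)}\, (p/\rho), \qquad
\bar{\nu} = \lambda^{2\alpha_x - \alpha_t}\, \nu ,
```

every term of

```latex
\frac{\partial u_i}{\partial t}
+ u_j \frac{\partial u_i}{\partial x_j}
= -\frac{\partial (p/\rho)}{\partial x_i}
+ \nu \frac{\partial^2 u_i}{\partial x_j \partial x_j}
```

picks up the common factor \(\lambda^{\alpha_x - 2\alpha_t}\), so the equation is invariant under the transformation. Self-similarity of a specific flow additionally requires, as the abstract emphasizes, that the initial and boundary conditions scale with the same exponents.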
Nanoscale Fresnel coherent diffraction imaging tomography using ptychography.
Peterson, I; Abbey, B; Putkunz, C T; Vine, D J; van Riessen, G A; Cadenazzi, G A; Balaur, E; Ryan, R; Quiney, H M; McNulty, I; Peele, A G; Nugent, K A
2012-10-22
We demonstrate Fresnel Coherent Diffractive Imaging (FCDI) tomography in the X-ray regime. The method uses incident X-ray illumination with known curvature in combination with ptychography to overcome existing problems in diffraction imaging. The resulting tomographic reconstruction represents a 3D map of the specimen's complex refractive index at nanoscale resolution. We use this technique to image a lithographically fabricated glass capillary, in which features down to 70 nm are clearly resolved.
New signals for vector-like down-type quark in U(1) of E_6
NASA Astrophysics Data System (ADS)
Das, Kasinath; Li, Tianjun; Nandi, S.; Rai, Santosh Kumar
2018-01-01
We consider the pair production of vector-like down-type quarks in an E_6 motivated model, where each of the produced down-type vector-like quarks decays into an ordinary Standard Model light quark and a singlet scalar. Both the vector-like quark and the singlet scalar appear naturally in the E_6 model with masses at the TeV scale for a favorable choice of symmetry breaking pattern. We focus on the non-standard decay of the vector-like quark and the new scalar, which decays to two photons or two gluons. We analyze the signal for vector-like quark production in the 2γ + ≥2j channel and show how the scalar and vector-like quark masses can be determined at the Large Hadron Collider.
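Mass determination in a diphoton channel rests on the invariant mass of the photon pair. A generic reconstruction sketch, not the authors' analysis code; the example kinematics are invented:

```python
import math

def diphoton_mass(e1, p1, e2, p2):
    """Invariant mass of two photons given (E, (px, py, pz)) for each:
    m^2 = (E1 + E2)^2 - |p1 + p2|^2.
    For massless photons this equals 2*E1*E2*(1 - cos(theta_12))."""
    e = e1 + e2
    px = p1[0] + p2[0]
    py = p1[1] + p2[1]
    pz = p1[2] + p2[2]
    m2 = e * e - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

# Two back-to-back 250 GeV photons reconstruct a 500 GeV scalar candidate:
m = diphoton_mass(250.0, (250.0, 0.0, 0.0),
                  250.0, (-250.0, 0.0, 0.0))
# m -> 500.0
```

In practice the scalar mass would appear as a peak in the m distribution over many events, and the vector-like quark mass from combining the scalar candidate with a jet.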
Steady-State Modeling of Modular Multilevel Converter Under Unbalanced Grid Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xiaojie M.; Wang, Zhiqiang; Liu, Bo
This paper presents a steady-state model of the modular multilevel converter (MMC) for second-order phase voltage ripple prediction under unbalanced conditions, taking the impact of negative-sequence current control into account. From the steady-state model, a circular relationship is found among current and voltage quantities, which can be used to evaluate the magnitudes and initial phase angles of different circulating current components. Moreover, in order to calculate the circulating current in a point-to-point MMC-based HVdc system under unbalanced grid conditions, the derivation of the equivalent dc impedance of an MMC is discussed as well. According to the dc impedance model, an MMC inverter can be represented as a series-connected R-L-C branch, with its equivalent resistance and capacitance directly related to the circulating current control parameters. Experimental results from a scaled-down three-phase MMC system under an emulated single-line-to-ground fault are provided to support the theoretical analysis and derived model. In conclusion, this new model provides insight into the impact of different control schemes on the fault characteristics and improves the understanding of the operation of the MMC under unbalanced conditions.
2016-11-16
Blueprints of the no-scale multiverse at the LHC
NASA Astrophysics Data System (ADS)
Li, Tianjun; Maxin, James A.; Nanopoulos, Dimitri V.; Walker, Joel W.
2011-09-01
We present a contemporary perspective on the String Landscape and the Multiverse of plausible string, M- and F-theory vacua. In contrast to traditional statistical classifications and capitulation to the anthropic principle, we seek only to demonstrate the existence of a nonzero probability for a universe matching our own observed physics within the solution ensemble. We argue for the importance of No-Scale Supergravity as an essential common underpinning for the spontaneous emergence of a cosmologically flat universe from the quantum “nothingness.” Concretely, we continue to probe the phenomenology of a specific model which is testable at the LHC and Tevatron. Dubbed No-Scale F-SU(5), it represents the intersection of the Flipped SU(5) Grand Unified Theory (GUT) with extra TeV-Scale vectorlike multiplets derived out of F-theory, and the dynamics of No-Scale Supergravity, which in turn imply a very restricted set of high-energy boundary conditions. By secondarily minimizing the minimum of the scalar Higgs potential, we dynamically determine the ratio tanβ≃15-20 of up- to down-type Higgs vacuum expectation values (VEVs), the universal gaugino boundary mass M1/2≃450GeV, and, consequently, also the total magnitude of the GUT-scale Higgs VEVs, while constraining the low-energy standard model gauge couplings. In particular, this local minimum minimorum lies within the previously described “golden strip,” satisfying all current experimental constraints. We emphasize, however, that the overarching goal is not to establish why our own particular universe possesses any number of specific characteristics, but rather to tease out what generic principles might govern the superset of all possible universes.
Traveling waves in a spatially-distributed Wilson-Cowan model of cortex: From fronts to pulses
NASA Astrophysics Data System (ADS)
Harris, Jeremy D.; Ermentrout, Bard
2018-04-01
Wave propagation in excitable media has been studied in various biological, chemical, and physical systems. Waves are among the most common evoked and spontaneous organized activity seen in cortical networks. In this paper, we study traveling fronts and pulses in a spatially-extended version of the Wilson-Cowan equations, a neural firing rate model of sensory cortex having two population types: excitatory and inhibitory. We are primarily interested in the case when the local or space-clamped dynamics has three fixed points: (1) a stable down state; (2) a saddle point whose stable manifold acts as a threshold for firing; (3) an up state whose stability depends on the time scale of the inhibition. In the case when the up state is stable, we look for wave fronts, which transition the medium from the down state to the up state; when the up state is unstable, we are interested in pulses, a transient increase in firing that returns to the down state. We explore the behavior of these waves as the time and space scales of the inhibitory population vary. Some interesting findings include bistability between a traveling front and a pulse, fronts that join the down state to an oscillation or spatiotemporal pattern, and pulses that go through an oscillatory instability.
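The three-fixed-point (down/saddle/up) regime can be sketched with the space-clamped Wilson-Cowan equations; the sigmoid gain, coupling weights, and thresholds below are illustrative values chosen to produce bistability, not parameters from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(u0, v0, w_ee=12.0, w_ei=2.0, w_ie=10.0,
             theta_e=5.0, theta_i=5.0, tau=1.0, dt=0.05, steps=2000):
    """Euler-integrate the space-clamped Wilson-Cowan equations
         du/dt      = -u + S(w_ee*u - w_ei*v - theta_e)
         tau dv/dt  = -v + S(w_ie*u - theta_i)
    With these (illustrative) parameters the system is bistable:
    a stable down state near u ~ 0, a stable up state near u ~ 1,
    and a saddle between them that acts as a firing threshold."""
    u, v = u0, v0
    for _ in range(steps):
        du = -u + sigmoid(w_ee * u - w_ei * v - theta_e)
        dv = (-v + sigmoid(w_ie * u - theta_i)) / tau
        u, v = u + dt * du, v + dt * dv
    return u, v

u_down, _ = simulate(0.0, 0.0)   # subthreshold start: settles into the down state
u_up, _   = simulate(0.9, 0.9)   # suprathreshold start: settles into the up state
```

In the spatially-extended model studied in the paper, coupling such local units along a line with distance-dependent connectivity is what turns this bistability into propagating fronts, and slow or spatially broad inhibition into pulses.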
Edgeworth streaming model for redshift space distortions
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael; Haugg, Thomas
2015-09-01
We derive the Edgeworth streaming model (ESM) for the redshift space correlation function starting from an arbitrary distribution function for biased tracers of dark matter by considering its two-point statistics, and show that it reduces to the Gaussian streaming model (GSM) when neglecting non-Gaussianities. We test the accuracy of the GSM and ESM independent of perturbation theory using the Horizon Run 2 N-body halo catalog. While the monopole of the redshift space halo correlation function is well described by the GSM, higher multipoles improve upon including the leading-order non-Gaussian correction in the ESM: the GSM quadrupole breaks down on scales below 30 Mpc/h, whereas the ESM stays accurate to 2% within statistical errors down to 10 Mpc/h. To predict the scale-dependent functions entering the streaming model we employ convolution Lagrangian perturbation theory (CLPT) based on the dust model and local Lagrangian bias. Since dark matter halos carry an intrinsic length scale given by their Lagrangian radius, we extend CLPT to the coarse-grained dust model and consider two different smoothing approaches operating in Eulerian and Lagrangian space, respectively. The coarse graining in Eulerian space features modified fluid dynamics different from dust, while the coarse graining in Lagrangian space is performed in the initial conditions with subsequent single-streaming dust dynamics, implemented by smoothing the initial power spectrum in the spirit of the truncated Zel'dovich approximation. Finally, we compare the predictions of the different coarse-grained models for the streaming model ingredients to N-body measurements and comment on the proper choice of both the tracer distribution function and the smoothing scale. Since the perturbative methods we considered are not yet accurate enough on small scales, the GSM is sufficient when applied to perturbation theory.
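For reference, the Gaussian streaming model that the ESM generalizes pairs the real-space correlation function with the first two pairwise velocity moments. In the form standard in the GSM literature (notation as commonly used, not transcribed from this paper):

```latex
1 + \xi_s(s_\perp, s_\parallel) = \int_{-\infty}^{\infty}
\frac{\mathrm{d}y \, \bigl[ 1 + \xi(r) \bigr]}
     {\sqrt{2\pi\,\sigma_{12}^2(r,\mu)}}
\exp\!\left\{ -\frac{\bigl[\, s_\parallel - y - \mu\, v_{12}(r) \,\bigr]^2}
                    {2\,\sigma_{12}^2(r,\mu)} \right\},
\qquad r^2 = s_\perp^2 + y^2, \quad \mu = y/r ,
```

where \(v_{12}\) is the mean pairwise infall velocity and \(\sigma_{12}\) the pairwise velocity dispersion. The ESM of the paper replaces the Gaussian kernel by an Edgeworth expansion, whose leading correction brings in the third velocity moment (skewness) and is what improves the quadrupole on small scales.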
Evolution of Particle Size Distributions in Fragmentation Over Time
NASA Astrophysics Data System (ADS)
Charalambous, C. A.; Pike, W. T.
2013-12-01
We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship, to a final comminuted powder. Models for the fragmentation of particles have been developed separately in two main disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986), based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs in discrete steps: during each fragmentation event, the particles repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture.
The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model to the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.
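The repeated-trial fracture process described above can be sketched as a discrete Monte Carlo cascade. The fragment count, fragmentation probability, and size limit below are illustrative choices, not values from the model:

```python
import random

def fragmentation_event(masses, p_frac=0.6, n_pieces=8, m_limit=1e-6):
    """One fragmentation event: every particle above the size limit
    fractures with fixed probability p_frac into n_pieces equal
    fragments (a Turcotte-style cascade down the length scales);
    otherwise it survives the event intact."""
    out = []
    for m in masses:
        if m > m_limit and random.random() < p_frac:
            out.extend([m / n_pieces] * n_pieces)
        else:
            out.append(m)
    return out

random.seed(0)
population = [1.0]              # start from a single block of unit mass
for _ in range(6):              # six discrete maturation steps
    population = fragmentation_event(population)

total_mass = sum(population)    # mass is conserved by construction
```

Iterating many such events and histogramming particle sizes reproduces the qualitative picture in the abstract: a power-law-like size distribution develops, then the size limit truncates further comminution.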
NASA Technical Reports Server (NTRS)
Tucker, Warren A.; Comisarow, Paul
1946-01-01
During the first flight tests of the Republic XP-84 airplane it was discovered that there was a complete lack of stall warning. A short series of development tests of a suitable stall-warning device for the airplane was therefore made on a 1/5-scale model in the Langley 300 MPH 7- by 10-foot tunnel. Two similar stall-warning devices, each designed to produce early root stall which would provide a buffet warning, were tested. It appeared that either device would give a satisfactory buffet warning in the flaps-up configuration, at the cost of an increase of 8 or 10 miles per hour in minimum speed. Although neither device seemed to give a true buffet warning in the flaps-down configuration, it appeared that either device would improve the flaps-down stalling characteristics by lessening the severity of the stall and by maintaining better control at the stall. The flaps-down minimum-speed increase caused by the devices was only 1 or 2 miles per hour.
A Prototype Windflow Modeling System for Tactical Weather Support Operations.
1987-05-07
…a system of numerical models that covers the mesoscale from horizontal scales of 200 km down to 5 km. Veazey and Tabor [21] used the windflow model to… [21] Veazey, D.R., and Tabor, P.A. (1985) Meteorological sensor density on the battlefield, Workshop on…, Long Beach, Calif.
NASA Astrophysics Data System (ADS)
Travesset-Baro, Oriol; Jover, Eric; Rosas-Casals, Marti
2016-04-01
This paper analyses long-term energy security at the national scale using the Long-range Energy Alternatives Planning System (LEAP) modelling tool. It builds the LEAP Andorra model, which forecasts energy demand and supply for the Principality of Andorra to 2050. The model has a general bottom-up structure, in which energy demand is driven by the technological composition of the sectors of the economy. The technological model is combined with a top-down econometric model to take macroeconomic trends into account. The model presented in this paper provides an initial estimate of energy demand in Andorra disaggregated across all sectors (residential, transport, secondary, tertiary, and public administration) and charts a baseline scenario based on historical trends. Additional scenarios representing different policy strategies are built to explore the country's potential energy savings and the feasibility of achieving the Intended Nationally Determined Contribution (INDC) submitted to the UN in April 2015. Under this climate agreement Andorra intends to reduce net greenhouse gas (GHG) emissions by 37% by 2030, as compared to a business-as-usual scenario. In addition, current and future energy security is analysed under the baseline and de-carbonization scenarios. Energy security issues are assessed in LEAP with an integrated vision, going beyond the classic perspective of security of supply and coming closer to sustainability's integrative vision. The scenario results show the benefits of climate policies in terms of national energy security, and the difficulties Andorra faces in achieving the de-carbonization target by 2030.
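The bottom-up accounting that tools like LEAP implement reduces, in its simplest form, to summing activity times energy intensity over sectors. A minimal sketch; every sector, activity level, and intensity below is a made-up placeholder, not a figure from the LEAP Andorra model:

```python
# Hypothetical activity levels and energy intensities per sector;
# none of these figures come from the LEAP Andorra model itself.
sectors = {
    # sector: (activity, intensity in MWh per unit of activity per year)
    "residential": (35_000, 8.0),    # dwellings
    "transport":   (60_000, 2.5),    # vehicles
    "tertiary":    (4_500, 40.0),    # establishments
}

def total_demand_mwh(sectors):
    """Bottom-up demand: sum of activity x intensity over sectors."""
    return sum(activity * intensity for activity, intensity in sectors.values())

demand = total_demand_mwh(sectors)
```

Scenario analysis then amounts to projecting the activity and intensity inputs forward under different policy assumptions and comparing the resulting demand trajectories.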
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate the transport distances after which we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
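The distinction between the regimes can be illustrated with synthetic increments. Below, increments are generated as an AR(1) process, an order-1 Markov walk with purely linear correlation (the "correlated PTRW" regime), and the lag-k autocorrelation is estimated; for a true order-1 linear process r_k is approximately r_1^k. The parameter values and the Gaussian innovations are illustrative assumptions, not properties of the experimental data set:

```python
import random

def ar1_increments(n, phi=0.7, seed=42):
    """Particle-position increments with order-1 Markov dependence:
    dx_t = phi * dx_{t-1} + eps_t, eps_t ~ N(0, 1)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def autocorr(xs, lag):
    """Sample lag-k autocorrelation of a sequence."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs)
    cov = sum((xs[t] - m) * (xs[t + lag] - m) for t in range(n - lag))
    return cov / var

dx = ar1_increments(20_000)
r1, r2 = autocorr(dx, 1), autocorr(dx, 2)
# For an order-1 linear Markov process r2 stays close to r1**2;
# a significant excess would signal higher-order (or nonlinear) dependence,
# which is what the copula analysis in the study is designed to detect.
```

Real pore-scale increments, per the abstract, violate this picture in two ways the sketch cannot show: the dependence is non-Gaussian (not captured by linear correlation alone) and may extend beyond order 1, which is exactly what the copula-based model is built to quantify.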
NASA Astrophysics Data System (ADS)
Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.
2016-04-01
The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.
U.S.A. National Surface Rock Density Map - Part 2
NASA Astrophysics Data System (ADS)
Winester, D.
2016-12-01
A map of surface rock densities over the USA has been developed by the NOAA National Geodetic Survey (NGS) as part of its Gravity for the Redefinition of the American Vertical Datum (GRAV-D) Program. GRAV-D is part of an international effort to generate a North American gravimetric geoid for use as the vertical datum reference surface. As part of the modeling process, it is necessary to eliminate from the observed gravity data the topographic and density effects of all masses above the geoid. However, the long-standing tradition in geoid modeling, which is to use an average rock density (e.g. 2.67 g/cm3), does not adequately represent the variety of lithologies in the USA. The U.S. Geological Survey has assembled a downloadable set of surface geologic formation maps (typically 1:100,000 to 1:500,000 scale in NAD27) in GIS format. The lithologies were assigned densities typical of their rock type (Part 1), and this variety of densities was then rasterized and averaged over one-arc-minute areas. All were then transformed into the WGS84 datum. Thin layers of alluvium and some water bodies (interpreted to be less than 40 m thick) have been ignored in deference to the underlying rocks. Deep alluvial basins have not been removed, since they represent a significant fraction of the local mass. The initial assumption for modeling densities will be that the surface rock densities extend down to the geoid. If this results in poor modeling, variable lithologies with depth can be attempted. Initial modeling will use elevations from the SRTM DEM. A map of CONUS densities is presented (denser lithologies are shown brighter). While a visual map at this scale does show detailed features, digital versions are available upon request. Also presented are some pitfalls of using source GIS maps digitized from variable reference sources, including the infamous `state line faults.'
Ascarrunz, F G; Kisley, M A; Flach, K A; Hamilton, R W; MacGregor, R J
1995-07-01
This paper applies a general mathematical system for characterizing and scaling functional connectivity and information flow across the diffuse (EC) and discrete (DG) input junctions to the CA3 hippocampus. Both gross connectivity and coordinated multiunit informational firing patterns are quantitatively characterized in terms of 32 defining parameters interrelated by 17 equations, and then scaled down according to rules for uniformly proportional scaling and for partial representation. The diffuse EC-CA3 junction is shown to be uniformly scalable with realistic representation of both essential spatiotemporal cooperativity and coordinated firing patterns down to populations of a few hundred neurons. Scaling of the discrete DG-CA3 junction can be effected with a two-step process, which necessarily deviates from uniform proportionality but nonetheless produces a valuable and readily interpretable reduced model, also utilizing a few hundred neurons in the receiving population. Partial representation produces a reduced model of only a portion of the full network where each model neuron corresponds directly to a biological neuron. The mathematical analysis illustrated here shows that although omissions and distortions are inescapable in such an application, satisfactorily complete and accurate models the size of pattern modules are possible. Finally, the mathematical characterization of these junctions generates a theory which sees the DG as a definer of the fine structure of embedded traces in the hippocampus and entire coordinated patterns of sequences of 14-cell links in CA3 as triggered by the firing of sequences of individual neurons in DG.
NASA Astrophysics Data System (ADS)
Beverly, D.; Speckman, H. N.; Klatt, A. L.; Ewers, B. E.
2016-12-01
Whole-plant hydraulic conductance is now used in many process-based ecohydrological models running at plot to regional scales. Many models, such as Dynamic Global Vegetation Models (DGVMs), predict entire-ecosystem evapotranspiration (ET) based on a single unvarying plant conductance parameter that assumes no variation in plant traits. However, whole-plant conductance varies in space, in time, and with topography. Understanding this variation increases model predictive power for stand- and ecosystem-level estimates of ET, ultimately reducing uncertainty in predictions of the water balance. We hypothesize that whole-plant conductance (Kw) is water limited in up-slope stands, due to water flow paths, and energy limited in down-slope stands, due to shading. We tested this hypothesis in two adjacent stands in the Medicine Bow Mountains of southern Wyoming. Both mixed-conifer stands were south-facing, with the upper stand 300 m above the down-slope stand. Whole-plant conductance was quantified by measuring sap flow (Js) and leaf water potential (WPL) throughout the growing season. To quantify Js, each stand was instrumented with 30 Granier-type sapflow sensors. Leaf water potentials were measured in monthly 48-hour campaigns, sampling every 3 hours. The upper-slope stand exhibited significantly lower Kw (approximately 35% lower in spruce and pine), which decreased throughout the growing season, driven by drying soils and the resulting lower predawn WPL. In contrast, Kw in the down-slope stand peaked in July before decreasing for the rest of the summer. The down-slope stand maintained a consistent predawn WPL until October, reflecting consistent water input from the upper slopes and ground water. Including this topographical variation in whole-plant conductance will increase the predictive power of models simulating evapotranspiration at the watershed scale.
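Whole-plant conductance is commonly estimated from sap flux and the soil-to-leaf water potential drop, with predawn leaf water potential standing in for soil water potential. A minimal sketch of that calculation; the function name, units, and example numbers are hypothetical, not the study's data:

```python
def whole_plant_conductance(sap_flux, psi_predawn, psi_midday):
    """Kw ~ Js / (psi_predawn - psi_midday): sap flux divided by the
    soil-to-leaf water potential gradient, with predawn leaf water
    potential used as a proxy for soil water potential."""
    dpsi = psi_predawn - psi_midday
    if dpsi <= 0:
        raise ValueError("midday potential should be more negative than predawn")
    return sap_flux / dpsi

# Hypothetical values: Js in g m^-2 s^-1, potentials in MPa.
kw = whole_plant_conductance(2.0, -0.5, -1.5)  # -> 2.0 g m^-2 s^-1 MPa^-1
```

Under this formulation, drying soils lower (more negative) predawn WPL and shrink the driving gradient's reference point, which is how the up-slope decline in Kw over the season manifests in the data.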
Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?
Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...
2016-10-20
Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. The scale dependence of vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.
Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?
DOE Office of Scientific and Technical Information (OSTI.GOV)
NASA Astrophysics Data System (ADS)
Acton, W. Joe F.; Schallhart, Simon; Langford, Ben; Valach, Amy; Rantala, Pekka; Fares, Silvano; Carriero, Giulia; Tillmann, Ralf; Tomlinson, Sam J.; Dragosits, Ulrike; Gianelle, Damiano; Hewitt, C. Nicholas; Nemitz, Eiko
2016-06-01
This paper reports the fluxes and mixing ratios of biogenically emitted volatile organic compounds (BVOCs) 4 m above a mixed oak and hornbeam forest in northern Italy. Fluxes of methanol, acetaldehyde, isoprene, methyl vinyl ketone + methacrolein, methyl ethyl ketone and monoterpenes were obtained using both a proton-transfer-reaction mass spectrometer (PTR-MS) and a proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) together with the methods of virtual disjunct eddy covariance (using PTR-MS) and eddy covariance (using PTR-ToF-MS). Isoprene was the dominant emitted compound with a mean daytime flux of 1.9 mg m^-2 h^-1. Mixing ratios, recorded 4 m above the canopy, were dominated by methanol with a mean value of 6.2 ppbv over the 28-day measurement period. Comparison of isoprene fluxes calculated using the PTR-MS and PTR-ToF-MS showed very good agreement, while comparison of the monoterpene fluxes suggested a slight overestimation of the flux by the PTR-MS. A basal isoprene emission rate for the forest of 1.7 mg m^-2 h^-1 was calculated using the Model of Emissions of Gases and Aerosols from Nature (MEGAN) isoprene emission algorithms (Guenther et al., 2006). A detailed tree-species distribution map for the site enabled the leaf-level emissions of isoprene and monoterpenes recorded using gas-chromatography mass spectrometry (GC-MS) to be scaled up to produce a bottom-up canopy-scale flux. This was compared with the top-down canopy-scale flux obtained by measurements. For monoterpenes, the two estimates were closely correlated, and this correlation improved when the plant-species composition in the individual flux footprint was taken into account. However, the bottom-up approach significantly underestimated the isoprene flux compared with the top-down measurements, suggesting that the leaf-level measurements were not representative of actual emission rates.
Sparticle spectroscopy of the minimal SO(10) model
Fukuyama, Takeshi; Okada, Nobuchika; Tran, Hieu Minh
2017-02-14
Here, the supersymmetric (SUSY) minimal SO(10) model is a well-motivated grand unified theory, where the Standard Model (SM) fermions have Yukawa couplings with only one 10-plet and one $\overline{126}$-plet Higgs field, and it is highly non-trivial whether the realistic quark and lepton mass matrices can be reproduced in this context. It has been known that the best fit for all the SM fermion mass matrices is achieved by a vacuum expectation value of the $\overline{126}$-plet Higgs field being at the intermediate scale of around O(10^13) GeV. Under the presence of the SO(10) symmetry breaking at the intermediate scale, the successful SM gauge coupling unification is at risk and likely to be spoiled. Recently, it has been shown that the low-energy fermion mass matrices, except for the down-quark mass predicted to be too low, are very well fitted without the intermediate scale. In order to resolve the too-low down-quark mass while keeping the other fittings intact, we consider SUSY threshold corrections to reproduce the right down-quark mass. It turns out that this requires flavor-dependent soft parameters. Motivated by this fact, we calculate particle mass spectra at low energies with flavor-dependent sfermion masses at the grand unification scale. We present a benchmark particle mass spectrum which satisfies a variety of phenomenological constraints, in particular the observed SM-like Higgs boson mass of around 125 GeV and the relic abundance of the neutralino dark matter, as well as the experimental result of the muon anomalous magnetic moment. In the resultant mass spectrum, the sleptons in the first and second generations, the bino, and the winos are all light, and this scenario can be tested at the LHC Run-2 in the near future.
Mora-Castilla, Sergio; To, Cuong; Vaezeslami, Soheila; Morey, Robert; Srinivasan, Srimeenakshi; Dumdie, Jennifer N; Cook-Andersen, Heidi; Jenkins, Joby; Laurent, Louise C
2016-08-01
As the cost of next-generation sequencing has decreased, library preparation costs have become a more significant proportion of the total cost, especially for high-throughput applications such as single-cell RNA profiling. Here, we have applied novel technologies to scale down reaction volumes for library preparation. Our system consisted of in vitro differentiated human embryonic stem cells representing two stages of pancreatic differentiation, for which we prepared multiple biological and technical replicates. We used the Fluidigm (San Francisco, CA) C1 single-cell Autoprep System for single-cell complementary DNA (cDNA) generation and an enzyme-based tagmentation system (Nextera XT; Illumina, San Diego, CA) with a nanoliter liquid handler (mosquito HTS; TTP Labtech, Royston, UK) for library preparation, reducing the reaction volume down to 2 µL and using as little as 20 pg of input cDNA. The resulting sequencing data were bioinformatically analyzed and correlated among the different library reaction volumes. Our results showed that decreasing the reaction volume did not interfere with the quality or the reproducibility of the sequencing data, and the transcriptional data from the scaled-down libraries allowed us to distinguish between single cells. Thus, we have developed a process to enable efficient and cost-effective high-throughput single-cell transcriptome sequencing. © 2016 Society for Laboratory Automation and Screening.
Evaluating short-term hydro-meteorological fluxes using GRACE-derived water storage changes
NASA Astrophysics Data System (ADS)
Eicker, A.; Jensen, L.; Springer, A.; Kusche, J.
2017-12-01
Atmospheric and terrestrial water budgets, which represent important boundary conditions for both climate modeling and hydrological studies, are linked by evapotranspiration (E) and precipitation (P). These fields are provided by numerical weather prediction models and atmospheric reanalyses such as ERA-Interim and MERRA-Land; yet, in particular the quality of E is still not well evaluated. Via the terrestrial water budget equation, water storage changes derived from products of the Gravity Recovery and Climate Experiment (GRACE) mission, combined with runoff (R) data can be used to assess the realism of atmospheric models. In this contribution we will investigate the closure of the water balance for short-term fluxes, i.e. the agreement of GRACE water storage changes with P-E-R flux time series from different (global and regional) atmospheric reanalyses, land surface models, as well as observation-based data sets. Missing river runoff observations will be extrapolated using the calibrated rainfall-runoff model GR2M. We will perform a global analysis and will additionally focus on selected river basins in West Africa. The investigations will be carried out for various temporal scales, focusing on short-term fluxes down to daily variations to be detected in daily GRACE time series.
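The terrestrial water balance used above to assess the atmospheric models reduces to a simple closure test: the GRACE-derived storage change should match P − E − R. A minimal sketch, with purely illustrative flux values rather than numbers from the study:

```python
# Hedged sketch: terrestrial water budget closure, dS/dt = P - E - R.
# All numbers are illustrative monthly basin averages in mm/month,
# not values from the GRACE analysis described above.

def budget_residual(precip, evap, runoff, storage_change):
    """Residual of the water balance; a value near zero means closure."""
    return storage_change - (precip - evap - runoff)

# Example: P = 120, E = 70, R = 30 implies dS = 20 mm/month if closed.
res = budget_residual(120.0, 70.0, 30.0, 20.0)
```

A nonzero residual over a basin then points at errors in one of the flux products (often E, the least-observed term) or in the storage estimate.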
Anomalous properties of the acoustic excitations in glasses on the mesoscopic length scale.
Monaco, Giulio; Mossa, Stefano
2009-10-06
The low-temperature thermal properties of dielectric crystals are governed by acoustic excitations with large wavelengths that are well described by plane waves. This is the Debye model, which rests on the assumption that the medium is an elastic continuum, holds true for acoustic wavelengths large on the microscopic scale fixed by the interatomic spacing, and gradually breaks down on approaching it. Glasses are characterized as well by universal low-temperature thermal properties that are, however, anomalous with respect to those of the corresponding crystalline phases. Related universal anomalies also appear in the low-frequency vibrational density of states and, despite a longstanding debate, remain poorly understood. By using molecular dynamics simulations of a model monatomic glass of extremely large size, we show that in glasses the structural disorder undermines the Debye model in a subtle way: The elastic continuum approximation for the acoustic excitations breaks down abruptly on the mesoscopic, medium-range-order length scale of approximately 10 interatomic spacings, where it still works well for the corresponding crystalline systems. On this scale, the sound velocity shows a marked reduction with respect to the macroscopic value. This reduction turns out to be closely related to the universal excess over the Debye model prediction found in glasses at frequencies of approximately 1 THz in the vibrational density of states or at temperatures of approximately 10 K in the specific heat.
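As context for the excess found at ~1 THz, the Debye prediction against which the glassy anomalies are measured can be stated compactly; a single effective sound velocity v is assumed here for the three acoustic branches:

```latex
% Debye vibrational density of states (per unit volume), valid while
% the medium behaves as an elastic continuum:
g(\omega) = \frac{3\,\omega^{2}}{2\pi^{2} v^{3}}, \qquad \omega < \omega_{D},
% which yields the familiar low-temperature specific heat
C_V \propto T^{3}.
```

The reduction in sound velocity on the mesoscopic scale reported above lowers the effective v, which inflates g(ω) relative to the macroscopic Debye estimate.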
Le Deunff, Erwan; Malagoli, Philippe
2014-01-01
Background The top-down analysis of nitrate influx isotherms through the Enzyme-Substrate interpretation has not withstood recent molecular and histochemical analyses of nitrate transporters. Indeed, at least four families of nitrate transporters operating at both high and/or low external nitrate concentrations, and which are located in series and/or parallel in the different cellular layers of the mature root, are involved in nitrate uptake. Accordingly, the top-down analysis of the root catalytic structure for ion transport from the Enzyme-Substrate interpretation of nitrate influx isotherms is inadequate. Moreover, the use of the Enzyme-Substrate velocity equation as a single reference in agronomic models is not suitable in its formalism to account for variations in N uptake under fluctuating environmental conditions. Therefore, a conceptual paradigm shift is required to improve the mechanistic modelling of N uptake in agronomic models. Scope An alternative formalism, the Flow-Force theory, was proposed in the 1970s to describe ion isotherms based upon biophysical ‘flows and forces’ relationships of non-equilibrium thermodynamics. This interpretation describes, with macroscopic parameters, the patterns of N uptake provided by a biological system such as roots. In contrast to the Enzyme-Substrate interpretation, this approach does not claim to represent molecular characteristics. Here it is shown that it is possible to combine the Flow-Force formalism with polynomial responses of nitrate influx rate induced by climatic and in planta factors in relation to nitrate availability. Conclusions Application of the Flow-Force formalism allows nitrate uptake to be modelled in a more realistic manner, and allows scaling-up in time and space of the regulation of nitrate uptake across the plant growth cycle. PMID:25425406
A grid of MHD models for stellar mass loss and spin-down rates of solar analogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, O.; Drake, J. J.
2014-03-01
Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B^2. We also find that the mass loss rates obtained using our grid models drop much faster with increasing rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength retrieves the well-known Skumanich decline of angular velocity with time, Ω_* ∝ t^(-1/2), if the large-scale poloidal magnetic field scales with rotation rate as B_p ∝ Ω_*^2.
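The Skumanich relation quoted above, Ω_* ∝ t^(-1/2), can be sketched as a one-line scaling normalized to the present-day Sun. The solar reference values below are standard round numbers, not outputs of the MHD grid:

```python
# Hedged sketch of the Skumanich spin-down law, Omega_* ∝ t^(-1/2),
# normalized to the present-day Sun. Reference values are approximate
# textbook numbers, not results from the model grid described above.

SUN_AGE_GYR = 4.6       # approximate solar age in Gyr
SUN_OMEGA = 2.9e-6      # approximate solar angular velocity, rad/s

def omega_skumanich(age_gyr):
    """Angular velocity (rad/s) of a solar analog at a given age."""
    return SUN_OMEGA * (SUN_AGE_GYR / age_gyr) ** 0.5

# A star a quarter of the Sun's age rotates about twice as fast.
ratio = omega_skumanich(SUN_AGE_GYR / 4.0) / SUN_OMEGA
```

The same power law run forward recovers the gradual rotational braking that the angular-momentum-loss scaling in the abstract reproduces.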
Comparing estimates of EMEP MSC-W and UFORE models in air pollutant reduction by urban trees.
Guidolotti, Gabriele; Salviato, Michele; Calfapietra, Carlo
2016-10-01
There is growing interest in identifying and quantifying the benefits provided by the presence of trees in the urban environment in order to improve environmental quality in cities. However, evaluating and estimating plant efficiency in removing atmospheric pollutants is rather complicated, because of the high number of factors involved and the difficulty of estimating the effect of the interactions between the different components. In this study, the EMEP MSC-W model was adapted to scale down to tree level, allowing its application to an industrial-urban green area in Northern Italy. Moreover, the annual outputs were compared with the outputs of UFORE (nowadays i-Tree), a leading model for urban forest applications. Although the EMEP MSC-W model and UFORE are semi-empirical models designed for different applications, the comparison, based on O3, NO2 and PM10 removal, showed good agreement in the estimates and highlighted how the down-scaling methodology presented in this study may offer significant opportunities for further development.
NASA Astrophysics Data System (ADS)
Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.
2012-12-01
The Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) aircraft program spanned the dry to wet and wet to dry transition seasons in November 2008 & May 2009 respectively. It resulted in ~150 vertical profiles covering the Brazilian Amazon Basin (BAB). With the data we attempt to estimate a carbon budget for the BAB, to determine if regional aircraft experiments can provide strong constraints for a budget, and to compare inversion frameworks when optimizing flux estimates. We use a LPDM to integrate satellite-, aircraft-, & surface-data with mesoscale meteorological fields to link bottom-up and top-down models to provide constraints and error bounds for regional fluxes. The Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by meteorological fields from BRAMS, ECMWF, and WRF are coupled to a biosphere model, the Vegetation Photosynthesis Respiration Model (VPRM), to determine regional CO2 fluxes for the BAB. The VPRM is a prognostic biosphere model driven by MODIS 8-day EVI and LSWI indices along with shortwave radiation and temperature from tower measurements and mesoscale meteorological data. VPRM parameters are tuned using eddy flux tower data from the Large-Scale Biosphere Atmosphere experiment. VPRM computes hourly CO2 fluxes by calculating Gross Ecosystem Exchange (GEE) and Respiration (R) for 8 different vegetation types. The VPRM fluxes are scaled up to the BAB by using time-averaged drivers (shortwave radiation & temperature) from high-temporal resolution runs of BRAMS, ECMWF, and WRF and vegetation maps from SYNMAP and IGBP2007. Shortwave radiation from each mesoscale model is validated using surface data and output from GL 1.2, a global radiation model based on GOES 8 visible imagery. The vegetation maps are updated to 2008 and 2009 using landuse scenarios modeled by Sim Amazonia 2 and Sim Brazil. A priori fluxes modeled by STILT-VPRM are optimized using data from BARCA, eddy covariance sites, and flask measurements. 
The aircraft mixing ratios are applied as a top-down constraint in Maximum Likelihood Estimation (MLE) and Bayesian inversion frameworks that solve for parameters controlling the flux. Posterior parameter estimates are used to estimate the carbon budget of the BAB. Preliminary results show that the STILT-VPRM model simulates the net emission of CO2 during both transition periods reasonably well. There is significant enhancement from biomass burning during the November 2008 profiles and some from fossil fuel combustion during the May 2009 flights. ΔCO/ΔCO2 emission ratios are used in combination with continuous observations of CO to remove the CO2 contributions from biomass burning and fossil fuel combustion from the observed CO2 measurements, resulting in better agreement of observed and modeled aircraft data. Comparing column calculations for each of the vertical profiles shows our model represents the variability in the diurnal cycle. The high-altitude CO2 values from above 3500 m are similar to the lateral boundary conditions from CarbonTracker 2010 and GEOS-Chem, indicating little influence from surface fluxes at these levels. The MLE inversion provides scaling factors for GEE and R for each of the 8 vegetation types, and a Bayesian inversion is being conducted. Our initial inversion results suggest the BAB represents a small net source of CO2 during both of the BARCA intensives.
NASA Astrophysics Data System (ADS)
Selker, J. S.; Higgins, C. W.; Tai, L. C. M.
2014-12-01
The linkage between large-scale manipulation of land cover and resulting patterns of precipitation has been a long-standing problem. For example, what is the impact of the Columbia River project's 2,700 km^2 irrigated area (applying approximately 300 m^3/s) on the down-wind continental rainfall in North America? Similarly, can we identify places on earth where planting large-scale runoff-reducing forests might increase down-wind precipitation, thus leading to magnified carbon capture? In this talk we present an analytical Lagrangian framework for the prediction of incremental increases in down-wind precipitation due to land surface evaporation and transpiration. We compare these predictions to recently published rainfall recycling values from the literature. Focus is on the Columbia basin (Pacific Northwest of the USA), with extensions to East Africa. We further explore the monitoring requirements for verification of any such impact, and assess whether the planned TAHMO African Observatory (TAHMO.org) has the potential to document any such processes over the 25-year and 1,000 km scales.
Simulation Modeling of Software Development Processes
NASA Technical Reports Server (NTRS)
Calavaro, G. F.; Basili, V. R.; Iazeolla, G.
1996-01-01
A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.
An AgMIP framework for improved agricultural representation in integrated assessment models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruane, Alex C.; Rosenzweig, Cynthia; Asseng, Senthold
Integrated assessment models (IAMs) hold great potential to assess how future agricultural systems will be shaped by socioeconomic development, technological innovation, and changing climate conditions. By coupling with climate and crop model emulators, IAMs have the potential to resolve important agricultural feedback loops and identify unintended consequences of socioeconomic development for agricultural systems. Here we propose a framework to develop robust representation of agricultural system responses within IAMs, linking downstream applications with model development and the coordinated evaluation of key climate responses from local to global scales. We survey the strengths and weaknesses of protocol-based assessments linked to the Agricultural Model Intercomparison and Improvement Project (AgMIP), each utilizing multiple sites and models to evaluate crop response to core climate changes including shifts in carbon dioxide concentration, temperature, and water availability, with some studies further exploring how climate responses are affected by nitrogen levels and adaptation in farm systems. Site-based studies with carefully calibrated models encompass the largest number of activities; however they are limited in their ability to capture the full range of global agricultural system diversity. Representative site networks provide more targeted response information than broadly-sampled networks, with limitations stemming from difficulties in covering the diversity of farming systems. Global gridded crop models provide comprehensive coverage, although with large challenges for calibration and quality control of inputs. Diversity in climate responses underscores that crop model emulators must distinguish between regions and farming system while recognizing model uncertainty.
Finally, to bridge the gap between bottom-up and top-down approaches we recommend the deployment of a hybrid climate response system employing a representative network of sites to bias-correct comprehensive gridded simulations, opening the door to accelerated development and a broad range of applications.
An AgMIP framework for improved agricultural representation in integrated assessment models
NASA Astrophysics Data System (ADS)
Ruane, Alex C.; Rosenzweig, Cynthia; Asseng, Senthold; Boote, Kenneth J.; Elliott, Joshua; Ewert, Frank; Jones, James W.; Martre, Pierre; McDermid, Sonali P.; Müller, Christoph; Snyder, Abigail; Thorburn, Peter J.
2017-12-01
Joseph, Adrian; Goldrick, Stephen; Mollet, Michael; Turner, Richard; Bender, Jean; Gruber, David; Farid, Suzanne S; Titchener-Hooker, Nigel
2017-05-01
Continuous disk-stack centrifugation is typically used for the removal of cells and cellular debris from mammalian cell culture broths at manufacturing-scale. The use of scale-down methods to characterise disk-stack centrifugation performance enables substantial reductions in material requirements and allows a much wider design space to be tested than is currently possible at pilot-scale. The process of scaling down centrifugation has historically been challenging due to the difficulties in mimicking the Energy Dissipation Rates (EDRs) in typical machines. This paper describes an alternative and easy-to-assemble automated capillary-based methodology to generate levels of EDRs consistent with those found in a continuous disk-stack centrifuge. Variations in EDR were achieved through changes in capillary internal diameter and the flow rate of operation through the capillary. The EDRs found to match the levels of shear in the feed zone of a pilot-scale centrifuge using the experimental method developed in this paper (2.4×10^5 W/kg) are consistent with those obtained through previously published computational fluid dynamic (CFD) studies (2.0×10^5 W/kg). Furthermore, this methodology can be incorporated into existing scale-down methods to model the process performance of continuous disk-stack centrifuges. This was demonstrated through the characterisation of culture hold time, culture temperature and EDRs on centrate quality. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
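The idea of tuning EDR via capillary diameter and flow rate can be illustrated with a mean dissipation estimate; this sketch assumes laminar Hagen-Poiseuille flow, which may differ from the correlation or CFD approach used in the published method:

```python
import math

# Hedged sketch: mean energy dissipation rate (EDR) in a capillary,
# assuming laminar Hagen-Poiseuille flow (an assumption; the actual
# study may rely on CFD or turbulent correlations). The pressure-drop
# power per unit fluid mass reduces to:
#   eps = dP*Q / (rho*V) = 512 * mu * Q^2 / (rho * pi^2 * d^6)

def capillary_edr(q_m3s, d_m, mu=1e-3, rho=1000.0):
    """Mean EDR (W/kg) for volumetric flow q through internal diameter d."""
    return 512.0 * mu * q_m3s ** 2 / (rho * math.pi ** 2 * d_m ** 6)

# EDR scales as d^-6: halving the capillary diameter at fixed flow
# raises the mean dissipation rate 64-fold.
ratio = capillary_edr(1e-8, 0.5e-4) / capillary_edr(1e-8, 1e-4)
```

The steep d^-6 dependence is why small changes in capillary bore span the wide EDR range needed to mimic the centrifuge feed zone.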
An implementation of discrete electron transport models for gold in the Geant4 simulation toolkit
NASA Astrophysics Data System (ADS)
Sakata, D.; Incerti, S.; Bordage, M. C.; Lampe, N.; Okada, S.; Emfietzoglou, D.; Kyriakou, I.; Murakami, K.; Sasaki, T.; Tran, H.; Guatelli, S.; Ivantchenko, V. N.
2016-12-01
Gold nanoparticle (GNP) boosted radiation therapy can enhance the biological effectiveness of radiation treatments by increasing the quantity of direct and indirect radiation-induced cellular damage. As the physical effects of GNP boosted radiotherapy occur across energy scales that extend down to 10 eV, Monte Carlo simulations require discrete physics models down to these very low energies in order to avoid underestimating the absorbed dose and secondary particle generation. Discrete physics models for electron transport down to 10 eV have been implemented within the Geant4-DNA low energy extension of Geant4. Such models allow the investigation of GNP effects at the nanoscale. At low energies, the new models have better agreement with experimental data on the backscattering coefficient, and they show similar performance for transmission coefficient data as the Livermore and Penelope models already implemented in Geant4. These new models are applicable in simulations focused on estimating the relative biological effectiveness of radiation in GNP boosted radiotherapy applications with photon and electron radiation sources.
Light Z' in heterotic string standardlike models
NASA Astrophysics Data System (ADS)
Athanasopoulos, P.; Faraggi, A. E.; Mehta, V. M.
2014-05-01
The discovery of the Higgs boson at the LHC supports the hypothesis that the Standard Model provides an effective parametrization of all subatomic experimental data up to the Planck scale. String theory, which provides a viable perturbative approach to quantum gravity, requires for its consistency the existence of additional gauge symmetries beyond the Standard Model. The construction of heterotic string models with a viable light Z' is, however, highly constrained. We outline the construction of standardlike heterotic string models that allow for an additional Abelian gauge symmetry that may remain unbroken down to low scales. We present a string inspired model, consistent with the string constraints.
Chatel, Alex; Kumpalume, Peter; Hoare, Mike
2014-01-01
The processing of harvested E. coli cell broths is examined where the expressed protein product has been released into the extracellular space. Pre-treatment methods such as freeze–thaw, flocculation, and homogenization are studied. The resultant suspensions are characterized in terms of the particle size distribution, sensitivity to shear stress, rheology and solids volume fraction, and, using ultra scale-down methods, the predicted ability to clarify the material using industrial scale continuous flow centrifugation. A key finding was the potential of flocculation methods both to aid the recovery of the particles and to cause the selective precipitation of soluble contaminants. While the flocculated material is severely affected by process shear stress, the impact on the very fine end of the size distribution is relatively minor and hence the predicted performance was only diminished to a small extent, for example, from 99.9% to 99.7% clarification compared with 95% for autolysate and 65% for homogenate at equivalent centrifugation conditions. The lumped properties as represented by ultra scale-down centrifugation results were correlated with the basic properties affecting sedimentation including particle size distribution, suspension viscosity, and solids volume fraction. Grade efficiency relationships were used to allow for the particle and flow dynamics affecting capture in the centrifuge. The size distribution below a critical diameter dependent on the broth pre-treatment type was shown to be the main determining factor affecting the clarification achieved. Biotechnol. Bioeng. 2014;111: 913–924. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:24284936
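The sedimentation properties correlated above (particle size, viscosity, density difference) enter through the Stokes settling velocity, which can be sketched directly; the parameter values below are illustrative, not measurements from the study:

```python
# Hedged sketch: Stokes terminal settling velocity, the basic
# sedimentation relationship underlying centrifugal clarification:
#   v = d^2 * (rho_p - rho_f) * g / (18 * mu)
# All parameter values in the example are illustrative.

def stokes_velocity(d_m, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity (m/s) of a small spherical particle."""
    return d_m ** 2 * (rho_p - rho_f) * g / (18.0 * mu)

# Settling velocity scales with d^2: a 2x larger particle settles
# 4x faster, which is why the fine end of the size distribution
# dominates the clarification actually achieved.
ratio = (stokes_velocity(2e-6, 1100.0, 1000.0, 1e-3)
         / stokes_velocity(1e-6, 1100.0, 1000.0, 1e-3))
```

In a centrifuge, g is replaced by the local centrifugal acceleration, but the d^2 dependence, and hence the sensitivity to sub-critical fines, is unchanged.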
From Single-Cell Dynamics to Scaling Laws in Oncology
NASA Astrophysics Data System (ADS)
Chignola, Roberto; Sega, Michela; Stella, Sabrina; Vyshemirsky, Vladislav; Milotti, Edoardo
We are developing a biophysical model of tumor biology. We follow a strictly quantitative approach where each step of model development is validated by comparing simulation outputs with experimental data. While this strategy may slow down our advancements, at the same time it provides an invaluable reward: we can trust simulation outputs and use the model to explore territories of cancer biology where current experimental techniques fail. Here, we review our multi-scale biophysical modeling approach and show how a description of cancer at the cellular level has led us to general laws obeyed by both in vitro and in vivo tumors.
NASA Astrophysics Data System (ADS)
Gallego, C.; Costa, A.; Cuerva, A.
2010-09-01
Since wind energy currently can be neither scheduled nor stored at large scale, wind power forecasting has been useful to minimize the impact of wind fluctuations. In particular, short-term forecasting (characterised by prediction horizons from minutes to a few days) is currently required by energy producers (in a daily electricity market context) and TSOs (in order to keep the stability/balance of an electrical system). Within the short-term background, time-series based models (i.e., statistical models) have shown a better performance than NWP models for horizons up to a few hours. These models try to learn and replicate the dynamics shown by the time series of a certain variable. When considering the power output of wind farms, ramp events are usually observed, characterized by a large positive gradient in the time series (ramp-up) or a negative one (ramp-down) during relatively short time periods (a few hours). Ramp events may have many different causes, generally involving several spatial scales, from the large scale (fronts, low pressure systems) down to the local scale (wind turbine shut-down due to high wind speed, yaw misalignment due to fast changes of wind direction). Hence, the output power may show unexpected dynamics during ramp events depending on the underlying processes; consequently, traditional statistical models considering only one dynamic for the whole power time series may be inappropriate. This work proposes a Regime Switching (RS) model based on Artificial Neural Nets (ANN). The RS-ANN model gathers as many ANNs as the different dynamics considered (called regimes); a certain ANN is selected to predict the output power, depending on the current regime. The current regime is updated on-line based on a gradient criterion applied to the past two values of the output power. Three regimes are established, concerning ramp events: ramp-up, ramp-down and no-ramp.
In order to assess the skill of the proposed RS-ANN model, a single-ANN model (without regime classification) is adopted as a reference model. Both models are evaluated in terms of Improvement over Persistence on the Mean Square Error basis (IoP%) when predicting horizons from 1 time-step to 5. The case of a wind farm located in the complex terrain of Alaiz (north of Spain) has been considered. Three years of available power output data with an hourly resolution have been employed: two years for training and validation of the model and the last year for assessing the accuracy. Results showed that the RS-ANN outperformed the single-ANN model for one-step-ahead forecasts: the overall IoP% was up to 8.66% for the RS-ANN model (depending on the gradient criterion selected to trigger the ramp regime) and 6.16% for the single-ANN. However, both models showed similar accuracy for larger horizons. A locally-weighted evaluation during ramp events for one step ahead was also performed. It was found that the IoP% during ramp-up events increased from 17.60% (single-ANN) to 22.25% (RS-ANN); during ramp-down events the improvement was from 18.55% to 19.55%. Three main conclusions are derived from this case study. First, it highlights the importance of statistical models capable of differentiating the several regimes shown by the output power time series, in order to improve forecasting during extreme events like ramps. Second, on-line regime classification based on available power output data did not seem to improve forecasts for horizons beyond one step ahead; taking into account other explanatory variables (local wind measurements, NWP outputs) could lead to a better understanding of ramp events, improving the regime assessment also for further horizons. Third, the RS-ANN model slightly outperformed the single-ANN during ramp-down events.
If further research reinforces this effect, special attention should be paid to understanding the underlying processes during ramp-down events.
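The on-line regime selection described above can be sketched as a simple rule on the gradient of the last two power values. The threshold value and the exact rule are hypothetical stand-ins for the paper's gradient criterion.

```python
def classify_regime(p_prev2, p_prev1, threshold):
    """Assign the current regime from the gradient of the two most recent
    normalized power values (a hypothetical rule mirroring the abstract's
    gradient criterion): ramp-up, ramp-down, or no-ramp."""
    grad = p_prev1 - p_prev2
    if grad > threshold:
        return "ramp-up"
    if grad < -threshold:
        return "ramp-down"
    return "no-ramp"

# with an assumed threshold of 10% of rated power
regime = classify_regime(0.20, 0.45, 0.10)
```

In the RS-ANN scheme, the returned label would select which of the three regime-specific networks issues the next forecast.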
NASA Astrophysics Data System (ADS)
Tangarife, Walter; Tobioka, Kohsaku; Ubaldi, Lorenzo; Volansky, Tomer
2018-02-01
The cosmological relaxation of the electroweak scale has been proposed as a mechanism to address the hierarchy problem of the Standard Model. A field, the relaxion, rolls down its potential and, in doing so, scans the squared mass parameter of the Higgs, relaxing it to a parametrically small value. In this work, we promote the relaxion to an inflaton. We couple it to Abelian gauge bosons, thereby introducing the necessary dissipation mechanism which slows down the field in the last stages. We describe a novel reheating mechanism, which relies on the gauge-boson production leading to strong electromagnetic fields, and proceeds via the vacuum production of electron-positron pairs through the Schwinger effect. We refer to this mechanism as Schwinger reheating. We discuss the cosmological dynamics of the model and the phenomenological constraints from CMB and other experiments. We find that a cutoff close to the Planck scale may be achieved. In its minimal form, the model does not generate sufficient curvature perturbations and additional ingredients, such as a curvaton field, are needed.
Zhang, X.; McGuire, A.D.; Ruess, Roger W.
2006-01-01
A major challenge confronting the scientific community is to understand both patterns of and controls over spatial and temporal variability of carbon exchange between boreal forest ecosystems and the atmosphere. An understanding of the sources of variability of carbon processes at fine scales and how these contribute to uncertainties in estimating carbon fluxes is relevant to representing these processes at coarse scales. To explore some of the challenges and uncertainties in estimating carbon fluxes at fine to coarse scales, we conducted a modeling analysis of canopy foliar maintenance respiration for black spruce ecosystems of Alaska by scaling empirical hourly models of foliar maintenance respiration (Rm) to estimate canopy foliar Rm for individual stands. We used variation in foliar N concentration among stands to develop hourly stand-specific models and then developed an hourly pooled model. An uncertainty analysis identified that the most important parameter affecting estimates of canopy foliar Rm was one that describes Rm at 0 °C per g N, which explained more than 55% of variance in annual estimates of canopy foliar Rm. The comparison of simulated annual canopy foliar Rm identified significant differences between stand-specific and pooled models for each stand. This result indicates that control over foliar N concentration should be considered in models that estimate canopy foliar Rm of black spruce stands across the landscape. In this study, we also temporally scaled the hourly stand-level models to estimate canopy foliar Rm of black spruce stands using mean monthly temperature data. Comparisons of monthly Rm between the hourly and monthly versions of the models indicated that there was very little difference between the estimates of hourly and monthly models, suggesting that hourly models can be aggregated to use monthly input data with little loss of precision.
We conclude that uncertainties in the use of a coarse-scale model for estimating canopy foliar Rm at regional scales depend on uncertainties in representing needle-level respiration and on uncertainties in representing the spatial variability of canopy foliar N across a region. The development of spatial data sets of canopy foliar N represents a major challenge in estimating canopy foliar maintenance respiration at regional scales. © Springer 2006.
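A respiration model of the kind described (a base rate at 0 °C per g N, scaled by temperature) can be sketched as follows. The exponential Q10 temperature response and all parameter values are assumptions for illustration; the study's fitted forms and values may differ.

```python
def foliar_rm(n_grams, temp_c, r0=0.5e-6, q10=2.0):
    """Hourly foliar maintenance respiration (arbitrary units):
    r0 is the respiration rate at 0 deg C per g of foliar N (the key
    parameter identified in the uncertainty analysis; the value here is
    illustrative), scaled by an assumed Q10-type temperature response."""
    return n_grams * r0 * q10 ** (temp_c / 10.0)

rm_0 = foliar_rm(100.0, 0.0)    # canopy with 100 g N at 0 deg C
rm_10 = foliar_rm(100.0, 10.0)  # doubles per 10 deg C when q10 = 2
```

Because Rm is linear in foliar N under this form, stand-to-stand variation in N concentration propagates directly into canopy-scale estimates, which is why pooled and stand-specific models diverge.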
The United States Environmental Protection Agency (USEPA) and National Oceanic and Atmospheric Administration (NOAA) participate in a multi-agency examination of the effects of climate change through the U.S. Climate Change Science Program (CCSP, 2003). The EPA Global Change Rese...
NASA Astrophysics Data System (ADS)
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
2018-01-01
Many hydrological applications, such as flood studies, require the use of long rainfall data at fine time scales varying from daily down to 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events along with adjusting procedures to modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to 1-min time scale. The applicability of the methodology was assessed on a 5-min rainfall record collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process reproduces adequately the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
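The simplest of the adjusting procedures mentioned above is proportional rescaling: synthetic fine-scale depths are multiplied by a common factor so that they sum exactly to the observed coarse-scale total. This is a minimal sketch of that one step, not the full HyetosMinute scheme (which also includes other adjusting variants).

```python
def proportional_adjust(fine_values, coarse_total):
    """Rescale synthetic fine-scale rainfall depths so that they sum
    exactly to the observed coarse-scale total. Zero totals are left
    unchanged (a fully dry period stays dry)."""
    s = sum(fine_values)
    if s == 0:
        return list(fine_values)
    return [v * coarse_total / s for v in fine_values]

# synthetic hourly depths (mm) forced to aggregate to a 24 mm daily value
hourly = proportional_adjust([0.0, 1.2, 3.4, 0.8, 0.6], 24.0)
```

Note that proportional rescaling preserves the relative temporal pattern of the synthetic event (ratios between wet intervals are unchanged) while enforcing consistency with the daily observation.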
Watanabe, Hayafumi; Sano, Yukie; Takayasu, Hideki; Takayasu, Misako
2016-11-01
To elucidate the nontrivial empirical statistical properties of fluctuations of a typical nonsteady time series representing the appearance of words in blogs, we investigated approximately 3×10^{9} Japanese blog articles over a period of six years and analyzed some corresponding mathematical models. First, we introduce a solvable nonsteady extension of the random diffusion model, which can be deduced by modeling the behavior of heterogeneous random bloggers. Next, we deduce theoretical expressions for both the temporal and ensemble fluctuation scalings of this model, and demonstrate that these expressions can reproduce all empirical scalings over eight orders of magnitude. Furthermore, we show that the model can reproduce other statistical properties of time series representing the appearance of words in blogs, such as functional forms of the probability density and correlations in the total number of blogs. As an application, we quantify the abnormality of special nationwide events by measuring the fluctuation scalings of 1771 basic adjectives.
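An ensemble fluctuation scaling of the kind measured here relates the standard deviation of each word's count series to its mean, sigma ~ mean^alpha, with the exponent estimated by log-log regression across many series. The sketch below estimates alpha on synthetic Poisson counts (where alpha = 0.5 is expected); the estimator, not the paper's data or its exact fitting procedure, is what is shown.

```python
import numpy as np

def fluctuation_scaling_exponent(series_matrix):
    """Estimate alpha in sigma ~ mean^alpha by log-log least squares
    across many count time series (rows = words, columns = time)."""
    means = series_matrix.mean(axis=1)
    stds = series_matrix.std(axis=1)
    mask = (means > 0) & (stds > 0)  # log requires positive values
    slope, _ = np.polyfit(np.log(means[mask]), np.log(stds[mask]), 1)
    return slope

# synthetic check: independent Poisson counts give alpha close to 0.5
rng = np.random.default_rng(0)
lams = rng.uniform(5, 500, size=200)
counts = rng.poisson(lams[:, None], size=(200, 1000)).astype(float)
alpha = fluctuation_scaling_exponent(counts)
```

Deviations of the measured alpha from such a Poisson baseline are one way to quantify the "abnormality" of periods dominated by nationwide events.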
2003-06-04
KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach (left) talks to the media about activities that have taken place since the Columbia accident on Feb. 1, 2003. Behind him is a model of the left wing of the orbiter. STS-107 debris recovery and reconstruction operations are winding down. To date, nearly 84,000 pieces of debris have been recovered and sent to KSC. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.
Zeilinger, Katrin; Schreiter, Thomas; Darnell, Malin; Söderdahl, Therese; Lübberstedt, Marc; Dillner, Birgitta; Knobeloch, Daniel; Nüssler, Andreas K; Gerlach, Jörg C; Andersson, Tommy B
2011-05-01
Within the scope of developing an in vitro culture model for pharmacological research on human liver functions, a three-dimensional multicompartment hollow fiber bioreactor proven to function as a clinical extracorporeal liver support system was scaled down in two steps from 800 mL to 8 mL and 2 mL bioreactors. Primary human liver cells cultured over 14 days in 800, 8, or 2 mL bioreactors exhibited comparable time-course profiles for most of the metabolic parameters in the different bioreactor size variants. Major drug-metabolizing cytochrome P450 activities analyzed in the 2 mL bioreactor were preserved over up to 23 days. Immunohistochemical studies revealed tissue-like structures of parenchymal and nonparenchymal cells in the miniaturized bioreactor, indicating physiological reorganization of the cells. Moreover, the canalicular transporters multidrug-resistance-associated protein 2, multidrug-resistance protein 1 (P-glycoprotein), and breast cancer resistance protein showed a similar distribution pattern to that found in human liver tissue. In conclusion, the down-scaled multicompartment hollow fiber technology allows stable maintenance of primary human liver cells and provides an innovative tool for pharmacological and kinetic studies of hepatic functions with small cell numbers.
NASA Astrophysics Data System (ADS)
Goeckede, M.; Michalak, A. M.; Vickers, D.; Turner, D.; Law, B.
2009-04-01
The study presented is embedded within the NACP (North American Carbon Program) West Coast project ORCA2, which aims at determining the regional carbon balance of the US states Oregon, California and Washington. Our work specifically focuses on the effect of disturbance history and climate variability, aiming to improve our understanding of the effects of, e.g., drought stress and stand age on carbon sources and sinks in complex terrain with fine-scale variability in land cover types. The ORCA2 atmospheric inverse modeling approach has been set up to capture flux variability on the regional scale at high temporal and spatial resolution. Atmospheric transport is simulated by coupling the mesoscale model WRF (Weather Research and Forecast) with the STILT (Stochastic Time Inverted Lagrangian Transport) footprint model. This setup allows identifying sources and sinks that influence atmospheric observations with highly resolved mass transport fields and realistic turbulent mixing. Terrestrial biosphere carbon fluxes are simulated at spatial resolutions of up to 1 km and subdaily timesteps, considering effects of ecoregion, land cover type and disturbance regime on the carbon budgets. Our approach assimilates high-precision atmospheric CO2 concentration measurements and eddy-covariance data from several sites throughout the model domain, as well as high-resolution remote sensing products (e.g. LandSat, MODIS) and interpolated surface meteorology (DayMet, SOGS, PRISM). We present top-down modeling results that have been optimized using Bayesian inversion, reflecting the information on regional-scale carbon processes provided by the network of high-precision CO2 observations. We address the level of detail (e.g. spatial and temporal resolution) that can be resolved by top-down modeling on the regional scale, given the uncertainties introduced by various sources of model-data mismatch.
Our results demonstrate the importance of accurate modeling of carbon-water coupling, with the representation of water availability and drought stress playing a dominant role to capture spatially variable CO2 exchange rates in a region characterized by strong climatic gradients.
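The Bayesian inversion step referred to above can be illustrated with the textbook linear update used in atmospheric flux inversions: posterior fluxes are the prior plus a gain applied to the observation mismatch. The tiny system below (2 flux elements, 3 observations) is purely illustrative; the actual ORCA2 setup uses WRF-STILT footprints as the transport operator H.

```python
import numpy as np

def bayesian_inversion(s_prior, Q, H, R, y):
    """Linear Bayesian flux inversion:
    s_hat = s_prior + Q H^T (H Q H^T + R)^-1 (y - H s_prior),
    where Q is the prior flux covariance, H the transport operator
    mapping fluxes to observations, and R the obs error covariance."""
    gain = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
    return s_prior + gain @ (y - H @ s_prior)

s_prior = np.array([1.0, 2.0])                       # prior flux guess
Q = np.eye(2) * 0.5                                  # prior uncertainty
H = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # toy transport
R = np.eye(3) * 0.1                                  # obs error
truth = np.array([1.2, 1.8])
y = H @ truth                                        # synthetic observations
s_hat = bayesian_inversion(s_prior, Q, H, R, y)
```

With observation errors small relative to the prior uncertainty, the posterior moves strongly toward the flux state implied by the data, which is the sense in which the CO2 network "optimizes" the bottom-up fluxes.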
A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations
Miller, Scot M.; Miller, Charles E.; Commane, Roisin; Chang, Rachel Y.-W.; Dinardo, Steven J.; Henderson, John M.; Karion, Anna; Lindaas, Jakob; Melton, Joe R.; Miller, John B.; Sweeney, Colm; Wofsy, Steven C.; Michalak, Anna M.
2016-01-01
Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012–2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May–Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates. 
Lastly, we find that the seasonality of CH4 fluxes varied during 2012–2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not produce obvious changes in total CH4 fluxes from the state. PMID:28066129
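The "simple flux model" described, driven by daily soil temperature and a static wetland-extent map, can be sketched as below. The Q10-type temperature response and all parameter values are assumptions chosen for illustration; the key qualitative feature retained from the abstract is that emissions are not shut off at soil temperatures near 0 °C.

```python
import numpy as np

def ch4_flux(soil_temp_c, wetland_frac, f0=1.0, q10=3.0):
    """Toy CH4 flux (arbitrary units): a static wetland-extent map scaled
    by an assumed Q10-type soil-temperature response. f0 and q10 are
    illustrative, not fitted values. Unlike some process models, fluxes
    continue (rather than shutting off) near 0 deg C."""
    temp = np.asarray(soil_temp_c, dtype=float)
    return f0 * np.asarray(wetland_frac) * q10 ** (temp / 10.0)

flux_cold = ch4_flux(0.5, 0.3)   # near-freezing soils still emit
flux_warm = ch4_flux(10.5, 0.3)
```

A grid of such fluxes, multiplied by transport footprints, is what the geostatistical inversion compares against the CARVE aircraft observations.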
Improving Evapotranspiration Estimates Using Multi-Platform Remote Sensing
NASA Astrophysics Data System (ADS)
Knipper, Kyle; Hogue, Terri; Franz, Kristie; Scott, Russell
2016-04-01
Understanding the linkages between energy and water cycles through evapotranspiration (ET) is uniquely challenging given its dependence on a range of climatological parameters and surface/atmospheric heterogeneity. A number of methods have been developed to estimate ET either from primarily remote-sensing observations, in-situ measurements, or a combination of the two. However, the scale of many of these methods may be too large to provide needed information about the spatial and temporal variability of ET that can occur over regions with acute or chronic land cover change and precipitation driven fluxes. The current study aims to improve the spatial and temporal variability of ET utilizing only satellite-based observations by incorporating a potential evapotranspiration (PET) methodology with satellite-based down-scaled soil moisture estimates in southern Arizona, USA. Initially, soil moisture estimates from AMSR2 and SMOS are downscaled to 1km through a triangular relationship between MODIS land surface temperature (MYD11A1), vegetation indices (MOD13Q1/MYD13Q1), and brightness temperature. Downscaled soil moisture values are then used to scale PET to actual ET (AET) at a daily, 1km resolution. Derived AET estimates are compared to observed flux tower estimates, the North American Land Data Assimilation System (NLDAS) model output (i.e. Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, Mosaic Model, and Noah Model simulations), the Operational Simplified Surface Energy Balance Model (SSEBop), and a calibrated empirical ET model created specifically for the region. Preliminary results indicate a strong increase in correlation when incorporating the downscaling technique to original AMSR2 and SMOS soil moisture values, with the added benefit of being able to decipher small scale heterogeneity in soil moisture (riparian versus desert grassland).
AET results show strong correlations with relatively low error and bias when compared to flux tower estimates. In addition, AET results show improved bias to those reported by SSEBop, with similar correlations and errors when compared to the empirical ET model. Spatial patterns of estimated AET display patterns representative of the basin's elevation and vegetation characteristics, with improved spatial resolution and temporal heterogeneity when compared to previous models.
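The PET-to-AET scaling step can be sketched with a linear soil-moisture stress factor between a wilting threshold and a saturation value. This is one common choice, not necessarily the scaling function used in the study; the threshold values below are hypothetical.

```python
def actual_et(pet, sm, sm_wilt, sm_max):
    """Scale potential ET (mm/day) to actual ET using a linear
    soil-moisture stress factor: zero below the wilting point sm_wilt,
    unstressed (AET = PET) above sm_max, linear in between."""
    if sm <= sm_wilt:
        return 0.0
    if sm >= sm_max:
        return pet
    return pet * (sm - sm_wilt) / (sm_max - sm_wilt)

# illustrative values: 1km downscaled soil moisture of 0.20 m^3/m^3
aet = actual_et(pet=6.0, sm=0.20, sm_wilt=0.10, sm_max=0.30)
```

Because the stress factor is computed from the 1 km downscaled soil moisture, the resulting AET field inherits the fine-scale heterogeneity (riparian versus desert grassland) that the coarse radiometer products cannot resolve.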
A Skew-t space-varying regression model for the spectral analysis of resting state brain activity.
Ismail, Salimah; Sun, Wenqi; Nathoo, Farouk S; Babul, Arif; Moiseev, Alexader; Beg, Mirza Faisal; Virji-Babul, Naznin
2013-08-01
It is known that in many neurological disorders such as Down syndrome, main brain rhythms shift their frequencies slightly, and characterizing the spatial distribution of these shifts is of interest. This article reports on the development of a Skew-t mixed model for the spatial analysis of resting state brain activity in healthy controls and individuals with Down syndrome. Time series of oscillatory brain activity are recorded using magnetoencephalography, and spectral summaries are examined at multiple sensor locations across the scalp. We focus on the mean frequency of the power spectral density, and use space-varying regression to examine associations with age, gender and Down syndrome across several scalp regions. Spatial smoothing priors are incorporated based on a multivariate Markov random field, and the markedly non-Gaussian nature of the spectral response variable is accommodated by the use of a Skew-t distribution. A range of models representing different assumptions on the association structure and response distribution are examined, and we conduct model selection using the deviance information criterion. Our analysis suggests region-specific differences between healthy controls and individuals with Down syndrome, particularly in the left and right temporal regions, and produces smoothed maps indicating the scalp topography of the estimated differences.
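The Skew-t response distribution used here can be written in the Azzalini form: a Student-t density multiplied by a skewing t-CDF factor. The sketch below implements that standard density (assuming the Azzalini parametrization, which may differ in detail from the paper's); setting the slant parameter alpha to zero recovers the symmetric t.

```python
import numpy as np
from scipy import stats

def skew_t_pdf(x, alpha, nu):
    """Azzalini-type Skew-t density:
    f(x) = 2 * t_nu(x) * T_{nu+1}(alpha * x * sqrt((nu+1)/(nu+x^2))),
    where t_nu is the Student-t pdf and T_{nu+1} the t cdf with nu+1 df.
    alpha = 0 gives the ordinary symmetric t; alpha > 0 skews right."""
    x = np.asarray(x, dtype=float)
    w = alpha * x * np.sqrt((nu + 1.0) / (nu + x ** 2))
    return 2.0 * stats.t.pdf(x, df=nu) * stats.t.cdf(w, df=nu + 1)

xs = np.linspace(-3, 3, 7)
sym = skew_t_pdf(xs, alpha=0.0, nu=5.0)   # should match the plain t pdf
```

The heavy tails (controlled by nu) and the slant (alpha) are what let this response distribution absorb the markedly non-Gaussian spectral summaries without distorting the regression coefficients.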
A Protocol for Generating and Exchanging (Genome-Scale) Metabolic Resource Allocation Models.
Reimers, Alexandra-M; Lindhorst, Henning; Waldherr, Steffen
2017-09-06
In this article, we present a protocol for generating a complete (genome-scale) metabolic resource allocation model, as well as a proposal for how to represent such models in the systems biology markup language (SBML). Such models are used to investigate enzyme levels and achievable growth rates in large-scale metabolic networks. Although the idea of metabolic resource allocation studies has been present in the field of systems biology for some years, no guidelines for generating such a model have been published up to now. This paper presents step-by-step instructions for building a (dynamic) resource allocation model, starting with prerequisites such as a genome-scale metabolic reconstruction, through building protein and noncatalytic biomass synthesis reactions and assigning turnover rates for each reaction. In addition, we explain how one can use SBML level 3 in combination with the flux balance constraints and our resource allocation modeling annotation to represent such models.
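The core computation behind such models, finding fluxes and enzyme levels that maximize growth subject to mass balance and a resource budget, can be illustrated as a small linear program. This is a generic two-reaction toy, not the paper's protocol or any specific genome-scale network; the turnover rates k1, k2 and enzyme budget are invented for illustration.

```python
from scipy.optimize import linprog

# Toy metabolic resource allocation LP:
#   maximize growth v2
#   subject to steady state v1 - v2 = 0 (mass balance)
#   and an enzyme capacity budget v1/k1 + v2/k2 <= E_tot,
# where k1, k2 are assumed turnover rates (flux per unit enzyme).
k1, k2, E_tot = 10.0, 5.0, 1.0
c = [0.0, -1.0]                   # linprog minimizes, so negate growth
A_eq = [[1.0, -1.0]]              # v1 = v2
b_eq = [0.0]
A_ub = [[1.0 / k1, 1.0 / k2]]     # enzyme budget row
b_ub = [E_tot]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)], method="highs")
growth = res.x[1]                 # optimal: v = E_tot / (1/k1 + 1/k2)
```

In a genome-scale version, the equality block is the full stoichiometric matrix extended with protein-synthesis reactions, and one budget row per enzyme pool replaces the single capacity constraint here.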
X-ray sources in dwarf galaxies in the Virgo cluster and the nearby field
NASA Astrophysics Data System (ADS)
Papadopoulou, Marina; Phillipps, S.; Young, A. J.
2016-08-01
The extent to which dwarf galaxies represent essentially scaled down versions of giant galaxies is an important question with regard to the formation and evolution of the galaxy population as a whole. Here, we address the specific question of whether dwarf galaxies behave like smaller versions of giants in terms of their X-ray properties. We discuss two samples of around 100 objects each, dwarfs in the Virgo cluster and dwarfs in a large Northern hemisphere area. We find nine dwarfs in each sample with Chandra detections. For the Virgo sample, these are in dwarf elliptical (or dwarf lenticular) galaxies and we assume that these are (mostly) low-mass X-ray binaries (LMXB) [some may be nuclear sources]. We find a detection rate entirely consistent with scaling down from massive ellipticals, viz. about one bright (i.e., LX > 10^38 erg s^-1) LMXB per 5 × 10^9 M⊙ of stars. For the field sample, we find one (known) Seyfert nucleus, in a galaxy which appears to be the lowest mass dwarf with a confirmed X-ray emitting nucleus. The other detections are in star-forming dwarf irregular or blue compact dwarf galaxies and are presumably high-mass X-ray binaries (HMXB). This time, we find a very similar detection rate to that in large late-type galaxies if we scale down by star formation rate, roughly one HMXB for a rate of 0.3 M⊙ per year. Nevertheless, there does seem to be one clear difference, in that the dwarf late-type galaxies with X-ray sources appear strongly biased to very low metallicity systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parresol, Bernard, R.; Scott, Joe, H.; Andreu, Anne
2012-01-01
Currently geospatial fire behavior analyses are performed with an array of fire behavior modeling systems such as FARSITE, FlamMap, and the Large Fire Simulation System. These systems currently require standard or customized surface fire behavior fuel models as inputs that are often assigned through remote sensing information. The ability to handle hundreds or thousands of measured surface fuelbeds representing the fine scale variation in fire behavior on the landscape is constrained in terms of creating compatible custom fire behavior fuel models. In this study, we demonstrate an objective method for taking ecologically complex fuelbeds from inventory observations and converting those into a set of custom fuel models that can be mapped to the original landscape. We use an original set of 629 fuel inventory plots measured on an 80,000 ha contiguous landscape in the upper Atlantic Coastal Plain of the southeastern United States. From models linking stand conditions to component fuel loads, we impute fuelbeds for over 6000 stands. These imputed fuelbeds were then converted to fire behavior parameters under extreme fuel moisture and wind conditions (97th percentile) using the fuel characteristic classification system (FCCS) to estimate surface fire rate of spread, surface fire flame length, shrub layer reaction intensity (heat load), non-woody layer reaction intensity, woody layer reaction intensity, and litter-lichen-moss layer reaction intensity. We performed hierarchical cluster analysis of the stands based on the values of the fire behavior parameters. The resulting 7 clusters were the basis for the development of 7 custom fire behavior fuel models from the cluster centroids that were calibrated against the FCCS point data for wind and fuel moisture. The latter process resulted in calibration against flame length as it was difficult to obtain a simultaneous calibration against both rate of spread and flame length.
The clusters based on FCCS fire behavior parameters represent reasonably identifiable stand conditions, being: (1) pine dominated stands with more litter and down woody debris components than other stands, (2) hardwood and pine stands with no shrubs, (3) hardwood dominated stands with low shrub and high non-woody biomass and high down woody debris, (4) stands with high grass and forb (i.e., non-woody) biomass as well as substantial shrub biomass, (5) stands with both high shrub and litter biomass, (6) pine-mixed hardwood stands with moderate litter biomass and low shrub biomass, and (7) baldcypress-tupelo stands. Models representing these stand clusters generated flame lengths from 0.6 to 2.3 m using a 30 km h^-1 wind speed and fireline intensities of 100-1500 kW m^-1 that are typical within the range of experience on this landscape. The fuel models ranked 1 < 2 < 7 < 5 < 4 < 3 < 6 in terms of both flame length and fireline intensity. The method allows for ecologically complex data to be utilized in order to create a landscape representative of measured fuel conditions and to create models that interface with geospatial fire models.
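The hierarchical clustering step that groups stands by their fire behavior parameters can be sketched as below. The data are synthetic stand-ins for the FCCS outputs (rate of spread, flame length, reaction intensities), with three artificial groups rather than the study's seven; the linkage choice (Ward) is an assumption, as the abstract does not name one.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic "stands": 3 groups of 20, each described by 3 fire behavior
# parameters, with well-separated group means and small within-group spread.
rng = np.random.default_rng(1)
params = np.vstack([rng.normal(loc, 0.1, size=(20, 3))
                    for loc in (0.5, 1.5, 3.0)])

Z = linkage(params, method="ward")               # agglomerative tree
labels = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 clusters
n_clusters = len(set(labels.tolist()))
```

In the study, the centroid of each resulting cluster becomes a custom fuel model, so the cut level directly sets how many fuel models must be calibrated and mapped back to the landscape.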
NASA Astrophysics Data System (ADS)
Dorman, C. E.; Koracin, D.
2002-12-01
The importance of winds in driving the coastal ocean has long been recognized. Pre-World War II literature links wind stress and wind stress curl to coastal ocean responses. Nevertheless, direct measurements plausibly representative of a coastal area are few. Multiple observations on the scale of the simplest mesoscale atmospheric structure, such as the cross-coast variation along a linear coast, are even less frequent. The only wind measurements that we are aware of in a complicated coastal area backed by higher topography are in the MMS-sponsored Santa Barbara Channel/Santa Maria basin study. Taking place from 1994 to the present, this study had an unusually dense array of automated surface meteorological stations: up to 5 meteorological buoys, 4 oil platforms, 2 island stations, and 11 coastal stations within 1 km of the beach. Most of the land stations are maintained by other projects. Only a large, well-funded project backed by an agency with a long-term view could dedicate the resources and effort to filling the mesoscale "holes" and maintaining long-term, remotely located stations. The result of the MMS-funded project is a surface station array sufficiently dense to resolve the along-coast and cross-coast atmospheric mesoscale wind structure. Great temporal and spatial variation is found in the wind, wind stress and the wind stress curl during the extended summer season. The MM5 atmospheric mesoscale model with appropriate boundary layer physics and high-resolution horizontal and vertical grid structure successfully simulates the measured wind field from the large scale down to the lower end of the mesoscale. Atmospheric models without appropriate resolution and boundary layer physics fail to capture significant mesoscale wind features. Satellite microwave wind measurements generally capture the offshore synoptic-scale temporal and spatial structure in twice-a-day snapshots but fail in the crucial, innermost coastal waters and at the diurnal scale.
Validation of a 30m resolution flood hazard model of the conterminous United States
NASA Astrophysics Data System (ADS)
Sampson, C. C.; Wing, O.; Smith, A.; Bates, P. D.; Neal, J. C.
2017-12-01
We present a 30m resolution two-dimensional hydrodynamic model of the entire conterminous US that has been used to simulate continent-wide flood extent for ten return periods. The model uses a highly efficient numerical solution of the shallow water equations to simulate fluvial flooding in catchments down to 50 km2 and pluvial flooding in all catchments. We use the US National Elevation Dataset (NED) to determine topography for the model and the US Army Corps of Engineers National Levee Dataset to explicitly represent known flood defences. Return period flows and rainfall intensities are estimated using regionalized frequency analyses. We validate these simulations against the complete catalogue of Federal Emergency Management Agency (FEMA) Special Flood Hazard Area maps. We also compare the results obtained from the NED-based continental model with results from a 90m resolution global hydraulic model built using SRTM terrain and identical boundary conditions. Where the FEMA Special Flood Hazard Areas are based on high quality local models the NED-based continental scale model attains a Hit Rate of 86% and a Critical Success Index (CSI) of 0.59; both are typical of scores achieved when comparing high quality reach-scale models to observed event data. The NED model also consistently outperformed the coarser SRTM model. The correspondence between the continental model and FEMA improves in temperate areas and for basins above 400 km2. Given typical hydraulic modeling uncertainties in the FEMA maps, it is probable that the continental-scale model can replicate them to within error. The continental model covers the entire continental US, compared to only 61% for FEMA, and also maps flooding in smaller watersheds not included in the FEMA coverage. 
The simulations were performed using computing hardware costing less than $100k, whereas the FEMA flood layers are built from thousands of individual local studies that took several decades to develop at an estimated cost (up to 2013) of $4.5-$7.5bn. The continental model is relatively straightforward to modify and could be re-run under different scenarios, such as climate change. The results show that continental-scale models may now offer sufficient rigor to inform some decision-making needs at far lower cost and with greater coverage than traditional patchwork approaches.
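The Hit Rate and Critical Success Index reported above follow from a binary contingency comparison of the model and benchmark flood extents. A minimal sketch (the function and array names are illustrative, not from the study):

```python
import numpy as np

def flood_skill(model_wet: np.ndarray, bench_wet: np.ndarray):
    """Hit Rate and Critical Success Index from binary flood-extent maps."""
    hits = np.sum(model_wet & bench_wet)           # wet in both maps
    misses = np.sum(~model_wet & bench_wet)        # wet in benchmark only
    false_alarms = np.sum(model_wet & ~bench_wet)  # wet in model only
    hit_rate = hits / (hits + misses)
    csi = hits / (hits + misses + false_alarms)
    return hit_rate, csi

# Toy 2x2 example: one hit, one miss, one false alarm
model = np.array([[True, True], [False, False]])
bench = np.array([[True, False], [True, False]])
hr, csi = flood_skill(model, bench)  # hr = 0.5, csi = 1/3
```

CSI penalizes both under- and over-prediction, which is why it is the stricter of the two scores quoted for the NED-based model.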
NASA Astrophysics Data System (ADS)
Aldakheel, Fadi; Wriggers, Peter; Miehe, Christian
2017-12-01
The modeling of failure in ductile materials must account for complex phenomena at the micro-scale, such as nucleation, growth and coalescence of micro-voids, as well as the final rupture at the macro-scale, as rooted in the work of Gurson (J Eng Mater Technol 99:2-15, 1977). Within a top-down viewpoint, this can be achieved by combining a micro-structure-informed elastic-plastic model for a porous medium with a concept for the modeling of macroscopic crack discontinuities. The modeling of macroscopic cracks can be achieved in a convenient way by recently developed continuum phase field approaches to fracture, which are based on the regularization of sharp crack discontinuities, see Miehe et al. (Comput Methods Appl Mech Eng 294:486-522, 2015). This avoids the use of complex discretization methods for crack discontinuities and can account for complex crack patterns. In this work, we develop a new theoretical and computational framework for the phase field modeling of ductile fracture in conventional elastic-plastic solids under finite strain deformation. It combines modified structures of the Gurson-Tvergaard-Needleman (GTN) type plasticity model outlined in Tvergaard and Needleman (Acta Metall 32:157-169, 1984) and Nahshon and Hutchinson (Eur J Mech A Solids 27:1-17, 2008) with a new evolution equation for the crack phase field. An important aspect of this work is the development of a robust explicit-implicit numerical integration scheme for the highly nonlinear rate equations of the enhanced GTN model, resulting in a strategy with low computational cost. The performance of the formulation is demonstrated by means of some representative examples, including the development of the experimentally observed cup-cone failure mechanism.
High flexibility of DNA on short length scales probed by atomic force microscopy.
Wiggins, Paul A; van der Heijden, Thijn; Moreno-Herrero, Fernando; Spakowitz, Andrew; Phillips, Rob; Widom, Jonathan; Dekker, Cees; Nelson, Philip C
2006-11-01
The mechanics of DNA bending on intermediate length scales (5-100 nm) plays a key role in many cellular processes, and is also important in the fabrication of artificial DNA structures, but previous experimental studies of DNA mechanics have focused on longer length scales than these. We use high-resolution atomic force microscopy on individual DNA molecules to obtain a direct measurement of the bending energy function appropriate for scales down to 5 nm. Our measurements imply that the elastic energy of highly bent DNA conformations is lower than predicted by classical elasticity models such as the worm-like chain (WLC) model. For example, we found that on short length scales, spontaneous large-angle bends are many times more prevalent than predicted by the WLC model. We test our data and model with an interlocking set of consistency checks. Our analysis also shows how our model is compatible with previous experiments, which have sometimes been viewed as confirming the WLC.
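The WLC prediction that the abstract's measurements challenge can be made concrete: for a chain equilibrated in two dimensions (as on a surface imaged by AFM), the tangent-angle deflection over a contour length ℓ is Gaussian with variance ℓ/ξ, where ξ is the persistence length. A sketch of the resulting tail probability for large bends, assuming the standard ξ ≈ 50 nm for DNA (function name is illustrative):

```python
import math

def wlc_tail_prob_2d(theta_min: float, length_nm: float, lp_nm: float = 50.0) -> float:
    """Probability that a 2D worm-like chain bends by more than theta_min
    (radians) over a contour length `length_nm`, given persistence length
    lp_nm. The bend angle is Gaussian with variance length/lp, so the
    two-sided tail is a complementary error function."""
    variance = length_nm / lp_nm
    return math.erfc(theta_min / math.sqrt(2.0 * variance))

# Over 5 nm of DNA the WLC makes a >90-degree bend astronomically unlikely,
# which is why the large-angle bends seen by AFM are so striking.
p = wlc_tail_prob_2d(math.pi / 2, 5.0)
```

Comparing this Gaussian tail against the measured bend-angle histogram is essentially the consistency check the abstract describes.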
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
NASA Astrophysics Data System (ADS)
Jiang, Xiaolong; Zhang, Lijuan; Bai, Yang; Liu, Ying; Liu, Zhengkun; Qiu, Keqiang; Liao, Wei; Zhang, Chuanchao; Yang, Ke; Chen, Jing; Jiang, Yilan; Yuan, Xiaodong
2017-07-01
In this work, we experimentally investigate the surface nano-roughness during the inductively coupled plasma etching of fused silica, and discover a novel bi-stage time evolution of surface nano-morphology. At the beginning, the rms roughness, correlation length and nano-mound dimensions increase linearly and rapidly with etching time. At the second stage, the roughening process slows down dramatically. The switch of evolution stage synchronizes with the morphological change from dual-scale roughness comprising long wavelength underlying surface and superimposed nano-mounds to one scale of nano-mounds. A theoretical model based on surface morphological change is proposed. The key idea is that at the beginning, etched surface is dual-scale, and both larger deposition rate of etch inhibitors and better plasma etching resistance at the surface peaks than surface valleys contribute to the roughness development. After surface morphology transforming into one-scale, the difference of plasma resistance between surface peaks and valleys vanishes, thus the roughening process slows down.
The interactions between vegetation and hydrology in mountainous terrain are difficult to represent in mathematical models. There are at least three primary reasons for this difficulty. First, expanding plot-scale measurements to the watershed scale requires finding the balance...
Downscaling global precipitation for local applications - a case for the Rhine basin
NASA Astrophysics Data System (ADS)
Sperna Weiland, Frederiek; van Verseveld, Willem; Schellekens, Jaap
2017-04-01
Within the EU FP7 project eartH2Observe a global Water Resources Re-analysis (WRR) is being developed. This re-analysis consists of meteorological and hydrological water balance variables with global coverage, spanning the period 1979-2014 at 0.25 degrees resolution (Schellekens et al., 2016). The dataset can be of special interest in regions with limited in-situ data availability, yet for local-scale analysis, particularly in mountainous regions, a resolution of 0.25 degrees may be too coarse and downscaling the data to a higher resolution may be required. A downscaling toolbox has been made that includes spatial downscaling of precipitation based on the global WorldClim dataset, which is available at 1 km resolution as a monthly climatology (Hijmans et al., 2005). The inputs of the downscaling tool are either the global eartH2Observe WRR1 and WRR2 datasets based on the WFDEI correction methodology (Weedon et al., 2014) or the global Multi-Source Weighted-Ensemble Precipitation (MSWEP) dataset (Beck et al., 2016). Here we present a validation of the datasets over the Rhine catchment by means of a distributed hydrological model (wflow, Schellekens et al., 2014) using a number of precipitation scenarios. (1) We start by running the model using the local reference dataset derived by spatial interpolation of gauge observations. Furthermore we use (2) the MSWEP dataset at the native 0.25-degree resolution, followed by (3) MSWEP downscaled with the WorldClim dataset and finally (4) MSWEP downscaled with the local reference dataset. The validation is based on comparison of the modeled river discharges as well as rainfall statistics. We expect that downscaling the MSWEP dataset with the WorldClim data to higher resolution will increase its performance. To test the performance of the downscaling routine we have added a run with MSWEP data downscaled with the local dataset and compare this with the run based on the local dataset itself. - Beck, H. E. et al., 2016.
MSWEP: 3-hourly 0.25° global gridded precipitation (1979-2015) by merging gauge, satellite, and reanalysis data. Hydrol. Earth Syst. Sci. Discuss., doi:10.5194/hess-2016-236, accepted for final publication. - Hijmans, R. J. et al., 2005. Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology 25: 1965-1978. - Schellekens, J. et al., 2016. A global water resources ensemble of hydrological models: the eartH2Observe Tier-1 dataset. Earth Syst. Sci. Data Discuss., doi:10.5194/essd-2016-55, under review. - Schellekens, J. et al., 2014. Rapid setup of hydrological and hydraulic models using OpenStreetMap and the SRTM derived digital elevation model. Environmental Modelling & Software. - Weedon, G. P. et al., 2014. The WFDEI meteorological forcing data set: WATCH Forcing Data methodology applied to ERA-Interim reanalysis data. Water Resources Research, 50, doi:10.1002/2014WR015638.
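A common way to implement the climatology-based downscaling described above is multiplicative: each fine-resolution cell receives the coarse-cell precipitation scaled by the ratio of its high-resolution monthly climatology (e.g. WorldClim) to the coarse-cell climatology mean, which preserves the coarse-cell total. A minimal sketch under that assumption (the toolbox's actual algorithm may differ; names and shapes are illustrative):

```python
import numpy as np

def downscale_cell(coarse_precip: float, fine_clim: np.ndarray) -> np.ndarray:
    """Distribute one coarse-cell precipitation value over its fine sub-cells,
    weighted by a high-resolution monthly climatology. The weights are
    normalized by the sub-cell climatology mean, so the coarse-cell mean
    is conserved (mass-conserving downscaling)."""
    weights = fine_clim / fine_clim.mean()
    return coarse_precip * weights

# Four 1 km sub-cells of one 0.25-degree cell, with a wet-to-dry gradient
clim = np.array([[10.0, 20.0], [30.0, 40.0]])  # climatology, mm/month
fine = downscale_cell(5.0, clim)               # coarse value: 5.0 mm/day
# fine.mean() == 5.0; climatologically wetter sub-cells receive more rain
```

Scenario (4) in the abstract amounts to replacing `fine_clim` with the local reference dataset, which is why it serves as an upper bound on what the routine can achieve.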
NASA Astrophysics Data System (ADS)
Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.
2018-04-01
Realistic simulation of large-scale circulation patterns associated with the El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. The CMIP5 models have been classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimate El Niño-ISM teleconnections and group 3 (G3) models underestimate them, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation over the southeastern TIO and the western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, the unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Furthermore, large-scale circulation anomalies over the Pacific and the ISM region are misrepresented during El Niño years in G3 models.
Too-strong upper-level convergence away from the Indian subcontinent and a too-weak WSNP cyclonic circulation are prominent in most of the G3 models, in which ENSO-ISM teleconnections are underestimated. On the other hand, many G2 models are able to represent most of the large-scale circulation over the Indo-Pacific region associated with El Niño and hence provide more realistic ENSO-ISM teleconnections. Therefore, this study advocates the importance of realistically simulating large-scale circulation patterns during El Niño years in coupled models in order to capture El Niño-monsoon teleconnections well.
High Resolution Regional Climate Modeling for Lebanon, Eastern Mediterranean Coast
NASA Astrophysics Data System (ADS)
Katurji, Marwan; Soltanzadeh, Iman; Kuhnlein, Meike; Zawar-Reza, Peyman
2013-04-01
The Eastern Mediterranean coast consists of Lebanon, Palestine, Syria, Israel and a small part of southern Turkey. The region lies between latitudes 30 degrees N and 40 degrees N, so its climate is affected by westerly propagating wintertime cyclones spinning off mid-latitude troughs (December, January and February), while during summer (June, July and August) the area is strongly affected by the sub-tropical anticyclonic belt resulting from the descending air of the Hadley cell circulation system. The area is considered to be in a transitional zone between tropical and mid-latitude climate regimes, with coastal topography up to 3000 m in elevation (as in the Western Ranges of Lebanon), which emphasizes the complexity of climate variability in this area under future projections of climate change. This research incorporates regional climate numerical simulations, Tropical Rainfall Measuring Mission (TRMM) satellite-derived rainfall data, and surface rain gauge data to evaluate the ability of the Regional Climate Model (RegCM) version 4 to represent both the mean and variance of observed precipitation in the Eastern Mediterranean region, with emphasis on the Lebanese coastal terrain and mountain ranges. The adopted methodology involves dynamically downscaling climate data from reanalysis synoptic files through a double nesting procedure. The retrospective analysis of 13 years at both 50 and 10 km spatial resolution allows for the assessment of the model results both on a climate scale and for specific high-intensity precipitation events. The spatially averaged mean bias error in precipitation rate for the rainy season predicted by the RegCM 50 and 10 km resolution grids was 0.13 and 0.004 mm hr-1 respectively.
When correlating RegCM and TRMM precipitation rates for the domain covering Lebanon's coastal mountains, the root mean square error (RMSE) for the mean quantities over the 13-year period was only 0.03, while the RMSE for the standard deviation was higher by one order of magnitude. Initial results showed good agreement in the spatial variability of precipitation with the satellite-derived data, with improved results for the 10 km grid resolution setup. The results also show a larger uncertainty within RegCM for predicting extreme precipitation events. Future work will investigate the ability of RegCM to simulate these extreme deviations in precipitation. The results from this research can be helpful for the better design of future regional climate downscaling predictions under climate change scenarios.
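The bias and error statistics quoted above are standard paired-field diagnostics. A minimal sketch of how they are computed (the sample values are illustrative, not the study's data):

```python
import numpy as np

def mean_bias_error(model: np.ndarray, obs: np.ndarray) -> float:
    """Spatially averaged mean bias error (model minus observation)."""
    return float(np.mean(model - obs))

def rmse(model: np.ndarray, obs: np.ndarray) -> float:
    """Root mean square error between model and observed fields."""
    return float(np.sqrt(np.mean((model - obs) ** 2)))

m = np.array([1.1, 2.2, 2.9])  # e.g. modeled precipitation rates, mm/hr
o = np.array([1.0, 2.0, 3.0])  # e.g. satellite-derived rates, mm/hr
mbe = mean_bias_error(m, o)    # ~0.067 mm/hr: slight wet bias
err = rmse(m, o)
```

MBE preserves sign and so reveals systematic wet or dry bias, while RMSE aggregates error magnitude; reporting both, as the abstract does, separates the two failure modes.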
NASA Astrophysics Data System (ADS)
Hedelius, J.; Wennberg, P. O.; Wunch, D.; Roehl, C. M.; Podolske, J. R.; Hillyard, P.; Iraci, L. T.
2017-12-01
Greenhouse gas (GHG) emissions from California's South Coast Air Basin (SoCAB) have been studied extensively using a variety of tower, aircraft, remote sensing, emission inventory, and modeling studies. It is impractical to survey GHG fluxes from all urban areas and hot spots to the extent the SoCAB has been studied, but it can serve as a test location for scaling methods globally. We use a combination of remote sensing measurements from ground-based (Total Carbon Column Observing Network, TCCON) and space-based (Orbiting Carbon Observatory-2, OCO-2) sensors in an inversion to obtain the carbon dioxide flux from the SoCAB. We also perform a variety of sensitivity tests to see how the inversion performs under different model parameterizations. Fluxes do not depend significantly on the mixed layer depth, but are sensitive to the model surface layers (<5 m). Carbon dioxide fluxes are larger than those from bottom-up inventories by about 20% and, along with CO, show a significant weekend:weekday effect. Methane fluxes show little weekend change. Results also include flux estimates for sub-regions of the SoCAB. Larger top-down than bottom-up fluxes highlight the need for additional work on both approaches. Higher top-down fluxes could arise from sampling bias or model bias, or may show that bottom-up values underestimate sources. Lessons learned here may help in scaling up inversions to hundreds of urban systems using space-based observations.
NASA Astrophysics Data System (ADS)
Petersen, Marcell Elo; Maar, Marie; Larsen, Janus; Møller, Eva Friis; Hansen, Per Juel
2017-05-01
The aim of the study was to investigate the relative importance of bottom-up and top-down forcing on trophic cascades in the pelagic food web and the implications for water quality indicators (summer phytoplankton biomass and winter nutrients) in relation to management. The 3D ecological model ERGOM was validated and applied in a local set-up of the Kattegat, Denmark, using the off-line Flexsem framework. The model scenarios were conducted by changing the forcing by ±20% for nutrient inputs (bottom-up), for mesozooplankton mortality (top-down), and for both types of forcing combined. The model results showed that cascading effects operated differently depending on the forcing type. In the single-forcing bottom-up scenarios, the cascade directions matched the direction of the forcing. For scenarios involving top-down forcing, there was a skipped-level transmission in the trophic responses that was either attenuated or amplified at different trophic levels. On a seasonal scale, bottom-up forcing showed the strongest response during winter-spring for DIN and Chl a concentrations, whereas top-down forcing had the highest cascade strength during summer for Chl a concentrations and microzooplankton biomass. On an annual basis, the system was more bottom-up than top-down controlled. Microzooplankton was found to play an important role in the pelagic food web as a mediator of nutrient and energy fluxes. This study demonstrated that the best scenario for improved water quality was a combined reduction in nutrient input and mesozooplankton mortality, underscoring the need for integrated management of marine areas exploited by human activities.
Granular flows in constrained geometries
NASA Astrophysics Data System (ADS)
Murthy, Tejas; Viswanathan, Koushik
Confined geometries are widespread in granular processing applications. The deformation and flow fields in such geometries, with non-trivial boundary conditions, determine the resultant mechanical properties of the material (local porosity, density, residual stresses, etc.). We present experimental studies of deformation and plastic flow of a prototypical granular medium in different non-trivial geometries: flat-punch compression, Couette shear flow, and a rigid body sliding past a granular half-space. These geometries represent simplified, scaled-down versions of common industrial configurations such as compaction and dredging. The corresponding granular flows show a rich variety of flow features, representing the entire gamut of material types, from elastic solids (beam buckling) to fluids (vortex formation, boundary layers) and even plastically deforming metals (dead material zone, pile-up). The effect of changing particle-level properties (e.g., shape, size, density) on the observed flows is also explicitly demonstrated. Non-smooth contact dynamics particle simulations are shown to reproduce some of the observed flow features quantitatively. These results showcase some central challenges facing continuum-scale constitutive theories for dynamic granular flows.
Simulations of NLC formation using a microphysical model driven by three-dimensional dynamics
NASA Astrophysics Data System (ADS)
Kirsch, Annekatrin; Becker, Erich; Rapp, Markus; Megner, Linda; Wilms, Henrike
2014-05-01
Noctilucent clouds (NLCs) are an optical phenomenon occurring in the polar summer mesopause region. These clouds have been known since the late 19th century. Current physical understanding of NLCs is based on numerous observational and theoretical studies, in recent years especially observations from satellites and by lidars from the ground. Theoretical studies based on numerical models that simulate NLCs with the underlying microphysical processes are uncommon. To date, no three-dimensional numerical simulations of NLCs exist that take all relevant dynamical scales into account, i.e., from the planetary scale down to gravity waves and turbulence. Rather, modeling is usually restricted to certain flow regimes. In this study we make a more rigorous attempt and simulate NLC formation in the environment of the general circulation of the mesopause region by explicitly including gravity-wave motions. For this purpose we couple the Community Aerosol and Radiation Model for Atmospheres (CARMA) to gravity-wave-resolving dynamical fields simulated beforehand with the Kuehlungsborn Mechanistic Circulation Model (KMCM). In our case, the KMCM is run with a horizontal resolution of T120, which corresponds to a minimum horizontal wavelength of 350 km. This restriction causes the resolved gravity waves to be somewhat biased toward larger scales. The simulated general circulation is dynamically controlled by these waves in a self-consistent fashion and provides realistic temperatures and wind fields for July conditions. Assuming a water vapor mixing ratio profile in agreement with current observations results in reasonable supersaturations of up to 100. In a first step, CARMA is applied to a horizontal section covering the Northern Hemisphere. The vertical resolution is 120 levels, ranging from 72 to 101 km. In this paper we present initial results of this coupled dynamical-microphysical model, focusing on the interaction of waves and turbulent diffusion with NLC microphysics.
Examining Changes to the Madden-Julian Oscillation in a Warmer Climate Using CMIP5 Models
NASA Astrophysics Data System (ADS)
Rushley, Stephanie
Five models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) that reasonably represent the Madden-Julian Oscillation (MJO) are used to examine the response of the MJO to greenhouse-gas-induced warming. Changes in the MJO's amplitude, zonal scale, and phase speed are examined using daily-mean precipitation during boreal winter (November to April), when the MJO is strongest. The MJO precipitation variance increases with the tropical mean surface temperature. However, westward-moving waves of the same temporal and spatial scales increase at about the same rate, suggesting that the maintenance mechanism for the MJO does not change with warming. On the other hand, a robust increase in the phase speed of the MJO is found, at a rate of 5-12% per degree of surface warming. This robust increase in MJO phase speed is examined using the linear moisture wave theory of Adames and Kim (2016). In this theory, the MJO phase speed is determined by the horizontal moisture gradient in the lower troposphere, the gross dry stability, the convective moisture adjustment timescale, and the zonal wavenumber of the MJO. All CMIP5 models examined show an increase in the horizontal humidity gradient, the gross dry stability and the convective moisture adjustment timescale, while exhibiting a decrease in the zonal wavenumber of the MJO. The increase in the horizontal humidity gradient and the zonal scale of the MJO acts to increase the speed of the MJO by enhancing the horizontal moisture advection associated with the MJO, while the gross dry stability and the convective moisture adjustment timescale act to slow down the MJO by damping the horizontal moisture advection process. In all the models, the combined effects of the four key parameters act to speed up the MJO, matching the calculated phase speed changes with warming in the models.
NASA Technical Reports Server (NTRS)
Johnson, J. D.; Braddock, W. F.; Praharaj, S. C.
1975-01-01
A force test of a scale model of the Space Shuttle Solid Rocket Booster was conducted in a trisonic wind tunnel. The model was tested with such protuberances as a camera capsule, electrical tunnel, attach rings, aft separation rockets, ET attachment structure, and hold-down struts. The model was also tested with the nozzle at gimbal angles of 0, 2.5, and 5 degrees. The influence of a unique heat shield configuration was also determined. Some photographs of model installations in the tunnel were taken and are included. Schlieren photography was utilized for several angles of attack.
McBride, Sebastian; Huelse, Martin; Lee, Mark
2013-01-01
Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
NASA Technical Reports Server (NTRS)
Barna, P. S.
1996-01-01
Numerous tests were performed on the original Acoustic Quiet Flow Facility Three-Dimensional Model Tunnel, scaled down from the full-scale plans. Results of tests performed on the original scale model tunnel were reported in April 1995, which clearly showed that this model was lacking in performance. Subsequently this scale model was modified to attempt to possibly improve the tunnel performance. The modifications included: (a) redesigned diffuser; (b) addition of a collector; (c) addition of a Nozzle-Diffuser; (d) changes in location of vent-air. Tests performed on the modified tunnel showed a marked improvement in performance amounting to a nominal increase of pressure recovery in the diffuser from 34 percent to 54 percent. Results obtained in the tests have wider application. They may also be applied to other tunnels operating with an open test section not necessarily having similar geometry as the model under consideration.
Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.
2016-01-01
The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design. PMID:27109208
Understanding multi-scale structural evolution in granular systems through gMEMS
NASA Astrophysics Data System (ADS)
Walker, David M.; Tordesillas, Antoinette
2013-06-01
We show how the rheological response of a material to applied loads can be systematically coded, analyzed, and succinctly summarized according to an individual grain's properties (e.g. kinematics). Each grain is considered as its own smart sensor, akin to microelectromechanical systems (e.g. gyroscopes, accelerometers), capable of recognizing its evolving role within self-organizing building-block structures (e.g. contact cycles and force chains). A symbolic time series is used to represent each grain's participation in such self-assembled building blocks, and a complex network summarizing its interrelationship with other grains is constructed. In particular, relationships between grain time series are determined according to the information-theoretic Hamming distance or the metric Euclidean distance. We then use topological distance to find network communities, enabling groups of grains at remote physical distances in the material to share a classification. In essence, grains with similar structural and functional roles at different scales are identified together. This taxonomy distills the dissipative structural rearrangements of grains down to their essential features and thus provides pointers for objective physics-based internal-variable formalisms used in the construction of robust predictive continuum models.
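The symbolic-series-to-network pipeline described in this abstract can be illustrated with a toy sketch; the symbols, grain series, and threshold below are invented for illustration and are not taken from the study:

```python
import itertools

def hamming(a, b):
    """Number of time steps at which two grains' symbolic roles differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Hypothetical symbolic series: at each time step a grain is coded
# 'F' (in a force chain), 'C' (in a contact cycle) or '-' (neither).
grains = {
    "g1": "FFFC--",
    "g2": "FFFC-C",
    "g3": "--CCFF",
    "g4": "--CCFF",
}

# Build a "functional similarity" network: link grains whose symbolic
# series differ at no more than a threshold number of time steps.
threshold = 1
edges = {(u, v)
         for u, v in itertools.combinations(grains, 2)
         if hamming(grains[u], grains[v]) <= threshold}
print(sorted(edges))  # [('g1', 'g2'), ('g3', 'g4')]
```

Communities of the resulting network (here the pairs g1-g2 and g3-g4) group grains with similar functional histories even if they sit far apart in the material.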
Linear analysis of the Richtmyer-Meshkov instability in shock-flame interactions
NASA Astrophysics Data System (ADS)
Massa, L.; Jha, P.
2012-05-01
Shock-flame interactions enhance supersonic mixing and detonation formation. Therefore, their analysis is important to explosion safety, internal combustion engine performance, and supersonic combustor design. The fundamental process at the basis of the interaction is the Richtmyer-Meshkov instability supported by the density difference between burnt and fresh mixtures. In the present study we analyze the effect of reactivity on the Richtmyer-Meshkov instability with particular emphasis on combustion lengths that typify the scaling between perturbation growth and induction. The results of the present linear analysis study show that reactivity changes the perturbation growth rate by developing a pressure gradient at the flame surface. The baroclinic torque based on the density gradient across the flame acts to slow down the instability growth of high wave-number perturbations. A gasdynamic flame representation leads to the definition of a Peclet number representing the scaling between perturbation and thermal diffusion lengths within the flame. Peclet number effects on perturbation growth are observed to be marginal. The gasdynamic model also considers a finite flame Mach number that supports a separation between flame and contact discontinuity. Such a separation destabilizes the interface growth by augmenting the tangential shear.
Mesoscopic model of actin-based propulsion.
Zhu, Jie; Mogilner, Alex
2012-01-01
Two theoretical models dominate current understanding of actin-based propulsion: the microscopic polymerization ratchet model predicts that growing and writhing actin filaments generate forces and movements, while the macroscopic elastic propulsion model suggests that deformation and stress of the growing actin gel are responsible for the propulsion. We examine both experimentally and computationally the 2D movement of ellipsoidal beads propelled by actin tails and show that neither of the two models can explain the observed bistability of the orientation of the beads. To explain the data, we develop a 2D hybrid mesoscopic model by reconciling these two models such that individual actin filaments, undergoing nucleation, elongation, attachment, detachment and capping, are embedded into the boundary of a node-spring viscoelastic network representing the macroscopic actin gel. Stochastic simulations of this 'in silico' actin network show that the combined effects of the macroscopic elastic deformation and microscopic ratchets can explain the observed bistable orientation of the actin-propelled ellipsoidal beads. To test the theory further, we analyze the observed distribution of the curvatures of the trajectories and show that the hybrid model's predictions fit the data. Finally, we demonstrate that the model can explain both concave-up and concave-down force-velocity relations for growing actin networks, depending on the characteristic time scale and network recoil. To summarize, we propose that both microscopic polymerization ratchets and macroscopic stresses of the deformable actin network are responsible for force and movement generation.
Mesoscopic Model — Advanced Simulation of Microforming Processes
NASA Astrophysics Data System (ADS)
Geißdörfer, Stefan; Engel, Ulf; Geiger, Manfred
2007-04-01
Continued miniaturization in many fields of forming technology implies the need for a better understanding of the effects occurring while scaling down from conventional macroscopic scale to microscale. At microscale, the material can no longer be regarded as a homogeneous continuum because of the presence of only a few grains in the deformation zone. This leads to a change in the material behaviour, resulting, among other effects, in a large scatter of forming results. A correlation between the integral flow stress of the workpiece and the scatter of the process factors on the one hand, and the mean grain size and its standard deviation on the other, has been observed in experiments. Conventional FE simulation of scaled-down processes is not able to consider the observed size effects, such as the actual reduction of the flow stress, the increasing scatter of the process factors, and a local material flow that differs from that obtained in the case of macroparts. For that reason, a new simulation model has been developed that takes all of these size effects into account. The present paper deals with the theoretical background of the new mesoscopic model and its characteristics, such as synthetic grain structure generation and the calculation of micro material properties based on conventional material properties. The simulation model is verified by carrying out various experiments with different mean grain sizes and grain structures but the same geometrical dimensions of the workpiece.
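The size effect motivating the mesoscopic model (fewer grains in the deformation zone produce a larger scatter of the integral flow stress) can be sketched with a toy Monte Carlo calculation; the Gaussian grain-strength model and its parameters below are assumptions for illustration only, not the paper's actual micro material model:

```python
import random
import statistics

random.seed(1)

def flow_stress_sample(n_grains, mu=300.0, sd=60.0):
    """Integral flow stress of one workpiece, modelled (for illustration)
    as the mean of n_grains independent Gaussian grain strengths (MPa)."""
    return statistics.fmean(random.gauss(mu, sd) for _ in range(n_grains))

def scatter(n_grains, trials=2000):
    """Standard deviation of the integral flow stress over many workpieces."""
    return statistics.stdev(flow_stress_sample(n_grains) for _ in range(trials))

# Few grains in the deformation zone -> large scatter of forming results;
# many grains -> the continuum limit with little scatter.
print(round(scatter(5), 1), round(scatter(500), 1))
```

Under this toy averaging model the scatter shrinks roughly as 1/sqrt(n_grains), which is why macroparts behave like a homogeneous continuum while microparts do not.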
Optical bandgap of single- and multi-layered amorphous germanium ultra-thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Pei; Zaslavsky, Alexander; Longo, Paolo
2016-01-07
Accurate optical methods are required to determine the energy bandgap of amorphous semiconductors and elucidate the role of quantum confinement in nanometer-scale, ultra-thin absorbing layers. Here, we provide a critical comparison between well-established methods that are generally employed to determine the optical bandgap of thin-film amorphous semiconductors, starting from normal-incidence reflectance and transmittance measurements. First, we demonstrate that a more accurate estimate of the optical bandgap can be achieved by using a multiple-reflection interference model. We show that this model generates more reliable results compared to the widely accepted single-pass absorption method. Second, we compare the two most representative methods (Tauc and Cody plots) that are extensively used to determine the optical bandgap of thin-film amorphous semiconductors starting from the extracted absorption coefficient. Analysis of the experimental absorption data acquired for ultra-thin amorphous germanium (a-Ge) layers demonstrates that the Cody model is able to provide a less ambiguous energy bandgap value. Finally, we apply our proposed method to experimentally determine the optical bandgap of a-Ge/SiO{sub 2} superlattices with single and multiple a-Ge layers down to 2 nm thickness.
NASA Astrophysics Data System (ADS)
Mesinger, F.
The traditional views hold that high-resolution limited area models (LAMs) downscale large-scale lateral boundary information, and that predictability of small scales is short. Inspection of various rms fits/errors has contributed to these views. It would follow that the skill of LAMs should visibly deteriorate compared to that of their driver models at more extended forecast times. The limited area Eta Model at NCEP has an additional handicap of being driven by LBCs of the previous Avn global model run, at 0000 and 1200 UTC estimated to amount to about an 8 h loss in accuracy. This should make its relative skill compared to that of the Avn deteriorate even faster. These views are challenged by various Eta results, including rms fits to raobs out to 84 h. It is argued that it is the largest scales that contribute the most to the skill of the Eta relative to that of the Avn.
Integrated, multi-scale, spatial-temporal cell biology--A next step in the post genomic era.
Horwitz, Rick
2016-03-01
New microscopic approaches, high-throughput imaging, and gene editing promise major new insights into cellular behaviors. When coupled with genomic and other 'omic information and "mined" for correlations and associations, a new breed of powerful and useful cellular models should emerge. These top down, coarse-grained, and statistical models, in turn, can be used to form hypotheses merging with fine-grained, bottom up mechanistic studies and models that are the backbone of cell biology. The goal of the Allen Institute for Cell Science is to develop the top down approach by developing a high throughput microscopy pipeline that is integrated with modeling, using gene edited hiPS cell lines in various physiological and pathological contexts. The output of these experiments and models will be an "animated" cell, capable of integrating and analyzing image data generated from experiments and models. Copyright © 2015 Elsevier Inc. All rights reserved.
A Variable Resolution Stretched Grid General Circulation Model: Regional Climate Simulation
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.; Suarez, Max J.
2000-01-01
The development of, and results obtained with, a variable-resolution stretched-grid GCM for the regional climate simulation mode are presented. The global variable-resolution stretched grid used in the study has enhanced horizontal resolution over the U.S. as the area of interest. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested-grid approach, introduced over a decade ago as a pioneering step in regional climate modeling. The major results of the study are presented for the successful stretched-grid GCM simulation of the anomalous climate event of the 1988 U.S. summer drought. The straightforward (with no updates) two-month simulation is performed with 60 km regional resolution. The major drought fields, patterns, and characteristics, such as the time-averaged 500 hPa heights, precipitation, and the low-level jet over the drought area, appear to be close to the verifying analyses for the stretched-grid simulation. In other words, the stretched-grid GCM provides efficient downscaling over the area of interest with enhanced horizontal resolution. It is also shown that the GCM skill is sustained throughout the simulation when extended to one year. The stretched-grid GCM, developed and tested in a simulation mode, is a viable tool for regional and subregional climate studies and applications.
Pretest Round Robin Analysis of 1:4-Scale Prestressed Concrete Containment Vessel Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hessheimer, Michael F.; Luk, Vincent K.; Klamerus, Eric W.
The purpose of the program is to investigate the response of representative scale models of nuclear containment to pressure loading beyond the design basis accident and to compare analytical predictions to measured behavior. This objective is accomplished by conducting static, pneumatic overpressurization tests of scale models at ambient temperature. This research program consists of testing two scale models: a steel containment vessel (SCV) model (tested in 1996) and a prestressed concrete containment vessel (PCCV) model, which is the subject of this paper.
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua
2016-10-01
Tropical Instability Waves (TIWs) and the El Niño-Southern Oscillation (ENSO) are two air-sea coupling phenomena that are prominent in the tropical Pacific, occurring at vastly different space-time scales. It has been challenging to adequately represent both of these processes within a large-scale coupled climate model, which has led to a poor understanding of the interactions between TIW-induced feedback and ENSO. In this study, a novel modeling system was developed that allows representation of TIW-scale air-sea coupling and its interaction with ENSO. Satellite data were first used to derive an empirical model for TIW-induced sea surface wind stress perturbations (τTIW). The model was then embedded in a basin-wide hybrid-coupled model (HCM) of the tropical Pacific. Because τTIW were internally determined from TIW-scale sea surface temperatures (SSTTIW) simulated in the ocean model, the wind-SST coupling at TIW scales was interactively represented within the large-scale coupled model. Because the τTIW-SSTTIW coupling part of the model can be turned on or off in the HCM simulations, the related TIW wind feedback effects can be isolated and examined in a straightforward way. Then, the TIW-scale wind feedback effects on the large-scale mean ocean state and interannual variability in the tropical Pacific were investigated based on this embedded system. The interactively represented TIW-scale wind forcing exerted an asymmetric influence on SSTs in the HCM, characterized by a mean-state cooling and by a positive feedback on interannual variability, acting to enhance ENSO amplitude. Roughly speaking, the feedback tends to increase interannual SST variability by approximately 9%. Additionally, there is a tendency for TIW wind to have an effect on the phase transition during ENSO evolution, with slightly shortened interannual oscillation periods. Additional sensitivity experiments were performed to elucidate the details of TIW wind effects on SST evolution during ENSO cycles.
Chang, Wing Chung; Kwong, Vivian Wing Yan; Or Chi Fai, Philip; Lau, Emily Sin Kei; Chan, Gloria Hoi Kei; Jim, Olivia Tsz Ting; Hui, Christy Lai Ming; Chan, Sherry Kit Wa; Lee, Edwin Ho Ming; Chen, Eric Yu Hai
2018-02-01
Functional remission represents an intermediate functional milestone toward recovery. Differential relationships of negative symptom sub-domains with functional remission in first-episode psychosis are understudied. We aimed to examine the rate and predictors of functional remission in people with first-episode psychosis in the context of a 3-year follow-up of a randomized controlled trial comparing 1-year extension of early intervention (i.e. 3-year early intervention) with step-down psychiatric care (i.e. 2-year early intervention). A total of 160 participants were recruited upon completion of a 2-year specialized early intervention program for first-episode psychosis in Hong Kong and underwent a 1-year randomized controlled trial comparing 1-year extended early intervention with step-down care. Participants were followed up and reassessed 3 years after inclusion in the trial (i.e. 3-year follow-up). Functional remission was operationalized as simultaneous fulfillment of attaining adequate functioning (measured by the Social and Occupational Functioning Scale and the Role Functioning Scale) at 3-year follow-up and sustained employment in the last 6 months of the 3-year study period. The negative symptom measure was delineated into amotivation (i.e. motivational impairment) and diminished expression (i.e. reduced affect and speech output). Data analysis was based on 143 participants who completed follow-up functional assessments. A total of 31 (21.7%) participants achieved functional remission status at 3-year follow-up. Multivariate regression analysis showed that lower levels of amotivation (p = 0.010) and better functioning at study intake (p = 0.004) independently predicted functional remission (final model: Nagelkerke R² = 0.40, χ² = 42.9, p < 0.001). Extended early intervention, duration of untreated psychosis and diminished expression did not predict functional remission. Only approximately one-fifth of early psychosis patients were found to achieve functional remission. Functional impairment remains an unmet treatment need in the early stage of psychotic illness. Our results further suggest that amotivation may represent a critical therapeutic target for functional remission attainment in early psychosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jardine, Kolby
In conjunction with the U.S. Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Climate Research Facility GoAmazon campaign, the Terrestrial Ecosystem Science (TES)-funded Green Ocean Amazon (GoAmazon 2014/15) terrestrial ecosystem project (Geco) was designed to: • evaluate the strengths and weaknesses of leaf-level algorithms for biogenic volatile organic compound (BVOC) emissions in Amazon forests near Manaus, Brazil, and • conduct mechanistic field studies to characterize biochemical and physiological processes governing leaf- and landscape-scale tropical forest BVOC emissions, and the influence of environmental drivers that are expected to change with a warming climate. Through a close interaction between modeling and observational activities, including the training of MS and PhD graduate students, post-doctoral students, and technicians at the National Institute for Amazon Research (INPA), the study aimed at improving the representation of BVOC-mediated biosphere-atmosphere interactions and feedbacks under a warming climate. BVOCs can form cloud condensation nuclei (CCN) that influence precipitation dynamics and modify the quality of downwelling radiation for photosynthesis. However, our ability to represent these coupled biosphere-atmosphere processes in Earth system models suffers from poor understanding of the functions, identities, quantities, and seasonal patterns of BVOC emissions from tropical forests as well as their biological and environmental controls. The Model of Emissions of Gases and Aerosols from Nature (MEGAN), the current BVOC sub-model of the Community Earth System Model (CESM), was evaluated to explore mechanistic controls over BVOC emissions. Based on that analysis, a combination of observations and experiments were studied in forests near Manaus, Brazil, to test existing parameterizations and algorithm structures in MEGAN. The model was actively modified as needed to improve tropical BVOC emission simulations on a regional scale.
A multiscale model for reinforced concrete with macroscopic variation of reinforcement slip
NASA Astrophysics Data System (ADS)
Sciegaj, Adam; Larsson, Fredrik; Lundgren, Karin; Nilenius, Filip; Runesson, Kenneth
2018-06-01
A single-scale model for reinforced concrete, comprising the plain concrete continuum, reinforcement bars and the bond between them, is used as a basis for deriving a two-scale model. The large-scale problem, representing the "effective" reinforced concrete solid, is enriched by an effective reinforcement slip variable. The subscale problem on a Representative Volume Element (RVE) is defined by Dirichlet boundary conditions. The response of the RVEs of different sizes was investigated by means of pull-out tests. The resulting two-scale formulation was used in an FE² analysis of a deep beam. Load-deflection relations, crack widths, and strain fields were compared to those obtained from a single-scale analysis. Incorporating the independent macroscopic reinforcement slip variable resulted in a more pronounced localisation of the effective strain field. This produced a more accurate estimation of the crack widths than the two-scale formulation neglecting the effective reinforcement slip variable.
A model for integrating elementary neural functions into delayed-response behavior.
Gisiger, Thomas; Kerszberg, Michel
2006-04-01
It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.
Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge
NASA Astrophysics Data System (ADS)
Park, Heon-Joon; Lee, Changyeol
2017-04-01
Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few model studies considered larger gravitational acceleration by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down and accelerated deformation driven by density differences, such as salt diapirism, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large scale-model surface area of up to 70 by 70 cm under the maximum capacity of 240 g-tons. Using the centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
Stochastic models to study the impact of mixing on a fed-batch culture of Saccharomyces cerevisiae.
Delvigne, F; Lejeune, A; Destain, J; Thonart, P
2006-01-01
The mechanisms of interaction between microorganisms and their environment in a stirred bioreactor can be modeled by a stochastic approach. The procedure comprises two submodels: a classical stochastic model for microbial cell circulation and a Markov chain model for calculating concentration gradients. The advantage lies in the fact that the core of each submodel, i.e., the transition matrix (which contains the probabilities of shifting from one perfectly mixed compartment to another in the bioreactor representation), is identical in the two cases. That means that both the particle circulation and the fluid mixing process can be analyzed on the same modeling basis. This assumption has been validated by performing inert tracer (NaCl) and stained yeast cell dispersion experiments, which showed good agreement with simulation results. The stochastic model has been used to define a characteristic concentration profile experienced by the microorganisms during a fermentation test performed in a scale-down reactor. The concentration profiles obtained in this way can explain the scale-down effect in the case of a Saccharomyces cerevisiae fed-batch process. The simulation results are analyzed in order to give some explanations about the effect of substrate fluctuation dynamics on S. cerevisiae.
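The Markov chain submodel described in this abstract can be sketched in miniature; the three-compartment layout and transition probabilities below are invented for illustration, not fitted to a real reactor:

```python
# Row-stochastic transition matrix: P[i][j] is the probability that a
# fluid element (or a circulating cell) moves from compartment i to
# compartment j in one time step. Values are illustrative only.
P = [
    [0.6, 0.4, 0.0],   # feed zone
    [0.2, 0.6, 0.2],   # bulk
    [0.0, 0.4, 0.6],   # bottom
]

def step(p, P):
    """One Markov step: p' = p @ P, written out for plain lists."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

# Track a tracer pulse initially concentrated in the feed zone; repeated
# application of the same matrix describes the mixing process, while the
# per-particle view of the same matrix describes cell circulation.
p = [1.0, 0.0, 0.0]
for _ in range(200):
    p = step(p, P)
print([round(x, 3) for x in p])  # [0.25, 0.5, 0.25]
```

For this toy chain the stationary distribution [0.25, 0.5, 0.25] is reached quickly; the key point from the abstract is that the same transition matrix serves both the tracer-mixing and the cell-circulation submodels.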
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.
Receptive fields and functional architecture in the retina
Balasubramanian, Vijay; Sterling, Peter
2009-01-01
Functional architecture of the striate cortex is known mostly at the tissue level – how neurons of different function distribute across its depth and surface on a scale of millimetres. But explanations for its design – why it is just so – need to be addressed at the synaptic level, a much finer scale where the basic description is still lacking. Functional architecture of the retina is known from the scale of millimetres down to nanometres, so we have sought explanations for various aspects of its design. Here we review several aspects of the retina's functional architecture and find that all seem governed by a single principle: represent the most information for the least cost in space and energy. Specifically: (i) why are OFF ganglion cells more numerous than ON cells? Because natural scenes contain more negative than positive contrasts, and the retina matches its neural resources to represent them equally well; (ii) why do ganglion cells of a given type overlap their dendrites to achieve 3-fold coverage? Because this maximizes total information represented by the array – balancing signal-to-noise improvement against increased redundancy; (iii) why do ganglion cells form multiple arrays? Because this allows most information to be sent at lower rates, decreasing the space and energy costs for sending a given amount of information. This broad principle, operating at higher levels, probably contributes to the brain's immense computational efficiency. PMID:19525561
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity, an integral property of a liquid, is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists in a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
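The principle behind such particle-tracking microviscometry, estimating the diffusion coefficient from the mean squared displacement and inverting the Stokes-Einstein relation, can be sketched as follows; the bead radius, temperature, and frame interval are assumed values, and the trajectory is synthetic rather than measured:

```python
import math
import random

random.seed(0)

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0              # temperature, K (assumed)
r = 0.5e-6             # tracer bead radius, m (assumed)
eta_true = 1.0e-3      # Pa*s, water-like viscosity used to make fake data
dt = 0.01              # frame interval, s (assumed)

# Stokes-Einstein diffusion coefficient used to synthesize the trajectory.
D = kB * T / (6 * math.pi * eta_true * r)

# Synthetic 2D Brownian trajectory: each displacement component is
# Gaussian with variance 2*D*dt.
n = 200_000
sigma = math.sqrt(2 * D * dt)
dx = [random.gauss(0, sigma) for _ in range(n)]
dy = [random.gauss(0, sigma) for _ in range(n)]

# Lag-1 mean squared displacement; in 2D, MSD = 4*D*dt.
msd = sum(x * x + y * y for x, y in zip(dx, dy)) / n
D_est = msd / (4 * dt)

# Invert Stokes-Einstein to recover the viscosity from the tracks.
eta_est = kB * T / (6 * math.pi * D_est * r)
print(f"estimated viscosity: {eta_est:.2e} Pa*s")
```

With real data, the displacements would come from detected particle positions in successive video frames; the estimate converges as more displacement samples are averaged.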
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Inflated Uncertainty in Multimodel-Based Regional Climate Projections.
Madsen, Marianne Sloth; Langen, Peter L; Boberg, Fredrik; Christensen, Jens Hesselbjerg
2017-11-28
Multimodel ensembles are widely analyzed to estimate the range of future regional climate change projections. For an ensemble of climate models, the result is often portrayed by showing maps of the geographical distribution of the multimodel mean results and associated uncertainties represented by model spread at the grid point scale. Here we use a set of CMIP5 models to show that presenting statistics this way results in an overestimation of the projected range leading to physically implausible patterns of change on global but also on regional scales. We point out that similar inconsistencies occur in impact analyses relying on multimodel information extracted using statistics at the regional scale, for example, when a subset of CMIP models is selected to represent regional model spread. Consequently, the risk of unwanted impacts may be overestimated at larger scales as climate change impacts will never be realized as the worst (or best) case everywhere.
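The inflation effect is easy to reproduce with a toy ensemble (synthetic numbers, not CMIP5 output): if each "model" is a common signal plus its own spatial noise, the regional mean of the grid-point-wise envelope (multimodel mean plus spread) exceeds the regional-mean change of every individual model, i.e. the envelope is a pattern that no single model actually projects.

```python
import numpy as np

rng = np.random.default_rng(42)
n_models, n_grid = 20, 1000

# Synthetic "projections": each model sees a common change plus its own
# spatially uncorrelated deviation (arbitrary units)
signal = 2.0
projections = signal + rng.normal(0.0, 1.0, size=(n_models, n_grid))

# Grid-point-wise envelope, as on typical multimodel uncertainty maps
upper = projections.mean(axis=0) + projections.std(axis=0)

# Averaging that envelope over the whole region...
envelope_regional_mean = upper.mean()
# ...exceeds the regional-mean change of every individual model
model_regional_means = projections.mean(axis=1)
print(envelope_regional_mean > model_regional_means.max())  # True here
```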
1983-09-01
adult man (full-scale height = 171 cm) and child (full-scale height = 86 cm), with arms down. We used the full-scale figure to reflect a worst-case... child for all orientations was much higher than that for the adult (e.g., 0.187 W/kg versus 0.063 W/kg), which is expected since the frequency is closer... to the resonance frequency for the child. Another series of scale-model measurements was conducted for determination of the average SAR values for
What Actually Happens When Granular Materials Deform Under Shear: A Look Within
NASA Astrophysics Data System (ADS)
Viggiani, C.
2012-12-01
We all know that geomaterials (soil and rock) are composed of particles. However, when dealing with them, we often use continuum models, which ignore particles and make use of abstract variables such as stress and strain. Continuum mechanics is the classical tool that geotechnical engineers have always used for their everyday calculations: estimating settlements of an embankment, the deformation of a sheet pile wall, the stability of a dam or a foundation, etc. History tells us that, in general, this works fine. While we are happily ignoring particles, they will at times come back to haunt us. This happens when deformation is localized in regions so small that the detail of the soil's (or rock's) particulate structure cannot safely be ignored. Failure is the perfect example of this. Researchers in geomechanics (and more generally in solid mechanics) have long known that classical continuum models typically break down when trying to model failure. All sorts of numerical troubles ensue - all of them pointing to a fundamental deficiency of the model: the lack of microstructure. N.B.: the term microstructure doesn't prescribe a dimension (e.g., microns), but rather a scale - the scale of the mechanisms responsible for failure. A possible remedy to this deficiency is represented by the so-called "double scale" models, in which the small scale (the microstructure) is explicitly taken into account. Typically, two numerical problems are defined and solved - one at the large (continuum) scale, and the other at the small scale. This sort of approach requires a link between the two scales, to complete the picture. Imagine we are solving at the small scale a simulation of an assembly of a few grains, for example using the Discrete Element Method, whose results are in turn fed back to the large-scale Finite Element simulation. The key feature of a double scale model is that one can inject the relevant physics at the appropriate scale.
The success of such a model crucially depends on the quality of the physics one injects: ideally, this comes directly from experiments. In Grenoble, this is what we do, combining various advanced experimental techniques. We are able to image, in three dimensions and at small scales, the deformation processes accompanying failure in geomaterials. This allows us to understand these processes and subsequently to define models at a pertinently small scale. I will present a few examples of the kind of experimental results which could inform a micro scale model. X-ray micro tomography imaging is the key measurement tool. This is used during loading, providing complete 3D images of a sand specimen at several stages throughout a triaxial compression test. Images from x-rays are then analyzed either in a continuum sense (using 3D Digital Image Correlation) or looking at the individual particle kinematics (Particle Tracking). I will show some of our most recent results, in which individual sand grains are followed with a technique combining very recent developments in image correlation and particle tracking. These advanced techniques offer us a look at what actually happens when a granular material deforms and eventually fails.
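As a toy illustration of the particle-tracking step, consider linking grain centres between two tomography frames by greedy nearest-neighbour assignment. Actual grain-tracking codes (such as the combined image-correlation and tracking methods mentioned above) are far more sophisticated, but this is the core idea:

```python
import numpy as np

def link_particles(frame_a, frame_b, max_disp):
    """Greedy nearest-neighbour linking of particle centres between frames.

    frame_a, frame_b: (n, 3) arrays of grain centres (e.g. from tomography)
    max_disp: largest displacement accepted as a valid match
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    # Full pairwise distance matrix between the two frames
    dists = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=2)
    links, used_a, used_b = [], set(), set()
    # Accept the globally closest pairs first so obvious links are not stolen
    for i, j in sorted(np.ndindex(*dists.shape), key=lambda ij: dists[ij]):
        if i not in used_a and j not in used_b and dists[i, j] <= max_disp:
            links.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return links
```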
Modeling Spatial and Temporal Variability in Ammonia Emissions from Agricultural Fertilization
NASA Astrophysics Data System (ADS)
Balasubramanian, S.; Koloutsou-Vakakis, S.; Rood, M. J.
2013-12-01
Ammonia (NH3) is an important component of the reactive nitrogen cycle and a precursor to formation of atmospheric particulate matter (PM). Predicting regional PM concentrations and deposition of nitrogen species to ecosystems requires representative emission inventories. Emission inventories have traditionally been developed using top-down approaches and more recently from data assimilation based on satellite- and ground-based ambient concentrations and wet deposition data. The National Emission Inventory (NEI) indicates agricultural fertilization as the predominant contributor (56%) to NH3 emissions in the Midwest USA in 2002. However, due to limited understanding of the complex interactions between fertilizer usage, farm practices, soil and meteorological conditions and absence of detailed statistical data, such emission estimates are currently based on generic emission factors, time-averaged temporal factors and coarse spatial resolution. Given the significance of this source, our study focuses on developing an improved NH3 emission inventory for agricultural fertilization at finer spatial and temporal scales for air quality modeling studies. Firstly, a high-spatial-resolution 4 km x 4 km NH3 emission inventory for agricultural fertilization has been developed for Illinois by modifying spatial allocation of emissions based on combining crop-specific fertilization rates with cropland distribution in the Sparse Matrix Operator Kernel Emissions model. Net emission estimates of our method are within 2% of NEI, since both methods are constrained by fertilizer sales data. However, we identified localized crop-specific NH3 emission hotspots at sub-county resolutions absent in NEI. Secondly, we have adopted the use of the DeNitrification-DeComposition (DNDC) Biogeochemistry model to simulate the physical and chemical processes that control volatilization of nitrogen as NH3 to the atmosphere after fertilizer application and resolve the variability at the hourly scale.
Representative temporal factors are being developed to capture crop-specific NH3 emission variability by combining knowledge of local crop management practices with high resolution cropland and soil maps. This improved spatially and temporally dependent NH3 emission inventory for agricultural fertilization is being prepared as a direct input to a state of the art air quality model to evaluate the effects of agricultural fertilization on regional air quality and atmospheric deposition of reactive nitrogen species.
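The temporal factors described here amount to normalizing a simulated hourly flux series into non-negative weights that redistribute an annual emission total over the hours of the year. A minimal sketch (the gamma-distributed series merely stands in for DNDC output; it is not real model data):

```python
import numpy as np

def temporal_factors(hourly_flux):
    """Convert simulated hourly NH3 fluxes (e.g. from DNDC) into temporal
    allocation factors that redistribute an annual emission total over the
    hours of the year. Factors are non-negative and sum to 1."""
    flux = np.clip(np.asarray(hourly_flux, dtype=float), 0.0, None)
    total = flux.sum()
    if total == 0.0:
        return np.full(flux.shape, 1.0 / flux.size)  # fall back to uniform
    return flux / total

# Annual total (e.g. constrained by fertilizer sales) allocated to each hour;
# numbers are illustrative only
annual_total = 1000.0  # kg NH3 per grid cell per year
factors = temporal_factors(np.random.default_rng(1).gamma(2.0, 1.0, 8760))
hourly_emissions = annual_total * factors
```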
NASA Astrophysics Data System (ADS)
Canuto, V. M.
1997-06-01
We present a model to treat fully compressible, nonlocal, time-dependent turbulent convection in the presence of large-scale flows and arbitrary density stratification. The problem is of interest, for example, in stellar pulsation problems, especially since accurate helioseismological data are now available, as well as in accretion disks. Owing to the difficulties in formulating an analytical model, it is not surprising that most of the work has gone into numerical simulations. At present, there are three analytical models: one by the author, which leads to a rather complicated set of equations; one by Yoshizawa; and one by Xiong. The latter two use a Reynolds stress model together with phenomenological relations with adjustable parameters whose determination on the basis of terrestrial flows does not guarantee that they may be extrapolated to astrophysical flows. Moreover, all third-order moments representing nonlocality are taken to be of the down-gradient form (which in the case of the planetary boundary layer yields incorrect results). In addition, correlations among pressure, temperature, and velocities are often neglected or treated as in the incompressible case. To avoid phenomenological relations, we derive the full set of dynamic, time-dependent, nonlocal equations to describe all mean variables, second- and third-order moments. Closures are carried out at the fourth order following standard procedures in turbulence modeling. The equations are collected in an Appendix.
Some of the novelties of the treatment are (1) a new flux conservation law that includes the large-scale flow, (2) an increase of the rate of dissipation of turbulent kinetic energy owing to compressibility and thus (3) a smaller overshooting, and (4) a new source of mean temperature due to compressibility; moreover, contrary to some phenomenological suggestions, the adiabatic temperature gradient depends only on the thermal pressure, while in the equation for the large-scale flow, the physical pressure is the sum of thermal plus turbulent pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T; Hollenbach, Daniel F
2009-02-01
The Radiochemical Development Facility at Oak Ridge National Laboratory has been storing solid materials containing 233U for decades. Preparations are under way to process these materials into a form that is inherently safe from a nuclear criticality safety perspective. This will be accomplished by down-blending the 233U materials with depleted or natural uranium. At the request of the U.S. Department of Energy, a study has been performed using the SCALE sensitivity and uncertainty analysis tools to demonstrate how these tools could be used to validate nuclear criticality safety calculations of selected process and storage configurations. ISOTEK nuclear criticality safety staff provided four models that are representative of the criticality safety calculations for which validation will be needed. The SCALE TSUNAMI-1D and TSUNAMI-3D sequences were used to generate energy-dependent k-eff sensitivity profiles for each nuclide and reaction present in the four safety analysis models, also referred to as the applications, and in a large set of critical experiments. The SCALE TSUNAMI-IP module was used together with the sensitivity profiles and the cross-section uncertainty data contained in the SCALE covariance data files to propagate the cross-section uncertainties (Δσ/σ) to k-eff uncertainties (Δk/k) for each application model. The SCALE TSUNAMI-IP module was also used to evaluate the similarity of each of the 672 critical experiments with each application. Results of the uncertainty analysis and similarity assessment are presented in this report. A total of 142 experiments were judged to be similar to application 1, and 68 experiments were judged to be similar to application 2. None of the 672 experiments were judged to be adequately similar to applications 3 and 4. Discussion of the uncertainty analysis and similarity assessment is provided for each of the four applications.
Example upper subcritical limits (USLs) were generated for application 1 based on trending of the energy of average lethargy of neutrons causing fission, trending of the TSUNAMI similarity parameters, and use of data adjustment techniques.
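TSUNAMI's similarity assessment is built on the correlation coefficient c_k, which combines the application's and the experiment's sensitivity vectors through the shared cross-section covariance matrix: c_k = S_a·C·S_e / sqrt((S_a·C·S_a)(S_e·C·S_e)). A minimal numpy sketch of that linear algebra (not the TSUNAMI-IP implementation itself):

```python
import numpy as np

def ck(S_app, S_exp, C):
    """Similarity coefficient c_k between an application and an experiment.

    S_app, S_exp: k-eff sensitivity vectors over the same ordered list of
    nuclide-reaction-energy-group cross-section parameters; C: their relative
    covariance matrix. c_k = cov(app, exp) / (sigma_app * sigma_exp), so a
    c_k near 1 means the two systems share the same dominant contributors
    to the cross-section-induced uncertainty.
    """
    cov_ae = S_app @ C @ S_exp
    sigma_a = np.sqrt(S_app @ C @ S_app)  # this is the Delta-k/k of the application
    sigma_e = np.sqrt(S_exp @ C @ S_exp)
    return cov_ae / (sigma_a * sigma_e)
```

A c_k of roughly 0.9 or higher is commonly taken as "similar" when selecting benchmark experiments for validation.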
NASA Astrophysics Data System (ADS)
Musi, Richard; Grange, Benjamin; Diago, Miguel; Topel, Monika; Armstrong, Peter; Slocum, Alexander; Calvet, Nicolas
2017-06-01
A molten salt direct absorption receiver, CSPonD, used to simultaneously collect and store thermal energy is being tested by Masdar Institute and MIT in Abu Dhabi, UAE. Whilst a research-scale prototype has been combined with a beam-down tower in Abu Dhabi, the original design coupled the receiver with a hillside heliostat field. With respect to a conventional power-tower setup, a hillside solar field presents the advantages of eliminating tower costs, heat tracing equipment, and high-pressure pumps. This analysis considers the industrial viability of the CSPonD concept by modeling a 10 MWe up-scaled version of a molten salt direct absorption receiver combined with a hillside heliostat field. Five different slope angles are initially simulated to determine the optimum choice using a combination of lowest LCOE and highest IRR, and sensitivity analyses are carried out based on thermal energy storage duration, power output, and feed-in tariff price. Finally, multi-objective optimization is undertaken to determine a Pareto front representing optimum cases. The study indicates that a 40° slope and a combination of 14 h thermal energy storage with a 40-50 MWe power output provide the best techno-economic results. By selecting one simulated result and using a feed-in tariff of 0.25 /kWh, a competitive IRR of 15.01 % can be achieved.
Yamaguchi, Takami; Ishikawa, Takuji; Imai, Y.; Matsuki, N.; Xenos, Mikhail; Deng, Yuefan; Bluestein, Danny
2010-01-01
A major computational challenge in multiscale modeling is the coupling of disparate length and time scales between molecular mechanics and macroscopic transport, spanning the spatial and temporal scales characterizing the complex processes taking place in flow-induced blood clotting. Flow and pressure effects on a cell-like platelet can be well represented by a continuum mechanics model down to the micrometer level. However, the molecular effects of adhesion/aggregation bonds are on the order of nanometers. A successful multiscale model of platelet response to flow stresses in devices and the ensuing clotting responses should be able to characterize the clotting reactions and their interactions with the flow. This paper attempts to describe a few of the computational methods that were developed in recent years and became available to researchers in the field. They differ from traditional approaches that dominate the field by expanding on prevailing continuum-based approaches, or by completely departing from them, yielding an expanding toolkit that may facilitate further elucidation of the underlying mechanisms of blood flow and the cellular response to it. We offer a paradigm shift by adopting a multidisciplinary approach with fluid dynamics simulations coupled to biophysical and biochemical transport. PMID:20336827
NASA Astrophysics Data System (ADS)
Arcand, K.; Megan, W.; DePasquale, J.; Jubett, A.; Edmonds, P.; DiVona, K.
2017-09-01
Three-dimensional (3D) modelling is more than just good fun; it offers a new vehicle to represent and understand scientific data and gives experts and non-experts alike the ability to manipulate models and gain new perspectives on data. This article explores the use of 3D modelling and printing in astronomy and astronomy communication and looks at some of the practical challenges, and solutions, to using 3D modelling, visualisation and printing in this way.
Incorporating time-delays in S-System model for reverse engineering genetic networks.
Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan
2013-06-18
In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, all these approaches cannot detect important interactions of the other type. S-System model, a differential equation based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all S-System based existing modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. 
The experiments carried out on two well-known real-life networks, namely IRMA and SOS DNA repair network in Escherichia coli show a significant improvement compared with other state-of-the-art approaches for GRN modeling.
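The TDSS dynamics can be sketched as a delay differential system integrated with a simple Euler scheme over a stored trajectory history; here delays are rounded to grid steps, whereas the paper's fractional delays would interpolate the history. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_tdss(alpha, beta, g, h, tau_g, tau_h, x0, dt, n_steps):
    """Euler simulation of a time-delayed S-System:

        dx_i/dt = alpha_i * prod_j x_j(t - tau_g[i,j])**g[i,j]
                - beta_i  * prod_j x_j(t - tau_h[i,j])**h[i,j]

    The pre-history is held constant at the initial state x0.
    """
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    g, h = np.asarray(g, float), np.asarray(h, float)
    dg = np.rint(np.asarray(tau_g) / dt).astype(int)  # delays in grid steps
    dh = np.rint(np.asarray(tau_h) / dt).astype(int)
    n = len(x0)
    traj = np.empty((n_steps + 1, n))
    traj[0] = x0
    for t in range(n_steps):
        for i in range(n):
            prod = np.prod([traj[max(t - dg[i, j], 0), j] ** g[i, j] for j in range(n)])
            degr = np.prod([traj[max(t - dh[i, j], 0), j] ** h[i, j] for j in range(n)])
            traj[t + 1, i] = traj[t, i] + dt * (alpha[i] * prod - beta[i] * degr)
    return traj
```

For a single gene with production exponent 0 and degradation exponent 1, the system reduces to dx/dt = alpha - beta*x and settles at alpha/beta, which makes a convenient sanity check.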
In recent years the applications of regional air quality models are continuously being extended to address atmospheric pollution phenomenon from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physic...
Challenges in industrial fermentation technology research.
Formenti, Luca Riccardo; Nørregaard, Anders; Bolic, Andrijana; Hernandez, Daniela Quintanilla; Hagemann, Timo; Heins, Anna-Lena; Larsson, Hilde; Mears, Lisa; Mauricio-Iglesias, Miguel; Krühne, Ulrich; Gernaey, Krist V
2014-06-01
Industrial fermentation processes are increasingly popular, and are considered an important technological asset for reducing our dependence on chemicals and products produced from fossil fuels. However, despite their increasing popularity, fermentation processes have not yet reached the same maturity as traditional chemical processes, particularly when it comes to using engineering tools such as mathematical models and optimization techniques. This perspective starts with a brief overview of these engineering tools. However, the main focus is on a description of some of the most important engineering challenges: scaling up and scaling down fermentation processes, the influence of morphology on broth rheology and mass transfer, and establishing novel sensors to measure and control insightful process parameters. The greatest emphasis is on the challenges posed by filamentous fungi, because of their wide applications as cell factories and therefore their relevance in a White Biotechnology context. Computational fluid dynamics (CFD) is introduced as a promising tool that can be used to support the scaling up and scaling down of bioreactors, and for studying mixing and the potential occurrence of gradients in a tank. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Kinnaert, X.; Gaucher, E.; Kohl, T.; Achauer, U.
2018-03-01
Seismicity induced in geo-reservoirs can be a valuable observation to image fractured reservoirs, to characterize hydrological properties, or to mitigate seismic hazard. However, this requires accurate location of the seismicity, which is nowadays an important seismological task in reservoir engineering. The earthquake location (determination of the hypocentres) depends on the model used to represent the medium in which the seismic waves propagate and on the seismic monitoring network. In this work, location uncertainties and location inaccuracies are modeled to investigate the impact of several parameters on the determination of the hypocentres: the picking uncertainty, the numerical precision of picked arrival times, a velocity perturbation and the seismic network configuration. The method is applied to the geothermal site of Soultz-sous-Forêts, which is located in the Upper Rhine Graben (France) and which was subject to detailed scientific investigations. We focus on a massive water injection performed in the year 2000 to enhance the productivity of the well GPK2 in the granitic basement, at approximately 5 km depth, and which induced more than 7000 earthquakes recorded by down-hole and surface seismic networks. We compare the location errors obtained from the joint or the separate use of the down-hole and surface networks. Besides the quantification of location uncertainties caused by picking uncertainties, the impact of the numerical precision of the picked arrival times as provided in a reference catalogue is investigated. The velocity model is also modified to mimic possible effects of a massive water injection and to evaluate its impact on earthquake hypocentres. It is shown that the use of the down-hole network in addition to the surface network provides smaller location uncertainties but can also lead to larger inaccuracies. 
Hence, location uncertainties may not be representative of the actual location errors, and the interpretation of the seismicity distribution may be biased. This result also emphasizes that it is still necessary to properly describe the seismic propagation medium even though the addition of down-hole sensors increases the coverage of a surface network.
Li, Jian; Jaitzig, Jennifer; Lu, Ping; Süssmuth, Roderich D; Neubauer, Peter
2015-06-12
Heterologous production of natural products in Escherichia coli has emerged as an attractive strategy to obtain molecules of interest. Although technically feasible, most such processes are still constrained to laboratory-scale production. Therefore, it is necessary to develop reasonable scale-up strategies for bioprocesses aiming at the overproduction of targeted natural products under industrial scale conditions. To this end, we used the production of the antibiotic valinomycin in E. coli as a model system for scalable bioprocess development based on consistent fed-batch cultivations. In this work, the glucose limited fed-batch strategy based on pure mineral salt medium was used throughout all scales for valinomycin production. The optimal glucose feed rate was initially detected by the use of a biocatalytically controlled glucose release (EnBase® technology) in parallel cultivations in 24-well plates with continuous monitoring of pH and dissolved oxygen. These results were confirmed in shake flasks, where the accumulation of valinomycin was highest when the specific growth rate decreased below 0.1 h(-1). This correlation was also observed for high cell density fed-batch cultivations in a lab-scale bioreactor. The bioreactor fermentation produced valinomycin with titers of more than 2 mg L(-1) based on the feeding of a concentrated glucose solution. Valinomycin production was not affected by oscillating conditions (i.e. glucose and oxygen) in a scale-down two-compartment reactor, which could mimic similar situations in industrial bioreactors, suggesting that the process is very robust and that scaling it to a larger industrial scale is a realistic scenario. Valinomycin production was scaled up from mL volumes to 10 L with consistent use of the fed-batch technology.
This work presents a robust and reliable approach for scalable bioprocess development and represents an example for the consistent development of a process for a heterologously expressed natural product towards the industrial scale.
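A standard way to realize such a glucose-limited fed-batch is an exponential feed profile that holds the specific growth rate at a setpoint; given the observation above that valinomycin accumulates once the specific growth rate falls below 0.1 h(-1), the setpoint would be chosen below that value. A sketch with illustrative parameter values (not the authors' actual process settings):

```python
import numpy as np

def exponential_feed(t, mu_set, X0, V0, Y_xs, S_feed, m_s=0.0):
    """Feed rate (L/h) that holds the specific growth rate at mu_set in a
    glucose-limited fed-batch, assuming constant yield:

        F(t) = (mu_set / Y_xs + m_s) * X0 * V0 * exp(mu_set * t) / S_feed

    X0 (g/L) and V0 (L) are biomass concentration and volume at feed start,
    Y_xs (g/g) the biomass yield on glucose, S_feed (g/L) the glucose
    concentration of the feed, m_s (g/g/h) an optional maintenance term.
    """
    return (mu_set / Y_xs + m_s) * X0 * V0 * np.exp(mu_set * t) / S_feed

# Illustrative numbers only: setpoint 0.08 1/h, 10 g/L biomass in 1 L,
# Y_xs = 0.5 g/g, 500 g/L glucose feed
F0 = exponential_feed(0.0, 0.08, 10.0, 1.0, 0.5, 500.0)
```

Because the feed grows exponentially with the biomass, the culture stays substrate-limited and the growth rate stays at the setpoint until oxygen transfer or cooling becomes limiting.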
Centuries of human-driven change in salt marsh ecosystems.
Gedan, K Bromberg; Silliman, B R; Bertness, M D
2009-01-01
Salt marshes are among the most abundant, fertile, and accessible coastal habitats on earth, and they provide more ecosystem services to coastal populations than any other environment. Since the Middle Ages, humans have manipulated salt marshes at a grand scale, altering species composition, distribution, and ecosystem function. Here, we review historic and contemporary human activities in marsh ecosystems--exploitation of plant products; conversion to farmland, salt works, and urban land; introduction of non-native species; alteration of coastal hydrology; and metal and nutrient pollution. Unexpectedly, diverse types of impacts can have a similar consequence, turning salt marsh food webs upside down, dramatically increasing top-down control. Of the various impacts, invasive species, runaway consumer effects, and sea level rise represent the greatest threats to salt marsh ecosystems. We conclude that the best way to protect salt marshes and the services they provide is through the integrated approach of ecosystem-based management.
Nanoelectronics from the bottom up.
Lu, Wei; Lieber, Charles M
2007-11-01
Electronics obtained through the bottom-up approach of molecular-level control of material composition and structure may lead to devices and fabrication strategies not possible with top-down methods. This review presents a brief summary of bottom-up and hybrid bottom-up/top-down strategies for nanoelectronics with an emphasis on memories based on the crossbar motif. First, we will discuss representative electromechanical and resistance-change memory devices based on carbon nanotube and core-shell nanowire structures, respectively. These device structures show robust switching, promising performance metrics and the potential for terabit-scale density. Second, we will review architectures being developed for circuit-level integration, hybrid crossbar/CMOS circuits and array-based systems, including experimental demonstrations of key concepts such as lithography-independent, chemically coded stochastic demultiplexers. Finally, bottom-up fabrication approaches, including the opportunity for assembly of three-dimensional, vertically integrated multifunctional circuits, will be critically discussed.
NASA Astrophysics Data System (ADS)
von Boetticher, Albrecht; Turowski, Jens M.; McArdell, Brian; Rickenmann, Dieter
2016-04-01
Debris flows are frequent natural hazards that cause massive damage. A wide range of debris flow models try to cover the complex flow behavior that arises from the inhomogeneous material mixture of water with clay, silt, sand, and gravel. The energy dissipation between moving grains depends on grain collisions and tangential friction, and the viscosity of the interstitial fine material suspension depends on the shear gradient. Thus a rheology description needs to be sensitive to the local pressure and shear rate, making the three-dimensional flow structure a key issue for flows in complex terrain. Furthermore, the momentum exchange between the granular and fluid phases should account for the presence of larger particles. We model the fine material suspension with a Herschel-Bulkley rheology law, and represent the gravel with the Coulomb-viscoplastic rheology of Domnik & Pudasaini (Domnik et al. 2013). Both composites are described by two phases that can mix; a third phase accounting for the air is kept separate to account for the free surface. The fluid dynamics are solved in three dimensions using the finite volume open-source code OpenFOAM. Computational costs are kept reasonable by using the Volume of Fluid method to solve only one phase-averaged system of Navier-Stokes equations. The Herschel-Bulkley parameters are modeled as a function of water content, volumetric solid concentration of the mixture, clay content and its mineral composition (Coussot et al. 1989, Yu et al. 2013). The gravel phase properties needed for the Coulomb-viscoplastic rheology are defined by the angle of repose of the gravel. In addition to this basic setup, larger grains and the corresponding grain collisions can be introduced by a coupled Lagrangian particle simulation. Based on the local Savage number a diffusive term in the gravel phase can activate phase separation. 
The resulting model can reproduce the sensitivity of the debris flow to water content and channel bed roughness, as illustrated with lab-scale and large-scale experiments. A large-scale natural landslide event down a curved channel is presented to show the model performance at such a scale, calibrated based on the observed surface super-elevation.
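The Herschel-Bulkley law used for the fine-material suspension, tau = tau_y + K*(shear rate)**n, is usually implemented in CFD codes as an effective viscosity with a regularization that keeps it finite as the shear rate vanishes. A sketch using the common Papanastasiou regularization (parameter values illustrative; this is not the OpenFOAM implementation itself):

```python
import numpy as np

def hb_effective_viscosity(gamma_dot, tau_y, K, n, m=1000.0):
    """Effective viscosity of a Herschel-Bulkley fluid,
    tau = tau_y + K * gamma_dot**n, written as mu_eff = tau / gamma_dot with
    Papanastasiou regularization so mu_eff stays finite as gamma_dot -> 0:

        mu_eff = K * gamma_dot**(n-1) + tau_y * (1 - exp(-m*gamma_dot)) / gamma_dot

    tau_y: yield stress (Pa); K: consistency (Pa s**n); n: flow index
    (n < 1 gives shear thinning); m: regularization time scale (s).
    """
    gamma_dot = np.maximum(np.asarray(gamma_dot, float), 1e-12)  # avoid 0-division
    return K * gamma_dot ** (n - 1.0) + tau_y * (1.0 - np.exp(-m * gamma_dot)) / gamma_dot
```

With n < 1 and a nonzero yield stress, the effective viscosity falls steeply with shear rate, which is what makes the rheology sensitive to the local shear gradient as described above.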
NASA Astrophysics Data System (ADS)
Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.
2017-12-01
It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near wellbore flow in reservoir scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretized using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimization is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and fluid flow is modelled inside the wells. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretize space, and the spatial location of the well is specified via a line vector, ensuring its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled.
Second, computational efficiency is increased by use of dynamic mesh optimization, in which an unstructured mesh adapts in space and time to key solution fields (preserving the geometry of the geologic domains), such as pressure, velocity or temperature; this also increases the quality of the solutions by placing higher resolution where required to reduce an error metric based on the Hessian of the field. This allows the local pressure drawdown to be captured without user-driven modification of the mesh. We demonstrate that the method has wide application in reservoir-scale models of geothermal fields, and regional models of groundwater resources.
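The Hessian-based error metric mentioned here typically works as follows: the interpolation error of a linear element scales as |lambda|*h**2, where lambda is an eigenvalue of the solution field's Hessian, so requiring a target error epsilon yields a local target edge length h = sqrt(epsilon/|lambda|). A minimal isotropic sketch (the cited method is anisotropic and considerably more elaborate):

```python
import numpy as np

def target_edge_lengths(hessian_eigvals, eps, h_min, h_max):
    """Isotropic target mesh edge length from the largest-magnitude eigenvalue
    of a solution field's Hessian at each point: the interpolation error of a
    linear element scales like |lambda| * h**2, so requiring error <= eps
    gives h = sqrt(eps / |lambda|), clipped to mesh-quality bounds.

    hessian_eigvals: (..., d) array of Hessian eigenvalues per point
    """
    lam = np.max(np.abs(hessian_eigvals), axis=-1)   # dominant curvature
    h = np.sqrt(eps / np.maximum(lam, 1e-30))        # guard flat regions
    return np.clip(h, h_min, h_max)                  # respect resolution limits
```

Small edges are then requested exactly where the field curves sharply, e.g. in the pressure drawdown cone around a well, while flat regions relax to the coarse bound.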
Method and apparatus of assessing down-hole drilling conditions
Hall, David R [Provo, UT; Pixton, David S [Lehl, UT; Johnson, Monte L [Orem, UT; Bartholomew, David B [Springville, UT; Fox, Joe [Spanish Fork, UT
2007-04-24
A method and apparatus for use in assessing down-hole drilling conditions are disclosed. The apparatus includes a drill string, a plurality of sensors, a computing device, and a down-hole network. The sensors are distributed along the length of the drill string and are capable of sensing localized down-hole conditions while drilling. The computing device is coupled to at least one sensor of the plurality of sensors. The data is transmitted from the sensors to the computing device over the down-hole network. The computing device analyzes data output by the sensors and representative of the sensed localized conditions to assess the down-hole drilling conditions. The method includes sensing localized drilling conditions at a plurality of points distributed along the length of a drill string during drilling operations; transmitting data representative of the sensed localized conditions to a predetermined location; and analyzing the transmitted data to assess the down-hole drilling conditions.
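The distributed-sensing architecture claimed here can be loosely illustrated as follows (the reading fields, units and threshold rule are hypothetical, not from the patent): readings gathered along the drill string over the down-hole network are analyzed at a predetermined location.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    depth_m: float        # position of the sensor along the drill string
    pressure_kpa: float   # one example of a localized down-hole condition

def assess(readings, pressure_limit_kpa):
    """Flag string segments whose sensed pressure exceeds a limit."""
    return [r.depth_m for r in readings if r.pressure_kpa > pressure_limit_kpa]

readings = [SensorReading(500.0, 900.0),
            SensorReading(1500.0, 1200.0),
            SensorReading(2500.0, 1600.0)]
flagged = assess(readings, pressure_limit_kpa=1000.0)
# depths whose readings exceed the limit
assert flagged == [1500.0, 2500.0]
```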
NASA Astrophysics Data System (ADS)
Shin, S.; Pokhrel, Y. N.
2016-12-01
Land surface models have been used to assess water resources sustainability under the changing Earth environment and increasing human water needs. Overwhelming observational records indicate that human activities have ubiquitous and pertinent effects on the hydrologic cycle; however, they have been crudely represented in large-scale land surface models. In this study, we enhance an integrated continental-scale land hydrology model named Leaf-Hydro-Flood to better represent land-water management. The model is implemented at high resolution (5 km grids) over the continental US. Surface water and groundwater are withdrawn based on actual practices. Newly added irrigation, water diversion, and dam operation schemes allow better simulations of stream flows, evapotranspiration, and infiltration. Results of various hydrologic fluxes and stores from two sets of simulations (one with and the other without human activities) are compared over a range of river basin and aquifer scales. The improved simulations of land hydrology have the potential to build a consistent modeling framework for human-water-climate interactions.
High Performance Computing for Modeling Wind Farms and Their Impact
NASA Astrophysics Data System (ADS)
Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.
2016-12-01
As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulations spanning continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development as well as improvements in wind plant performance and enhancements to the transmission infrastructure will also be discussed.
TASK 2: QUENCH ZONE SIMULATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fusselman, Steve
Aerojet Rocketdyne (AR) has developed an innovative gasifier concept incorporating advanced technologies in ultra-dense phase dry feed system, rapid mix injector, and advanced component cooling to significantly improve gasifier performance, life, and cost compared to commercially available state-of-the-art systems. A key feature of the AR gasifier design is the transition from the gasifier outlet into the quench zone, where the raw syngas is cooled to ~400°C by injection and vaporization of atomized water. Earlier pilot plant testing revealed a propensity for the original gasifier outlet design to accumulate slag in the outlet, leading to erratic syngas flow from the outlet. Subsequent design modifications successfully resolved this issue in the pilot plant gasifier. In order to gain greater insight into the physical phenomena occurring within this zone, AR developed a cold flow simulation apparatus with Coanda Research & Development with a high degree of similitude to hot fire conditions with the pilot scale gasifier design, and capable of accommodating a scaled-down quench zone for a demonstration-scale gasifier. The objective of this task was to validate similitude of the cold flow simulation model by comparison of pilot-scale outlet design performance, and to assess demonstration scale gasifier design feasibility from testing of a scaled-down outlet design. Test results did exhibit a strong correspondence with the two pilot scale outlet designs, indicating credible similitude for the cold flow simulation device. Testing of the scaled-down outlet revealed important considerations in the design and operation of the demonstration scale gasifier, in particular pertaining to the relative momentum between the downcoming raw syngas and the sprayed quench water and associated impacts on flow patterns within the quench zone.
This report describes key findings from the test program, including assessment of pilot plant configuration simulations relative to actual results on the pilot plant gasifier and demonstration plant design recommendations, based on cold flow simulation results.
NASA Astrophysics Data System (ADS)
Chen, M.; Willgoose, G. R.; Saco, P. M.
2009-12-01
This paper investigates the soil moisture dynamics over two subcatchments (Stanley and Krui) in the Goulburn River in NSW during a three year period (2005-2007) using the Hydrus 1-D unsaturated soil water flow model. The model was calibrated to the seven Stanley microcatchment sites (1 sq km site) using continuous time surface 30 cm and full profile soil moisture measurements. Soil type, leaf area index and soil depth were found to be the key parameters changing model fit to the soil moisture time series. They either shifted the time series up or down, changed the steepness of dry-down recessions, or determined the lowest point of soil moisture dry-down, respectively. Good correlations were obtained between observed and simulated soil water storage (R = 0.8-0.9) when calibrated parameters for one site were applied to the other sites. Soil type was also found to be the main determinant (after rainfall) of the mean of the modelled soil moisture time series. Simulations of the top 30 cm were better than those of the whole soil profile. Within the Stanley microcatchment, excellent soil moisture matches could be generated simply by adjusting the mean of soil moisture up or down slightly. Only minor modifications of soil properties from site to site enabled good fits for all of the Stanley sites. We extended the predictions of soil moisture to the larger spatial scale of the Krui catchment (sites up to 30 km distant from Stanley) using soil and vegetation parameters from Stanley but the locally recorded rainfall at the soil moisture measurement site. The results were encouraging (R = 0.7-0.8). These results show that it is possible to use a calibrated soil moisture model to extrapolate the soil moisture to other sites for a catchment with an area of up to 1000 km2. This paper demonstrates the potential usefulness of continuous time, point scale soil moisture (typical of that measured by permanently installed TDR probes) in predicting the soil wetness status over a catchment of significant size.
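The calibration skill quoted above (R = 0.8-0.9) is a Pearson correlation between observed and simulated soil water storage; a minimal sketch with toy data (not the paper's series) shows the computation:

```python
import numpy as np

def pearson_r(obs, sim):
    """Correlation used to score modelled vs. observed soil moisture."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.corrcoef(obs, sim)[0, 1])

# toy series: simulation tracks the observations with a linear bias,
# which Pearson R (being scale- and offset-invariant) does not penalize
obs = np.array([0.25, 0.30, 0.22, 0.35, 0.28])
sim = obs * 0.9 + 0.02
r = pearson_r(obs, sim)
assert r > 0.99
```

Because R is insensitive to mean offsets, it pairs naturally with the paper's observation that shifting the mean of the simulated series up or down can further improve the match.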
Cognitive Abilities Explain Wording Effects in the Rosenberg Self-Esteem Scale.
Gnambs, Timo; Schroeders, Ulrich
2017-12-01
There is consensus that the 10 items of the Rosenberg Self-Esteem Scale (RSES) reflect wording effects resulting from positively and negatively keyed items. The present study examined the effects of cognitive abilities on the factor structure of the RSES with a novel, nonparametric latent variable technique called local structural equation models. In a nationally representative German large-scale assessment including 12,437 students, competing measurement models for the RSES were compared: a bifactor model with a common factor and a specific factor for all negatively worded items had the optimal fit. Local structural equation models showed that the unidimensionality of the scale increased with higher levels of reading competence and reasoning, while the proportion of variance attributed to the negatively keyed items declined. Wording effects on the factor structure of the RSES seem to represent a response style artifact associated with cognitive abilities.
Nanometer scale thermometry in a living cell
Kucsko, G.; Maurer, P. C.; Yao, N. Y.; Kubo, M.; Noh, H. J.; Lo, P. K.; Park, H.; Lukin, M. D.
2014-01-01
Sensitive probing of temperature variations on nanometer scales represents an outstanding challenge in many areas of modern science and technology [1]. In particular, a thermometer capable of sub-degree temperature resolution over a large range of temperatures as well as integration within a living system could provide a powerful new tool for many areas of biological, physical and chemical research; possibilities range from the temperature-induced control of gene expression [2-5] and tumor metabolism [6] to the cell-selective treatment of disease [7,8] and the study of heat dissipation in integrated circuits [1]. By combining local light-induced heat sources with sensitive nanoscale thermometry, it may also be possible to engineer biological processes at the sub-cellular level [2-5]. Here, we demonstrate a new approach to nanoscale thermometry that utilizes coherent manipulation of the electronic spin associated with nitrogen-vacancy (NV) color centers in diamond. We show the ability to detect temperature variations down to 1.8 mK (sensitivity of 9 mK/√Hz) in an ultra-pure bulk diamond sample. Using NV centers in diamond nanocrystals (nanodiamonds, NDs), we directly measure the local thermal environment at length scales down to 200 nm. Finally, by introducing both nanodiamonds and gold nanoparticles into a single human embryonic fibroblast, we demonstrate temperature-gradient control and mapping at the sub-cellular level, enabling unique potential applications in life sciences. PMID:23903748
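The readout underlying NV thermometry converts a measured shift of the zero-field splitting D (~2.870 GHz at room temperature) into a temperature change via its temperature coefficient, dD/dT ≈ -74 kHz/K; this coefficient is a literature value quoted only for illustration, and the function below is a sketch, not code from the paper.

```python
# dD/dT for the NV zero-field splitting near room temperature,
# an approximate literature value used here purely for illustration.
DD_DT_HZ_PER_K = -74e3

def delta_temperature(delta_d_hz):
    """Convert a measured splitting shift (Hz) to a temperature change (K)."""
    return delta_d_hz / DD_DT_HZ_PER_K

# A -133.2 Hz shift corresponds to ~1.8 mK, the resolution quoted above.
dT = delta_temperature(-133.2)
assert abs(dT - 0.0018) < 1e-9
```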
Atomic switch networks—nanoarchitectonic design of a complex system for natural computing
NASA Astrophysics Data System (ADS)
Demis, E. C.; Aguilera, R.; Sillin, H. O.; Scharnhorst, K.; Sandouk, E. J.; Aono, M.; Stieg, A. Z.; Gimzewski, J. K.
2015-05-01
Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.
Mars-solar wind interaction: LatHyS, an improved parallel 3-D multispecies hybrid model
NASA Astrophysics Data System (ADS)
Modolo, Ronan; Hess, Sebastien; Mancini, Marco; Leblanc, Francois; Chaufray, Jean-Yves; Brain, David; Leclercq, Ludivine; Esteban-Hernández, Rosa; Chanteur, Gerard; Weill, Philippe; González-Galindo, Francisco; Forget, Francois; Yagi, Manabu; Mazelle, Christian
2016-07-01
In order to better represent the Mars-solar wind interaction, we present an unprecedented model achieving spatial resolution down to 50 km, a so far unexplored resolution for global kinetic models of the Martian ionized environment. Such resolution approaches the ionospheric plasma scale height. In practice, the model is derived from a first version described in Modolo et al. (2005). An important effort of parallelization has been conducted and is presented here. A better description of the ionosphere was also implemented, including ionospheric chemistry, electrical conductivities, and a drag force modeling the ion-neutral collisions in the ionosphere. This new version of the code, named LatHyS (Latmos Hybrid Simulation), is here used to characterize the impact of various spatial resolutions on simulation results. In addition, and following a global model challenge effort, we present the results of simulation runs for three cases, which allow us to address the effect of the suprathermal corona and of the solar EUV activity on the magnetospheric plasma boundaries and on the global escape. Simulation results showed that global patterns are relatively similar for the different spatial resolution runs, but the finest grid runs provide a better representation of the ionosphere and display more details of the planetary plasma dynamics. Simulation results suggest that a significant fraction of escaping O+ ions originates from below 1200 km altitude.
The Regional Vulnerability Assessment (ReVA) Program is an applied research program that is focusing on using spatial information and model results to support environmental decision-making at regional down to local scales. ReVA has developed analysis and assessment methods to...
Structural Similitude and Scaling Laws for Plates and Shells: A Review
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Starnes, J. H., Jr.; Rezaeepazhand, J.
2000-01-01
This paper deals with the development and use of scaled-down models in order to predict the structural behavior of large prototypes. The concept is fully described and examples are presented which demonstrate its applicability to beam-plates, plates and cylindrical shells of laminated construction. The concept is based on the use of field equations, which govern the response behavior of both the small model as well as the large prototype. The conditions under which the experimental data of a small model can be used to predict the behavior of a large prototype are called scaling laws or similarity conditions and the term that best describes the process is structural similitude. Moreover, since the term scaling is used to describe the effect of size on strength characteristics of materials, a discussion is included which should clarify the difference between "scaling law" and "size effect". Finally, a historical review of all published work in the broad area of structural similitude is presented for completeness.
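As a concrete instance of a scaling law of the kind reviewed here, consider a cantilever beam (a textbook sketch with illustrative numbers, not an example from the paper): matching the dimensionless group Π = PL²/(EI) between a small model and a large prototype makes the dimensionless deflection w/L identical at both scales, so model data predict the prototype response.

```python
def tip_deflection(P, L, E, I):
    """Cantilever tip deflection w = P L^3 / (3 E I) (linear elasticity)."""
    return P * L**3 / (3 * E * I)

# Prototype and a 1/10 scale model of the same material (same E).
# For geometric scaling, I ~ L^4, and matching Pi = P L^2 / (E I)
# requires the model load P_m = P_p * (L_m/L_p)^2.
E = 70e9                      # Pa (aluminium, illustrative)
L_p, L_m = 2.0, 0.2           # m
I_p = 1e-6                    # m^4 (illustrative)
I_m = I_p * (L_m / L_p)**4
P_p = 1000.0                  # N
P_m = P_p * (L_m / L_p)**2

w_m = tip_deflection(P_m, L_m, E, I_m)
w_p_predicted = w_m * (L_p / L_m)      # scaling law: w/L is invariant
w_p_direct = tip_deflection(P_p, L_p, E, I_p)
assert abs(w_p_predicted - w_p_direct) < 1e-9
```

The same field-equation-based reasoning, generalized to the governing equations of laminated plates and shells, underlies the similarity conditions reviewed in the paper.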
Nargis, Nigar; Thompson, Mary E.; Fong, Geoffrey T.; Driezen, Pete; Hussain, A. K. M. Ghulam; Ruthbah, Ummul H.; Quah, Anne C. K.; Abdullah, Abu S.
2015-01-01
Background: Smoking and passive smoking are collectively the biggest preventable cause of death in Bangladesh, with a major public health burden of morbidity, disability, mortality and community costs. The available studies of tobacco use in Bangladesh, however, do not necessarily employ the nationally representative samples needed to monitor the problem at a national scale. This paper examines the prevalence and patterns of tobacco use among adults in Bangladesh and the changes over time using large, nationally representative, comparable surveys. Methods: Using data from two enumerations of the International Tobacco Control (ITC) Bangladesh Project conducted in 2009 and 2012, prevalence estimates are obtained for all tobacco products by socio-economic determinants and sample types for over 90,000 individuals drawn from over 30,000 households. Household level sample weights are used to obtain nationally representative prevalence estimates and standard errors. Statistical tests of difference in the estimates between the two time periods are based on a logistic regression model that accounts for the complex sampling design. Using a multinomial logit model, the time trend in tobacco use status is identified to capture the effects of macro level determinants, including changes in tobacco control policies. Results: Between 2009 and 2012, overall tobacco use went down from 42.4% to 36.3%. The decline is more pronounced with respect to smokeless tobacco use than smoking. The prevalence of exclusive cigarette smoking went up from 7.2% to 10.6%; exclusive bidi smoking remained stable at around 2%; while smoking both cigarette and bidi went down from 4.6% to 1.8%; exclusive smokeless tobacco use went down from 20.2% to 16.9%; and both smokeless tobacco use and smoking went down from 8.4% to 5.1%. In general, the prevalence of tobacco use is higher among men, increases from younger to older age groups, and is higher among poorer people. 
Smoking prevalence is the highest among the slum population, followed by the tribal population, the national population and the border area population, suggesting a greater burden of tobacco use among the disadvantaged groups. Conclusions: The overall decline in tobacco use can be viewed as a structural shift in the tobacco market in Bangladesh from low value products such as bidi and smokeless tobacco to high value cigarettes, which is expected with the growth in income and purchasing power of the general population. Despite the reduction in overall tobacco use, the male smoking prevalence in Bangladesh is still high at 37%. The world average of daily smoking among men is 31.1%. The Tobacco Control Act 2005 and the Amendment have yet to make a significant impact in curbing tobacco usage in Bangladesh. The findings in this paper further suggest that tobacco control policies in Bangladesh need to include targeted interventions to restrain the use of particular types of tobacco products among specific demographic and socio-economic groups of the population, such as smoked tobacco among men, smokeless tobacco among women, and both smoked and smokeless tobacco among those living in rural areas, those of low socio-economic status and those belonging to the tribal and the slum populations. PMID:26559051
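The wave-to-wave comparison above rests on design-weighted prevalence estimates and significance tests. A heavily simplified sketch follows; the z-test here ignores the complex sampling design that the study's logistic regression accounts for, and the sample sizes are illustrative, not the survey's.

```python
import math

def weighted_prevalence(flags, weights):
    """Design-weighted prevalence estimate: sum(w_i * y_i) / sum(w_i)."""
    return sum(w * y for w, y in zip(weights, flags)) / sum(weights)

def two_wave_z(p1, n1, p2, n2):
    """Naive two-proportion z statistic (assumes simple random sampling,
    unlike the survey-weighted logistic regression used in the study)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative magnitudes from the abstract: 42.4% (2009) vs 36.3% (2012)
z = two_wave_z(0.424, 45000, 0.363, 45000)
assert z > 1.96   # a drop this size in samples this large is highly significant
```

With complex survey designs, the naive standard error above understates the true uncertainty, which is why the study uses design-aware regression instead.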
NASA Astrophysics Data System (ADS)
Tian, You; Zhao, Dapeng
2012-06-01
We used 190,947 high-quality P-wave arrival times from 8421 local earthquakes and 1,098,022 precise travel-time residuals from 6470 teleseismic events recorded by the EarthScope/USArray transportable array to determine a detailed three-dimensional P-wave velocity model of the crust and mantle down to 1000 km depth under the western United States (US). Our tomography revealed strong heterogeneities in the crust and upper mantle under the western US. Prominent high-velocity anomalies are imaged beneath the Idaho Batholith, central Colorado Plateau, Cascadia subduction zone, stable North American Craton, Transverse Ranges, and southern Sierra Nevada. Prominent low-velocity anomalies are imaged at depths of 0-200 km beneath the Snake River Plain, which may represent small-scale convection beneath the western US. The low-velocity structure deviates variably from a narrow vertical plume conduit extending down to ~1000 km depth, suggesting that the Yellowstone hotspot may have a lower-mantle origin. The Juan de Fuca slab is imaged as a dipping high-velocity anomaly under the western US. The slab geometry and its subducted depth vary in the north-south direction. In the southern parts the slab may have subducted down to >600 km depth. A "slab hole" is revealed beneath Oregon, which shows up as a low-velocity anomaly at depths of ~100 to 300 km. The formation of the slab hole may be related to the Newberry magmatism. The removal of the flat subducted Farallon slab may have triggered the vigorous magmatism in the Basin and Range and the southern part of the Rocky Mountains and also resulted in the uplift of the Colorado Plateau and Rocky Mountains.
Thermal regime of an ice-wedge polygon landscape near Barrow, Alaska
NASA Astrophysics Data System (ADS)
Daanen, R. P.; Liljedahl, A. K.
2017-12-01
Tundra landscapes are changing all over the circumpolar Arctic due to permafrost degradation. Soil cracking and infilling of meltwater repeated over thousands of years form ice wedges, which produce the characteristic surface pattern of ice-wedge polygon tundra. Rapid top-down thawing of massive ice leads to differential ground subsidence and sets in motion a series of short- and long-term hydrological and ecological changes. Subsequent responses in the soil thermal regime drive further permafrost degradation and/or stabilization. Here we explore the soil thermal regime of an ice-wedge polygon terrain near Utqiagvik (formerly Barrow) with the Water balance Simulation Model (WaSiM). WaSiM is a hydro-thermal model developed to simulate the water balance at the watershed scale and was recently refined to represent the hydrological processes unique to cold climates. WaSiM includes modules that represent surface runoff, evapotranspiration, groundwater, and soil moisture, while active layer freezing and thawing is based on a direct coupling of hydrological and thermal processes. A new snow module expands the vadose zone calculations into the snow pack, allowing the model to simulate the snow as a porous medium similar to soil. Together with a snow redistribution algorithm based on local topography, this latest addition to WaSiM makes simulation of the ground thermal regime much more accurate during winter months. Effective representation of ground temperatures during winter is crucial in the simulation of the permafrost thermal regime and allows for refined predictions of future ice-wedge degradation or stabilization.
NASA Astrophysics Data System (ADS)
Rizzo, R. E.; Healy, D.; De Siena, L.
2015-12-01
The success of any model prediction is largely dependent on the accuracy with which its parameters are known. In characterising fracture networks in naturally fractured rocks, the main issues relate to the difficulties in accurately up- and down-scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture attributes (fracture lengths, apertures, orientations and densities) represents a fundamental step which can aid the estimation of permeability and fluid flow, which are of primary importance in a number of contexts ranging from hydrocarbon production in fractured reservoirs and reservoir stimulation by hydrofracturing, to geothermal energy extraction and deeper Earth systems, such as earthquakes and ocean floor hydrothermal venting. This work focuses on linking fracture data collected directly from outcrops to permeability estimation and fracture network modelling. Outcrop studies can supplement the limited data inherent to natural fractured systems in the subsurface. The study area is a highly fractured upper Miocene biosiliceous mudstone formation cropping out along the coastline north of Santa Cruz (California, USA). These unique outcrops expose a recently active bitumen-bearing formation representing a geological analogue of a fractured top seal. In order to validate field observations as useful analogues of subsurface reservoirs, we describe a methodology of statistical analysis for more accurate probability distributions of fracture attributes, using Maximum Likelihood Estimators. These procedures aim to understand whether the average permeability of a fracture network can be predicted while reducing its uncertainties, and whether outcrop measurements of fracture attributes can be used directly to generate statistically identical fracture network models.
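Maximum Likelihood Estimation of a fracture-attribute distribution can be sketched for an exponential length model, one of the standard candidate distributions for such data (the distribution choice and the synthetic sample are illustrative, not the paper's data): the MLE of the exponential rate is simply the reciprocal of the sample mean.

```python
import random

def mle_exponential(lengths):
    """MLE for an exponential length model: lambda_hat = n / sum(x),
    i.e. the reciprocal of the sample mean."""
    return len(lengths) / sum(lengths)

# Synthetic "fracture lengths" drawn from a known exponential distribution,
# so we can check that the estimator recovers the true rate.
random.seed(0)
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(20000)]
lam = mle_exponential(sample)
assert abs(lam - true_rate) < 0.1
```

In practice, candidate models (exponential, log-normal, power law) are fitted by MLE and compared with likelihood-based criteria before being used to populate a fracture network model.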
Anarchic Yukawas and top partial compositeness: the flavour of a successful marriage
NASA Astrophysics Data System (ADS)
Cacciapaglia, Giacomo; Cai, Haiying; Flacke, Thomas; Lee, Seung J.; Parolini, Alberto; Serôdio, Hugo
2015-06-01
The top quark can be naturally singled out from other fermions in the Standard Model due to its large mass, of the order of the electroweak scale. We follow this reasoning in models of pseudo Nambu-Goldstone boson composite Higgs, which may derive from an underlying confining dynamics. We consider a new class of flavour models, where the top quark obtains its mass via partial compositeness, while the lighter fermions acquire their masses by a deformation of the dynamics generated at a high flavour scale. One interesting feature of such a scenario is that it can avoid all the flavour constraints without the need for flavour symmetries, since the flavour scale can be pushed high enough. We show that both flavour-conserving and flavour-violating constraints can be satisfied with top partial compositeness without invoking any flavour symmetry for the up-type sector, in the case of the minimal SO(5)/SO(4) coset with top partners in the four-plet and singlet of SO(4). In the down-type sector, some degree of alignment is required if all down-type quarks are elementary. We show that taking the bottom quark partially composite provides a dynamical explanation for the hierarchy causing this alignment. We present explicit realisations of this mechanism which do not require additional bottom partner fields. Finally, these conclusions are generalised to scenarios with non-minimal cosets and top partners in larger representations.
REPRESENTATION OF ATMOSPHERIC MOTION IN MODELS OF REGIONAL-SCALE AIR POLLUTION
A method is developed for generating ensembles of wind fields for use in regional scale (1000 km) models of transport and diffusion. The underlying objective is a methodology for representing atmospheric motion in applied air pollution models that permits explicit treatment of th...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, J. Y.; Riley, W. J.
We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented so as to ensure mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter includes two steps: (1) formulation of the biogeochemical processes with a matrix of stoichiometric coefficients and (2) application of Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches that are implemented in many existing models (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)) of carbon and nutrient interactions, which are error prone when adding new processes, even for experienced modelers. Through an example implementation with a CENTURY-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions, (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed, and (3) successfully avoided the potential conceptual ambiguities that are implied by those ad hoc down-regulation approaches. We expect our approach will make future biogeochemical models easier to improve and more robust.
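The two-step recipe above (a stoichiometric matrix, then Liebig's law of the minimum) can be sketched as follows; this is a minimal reading of the approach with function and variable names assumed, not code from the paper.

```python
import numpy as np

def limit_fluxes(S, r, y, dt):
    """Mass-conserving, non-negativity-preserving flux limiter.

    S : (n_reactions, n_substrates) stoichiometric matrix
        (negative entries consume a substrate)
    r : potential reaction rates; y : substrate pools; dt : time step.
    Each substrate's aggregate demand over the step is compared with its
    pool, and every reaction is scaled by the minimum (Liebig) of the
    limitation factors of the substrates it consumes.
    """
    demand = np.maximum(0.0, -S * r[:, None] * dt)   # per-reaction demand
    total = demand.sum(axis=0)                        # per-substrate demand
    f = np.where(total > y, y / np.maximum(total, 1e-300), 1.0)
    scale = np.array([f[S_i < 0].min() if (S_i < 0).any() else 1.0
                      for S_i in S])
    return r * scale

# Two reactions competing for one scarce substrate: pool = 0.5 units,
# combined demand = 2.0 units over the step, so both rates are scaled
# by 0.25 and the pool is exactly exhausted instead of going negative.
S = np.array([[-1.0], [-1.0]])
r = np.array([1.0, 1.0])
r_lim = limit_fluxes(S, r, y=np.array([0.5]), dt=1.0)
assert np.allclose(r_lim, [0.25, 0.25])
```

Unlike simple clipping (which zeroes a pool after the fact and violates mass balance), scaling the rates before integration conserves mass by construction.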
A model for allometric scaling of mammalian metabolism with ambient heat loss.
Kwak, Ho Sang; Im, Hong G; Shim, Eun Bo
2016-03-01
Allometric scaling, the dependence of biological traits or processes on body size, is a long-standing subject in biological science. However, no previous study has considered heat loss to the ambient environment, together with an insulation layer representing mammalian skin and fur, in deriving the scaling law of metabolism. A simple heat transfer model is proposed to analyze the allometry of mammalian metabolism. The present model extends existing studies by incorporating various external heat transfer parameters and additional insulation layers. The model equations were solved both numerically and with an analytic heat balance approach. A general observation is that the present heat transfer model predicted the 2/3 surface scaling law, which is primarily attributed to the dependence of the surface area on the body mass. External heat transfer effects introduced deviations in the scaling law, mainly due to natural convection heat transfer, which becomes more prominent at smaller mass. These deviations resulted in a slight modification of the scaling exponent to a value < 2/3. The finding that additional radiative heat loss and an outer insulation fur layer attenuate these deviations and render the scaling law closer to 2/3 provides in silico evidence for a functional impact of heat transfer mode on the allometric scaling law of mammalian metabolism.
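As a minimal numerical illustration of the geometric core of the 2/3 law, one can compute steady surface heat loss for spherical bodies of uniform density and fit the scaling exponent on a log-log plot; the coefficient values below are arbitrary placeholders, not parameters from the paper:

```python
import numpy as np

def heat_loss(mass, h=10.0, dT=10.0, rho=1000.0):
    """Steady-state surface heat loss B = h * A * dT for a spherical
    body of the given mass (kg), with assumed heat transfer coefficient
    h (W m^-2 K^-1), body-ambient temperature difference dT (K), and
    density rho (kg m^-3)."""
    r = (3.0 * mass / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    area = 4.0 * np.pi * r ** 2        # surface area scales as mass^(2/3)
    return h * area * dT

masses = np.logspace(-2, 3, 50)        # 10 g to 1000 kg
exponent = np.polyfit(np.log(masses), np.log(heat_loss(masses)), 1)[0]
```

With purely surface-limited loss the fitted exponent is exactly 2/3; making h itself mass-dependent (e.g., stronger natural convection at small size) is what shifts the exponent below 2/3 in the paper's model.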
NASA Astrophysics Data System (ADS)
Rogers, K. G.; Brondizio, E.; Roy, K.; Syvitski, J. P.
2015-12-01
The increased vulnerability of deltaic communities to coastal flooding as a result of upstream engineering has been acknowledged for decades. What has received less attention is the sensitivity of deltas to the interactions between river basin modifications and local scale cultivation and irrigation. Combined with reduced river and sediment discharge, soil and water management practices in coastal areas may exacerbate the risk of tidal flooding, erosion of arable land, and salinization of soils and groundwater associated with sea level rise. This represents a cruel irony to smallholder subsistence farmers whose priorities are food, water and economic security, rather than sustainability of the environment. Such issues challenge disciplinary approaches and require integrated social-biophysical models able to understand and diagnose these complex relationships. This study applies a new conceptual framework to define the relevant social and physical units operating on the common pool resources of climate, water and sediment in the Bengal Delta (Bangladesh). The new framework will inform development of a nested geospatial analysis and a coupled model to identify multi-scale social-biophysical feedbacks associated with smallholder soil and water management practices, coastal dynamics, basin modification, and climate vulnerability in tropical deltas. The framework was used to create household surveys for collecting data on climate perceptions, land and water management, and governance. Test surveys were administered to rural farmers in 14 villages during a reconnaissance visit to coastal Bangladesh. Initial results demonstrate complexity and heterogeneity at the local scale in both biophysical conditions and decision-making. More importantly, the results illuminate how national and geopolitical-level policies scale down to impact local-level environmental and social stability in communities already vulnerable to coastal flooding. 
Here, we will discuss components of the new conceptual framework, present results from the test surveys, and demonstrate how the framework can be dynamically adapted to reflect complex interactions at multiple scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbird, David; Sitaraman, Hariswaran; Stickel, Jonathan
If advanced biofuels are to measurably displace fossil fuels in the near term, they will have to operate at levels of scale, efficiency, and margin unprecedented in the current biotech industry. For aerobically grown products in particular, scale-up is complex, and the practical size, cost, and operability of extremely large reactors are not well understood. Put simply, the problem of how to attain fuel-class production scales comes down to cost-effective delivery of oxygen at high mass transfer rates and low capital and operating costs. To that end, very large reactor vessels (>500 m3) are proposed in order to achieve favorable economies of scale. Additionally, techno-economic evaluation indicates that bubble-column reactors are more cost-effective than stirred-tank reactors for many low-viscosity cultures. In order to advance the design of extremely large aerobic bioreactors, we have performed computational fluid dynamics (CFD) simulations of bubble-column reactors. A multiphase Euler-Euler model is used to explicitly account for the spatial distribution of air (i.e., gas bubbles) in the reactor. Expanding on the existing bioreactor CFD literature (typically focused on the hydrodynamics of bubbly flows), our simulations include interphase mass transfer of oxygen and a simple phenomenological reaction representing the uptake and consumption of dissolved oxygen by submerged cells. The simulations reproduce the expected flow profiles, with net upward flow in the center of the column and downward flow near the wall. At high simulated oxygen uptake rates (OUR), oxygen-depleted regions can be observed in the reactor. By increasing the gas flow to enhance mixing and eliminate depleted areas, a maximum oxygen transfer rate (OTR) is obtained as a function of superficial velocity. These insights regarding minimum superficial velocity and maximum reactor size are incorporated into NREL's larger techno-economic models to supplement standard reactor design equations.
Coalescing colony model: Mean-field, scaling, and geometry
NASA Astrophysics Data System (ADS)
Carra, Giulia; Mallick, Kirone; Barthelemy, Marc
2017-12-01
We analyze the coalescing model where a `primary' colony grows and randomly emits secondary colonies that spread and eventually coalesce with it. This model describes population proliferation in theoretical ecology and tumor growth, and is also of great interest for modeling urban sprawl. Assuming the primary colony to be always circular of radius r(t) and the emission rate proportional to r(t)^θ, where θ > 0, we derive the mean-field equations governing the dynamics of the primary colony, calculate the scaling exponents versus θ, and compare our results with numerical simulations. We then critically test the validity of the circular approximation for the colony shape and show that it is sound for a constant emission rate (θ = 0). However, when the emission rate is proportional to the perimeter, the circular approximation breaks down and the roughness of the primary colony cannot be discarded, thus modifying the scaling exponents.
NASA Astrophysics Data System (ADS)
Wang, Rong; Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu
2018-02-01
There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation-constrained estimate, which is several times larger than the bottom-up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry-transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing models at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to introduce a global spatial representativeness error of 30%, a positive bias in the current top-down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error is only 7% for the Global Atmosphere Watch network, because its sites are located such that almost equal numbers of sites have positive and negative representativeness errors.
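The representativeness error in question is, schematically, the relative difference between the fine-resolution value at an observing site and the mean of the coarse model cell containing it. A toy sketch (the function and the synthetic field are hypothetical, not the study's data):

```python
import numpy as np

def site_representativeness_error(fine_field, site_row, site_col, block=20):
    """Relative error from representing a coarse model cell by the value
    at a single site inside it. With 0.1-degree fine cells, block=20
    corresponds to a 2 x 2 degree coarse cell."""
    r0 = (site_row // block) * block
    c0 = (site_col // block) * block
    cell_mean = fine_field[r0:r0 + block, c0:c0 + block].mean()
    return (fine_field[site_row, site_col] - cell_mean) / cell_mean

# A site sitting on an emission hot spot reads higher than its cell mean,
# so the coarse model appears biased low relative to the observation.
aaod = np.ones((40, 40)) * 0.01
aaod[5, 5] = 0.05                     # hypothetical hot spot at the site
err = site_representativeness_error(aaod, 5, 5)
```

A site on a hot spot reads high against its cell mean (a positive error, as for many AERONET sites), while a network whose sites straddle hot spots and clean areas averages toward zero, as reported for GAW.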
NASA Technical Reports Server (NTRS)
Dawson, C. R.; Omar, E.
1977-01-01
Wind tunnel test data are analysed to determine ground effects and the effectiveness of the aerodynamic control surfaces, providing a technology base for a Navy type A V/STOL airplane. Three 14 cm (5.5 inch) turbo-powered simulators were used to power the model, which was tested primarily in the following configurations: (1) VTOL, with flaps deployed, gear down, and engines tilted to 80 deg, 90 deg, and 95 deg; (2) STOL, with flaps and gear down and engines tilted to 50 deg; and (3) loiter, with flaps and gear up and L/C nacelles off. Data acquired during the tests are included as an appendix.
NASA Astrophysics Data System (ADS)
Horstemeyer, M. F.
This review of multiscale modeling covers a brief history of the various multiscale methodologies related to solid materials and the associated experimental influences, the influence of multiscale modeling on different disciplines, and some examples of multiscale modeling in the design of structural components. Although computational multiscale modeling methodologies were developed in the late twentieth century, the fundamental notions of multiscale modeling have been around since da Vinci studied different sizes of ropes. The recent rapid growth in multiscale modeling is the result of the confluence of parallel computing power, experimental capabilities to characterize structure-property relations down to the atomic level, and theories that admit multiple length scales. The ubiquitous research focused on multiscale modeling has reached across different disciplines (solid mechanics, fluid mechanics, materials science, physics, mathematics, biology, and chemistry), different regions of the world (most continents), and different length scales (from atoms to autos).
NASA Astrophysics Data System (ADS)
Wang, Chuanjie; Liu, Huan; Zhang, Ying; Chen, Gang; Li, Yujie; Zhang, Peng
2017-12-01
Micro-forming is a promising technology for manufacturing micro metal parts. However, traditional metal-forming theories fail to analyze plastic deformation behavior at the micro-scale because of the size effect arising from scaling the part geometry down from macro-scale to micro-scale. To reveal the mechanism of this size effect, the geometrical parameters and the variation of microstructure they induce need to be integrated into constitutive models that consider the free surface effect. In this research, the variations of dislocation cell diameter with original grain size, strain, and location (surface grain or inner grain) are derived from previously published data. The overall flow stress of the micro specimen is then determined by employing the surface layer model and the relationship between dislocation cell diameter and flow stress. The newly developed constitutive model considers the original grain size, geometrical dimension, and strain simultaneously. Flow stresses from micro-tensile tests of thin sheets are compared with results calculated using the developed model; the calculated and experimental results match well, verifying the validity of the model.
Continuously distributed magnetization profile for millimeter-scale elastomeric undulatory swimming
NASA Astrophysics Data System (ADS)
Diller, Eric; Zhuang, Jiang; Zhan Lum, Guo; Edwards, Matthew R.; Sitti, Metin
2014-04-01
We have developed a millimeter-scale magnetically driven swimming robot for untethered motion at mid to low Reynolds numbers. The robot is propelled by continuous undulatory deformation, which is enabled by the distributed magnetization profile of a flexible sheet. We demonstrate control of a prototype device and measure deformation and speed as a function of magnetic field strength and frequency. Experimental results are compared with simple magnetoelastic and fluid propulsion models. The presented mechanism provides an efficient remote actuation method at the millimeter scale that may be suitable for further scaling down in size for micro-robotics applications in biotechnology and healthcare.
Experimental investigation of the crashworthiness of scaled composite sailplane fuselages
NASA Technical Reports Server (NTRS)
Kampf, Karl-Peter; Crawley, Edward F.; Hansman, R. John, Jr.
1989-01-01
The crash dynamics and energy absorption of composite sailplane fuselage segments undergoing nose-down impact were investigated. More than 10 quarter-scale structurally similar test articles, typical of high-performance sailplane designs, were tested. Fuselage segments were fabricated from combinations of fiberglass, graphite, Kevlar, and Spectra fabric materials. Quasistatic and dynamic tests were conducted. The quasistatic tests were found to replicate the strain history and failure modes observed in the dynamic tests. Failure modes of the quarter-scale model were qualitatively compared with full-scale crash evidence and quantitatively compared with current design criteria. By combining material and structural improvements, substantial increases in crashworthiness were demonstrated.
LHC-scale left-right symmetry and unification
NASA Astrophysics Data System (ADS)
Arbeláez, Carolina; Romão, Jorge C.; Hirsch, Martin; Malinský, Michal
2014-02-01
We construct a comprehensive list of nonsupersymmetric standard model extensions with a low-scale left-right (LR)-symmetric intermediate stage that may be obtained as simple low-energy effective theories within a class of renormalizable SO(10) grand unified theories. Unlike the traditional "minimal" LR models, many of our example settings support perfect gauge coupling unification even if the LR scale is in the LHC domain, at the price of only (a few copies of) one or two types of extra fields pulled down to the TeV-scale ballpark. We discuss the main aspects of potentially realistic model building conforming to the basic constraints from the quark- and lepton-sector flavor structure, proton decay limits, etc. We pay special attention to the theoretical uncertainties related to the limited information about the underlying unified framework in the bottom-up approach, in particular to their role in the possible extraction of the LR-breaking scale. We observe a general tendency for models without new colored states in the TeV domain to be on the verge of incompatibility with the proton stability constraints.
Effects of De-spinning and Lithosphere Thickening on the Lunar Fossil Bulge
NASA Astrophysics Data System (ADS)
Zhong, S.; Qin, C.; Phillips, R. J.
2016-12-01
The Moon has abnormally large degree-2 anomalies in gravity and shape (or bulge). The degree-2 gravity coefficients C20 and C22 are, respectively, 22 and 7 times greater than expected from the Moon's current orbital and rotational states. One prevalent hypothesis, the fossil bulge hypothesis, interprets the current degree-2 shape as a remnant of the bulge that froze in when the Moon was closer to the Earth, with stronger tidal and rotational potentials. However, the dynamic feasibility of the freeze-in process has never been quantitatively examined. In this study we explore, using numerical models of viscoelastic deformation with time-dependent rotational potential and lithospheric rheology, how the degree-2 bulge would evolve with time as the early Moon cools and migrates away from the Earth. Our model includes two competing effects: 1) a lithosphere thickening with time through cooling, which helps maintain the bulge, and 2) de-spinning through tidal locking, which tends to reduce the bulge. In our model, a strong lithosphere is represented by a topmost layer that is orders of magnitude more viscous than the mantle. Benchmark results show that our numerical model computes the bulge size accurately. Our calculations start with a bulge in hydrostatic equilibrium with the initial rotation rate. The bulge shrinks with time as the Moon spins down, while the thickening lithosphere can support a certain amount of bulge. We find that the final size of the bulge is controlled by the relative time scales of the two processes. In the limiting cases, if the time scale of de-spinning were much larger than that of lithosphere thickening, the bulge would be largely maintained; conversely, it would be reduced significantly. We will consider more realistic time scales for these two processes, as well as the effects of subsequent processes after lunar magma ocean crystallization, such as large impacts and mare volcanism.
NASA Astrophysics Data System (ADS)
Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; Gonzalez-Nicolas, Ana; Illangasekare, Tissa
2017-01-01
Incorporating hysteresis into models is important to accurately capture two-phase flow behavior when porous media systems undergo cycles of drainage and imbibition, as in the injection and post-injection redistribution of CO2 during geological CO2 storage (GCS). In the traditional model of two-phase flow, existing constitutive models that parameterize the hysteresis associated with these processes are generally based on empirical relationships. This manuscript presents the development and testing of mathematical hysteretic capillary pressure-saturation-relative permeability models with the objective of more accurately representing the redistribution of the fluids after injection. The constitutive models are developed by relating macroscopic variables to the basic physics of two-phase capillary displacements at the pore scale and to void space distribution properties. The modeling approach, with the developed constitutive models with and without hysteresis as input, is tested against intermediate-scale flow cell experiments to assess the ability of the models to represent movement and capillary trapping of immiscible fluids under macroscopically homogeneous and heterogeneous conditions. The hysteretic two-phase flow model predicted the overall plume migration and distribution during and after injection reasonably well and represented the post-injection behavior of the plume more accurately than the nonhysteretic models. Based on the results in this study, neglecting hysteresis in the constitutive models of traditional two-phase flow theory can seriously overpredict or underpredict the injected fluid distribution post-injection under both homogeneous and heterogeneous conditions, depending on the selected value of the residual saturation in the nonhysteretic models.
Carbon Transformations and Source - Sink Dynamics along a River, Marsh, Estuary, Ocean Continuum
NASA Astrophysics Data System (ADS)
Anderson, I. C.; Crosswell, J.; Czapla, K.; Van Dam, B.
2017-12-01
Estuaries, the transition zone between land and the coastal ocean, are highly dynamic systems in which carbon sourced from watersheds, marshes, the atmosphere, and the ocean may be transformed, sequestered, or exported. The net fate of carbon in estuaries, governed by the interactions of biotic and physical drivers varying on spatial and temporal scales, is currently uncertain because of limited observational data. In this study, conducted in a temperate, microtidal, and shallow North Carolina (USA) estuary, carbon exchanges via river, tributary, and fringing salt marsh, air-water fluxes, sediment C accumulation, and metabolism were monitored over two years with sharply different amounts of rainfall. Air-water CO2 fluxes and metabolic variables were simultaneously measured in channel and shoal by conducting high-resolution surveys at dawn, dusk, and the following dawn. Marsh CO2 exchanges, sediment C inputs, and lateral exports of DIC and DOC were also measured. Carbon flows between estuary regions and export to the coastal ocean were calculated by quantifying residual transport of DIC and TOC down-estuary as flows were modified by sources, sinks, and internal transformations. Variation in metabolic rates and in CO2, TOC, and DIC exchanges was large when determined for short time and limited spatial scales. However, when scaled to annual and whole-estuary scales, variation tended to decrease because of counteracting metabolic rates and fluxes between channel and shoal or between seasons. Although the salt marshes overall accumulated OC, they were a negligible source of DIC and DOC to the estuary, and net inputs of C to the marsh were mainly derived from sediment OC. These results, as observed in other observational studies of estuaries, show that riverine input, light, temperature, and metabolism are major controls on carbon cycling.
Comparison of our results with other types of estuaries varying in depth, latitude, and nutrification reveals large discrepancies, underscoring the limitations of current sampling designs, models, and datasets in representing system-scale diversity; thus, a more practical approach may be to choose a small number of representative coastal systems, coordinate research efforts to quantify the relevant fluxes, and constrain the range of environmental conditions that influence carbon cycling.
Yang, X I A; Meneveau, C
2017-04-13
In recent years, there has been growing interest in large-eddy simulation (LES) modelling of atmospheric boundary layers interacting with arrays of wind turbines on complex terrain. However, such terrain typically contains geometric features and roughness elements reaching down to small scales that typically cannot be resolved numerically. Thus subgrid-scale models for the unresolved features of the bottom roughness are needed for LES. Such knowledge is also required to model the effects of the ground surface 'underneath' a wind farm. Here we adapt a dynamic approach to determine subgrid-scale roughness parametrizations and apply it for the case of rough surfaces composed of cuboidal elements with broad size distributions, containing many scales. We first investigate the flow response to ground roughness of a few scales. LES with the dynamic roughness model which accounts for the drag of unresolved roughness is shown to provide resolution-independent results for the mean velocity distribution. Moreover, we develop an analytical roughness model that accounts for the sheltering effects of large-scale on small-scale roughness elements. Taking into account the shading effect, constraints from fundamental conservation laws, and assumptions of geometric self-similarity, the analytical roughness model is shown to provide analytical predictions that agree well with roughness parameters determined from LES.This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Bloom, A. A.; Exbrayat, J. F.; van der Velde, I.; Peters, W.; Williams, M.
2014-12-01
Large uncertainties remain in terrestrial carbon flux estimates on a global scale. In particular, the strongly coupled dynamics between net ecosystem productivity and disturbance C losses are poorly constrained. To gain an improved understanding of ecosystem C dynamics from regional to global scales, we apply a Markov Chain Monte Carlo based model-data-fusion approach within the CArbon DAta-MOdel fraMework (CARDAMOM). We assimilate MODIS LAI, burned area, and plant-trait data, and use the Harmonized World Soil Database (HWSD) and maps of above-ground biomass as prior knowledge for initial conditions. We optimize model parameters based on (a) globally spanning observations and (b) ecological and dynamic constraints that force single parameter values and parameter inter-dependencies to be representative of real-world processes. We determine the spatial and temporal dynamics of major terrestrial C fluxes and model parameter values on a global scale (GPP = 123 +/- 8 Pg C yr-1 and NEE = -1.8 +/- 2.7 Pg C yr-1). We further show that incorporating disturbance fluxes, and accounting for their instantaneous or delayed effect, is of critical importance in constraining global C cycle dynamics, particularly in the tropics. In a higher-resolution case study centred on the Amazon Basin, we show how fires not only trigger large instantaneous emissions of burned matter, but are also responsible for a sustained reduction of up to 50% in plant uptake following the depletion of biomass stocks. The combination of these two fire-induced effects leads to a 1 g C m-2 d-1 reduction in the strength of the net terrestrial carbon sink. Through our simulations at regional and global scales, we advocate the need to assimilate disturbance metrics in global terrestrial carbon cycle models to bridge the gap between globally spanning terrestrial carbon cycle data and the full dynamics of the ecosystem C cycle.
Disturbances are especially important because their rapid occurrence may have long-term effects on ecosystems. Our synthetic simulations show that while tropical ecosystem uptake may reach pre-disturbance levels after a decade, biomass stocks would most likely need more than a century to recover from a single extreme disturbance event.
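The Markov Chain Monte Carlo model-data fusion at the heart of this approach can be illustrated with a minimal Metropolis sampler fitting one parameter of a toy flux model; the model, unit observation error, and step size here are hypothetical stand-ins for CARDAMOM's far richer setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(param, obs, model):
    # Gaussian misfit with unit observation error (assumed for simplicity).
    resid = obs - model(param)
    return -0.5 * np.sum(resid ** 2)

def metropolis(obs, model, p0, n_steps=5000, step=0.1):
    """Minimal random-walk Metropolis sampler for a one-parameter model."""
    p, lp = p0, log_likelihood(p0, obs, model)
    chain = []
    for _ in range(n_steps):
        q = p + step * rng.normal()          # propose
        lq = log_likelihood(q, obs, model)
        if np.log(rng.uniform()) < lq - lp:  # accept/reject
            p, lp = q, lq
        chain.append(p)
    return np.array(chain)

# Toy "flux model": obs = k * drivers, with true k = 2.0
drivers = np.linspace(0.0, 1.0, 20)
obs = 2.0 * drivers
chain = metropolis(obs, lambda k: k * drivers, p0=0.5)
```

In a framework like CARDAMOM the same accept/reject logic runs over many parameters at once, and the ecological and dynamic constraints would enter as additional prior terms in the log-probability.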
Multi-scale hydrometeorological observation and modelling for flash-flood understanding
NASA Astrophysics Data System (ADS)
Braud, I.; Ayral, P.-A.; Bouvier, C.; Branger, F.; Delrieu, G.; Le Coz, J.; Nord, G.; Vandervaere, J.-P.; Anquetin, S.; Adamovic, M.; Andrieu, J.; Batiot, C.; Boudevillain, B.; Brunet, P.; Carreau, J.; Confoland, A.; Didon-Lescot, J.-F.; Domergue, J.-M.; Douvinet, J.; Dramais, G.; Freydier, R.; Gérard, S.; Huza, J.; Leblois, E.; Le Bourgeois, O.; Le Boursicaud, R.; Marchand, P.; Martin, P.; Nottale, L.; Patris, N.; Renard, B.; Seidel, J.-L.; Taupin, J.-D.; Vannier, O.; Vincendon, B.; Wijbrans, A.
2014-02-01
This paper presents a coupled observation and modelling strategy aimed at improving the understanding of processes triggering flash floods. This strategy is illustrated for the Mediterranean area using two French catchments (Gard and Ardèche) larger than 2000 km2. The approach is based on the monitoring of nested spatial scales: (1) the hillslope scale, where processes influencing runoff generation and its concentration can be tackled; (2) the small-to-medium catchment scale (1-100 km2), where the impact of the network structure and of the spatial variability of rainfall, landscape, and initial soil moisture can be quantified; (3) the larger scale (100-1000 km2), where river routing and flooding processes become important. These observations are part of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) Enhanced Observation Period (EOP) and last four years (2012-2015). In terms of hydrological modelling, the objective is to set up models at the regional scale while addressing small and generally ungauged catchments, which is the scale of interest for flood risk assessment. Top-down and bottom-up approaches are combined, and the models are used as "hypothesis testing" tools by coupling model development with data analyses, in order to incrementally evaluate the validity of model hypotheses. The paper first presents the rationale behind the experimental setup and the instrumentation itself. Second, we discuss the associated modelling strategy. Results illustrate the potential of the approach in advancing our understanding of flash flood processes at various scales.
A Method for Label-Free, Differential Top-Down Proteomics.
Ntai, Ioanna; Toby, Timothy K; LeDuc, Richard D; Kelleher, Neil L
2016-01-01
Biomarker discovery in translational research has relied heavily on labeled and label-free quantitative bottom-up proteomics. Here, we describe a new approach to biomarker studies that utilizes high-throughput top-down proteomics and is the first to offer whole-protein characterization and relative quantitation within the same experiment. Using yeast as a model, we report procedures for a label-free approach to quantify the relative abundance of intact proteins ranging from 0 to 30 kDa in two different states. In this chapter, we describe the integrated methodology for the large-scale profiling and quantitation of the intact proteome by liquid chromatography-mass spectrometry (LC-MS) without the need for metabolic or chemical labeling. This recent advance in quantitative top-down proteomics is best implemented with a robust and highly controlled sample preparation workflow before data acquisition on a high-resolution mass spectrometer, and the application of a hierarchical linear statistical model to account for the multiple levels of variance contained in quantitative proteomic comparisons of samples for basic and clinical research.
NASA Astrophysics Data System (ADS)
Nasir, R. E. M.; Ahmad, A. M.; Latif, Z. A. A.; Saad, R. M.; Kuntjoro, W.
2017-12-01
A blended wing-body (BWB) aircraft whose planform configuration is similar to those previously researched and published by others does not guarantee efficient aerodynamics in terms of lift-to-drag ratio. In this wind tunnel study, a BWB half model is used, scaled down to 71.5% of actual size. Based on the results, the maximum lift coefficient is 0.763 at an angle of attack of 27.5°, after which the model stalls. The minimum drag coefficient is 0.014, measured at zero angle of attack. The corrected lift-to-drag ratio (L/D) is 15.9 at an angle of 7.8°. The scaled model has a large flat surface that introduces some inaccuracy in the data, but the results offer useful insights for future work on the BWB model being tested.
NASA Astrophysics Data System (ADS)
Yver-Kwok, C. E.; Müller, D.; Caldow, C.; Lebegue, B.; Mønster, J. G.; Rella, C. W.; Scheutz, C.; Schmidt, M.; Ramonet, M.; Warneke, T.; Broquet, G.; Ciais, P.
2013-10-01
This paper describes different methods to estimate methane emissions at different scales, applied to a waste water treatment plant (WWTP) located in Valence, France. We show that Fourier Transform Infrared (FTIR) measurements as well as Cavity Ring-Down Spectroscopy (CRDS) can be used to measure emissions from the process scale to the regional scale. To estimate the total emissions, we investigate a tracer release method (using C2H2) and the Radon tracer method (using 222Rn). For process-scale emissions, both tracer release and chamber techniques were used. We show that the tracer release method is suitable for quantifying facility- and some process-scale emissions, while the Radon tracer method encompasses not only the treatment station but also a large area around it, and is thus more representative of the regional emissions around the city. Uncertainties for each method are described. Applying the methods to CH4 emissions, we find that the main source of emissions of the plant was not identified with certainty during this short campaign, although the primary source is likely to be solid sludge. Overall, the waste water treatment plant represents a small part (3%) of the methane emissions of the city of Valence and its surroundings, which is in agreement with the national inventories.
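The tracer release method infers the unknown CH4 source rate from the known tracer release rate and the ratio of downwind concentration enhancements, assuming the two plumes are co-located and equally diluted. A minimal sketch of the standard ratio formula (the function name and defaults are mine, not necessarily the exact data treatment of this study):

```python
def tracer_release_emission(q_tracer_kg_h, dch4_ppb, dtracer_ppb,
                            m_ch4=16.04, m_tracer=26.04):
    """Estimate a CH4 emission rate (kg/h) from a tracer release.

    q_tracer_kg_h : known tracer release rate (kg/h)
    dch4_ppb      : plume CH4 enhancement above background (ppb)
    dtracer_ppb   : plume tracer enhancement above background (ppb)
    Molar masses default to CH4 (16.04 g/mol) and C2H2 (26.04 g/mol),
    converting the mole ratio of the enhancements to a mass ratio.
    """
    return q_tracer_kg_h * (dch4_ppb / dtracer_ppb) * (m_ch4 / m_tracer)
```

For example, a 1 kg/h C2H2 release with a downwind CH4 enhancement twice the tracer enhancement implies a CH4 emission of roughly 1.2 kg/h.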
PSD-95 and PSD-93 Play Critical but Distinct Roles in Synaptic Scaling Up and Down
Sun, Qian; Turrigiano, Gina G.
2011-01-01
Synaptic scaling stabilizes neuronal firing through the homeostatic regulation of postsynaptic strength, but the mechanisms by which chronic changes in activity lead to bidirectional adjustments in synaptic AMPAR abundance are incompletely understood. Further, it remains unclear to what extent scaling up and scaling down utilize distinct molecular machinery. PSD-95 is a scaffold protein proposed to serve as a binding “slot” that determines synaptic AMPAR content, and synaptic PSD-95 abundance is regulated by activity, raising the possibility that activity-dependent changes in the synaptic abundance of PSD-95 or other MAGUKs drive the bidirectional changes in AMPAR accumulation during synaptic scaling. We found that synaptic PSD-95 and SAP102 (but not PSD-93) abundance were bidirectionally regulated by activity, but these changes were not sufficient to drive homeostatic changes in synaptic strength. Although not sufficient, the PSD-95-MAGUKs were necessary for synaptic scaling, but scaling up and down were differentially dependent on PSD-95 and PSD-93. Scaling down was completely blocked by reduced or enhanced PSD-95, through a mechanism that depended on the PDZ1/2 domains. In contrast, scaling up could be supported by either PSD-95 or PSD-93 in a manner that depended on neuronal age, and was unaffected by a superabundance of PSD-95. Taken together, our data suggest that scaling up and down of quantal amplitude is not driven by changes in the synaptic abundance of PSD-95-MAGUKs, but rather that the PSD-95 MAGUKs serve as critical synaptic organizers that utilize distinct protein-protein interactions to mediate homeostatic accumulation and loss of synaptic AMPARs. PMID:21543610
Modeling the effect of dune sorting on the river long profile
NASA Astrophysics Data System (ADS)
Blom, A.
2012-12-01
River dunes, which occur in low-slope sand-bed and sand-gravel-bed rivers, generally show a downward coarsening pattern due to grain flows down their avalanche lee faces. These grain flows cause coarse particles to preferentially deposit at lower elevations of the lee face, while fines show a preference for its upper elevations. Before considering the effect of this dune sorting mechanism on the river long profile, let us first look at some general trends along the river profile. Tributaries that increase the river's water discharge in the streamwise direction also cause a streamwise increase in flow depth. As mean dune height generally increases with flow depth under subcritical conditions, dune height shows a streamwise increase as well. The standard deviation of bedform height then also increases in the streamwise direction, since earlier work found that it increases linearly with the mean bedform height. As a result of this streamwise increase in the standard deviation of dune height, the above-mentioned dune sorting results in a loss of coarse particles to the lower elevations of the bed, which are less often, or even rarely, exposed to the flow. This loss of coarse particles to lower elevations thus increases the rate of fining in the streamwise direction. As finer material is more easily transported downstream than coarser material, a smaller bed slope is required to transport the same amount of sediment downstream. This means that dune sorting adds to river profile concavity, beyond the combined effect of abrasion, selective transport and tributaries. A Hirano-type mass conservation model is presented that deals with dune sorting. The model includes two active layers: a bedform layer representing the sediment in the bedforms and a coarse layer representing the coarse and less mobile sediment underneath migrating bedforms.
The exposure of the coarse layer is governed by the rate of sediment supply from upstream. By definition the sum of the exposure of both layers equals unity. The model accounts for vertical sediment fluxes due to grain flows down the bedform lee face and the formation of a less mobile coarse layer. The model with its vertical sediment fluxes is validated against earlier flume experiments. It deals well with the transition between a plane bed and a bedform-dominated bed. Applying the model to field scale confirms that dune sorting increases river profile concavity.
Modeling aspects of the surface reconstruction problem
NASA Astrophysics Data System (ADS)
Toth, Charles K.; Melykuti, Gabor
1994-08-01
The ultimate goal of digital photogrammetry is to automatically produce digital maps which may in turn form the basis of GIS. Virtually all work in surface reconstruction deals with various kinds of approximations and constraints that are applied. In this paper we extend these concepts in various ways. For one, matching is performed in object space. Thus, matching and densification (modeling) is performed in the same reference system. Another extension concerns the solution of the second sub-problem. Rather than simply densifying (interpolating) the surface, we propose to model it. This combined top-down and bottom-up approach is performed in scale space, whereby the model is refined until compatibility between the data and expectations is reached. The paper focuses on the modeling aspects of the surface reconstruction problem. Obviously, the top-down and bottom-up model descriptions ought to be in a form which allows the generation and verification of hypotheses. Another crucial question is the degree of a priori scene knowledge necessary to constrain the solution space.
Size and frequency of natural forest disturbances and the Amazon forest carbon balance
Espírito-Santo, Fernando D.B.; Gloor, Manuel; Keller, Michael; Malhi, Yadvinder; Saatchi, Sassan; Nelson, Bruce; Junior, Raimundo C. Oliveira; Pereira, Cleuton; Lloyd, Jon; Frolking, Steve; Palace, Michael; Shimabukuro, Yosio E.; Duarte, Valdete; Mendoza, Abel Monteagudo; López-González, Gabriela; Baker, Tim R.; Feldpausch, Ted R.; Brienen, Roel J.W.; Asner, Gregory P.; Boyd, Doreen S.; Phillips, Oliver L.
2014-01-01
Forest inventory studies in the Amazon indicate a large terrestrial carbon sink. However, field plots may fail to represent forest mortality processes at landscape-scales of tropical forests. Here we characterize the frequency distribution of disturbance events in natural forests from 0.01 ha to 2,651 ha size throughout Amazonia using a novel combination of forest inventory, airborne lidar and satellite remote sensing data. We find that small-scale mortality events are responsible for aboveground biomass losses of ~1.7 Pg C y−1 over the entire Amazon region. We also find that intermediate-scale disturbances account for losses of ~0.2 Pg C y−1, and that the largest-scale disturbances as a result of blow-downs only account for losses of ~0.004 Pg C y−1. Simulation of growth and mortality indicates that even when all carbon losses from intermediate and large-scale disturbances are considered, these are outweighed by the net biomass accumulation by tree growth, supporting the inference of an Amazon carbon sink. PMID:24643258
Prenatal pharmacotherapy rescues brain development in a Down's syndrome mouse model.
Guidi, Sandra; Stagni, Fiorenza; Bianchi, Patrizia; Ciani, Elisabetta; Giacomini, Andrea; De Franceschi, Marianna; Moldrich, Randal; Kurniawan, Nyoman; Mardon, Karine; Giuliani, Alessandro; Calzà, Laura; Bartesaghi, Renata
2014-02-01
Intellectual impairment is a strongly disabling feature of Down's syndrome, a genetic disorder of high prevalence (1 in 700-1000 live births) caused by trisomy of chromosome 21. Accumulating evidence shows that widespread neurogenesis impairment is a major determinant of abnormal brain development and, hence, of intellectual disability in Down's syndrome. This defect is worsened by dendritic hypotrophy and connectivity alterations. Most of the pharmacotherapies designed to improve cognitive performance in Down's syndrome have been attempted in Down's syndrome mouse models during adult life stages. Yet, as neurogenesis is mainly a prenatal event, treatments aimed at correcting neurogenesis failure in Down's syndrome should be administered during pregnancy. Correction of neurogenesis during the very first stages of brain formation may, in turn, rescue improper brain wiring. The aim of our study was to establish whether it is possible to rescue the neurodevelopmental alterations that characterize the trisomic brain through prenatal pharmacotherapy with fluoxetine, a drug that is able to restore post-natal hippocampal neurogenesis in the Ts65Dn mouse model of Down's syndrome. Pregnant Ts65Dn females were treated with fluoxetine from embryonic Day 10 until delivery. On post-natal Day 2 the pups received an injection of 5-bromo-2-deoxyuridine and were sacrificed either 2 h later or after 43 days (at the age of 45 days). Untreated 2-day-old Ts65Dn mice exhibited a severe neurogenesis reduction and hypocellularity throughout the forebrain (subventricular zone, subgranular zone, neocortex, striatum, thalamus and hypothalamus), midbrain (mesencephalon) and hindbrain (cerebellum and pons). In embryonically treated 2-day-old Ts65Dn mice, precursor proliferation and cellularity were fully restored throughout all brain regions. The recovery of proliferation potency and cellularity was still present in treated Ts65Dn 45-day-old mice.
Moreover, embryonic treatment restored dendritic development, cortical and hippocampal synapse development and brain volume. Importantly, these effects were accompanied by recovery of behavioural performance. The cognitive deficits caused by Down's syndrome have long been considered irreversible. The current study provides novel evidence that a pharmacotherapy with fluoxetine during embryonic development is able to fully rescue the abnormal brain development and behavioural deficits that are typical of Down's syndrome. If the positive effects of fluoxetine on the brain of a mouse model are replicated in foetuses with Down's syndrome, fluoxetine, a drug usable in humans, may represent a breakthrough for the therapy of intellectual disability in Down's syndrome.
USDA-ARS?s Scientific Manuscript database
Biophysical models intended for routine applications at a range of scales should attempt to balance the competing demands of generality and simplicity and be capable of realistically simulating the response of CO2 and energy fluxes to environmental and physiological forcings. At the same time they m...
Parrini, Martina; Ghezzi, Diego; Deidda, Gabriele; Medrihan, Lucian; Castroflorio, Enrico; Alberti, Micol; Baldelli, Pietro; Cancedda, Laura; Contestabile, Andrea
2017-12-04
Down syndrome (DS) is caused by the triplication of human chromosome 21 and represents the most frequent genetic cause of intellectual disability. The trisomic Ts65Dn mouse model of DS shows synaptic deficits and reproduces the essential cognitive disabilities of the human syndrome. Aerobic exercise improved various neurophysiological dysfunctions in Ts65Dn mice, including hippocampal synaptic deficits, by promoting synaptogenesis and neurotransmission at glutamatergic terminals. Most importantly, the same intervention also prompted the recovery of hippocampal adult neurogenesis and synaptic plasticity and restored cognitive performance in trisomic mice. Additionally, the expression of brain-derived neurotrophic factor (BDNF) was markedly decreased in the hippocampus of patients with DS. Since the positive effect of exercise was paralleled by increased BDNF expression in trisomic mice, we investigated the effectiveness of a BDNF-mimetic treatment with 7,8-dihydroxyflavone at alleviating intellectual disabilities in the DS model. Pharmacological stimulation of BDNF signaling rescued synaptic plasticity and memory deficits in Ts65Dn mice. Based on our findings, Ts65Dn mice benefit from interventions aimed at promoting brain plasticity, and we provide evidence that BDNF signaling represents a potentially new pharmacological target for treatments aimed at rescuing cognitive disabilities in patients with DS.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, and so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices.
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
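The down-weighting of outlying observations via a hierarchical scale mixture can be illustrated with a minimal sketch. This is not the R-FMM implementation: it shows only the core idea, using the standard result that a Student-t likelihood is a normal scale mixture, with a simple iterative reweighting for a single location parameter and hypothetical data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed likelihood as a normal scale mixture: y_i ~ N(mu, sigma^2 / w_i),
# with w_i ~ Gamma(nu/2, nu/2), which gives a Student-t marginal. The latent
# scales w_i shrink for outlying points, down-weighting them in the mean update.
nu, sigma = 4.0, 1.0
y = np.concatenate([rng.normal(0.0, sigma, 50), [25.0]])  # one gross outlier

mu = np.median(y)
for _ in range(50):
    r2 = ((y - mu) / sigma) ** 2
    w = (nu + 1.0) / (nu + r2)       # conditional mean of the latent scales
    mu = np.sum(w * y) / np.sum(w)   # weighted (robust) location update
```

After convergence, the outlier's weight is near zero, so the location estimate stays close to the inlier mean rather than being dragged toward 25.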
Analysis of SET pulses propagation probabilities in sequential circuits
NASA Astrophysics Data System (ADS)
Cai, Shuo; Yu, Fei; Yang, Yiqun
2018-05-01
As the feature size of CMOS transistors scales down, the single event transient (SET) has become an important consideration in designing logic circuits. Much research has been done on analyzing the impact of SETs; however, it is difficult to account for the numerous factors involved. We present a new approach for analyzing SET pulse propagation probabilities (SPPs). It considers all masking effects and uses SET pulse propagation probability matrices (SPPMs) to represent the SPPs in the current cycle. Based on matrix union operations, the SPPs in consecutive cycles can be calculated. Experimental results show that our approach is practicable and efficient.
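The abstract does not define the matrix union operation, but one plausible reading, sketched below, combines propagation-probability matrices element-wise as a probabilistic OR under an independence assumption. The matrices and values here are hypothetical illustrations, not the authors' data.

```python
import numpy as np

def union(P, Q):
    """Element-wise probabilistic union of two propagation-probability
    matrices, assuming independent propagation paths:
    P ∪ Q = 1 - (1 - P)(1 - Q)."""
    return 1.0 - (1.0 - P) * (1.0 - Q)

# Hypothetical probabilities that an SET at each of 2 circuit nodes
# reaches each of 2 flip-flops via two different paths:
P_path1 = np.array([[0.3, 0.0], [0.1, 0.5]])
P_path2 = np.array([[0.2, 0.4], [0.0, 0.5]])
P_total = union(P_path1, P_path2)
print(P_total)
```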
NASA Astrophysics Data System (ADS)
Zhou, Bowen; Chow, Fotini
2012-11-01
This numerical study investigates the nighttime flow dynamics in a steep valley. The Owens Valley in California is highly complex and represents challenging terrain for large-eddy simulation (LES). To ensure a faithful representation of the nighttime atmospheric boundary layer (ABL), realistic external boundary conditions are provided through grid nesting. The model obtains initial and lateral boundary conditions from reanalysis data, and bottom boundary conditions from a land-surface model. We demonstrate the ability to extend a mesoscale model to LES resolutions through a systematic grid-nesting framework, achieving accurate simulations of the stable ABL over complex terrain. Nighttime cold-air flow was channeled through a gap in the valley sidewall. The resulting katabatic current induced a cross-valley flow. Directional shear against the down-valley flow in the lower layers of the valley led to breaking Kelvin-Helmholtz waves at the interface, which is captured only on the LES grid. Later that night, the flow transitioned from down-slope to down-valley near the western sidewall, leading to a transient warming episode. Simulation results are verified against field observations and show good spatial and temporal agreement. Supported by NSF grant ATM-0645784.
Earthquake scaling laws for rupture geometry and slip heterogeneity
NASA Astrophysics Data System (ADS)
Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro
2016-04-01
We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. 
Applying a Box-Cox transformation to the slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctly non-Gaussian slip distributions. To further characterize the spatial correlations of slip heterogeneity, we analyze the power spectral decay of slip using the 2-D von Karman auto-correlation function (parameterized by the Hurst exponent, H, and correlation lengths along strike and down-dip). The Hurst exponent is scale invariant, H = 0.83 (± 0.12), while the correlation lengths scale with source dimensions (seismic moment), thus implying characteristic physical scales of earthquake ruptures. Our self-consistent scaling relationships allow constraining the generation of slip-heterogeneity scenarios for physics-based ground-motion and tsunami simulations.
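A minimal sketch of the stochastic slip characterization described above: inverse-CDF sampling from a truncated exponential distribution, followed by the cube-root transform toward quasi-normality. The scale and truncation values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_truncated_exponential(scale, max_slip, size, rng):
    """Inverse-CDF sampling from an exponential distribution truncated
    at max_slip (slip cannot exceed the maximum observed slip)."""
    u = rng.uniform(size=size)
    F_max = 1.0 - np.exp(-max_slip / scale)   # CDF value at the truncation point
    return -scale * np.log(1.0 - u * F_max)

# Hypothetical parameters: mean-slip-like scale of 1.2 m, maximum slip 8 m.
slip = sample_truncated_exponential(scale=1.2, max_slip=8.0, size=10_000, rng=rng)

# Cube-root (Box-Cox with lambda = 1/3) transform toward quasi-normality:
slip_bc = np.cbrt(slip)
```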
Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases
Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N.
2016-01-01
A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed. PMID:26858668
The Impact of ARM on Climate Modeling. Chapter 26
NASA Technical Reports Server (NTRS)
Randall, David A.; Del Genio, Anthony D.; Donner, Leo J.; Collins, William D.; Klein, Stephen A.
2016-01-01
Climate models are among humanity's most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of the Earth down to one hundred kilometers or smaller, and implicitly include the effects of processes on even smaller scales, down to a micron or so. The atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM). In an AGCM, calculations are done on a three-dimensional grid, which in some of today's climate models consists of several million grid cells. For each grid cell, about a dozen variables are time-stepped as the model integrates forward from its initial conditions. These so-called prognostic variables have special importance because they are the only things that a model remembers from one time step to the next; everything else is recreated on each time step by starting from the prognostic variables and the boundary conditions. The prognostic variables typically include information about the mass of dry air, the temperature, the wind components, water vapor, various condensed-water species, and at least a few chemical species such as ozone. A good way to understand how climate models work is to consider the lengthy and complex process used to develop one. Let's imagine that a new AGCM is to be created, starting from a blank piece of paper. The model may be intended for a particular class of applications, e.g., high-resolution simulations on time scales of a few decades.
Before a single line of code is written, the conceptual foundation of the model must be designed through a creative envisioning that starts from the intended application and is based on current understanding of how the atmosphere works and the inventory of mathematical methods available.
Brazilian LTER: ecosystem and biodiversity information in support of decision-making.
Barbosa, F A R; Scarano, F R; Sabará, M G; Esteves, F A
2004-01-01
Brazil officially joined the International Long Term Ecological Research (ILTER) network in January 2000, when nine research sites were created and funded by the Brazilian Council for Science and Technology (CNPq). Two years later, some positive signs have already emerged of the scientific, social and political achievements of the Brazilian LTER program. We discuss examples of how ecosystem and biodiversity information gathered within a long-term research approach is currently informing decision-making regarding biodiversity conservation and watershed management at local and regional scales. Success in this respect has often been related to satisfactory communication between scientists, private companies, government and local citizens. Environmental education programs in the LTER sites are playing an important role in social and political integration. Most examples of the integration of ecological research into decision-making in Brazil derive from case studies at the local or regional scale. Despite the predominance of a bottom-up integrative pathway (from case studies to models; from local to national scale), some top-down initiatives are also under way, such as the construction of a model to estimate the impact of different macroeconomic policies and growth trajectories on land use. We believe science and society in Brazil will benefit from the coexistence of bottom-up and top-down integrative approaches.
Makarava, Natallia; Menz, Stephan; Theves, Matthias; Huisinga, Wilhelm; Beta, Carsten; Holschneider, Matthias
2014-10-01
Amoebae explore their environment in a random way, unless external cues such as nutrients bias their motion. Even in the absence of cues, however, experimental cell tracks show some degree of persistence. In this paper, we analyzed individual cell tracks in the framework of a linear mixed effects model, where each track is modeled by a fractional Brownian motion, i.e., a Gaussian process exhibiting a long-term correlation structure superposed on a linear trend. The degree of persistence was quantified by the Hurst exponent of the fractional Brownian motion. Our analysis of experimental cell tracks of the amoeba Dictyostelium discoideum showed persistent movement for the majority of tracks. Employing a sliding-window approach, we estimated the variation of the Hurst exponent over time, which allowed us to identify points in time where the correlation structure was distorted ("outliers"). Coarse-graining of track data via down-sampling allowed us to identify the dependence of persistence on the spatial scale. While one would expect the (mode of the) Hurst exponent to be constant across temporal scales due to the self-similarity of fractional Brownian motion, we observed a trend towards stronger persistence for the down-sampled cell tracks, indicating stronger persistence on larger time scales.
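Hurst-exponent estimation of the kind described above can be sketched by fitting the power-law scaling of increment standard deviations; for ordinary Brownian motion this should recover H ≈ 0.5. The lag range and the simple estimator below are illustrative assumptions, not the authors' mixed-effects procedure.

```python
import numpy as np

def hurst_exponent(track, lags=range(2, 20)):
    """Estimate the Hurst exponent from the scaling of the standard
    deviation of increments: std(x[t+lag] - x[t]) ~ lag**H."""
    lags = np.asarray(list(lags))
    sigmas = [np.std(track[lag:] - track[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(sigmas), 1)  # slope = H
    return H

# Ordinary Brownian motion has uncorrelated increments, so H should be ~0.5:
rng = np.random.default_rng(42)
bm = np.cumsum(rng.standard_normal(100_000))
print(round(hurst_exponent(bm), 2))
```

For a persistent fractional Brownian motion (H > 0.5), increments are positively correlated and the fitted slope rises accordingly.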
NASA Astrophysics Data System (ADS)
Nijssen, Bart; Clark, Martyn; Mizukami, Naoki; Chegwidden, Oriana
2016-04-01
Most existing hydrological models use a fixed representation of landscape structure. For example, high-resolution, spatially-distributed models may use grid cells that exchange moisture through the saturated subsurface or may divide the landscape into hydrologic response units that only exchange moisture through surface channels. Alternatively, many regional models represent the landscape through coarse elements that do not model any moisture exchange between these model elements. These spatial organizations are often represented at a low-level in the model code and its data structures, which makes it difficult to evaluate different landscape representations using the same hydrological model. Instead, such experimentation requires the use of multiple, different hydrological models, which in turn complicates the analysis, because differences in model outcomes are no longer constrained by differing spatial representations. This inflexibility in the representation of landscape structure also limits a model's capability for scaling local processes to regional outcomes. In this study, we used the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to evaluate different model spatial configurations to represent landscape structure and to evaluate scaling behavior. SUMMA can represent the moisture exchange between arbitrarily shaped landscape elements in a number of different ways, while using the same model parameterizations for vertical fluxes. This allows us to isolate the effects of changes in landscape representations on modeled hydrological fluxes and states. We examine the effects of spatial configuration in Reynolds Creek, Idaho, USA, which is a research watershed with gaged areas from 1-20 km2. We then use the same modeling system to evaluate scaling behavior in simulated hydrological fluxes in the Columbia River Basin, Pacific Northwest, USA. This basin drains more than 500,000 km2 and includes the Reynolds Creek Watershed.
A performance evaluation of various coatings, substrate materials, and solar collector systems
NASA Technical Reports Server (NTRS)
Dolan, F. J.
1976-01-01
An experimental apparatus was constructed and utilized in conjunction with both a solar simulator and actual sunlight to test and evaluate various solar panel coatings, panel designs, and scaled-down collector subsystems. Data were taken by an automatic digital data acquisition system and reduced and printed by a computer system. The solar collector test setup, data acquisition system, and data reduction and printout systems were considered to have operated very satisfactorily. Test data indicated that there is a practical or useful limit in scaling down beyond which scaled-down testing cannot produce results comparable to results of larger scale tests. Test data are presented as are schematics and pictures of test equipment and test hardware.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
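A common flow-down convention, which the abstract does not spell out, is a root-sum-square (RSS) roll-up of independent component allocations against a top-level requirement. The budget entries and values below are hypothetical.

```python
import math

def rss(errors):
    """Root-sum-square roll-up of independent error contributions."""
    return math.sqrt(sum(e * e for e in errors))

# Hypothetical pointing-error budget (arcsec): component allocations
# must roll up to no more than the top-level requirement.
requirement = 10.0
allocations = {"thermal": 4.0, "structural": 5.0, "sensor noise": 3.0, "control": 2.0}
total = rss(allocations.values())
print(round(total, 3))  # → 7.348
```

The margin between the RSS total and the requirement is what gets traded as allocations are refined with physics-based models.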
High-resolution simulation of deep pencil beam surveys - analysis of quasi-periodicity
NASA Astrophysics Data System (ADS)
Weiss, A. G.; Buchert, T.
1993-07-01
We carry out pencil beam constructions in a high-resolution simulation of the large-scale structure of galaxies. The initial density fluctuations are taken to have a truncated power spectrum. All the models have {OMEGA} = 1. As an example we present the results for the case of "Hot-Dark-Matter" (HDM) initial conditions with scale-free n = 1 power index on large scales as a representative of models with sufficient large-scale power. We use an analytic approximation for particle trajectories of a self-gravitating dust continuum and apply a local dynamical biasing of volume elements to identify luminous matter in the model. Using this method, we are able to resolve formally a simulation box of 1200h^-1^ Mpc (e.g. for HDM initial conditions) down to the scale of galactic halos using 2160^3^ particles. We consider this the minimal resolution necessary for a sensible simulation of deep pencil beam data. Pencil beam probes are taken for a given epoch using the parameters of observed beams. In particular, our analysis concentrates on the detection of a quasi-periodicity in the beam probes using several different methods. The resulting beam ensembles are analyzed statistically using number distributions, pair-count histograms, unnormalized pair-counts, power spectrum analysis and trial-period folding. Periodicities are classified according to their significance level in the power spectrum of the beams. The simulation is designed for application to parameter studies which prepare future observational projects. We find that a large percentage of the beams show quasi-periodicities with periods which cluster at a certain length scale. The periods found range between one and eight times the cutoff length in the initial fluctuation spectrum. At significance levels similar to those of the data of Broadhurst et al. 
(1990), we find about 15% of the pencil beams to show periodicities, about 30% of which are around the mean separation of rich clusters, while the distribution of scales reaches values of more than 200h^-1^ Mpc. The detection of periodicities larger than the typical void size need not be due to missing "walls" (like the so-called "Great Wall" seen in the CfA catalogue of galaxies), but can be due to different clustering properties of galaxies along the beams.
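The power-spectrum analysis of beam probes described above can be illustrated with a minimal sketch (not the authors' code): bin galaxy depths along a beam, Fourier-transform the count fluctuations, and read off the dominant period. The function names and binning parameters are illustrative assumptions.

```python
import numpy as np

def beam_power_spectrum(depths, beam_length, n_bins=256):
    """Bin galaxy depths along a pencil beam and return the
    one-dimensional power spectrum of the count fluctuations."""
    counts, _ = np.histogram(depths, bins=n_bins, range=(0.0, beam_length))
    delta = counts - counts.mean()               # fluctuation about the mean
    power = np.abs(np.fft.rfft(delta))**2 / n_bins
    freqs = np.fft.rfftfreq(n_bins, d=beam_length / n_bins)
    return freqs[1:], power[1:]                  # drop the zero mode

def dominant_period(depths, beam_length, n_bins=256):
    """Period corresponding to the strongest peak in the beam spectrum."""
    freqs, power = beam_power_spectrum(depths, beam_length, n_bins)
    return 1.0 / freqs[np.argmax(power)]
```

A significance assessment, as in the paper, would compare the peak power against the distribution expected for an unclustered (shot-noise) beam.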
ON THE BIOMECHANICAL FUNCTION OF SCAFFOLDS FOR ENGINEERING LOAD BEARING SOFT TISSUES
Stella, John A.; D’Amore, Antonio; Wagner, William R.; Sacks, Michael S.
2010-01-01
Replacement or regeneration of load bearing soft tissues has long been the impetus for the development of bioactive materials. While maturing, current efforts continue to be confounded by our lack of understanding of the intricate multi-scale hierarchical arrangements and interactions typically found in native tissues. The current state of the art in biomaterial processing enables a degree of controllable microstructure that can be used for the development of model systems to deduce fundamental biological implications of matrix morphologies on cell function. Furthermore, the development of computational frameworks which allow for the simulation of experimentally derived observations represents a positive departure from what has mostly been an empirically driven field, enabling a deeper understanding of the highly complex biological mechanisms we wish to ultimately emulate. Ongoing research is actively pursuing new materials and processing methods to control material structure down to the micro-scale to sustain or improve cell viability, guide tissue growth, and provide mechanical integrity all while exhibiting the capacity to degrade in a controlled manner. The purpose of this review is not to focus solely on material processing but to assess the ability of these techniques to produce mechanically sound tissue surrogates, highlight the unique structural characteristics produced in these materials, and discuss how this translates to distinct macroscopic biomechanical behaviors. PMID:20060509
Object segmentation controls image reconstruction from natural scenes
2017-01-01
The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801
ERIC Educational Resources Information Center
Davies, Malonne; Landis, Linda; Landis, Arthur
2009-01-01
After studying phenomena related to the positions and motions of the Earth, Sun, and Moon, many students are familiar with the positional ordering of the planets, but their knowledge of the distances involved is vague. Scale models are one means of bringing extreme sizes into better focus, cutting them down to relative values that they can better…
Air emissions due to wind and solar power.
Katzenstein, Warren; Apt, Jay
2009-01-15
Renewables portfolio standards (RPS) encourage large-scale deployment of wind and solar electric power. Their power output varies rapidly, even when several sites are added together. In many locations, natural gas generators are the lowest cost resource available to compensate for this variability, and must ramp up and down quickly to keep the grid stable, affecting their emissions of NOx and CO2. We model a wind or solar photovoltaic plus gas system using measured 1-min time-resolved emissions and heat rate data from two types of natural gas generators, and power data from four wind plants and one solar plant. Over a wide range of renewable penetration, we find CO2 emissions achieve approximately 80% of the emissions reductions expected if the power fluctuations caused no additional emissions. Using steam injection, gas generators achieve only 30-50% of expected NOx emissions reductions, and with dry control NOx emissions increase substantially. We quantify the interaction between state RPSs and NOx constraints, finding that states with substantial RPSs could see significant upward pressure on NOx permit prices, if the gas turbines we modeled are representative of the plants used to mitigate wind and solar power variability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiff, Avery J.; Cranmer, Steven R.
Coronal loops trace out bipolar, arch-like magnetic fields above the Sun’s surface. Recent measurements that combine rotational tomography, extreme-ultraviolet imaging, and potential-field extrapolation have shown the existence of large loops with inverted-temperature profiles, i.e., loops for which the apex temperature is a local minimum, not a maximum. These “down loops” appear to exist primarily in equatorial quiet regions near solar minimum. We simulate both these and the more prevalent large-scale “up loops” by modeling coronal heating as a time-steady superposition of (1) dissipation of incompressible Alfvén wave turbulence and (2) dissipation of compressive waves formed by mode conversion from the initial population of Alfvén waves. We found that when a large percentage (>99%) of the Alfvén waves undergo this conversion, heating is greatly concentrated at the footpoints and stable “down loops” are created. In some cases we found loops with three maxima that are also gravitationally stable. Models that agree with the tomographic temperature data exhibit higher gas pressures for “down loops” than for “up loops,” which is consistent with observations. These models also show a narrow range of Alfvén wave amplitudes: 3 to 6 km s^-1 at the coronal base. This is low in comparison to typical observed amplitudes of 20–30 km s^-1 in bright X-ray loops. However, the large-scale loops we model are believed to compose a weaker diffuse background that fills much of the volume of the corona. By constraining the physics of loops that underlie quiescent streamers, we hope to better understand the formation of the slow solar wind.
Non-Linear Cosmological Power Spectra in Real and Redshift Space
NASA Technical Reports Server (NTRS)
Taylor, A. N.; Hamilton, A. J. S.
1996-01-01
We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter, beta. The point of zero-crossing of the quadrupole, k(sub 0), is found to obey a simple scaling relation and we calculate this scale in the Zel'dovich approximation. This model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy IRAS redshift survey. Using a likelihood technique we have estimated that the distortion parameter is constrained to be beta greater than 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k(sub 0) = 0.5 +/- 0.1 h Mpc(exp -1) and from this we infer that the amplitude of clustering is sigma(sub 8) = 0.7 +/- 0.05. 
We suggest that the success of this model is due to non-linear redshift-space effects arising from infall onto caustics and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.
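For reference, the linear-theory (Kaiser) limit against which the quadrupole-to-monopole ratio is scaled can be written down directly. This is the standard linear-theory result, not the paper's Zel'dovich calculation, and the function names are illustrative.

```python
def kaiser_monopole_factor(beta):
    """Linear-theory monopole boost: P0(k) = (1 + 2*beta/3 + beta^2/5) P(k)."""
    return 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0

def kaiser_quadrupole_factor(beta):
    """Linear-theory quadrupole: P2(k) = (4*beta/3 + 4*beta^2/7) P(k)."""
    return 4.0 * beta / 3.0 + 4.0 * beta**2 / 7.0

def quad_to_mono_ratio(beta):
    """Scale-independent quadrupole-to-monopole ratio in linear theory."""
    return kaiser_quadrupole_factor(beta) / kaiser_monopole_factor(beta)
```

In linear theory this ratio is independent of k; the paper's point is that the measured, scale-dependent ratio divided by this factor depends only weakly on beta.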
Coding of odors by temporal binding within a model network of the locust antennal lobe.
Patel, Mainak J; Rangan, Aaditya V; Cai, David
2013-01-01
The locust olfactory system interfaces with the external world through antennal receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin-Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements.
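The scaled-down AL model described above is built from Hodgkin-Huxley-type units. As a minimal sketch of one such unit (classic squid-axon parameters, not the authors' locust-specific parameters or connectivity), a single HH neuron can be integrated with forward Euler:

```python
import numpy as np

def hh_neuron(i_ext, t_max=50.0, dt=0.01):
    """Integrate a single Hodgkin-Huxley neuron (classic squid-axon
    parameters) with forward Euler; returns the voltage trace in mV."""
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3    # µF/cm^2, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387          # reversal potentials, mV
    a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = lambda v: 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = lambda v: 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = lambda v: 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = lambda v: 0.125 * np.exp(-(v + 65.0) / 80.0)
    v = -65.0
    m = a_m(v) / (a_m(v) + b_m(v))    # gating variables at rest
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return np.array(trace)

def count_spikes(trace, threshold=0.0):
    """Count upward threshold crossings in a voltage trace."""
    above = trace > threshold
    return int(np.sum(~above[:-1] & above[1:]))
```

A network model of the kind in the paper couples many such units with fast and slow inhibitory synaptic currents; analyzing spike-time correlations across the population is then what reveals the proposed temporal binding code.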
Telli, Onur; Mermerkaya, Murat; Hajiyev, Perviz; Aydogdu, Ozgu; Afandiyev, Faraj; Suer, Evren; Soygur, Tarkan; Burgu, Berk
2015-03-01
We evaluated whether stress levels in children and parents during radiological evaluation after febrile urinary tract infection are really lower with the top-down approach, in which (99m)technetium dimercaptosuccinic acid renal scintigraphy is used initially, than with the bottom-up approach, in which voiding cystourethrography is performed first, and whether repeat examinations are easier for all. We prospectively evaluated 120 children 3 to 8 years old. Pain ratings were obtained using the Faces Pain Scale-Revised, and conversation during the procedure was evaluated using the Child-Adult Medical Procedure Interaction Scale-Revised by 2 independent observers. To evaluate parental anxiety, the State-Trait Anxiety Inventory form was also completed. Following a documented febrile urinary tract infection children were randomized to the top-down or bottom-up group. A third group of 44 children undergoing repeat voiding cystourethrography and their parents were also evaluated. Child ratings of pain using the Faces Pain Scale-Revised were not significantly different between the top-down group following (99m)technetium dimercaptosuccinic acid renal scintigraphy (2.99 on a scale of 10) and the bottom-up group following voiding cystourethrography (3.21). The Faces Pain Scale-Revised score was likewise not significantly different in the repeat voiding cystourethrography group (3.35). On the Child-Adult Medical Procedure Interaction Scale-Revised there was a negative correlation between child coping and child distress, as well as between the rate of child distress and adult coping-promoting behavior. Parental state anxiety scores were significantly lower in the top-down and repeat voiding cystourethrography groups than in the bottom-up group. Although the top-down approach and repeat voiding cystourethrography cause less anxiety for caregivers, these values do not correlate with pain scale scores in children. This finding might be due to a lack of appropriate evaluation tools for pediatric pain and anxiety. 
However, the theory that the top-down approach is less invasive, and thus less stressful, requires further research. The Child-Adult Medical Procedure Interaction Scale-Revised data indicate that influences in adult-child interaction are bidirectional. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Monte Carlo modelling of large scale NORM sources using MCNP.
Wallace, J D
2013-12-01
The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using large-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and of no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Pushing down the low-mass halo concentration frontier with the Lomonosov cosmological simulations
NASA Astrophysics Data System (ADS)
Pilipenko, Sergey V.; Sánchez-Conde, Miguel A.; Prada, Francisco; Yepes, Gustavo
2017-12-01
We introduce the Lomonosov suite of high-resolution N-body cosmological simulations covering a full box of size 32 h-1 Mpc with low-mass resolution particles (2 × 107 h-1 M⊙) and three zoom-in simulations of overdense, underdense and mean density regions at much higher particle resolution (4 × 104 h-1 M⊙). The main purpose of this simulation suite is to extend the concentration-mass relation of dark matter haloes down to masses below those typically available in large cosmological simulations. The three different density regions available at higher resolution provide a better understanding of the effect of the local environment on halo concentration, known to be potentially important for small simulation boxes and small halo masses. Yet, we find the correction to be small in comparison with the scatter of halo concentrations. We conclude that zoom simulations, despite their limited representativeness of the volume of the Universe, can be effectively used for the measurement of halo concentrations, at least at the halo masses probed by our simulations. In any case, after a precise characterization of this effect, we develop a robust technique to extrapolate the concentration values found in zoom simulations to larger volumes with greater accuracy. Altogether, Lomonosov provides a measure of the concentration-mass relation in the halo mass range 107-1010 h-1 M⊙ with superb halo statistics. This work represents a first important step to measure halo concentrations at intermediate, yet vastly unexplored halo mass scales, down to the smallest ones. All Lomonosov data and files are public for the community's use.
NASA Technical Reports Server (NTRS)
Kurzeja, R. J.; Haggard, K. V.; Grose, W. L.
1984-01-01
The distribution of ozone below 60 km altitude has been simulated in two experiments employing a nine-layer quasi-geostrophic spectral model and linear parameterization of ozone photochemistry, the first of which included thermal and orographic forcing of the planetary scale waves, while the second omitted it. The first experiment exhibited a high latitude winter ozone buildup which was due to a Brewer-Dodson circulation forced by large amplitude (planetary scale) waves in the winter lower stratosphere. Photochemistry was also found to be important down to lower altitudes (20 km) in the summer stratosphere than had previously been supposed.
Object-oriented analysis and design of a health care management information system.
Krol, M; Reich, D L
1999-04-01
We have created a prototype for a universal object-oriented model of a health care system compatible with the object-oriented approach used in version 3.0 of the HL7 standard for communication messages. A set of three models has been developed: (1) the Object Model describes the hierarchical structure of objects in a system--their identity, relationships, attributes, and operations; (2) the Dynamic Model represents the sequence of operations in time as a collection of state diagrams for object classes in the system; and (3) the Functional Model represents the transformation of data within a system by means of data flow diagrams. Within these models, we have defined major object classes of health care participants and their subclasses, associations, attributes and operators, states, and behavioral scenarios. We have also defined the major processes and subprocesses. The top-down design approach allows use, reuse, and cloning of standard components.
A perspective on sustained marine observations for climate modelling and prediction
Dunstone, Nick J.
2014-01-01
Here, I examine some of the many varied ways in which sustained global ocean observations are used in numerical modelling activities. In particular, I focus on the use of ocean observations to initialize predictions in ocean and climate models. Examples are also shown of how models can be used to assess the impact of both current ocean observations and to simulate that of potential new ocean observing platforms. The ocean has never been better observed than it is today and similarly ocean models have never been as capable at representing the real ocean as they are now. However, there remain important unanswered questions that can likely only be addressed via future improvements in ocean observations. In particular, ocean observing systems need to respond to the needs of the burgeoning field of near-term climate predictions. Although new ocean observing platforms promise exciting new discoveries, there is a delicate balance to be made between their funding and that of the current ocean observing system. Here, I identify the need to secure long-term funding for ocean observing platforms as they mature, from a mainly research exercise to an operational system for sustained observation over climate change time scales. At the same time, considerable progress continues to be made via ship-based observing campaigns and I highlight some that are dedicated to addressing uncertainties in key ocean model parametrizations. The use of ocean observations to understand the prominent long time scale changes observed in the North Atlantic is another focus of this paper. The exciting first decade of monitoring of the Atlantic meridional overturning circulation by the RAPID-MOCHA array is highlighted. The use of ocean and climate models as tools to further probe the drivers of variability seen in such time series is another exciting development. 
I also discuss the need for a concerted combined effort from climate models and ocean observations in order to understand the current slow-down in surface global warming. PMID:25157195
Modelling spatiotemporal change using multidimensional arrays
NASA Astrophysics Data System (ADS)
Lu, Meng; Appel, Marius; Pebesma, Edzer
2017-04-01
The large variety of remote sensors, model simulations, and in-situ records provide great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represents spatiotemporal phenomena that are identified by their geographic locations and recording time. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change modelling process that is communicable between mathematicians, computer scientists, Earth system scientists and stakeholders? This study emphasises detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping time series analysis results; this does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling. 
We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.
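The joint use of PCA for multispectral information and residual-based structural change detection can be sketched with array operations alone. The simple single-break, mean-shift model below stands in for the paper's spatial autoregressive formulation, and all names are illustrative:

```python
import numpy as np

def first_principal_component(x):
    """Project a (time, bands) multispectral series onto its first
    principal component (PCA via SVD of the centered matrix)."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[0]

def break_point(series, min_seg=5):
    """Locate the single structural break that minimizes the pooled
    within-segment sum of squares (a simple mean-shift change model)."""
    n = len(series)
    best_t, best_cost = None, np.inf
    for t in range(min_seg, n - min_seg):
        left, right = series[:t], series[t:]
        cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

In an array-based system such as SciDB, the same two steps apply over the full space-time-band cube rather than pixel by pixel, which is what allows spatial correlation to enter the change model.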
Tang, J. Y.; Riley, W. J.
2016-02-05
We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented so as to ensure mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter includes two steps: (1) formulation of the biogeochemical processes with a matrix of stoichiometric coefficients and (2) application of Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches that are implemented in many existing models (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)) of carbon and nutrient interactions, which are error prone when adding new processes, even for experienced modelers. Through an example implementation with a CENTURY-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions, (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed, and (3) successfully avoided the potential conceptual ambiguities that are implied by those ad hoc down-regulation approaches. We expect our approach will make future biogeochemical models easier to improve and more robust.
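A minimal sketch of the two-step limiter as described (stoichiometric matrix, then Liebig's law of the minimum over the consumed substrates). Array shapes and names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def limit_fluxes(stoich, fluxes, pools, dt):
    """Scale reaction fluxes so no substrate pool goes negative in a step.
    stoich[i, j]: stoichiometric coefficient of substrate i in reaction j
    (negative = consumed); fluxes: unconstrained reaction rates;
    pools: available substrate masses.  Scaling whole reactions (rather
    than clipping pools) keeps the update mass conservative."""
    # total demand on each pool over the step
    consumption = np.where(stoich < 0, -stoich, 0.0) @ (fluxes * dt)
    # fraction of each pool's demand that can actually be met
    with np.errstate(divide="ignore", invalid="ignore"):
        supply = np.where(consumption > 0,
                          np.minimum(1.0, pools / consumption), 1.0)
    # Liebig's law: each reaction is throttled by its most limiting substrate
    scale = np.ones_like(fluxes)
    for j in range(stoich.shape[1]):
        consumed = stoich[:, j] < 0
        if consumed.any():
            scale[j] = supply[consumed].min()
    return fluxes * scale
```

Because every reaction consuming pool i is scaled by at most supply[i], total consumption of each pool never exceeds what is available, which is the non-negativity guarantee the abstract describes.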
NASA Astrophysics Data System (ADS)
Luczak, M. M.; Mucchi, E.; Telega, J.
2016-09-01
The goal of the research is to develop a vibration-based procedure for the identification of structural failures in a laboratory scale model of a tripod supporting structure of an offshore wind turbine. In particular, this paper presents an experimental campaign on the scale model tested in two stages. Stage one encompassed the model tripod structure tested in air. The second stage was done in water. The tripod model structure allows investigation of the propagation of a representative circumferential crack of a cylindrical upper brace. The in-water test configuration included the tower with a three-bladed rotor. The response of the structure to the different wave loads was measured with accelerometers. Experimental and operational modal analysis was applied to identify the dynamic properties of the investigated scale model for the intact and damaged states with different excitations and wave patterns. A comprehensive test matrix allows assessment of the differences in estimated modal parameters due to damage or as potentially introduced by nonlinear structural response. The presented technique proves to be effective for detecting and assessing the presence of representative cracks.
Franco, Antonio; Price, Oliver R; Marshall, Stuart; Jolliet, Olivier; Van den Brink, Paul J; Rico, Andreu; Focks, Andreas; De Laender, Frederik; Ashauer, Roman
2017-03-01
Current regulatory practice for chemical risk assessment suffers from the lack of realism in conventional frameworks. Despite significant advances in exposure and ecological effect modeling, the implementation of novel approaches as high-tier options for prospective regulatory risk assessment remains limited, particularly among general chemicals such as down-the-drain ingredients. While reviewing the current state of the art in environmental exposure and ecological effect modeling, we propose a scenario-based framework that enables a better integration of exposure and effect assessments in a tiered approach. Global- to catchment-scale spatially explicit exposure models can be used to identify areas of higher exposure and to generate ecologically relevant exposure information for input into effect models. Numerous examples of mechanistic ecological effect models demonstrate that it is technically feasible to extrapolate from individual-level effects to effects at higher levels of biological organization and from laboratory to environmental conditions. However, the data required to parameterize effect models that can embrace the complexity of ecosystems are large and require a targeted approach. Experimental efforts should, therefore, focus on vulnerable species and/or traits and ecological conditions of relevance. We outline key research needs to address the challenges that currently hinder the practical application of advanced model-based approaches to risk assessment of down-the-drain chemicals. Integr Environ Assess Manag 2017;13:233-248. © 2016 SETAC.
Statistical model of exotic rotational correlations in emergent space-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan
2017-06-06
A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.
GEWEX Continental-scale International Project (GCIP)
NASA Technical Reports Server (NTRS)
Try, Paul
1993-01-01
The Global Energy and Water Cycle Experiment (GEWEX) represents the World Climate Research Program activities on clouds, radiation, and land-surface processes. The goal of the program is to reproduce and predict, by means of suitable models, the variations of the global hydrological regime and its impact on atmospheric and oceanic dynamics. However, GEWEX is also concerned with variations in regional hydrological processes and water resources and their response to changes in the environment such as increasing greenhouse gases. In fact, GEWEX contains a major new international project called the GEWEX Continental-scale International Project (GCIP), which is designed to bridge the gap between the small scales represented by hydrological models and those scales that are practical for predicting the regional impacts of climate change. The development and use of coupled mesoscale-hydrological models for this purpose is a high priority in GCIP. The objectives of GCIP are presented.
Petri Net controller synthesis based on decomposed manufacturing models.
Dideban, Abbas; Zeraatkar, Hashem
2018-06-01
Applying supervisory control theory to real systems in modeling tools such as Petri nets (PN) has become challenging in recent years because of the large number of states in the automata models and the presence of uncontrollable events. The uncontrollable events give rise to forbidden states, which can be removed by enforcing linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system remains difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is broken down into smaller models, which reduces the computational cost significantly. With this method, it is straightforward to reduce the constraints and enforce them on a Petri net model. Results obtained with the proposed method on PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Spectrophotovoltaic orbital power generation, phase 2
NASA Technical Reports Server (NTRS)
Lo, S. K.; Stoltzman, D.; Knowles, G.; Lin, R.
1981-01-01
A subscale model of the spectral splitting concentrator system with 10" aperture is defined and designed. The model is basically a scaled down version of Phase 1 design with an effective concentration ratio up to 1000:1. The system performance is predicted to be 21.5% for the 2 cell GaAs/Si system, and 20% for Si/GaAs at AM2 using realistic component efficiencies. Component cost of the model is projected in the $50K range. Component and system test plans are also detailed.
Bouzid, Assil; Pasquarello, Alfredo
2018-04-19
Based on constant Fermi-level molecular dynamics and a proper alignment scheme, we perform simulations of the Pt(111)/water interface under variable bias potential referenced to the standard hydrogen electrode (SHE). Our scheme yields a potential of zero charge μ_pzc of ∼0.22 eV relative to the SHE and a double-layer capacitance C_dl of ≃19 μF cm⁻², in excellent agreement with experimental measurements. In addition, we study the structural reorganization of the electrical double layer for bias potentials ranging from -0.92 eV to +0.44 eV and find that O-down configurations, which are dominant at potentials above the pzc, reorient to favor H-down configurations as the measured potential becomes negative. Our modeling scheme allows one not only to access atomic-scale processes at metal/water interfaces, but also to quantitatively estimate macroscopic electrochemical quantities.
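The two headline quantities in this abstract are related through the surface charge versus potential curve: the double-layer capacitance is C_dl = dσ/dφ, and the potential of zero charge is where σ crosses zero. A minimal sketch of that bookkeeping, using invented (φ, σ) samples chosen to reproduce the quoted ≃19 μF cm⁻² and ∼0.22 V (these numbers stand in for the paper's molecular dynamics output and are not its data):

```python
import numpy as np

# Hypothetical (bias potential [V vs. SHE], surface charge [μC cm⁻²]) samples,
# standing in for output of the constant Fermi-level MD described above.
potential = np.array([-0.9, -0.6, -0.3, 0.0, 0.3])
charge = np.array([-21.3, -15.6, -9.9, -4.2, 1.5])

# Double-layer capacitance C_dl = dσ/dφ from a linear fit (μF cm⁻²),
# and the potential of zero charge where the fitted σ(φ) crosses zero.
slope, intercept = np.polyfit(potential, charge, 1)
c_dl = slope
pzc = -intercept / slope

print(f"C_dl ≈ {c_dl:.1f} μF cm⁻², pzc ≈ {pzc:.2f} V vs. SHE")
```

A linear fit suffices here because the invented σ(φ) is linear; a real metal/water interface would give a potential-dependent slope, i.e. a potential-dependent capacitance.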
NASA Astrophysics Data System (ADS)
Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.
2017-12-01
Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of and data from the Animas River watershed, an alpine mountainous area in Colorado, USA, using an unstructured distributed physically based hydrological model developed for a parallel computing environment, ADHydro. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolution resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin-scale and distributed across the model mesh.
Interactions between drought and soil biogeochemistry: scaling from molecules to meters
NASA Astrophysics Data System (ADS)
Schimel, J.; Schaeffer, S. M.
2011-12-01
Water is perhaps the single most critical resource for life, yet most terrestrial ecosystems experience regular drought. Reduced water potential causes physiological stress; reduced diffusion limits resource availability when microbes may need resources to acclimate. Most biogeochemical models, however, have assumed that soil processes either slow down or stop during drought. But organisms survive and enzymes remain viable. In California, as soils stay dry through the long summer drought, microbial biomass actually increases and pools of extractable organic C increase, probably because extracellular enzymes continue to break down plant detritus (notably roots). Yet ¹⁴C data suggest that in deeper soils, the pulse of C released on rewetting comes from pools with turnover times of as long as 800 years. What are the mechanisms that regulate these complex dynamics? They appear to involve differential moisture sensitivity for the activity of extracellular enzymes, substrate diffusion, and microbial metabolism. Rewetting not only redistributes materials made available during the drought, but it also disrupts aggregates and may make previously protected substrates available as well. We have used two approaches to capture these linkages between water and carbon in models that are applicable at the ecosystem scale and that could improve our ability to model biogeochemical cycles in arid and semi-arid ecosystems. One is a simple empirical modification to the DAYCENT model while the other is a mechanistic model that incorporates microbial dry-season processes.
Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere processes and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in this paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second experiment analyzed the scaling behavior of hydrologic responses during rain events and between rain events. The third experiment compared the hydrologic responses from distributed models with those from a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small-scale variations can be represented statistically if the scale is larger than a representative elementary area scale, which appears to be about 2-3 times the correlation length of the process. For natural catchments this appears to be about 1-2 sq km. The results concerning distributed versus lumped representations are more complicated. When the processes are nonlinear, lumping results in biases; otherwise a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.
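The claim that lumping biases the response exactly when the process is nonlinear is Jensen's inequality in hydrologic clothing: the mean of f(x) over heterogeneous cells differs from f applied to the mean ("effective") state. A small sketch with an invented threshold-type runoff function (not the paper's model) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear runoff response to soil moisture: runoff is
# generated only once local storage exceeds a threshold (saturation excess).
def runoff(moisture):
    return np.maximum(moisture - 0.6, 0.0) ** 2

# Spatially variable soil moisture across a catchment's grid cells.
moisture = rng.uniform(0.2, 1.0, size=10_000)

distributed = runoff(moisture).mean()   # average of the local responses
lumped = runoff(moisture.mean())        # response of the spatial-mean state

print(f"distributed: {distributed:.4f}, lumped: {lumped:.4f}")
```

Because the mean moisture sits right at the threshold, the lumped model produces essentially no runoff while the distributed one does; with a linear response function the two would agree, which is the condition under which 'equivalent' parameters work.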
3D virtual human atria: A computational platform for studying clinical atrial fibrillation.
Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui
2011-10-01
Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. 
Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and AF arrhythmogenesis. Results of such simulations can be directly compared with electrophysiological and endocardial mapping data, as well as clinical ECG recordings. The virtual human atria can provide in-depth insights into 3D excitation propagation processes within atrial walls of a whole heart in vivo, which is beyond the current technical capabilities of experimental or clinical set-ups. Copyright © 2011 Elsevier Ltd. All rights reserved.
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
Scales and kinetics of granular flows.
Goldhirsch, I.
1999-09-01
When a granular material experiences strong forcing, as may be the case, e.g., for coal or gravel flowing down a chute or snow (or rocks) avalanching down a mountain slope, the individual grains interact by nearly instantaneous collisions, much like in the classical model of a gas. The dissipative nature of the particle collisions renders this analogy incomplete and is the source of a number of phenomena which are peculiar to "granular gases," such as clustering and collapse. In addition, the inelasticity of the collisions is the reason that granular gases, unlike atomic ones, lack temporal and spatial scale separation, a fact manifested by macroscopic mean free paths, scale dependent stresses, "macroscopic measurability" of "microscopic fluctuations" and observability of the effects of the Burnett and super-Burnett "corrections." The latter features may also exist in atomic fluids but they are observable there only under extreme conditions. Clustering, collapse and a kinetic theory for rapid flows of dilute granular systems, including a derivation of boundary conditions, are described alongside the mesoscopic properties of these systems with emphasis on the effects, theoretical conclusions and restrictions imposed by the lack of scale separation. (c) 1999 American Institute of Physics.
On a radiative origin of the Standard Model from trinification
NASA Astrophysics Data System (ADS)
Camargo-Molina, José Eliel; Morais, António P.; Pasechnik, Roman; Wessén, Jonas
2016-09-01
In this work, we present a trinification-based grand unified theory incorporating a global SU(3) family symmetry that after a spontaneous breaking leads to a left-right symmetric model. Already at the classical level, this model can accommodate the matter content and the quark Cabibbo mixing in the Standard Model (SM) with only one Yukawa coupling at the unification scale. Considering the minimal low-energy scenario with the least amount of light states, we show that the resulting effective theory enables dynamical breaking of its gauge group down to that of the SM by means of radiative corrections accounted for by the renormalisation group evolution at one loop. This result paves the way for a consistent explanation of the SM breaking scale and fermion mass hierarchies.
Delayed pull-in transitions in overdamped MEMS devices
NASA Astrophysics Data System (ADS)
Gomez, Michael; Moulton, Derek E.; Vella, Dominic
2018-01-01
We consider the dynamics of overdamped MEMS devices undergoing the pull-in instability. Numerous previous experiments and numerical simulations have shown a significant increase in the pull-in time under DC voltages close to the pull-in voltage. Here the transient dynamics slow down as the device passes through a meta-stable or bottleneck phase, but this slowing down is not well understood quantitatively. Using a lumped parallel-plate model, we perform a detailed analysis of the pull-in dynamics in this regime. We show that the bottleneck phenomenon is a type of critical slowing down arising from the pull-in transition. This allows us to show that the pull-in time obeys an inverse square-root scaling law as the transition is approached; moreover we determine an analytical expression for this pull-in time. We then compare our prediction to a wide range of pull-in time data reported in the literature, showing that the observed slowing down is well captured by our scaling law, which appears to be generic for overdamped pull-in under DC loads. This realization provides a useful design rule with which to tune dynamic response in applications, including state-of-the-art accelerometers and pressure sensors that use pull-in time as a sensing mechanism. We also propose a method to estimate the pull-in voltage based only on data of the pull-in times.
Ditching Tests of a 1/24-Scale Model of the Lockheed XR60-1 Airplane, TED No. NACA 235
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J.; Cederborg, Gibson A.
1948-01-01
The ditching characteristics of the Lockheed XR60-1 airplane were determined by tests of a 1/24-scale dynamic model in calm water at the Langley tank no. 2 monorail. Various landing attitudes, flap settings, speeds, and conditions of damage were investigated. The ditching behavior was evaluated from recordings of decelerations, length of runs, and motions of the model. Scale-strength bottoms and simulated crumpled bottoms were used to reproduce probable damage to the fuselage. It was concluded that the airplane should be ditched at a landing attitude of about 5 deg with flaps full down. At this attitude, the maximum longitudinal deceleration should not exceed 2g and the landing run will be about three fuselage lengths. Damage to the fuselage will not be excessive and will be greatest near the point of initial contact with the water.
Spin foam propagator: A new perspective to include the cosmological constant
NASA Astrophysics Data System (ADS)
Han, Muxin; Huang, Zichang; Zipfel, Antonia
2018-04-01
In recent years, the calculation of the first nonvanishing order of the metric 2-point function or graviton propagator in a semiclassical limit has evolved as a standard test for the credibility of a proposed spin foam model. The existing results of spin foam graviton propagators rely heavily on the so-called double scaling limit where spins j are large and the Barbero-Immirzi parameter γ is small such that the area A ∝ jγ is approximately constant. However, it seems that this double scaling limit is bound to break down in models including a cosmological constant. We explore this in detail for the model recently proposed by Haggard, Han, Kaminski, and Riello [Nucl. Phys. B900, 1 (2015), 10.1016/j.nuclphysb.2015.08.023] and discuss alternative definitions of a graviton propagator, in which the double scaling limit can be avoided.
Ditching Investigation of a 1/12-Scale Model of the Douglas F3D-2 Airplane, TED No. NACA DE 381
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J.; Thompson, William C.
1955-01-01
An investigation of a 1/12-scale dynamically similar model of the Douglas F3D-2 airplane was made in calm water to observe the ditching behavior and to determine the safest procedure for making an emergency water landing. Various conditions of damage were simulated to determine the behavior which probably would occur in a full-scale ditching. The behavior of the model was determined from motion-picture records, time-history acceleration records, and visual observations. It was concluded that the airplane should be ditched at a medium-high attitude of about 8 degrees with the landing flaps down 40 degrees. In calm water the airplane will probably make a smooth run of about 550 feet and will have a maximum longitudinal deceleration of about 3g. The fuselage bottom will probably be damaged enough to allow the fuselage to fill with water very rapidly.
Giustino, Feliciano; Umari, Paolo; Pasquarello, Alfredo
2003-12-31
Using a density-functional approach, we study the dielectric permittivity across interfaces at the atomic scale. Focusing on the static and high-frequency permittivities of SiO2 films on silicon, for oxide thicknesses from 12 Å down to the atomic scale, we find a departure from bulk values in accord with experiment. A classical three-layer model accounts for the calculated permittivities and is supported by the microscopic polarization profile across the interface. The local screening varies on length scales corresponding to first-neighbor distances, indicating that the dielectric transition is governed by the chemical grading. Silicon-induced gap states are shown to play a minor role.
Coalescence of Magnetic Islands in the low resistivity Hall MHD Regime.
NASA Astrophysics Data System (ADS)
Knoll, D. A.; Chacon, L.; Simakov, A. N.
2006-10-01
We revisit the well-known problem of the coalescence of magnetic islands in the context of Hall MHD. Unlike previous work, we focus on regimes of small resistivity (S ∼ 10^6) in which the ion skin depth d_i ≪ L (the system size). These conditions are of relevance, for instance, in the solar corona and the earth's magnetotail. We aim to address under which conditions such systems can exhibit fast reconnection. First, we revisit the resistive MHD problem to further understand the well-known sloshing result. Next, the interaction between the ion inertial length, d_i, and the dynamically evolving current sheet scale length, δJ, is established. Initially, d_i ≪ δJ. If η is such that δJ dynamically thins down to d_i prior to the well-known sloshing phenomenon, then sloshing is avoided. This results in peak reconnection rates which are η-independent and scale as √d_i. However, if d_i is small enough that resistivity prevents δJ from thinning down to this scale prior to sloshing, then reconnection (and sloshing) proceeds as in the resistive MHD model. Finally, we discuss our development of a semi-analytical model to describe the well-known sloshing result in the resistive MHD model, and our plans to extend it to Hall MHD. D. A. Knoll, L. Chacón, Phys. Plasmas, 13 (3), p. 032307 (2006). D. A. Knoll, L. Chacón, Phys. Rev. Lett., 96, 135001 (2006). A. Simakov, L. Chacón, D. A. Knoll, Phys. Plasmas, accepted (2006).
Scaling in the vicinity of the four-state Potts fixed point
NASA Astrophysics Data System (ADS)
Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.
2017-08-01
We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.
How to Connect Cardiac Excitation to the Atomic Interactions of Ion Channels.
Silva, Jonathan R
2018-01-23
Many have worked to create cardiac action potential models that explicitly represent atomic-level details of ion channel structure. Such models have the potential to define new therapeutic directions and to show how nanoscale perturbations to channel function predispose patients to deadly cardiac arrhythmia. However, there have been significant experimental and theoretical barriers that have limited model usefulness. Recently, many of these barriers have come down, suggesting that considerable progress toward creating these long-sought models may be possible in the near term. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Representation of sub-element scale variability in snow accumulation and ablation is increasingly recognized as important in distributed hydrologic modelling. Representing sub-grid scale variability may be accomplished through numerical integration of a nested grid or through a l...
Krstolic, Jennifer L.; Hayes, Donald C.
2010-01-01
Data collected with the GeoXT Trimble GPS unit using ArcPad 6.1. (summer 2006-2007). Files were created within a geodatabase to create a data dictionary for use in ArcPad during data collection. Drop down lists for habitat type, substrate, depth, width, length, and descriptions were included. Data files produced on the GeoXT were point shapefiles that could be checked back into the geodatabase and viewable as a layer. Points were gathered while canoeing along the South Fork Shenandoah River. Each location marked a change in meso-scale habitat type. GPS points were supplemented with GIS-derived points in areas where manual measurements were made. The points were used to generate a line coverage. This coverage represents physical habitat at a meso-scale (width of stream).
Yamaguchi, Takami; Ishikawa, Takuji; Imai, Y; Matsuki, N; Xenos, Mikhail; Deng, Yuefan; Bluestein, Danny
2010-03-01
A major computational challenge for multiscale modeling is the coupling of disparate length scales and time scales between molecular mechanics and macroscopic transport, spanning the spatial and temporal scales characterizing the complex processes taking place in flow-induced blood clotting. Flow and pressure effects on a cell-like platelet can be well represented by a continuum mechanics model down to the micrometer level. However, the molecular effects of adhesion/aggregation bonds are on the order of nanometers. A successful multiscale model of platelet response to flow stresses in devices and the ensuing clotting responses should be able to characterize the clotting reactions and their interactions with the flow. This paper attempts to describe a few of the computational methods that were developed in recent years and became available to researchers in the field. They differ from traditional approaches that dominate the field by expanding on prevailing continuum-based approaches, or by completely departing from them, yielding an expanding toolkit that may facilitate further elucidation of the underlying mechanisms of blood flow and the cellular response to it. We offer a paradigm shift by adopting a multidisciplinary approach with fluid dynamics simulations coupled to biophysical and biochemical transport.
NASA Astrophysics Data System (ADS)
Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.
2018-03-01
Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterization schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent updrafts and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of updrafts and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
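At the heart of any EDMF scheme is the split of the subgrid vertical flux into a coherent plume (mass-flux) part and an environmental down-gradient (eddy-diffusivity) part. A minimal single-updraft sketch of that decomposition follows; it is illustrative only, and the extended scheme described above adds prognostic plume areas, entrainment/detrainment, and a TKE-based diffusivity on top of this skeleton:

```python
def edmf_flux(a_up, w_up, phi_up, w_env, phi_env, k_eddy, dphi_dz):
    """Subgrid vertical flux <w'phi'> for one updraft plus environment:
    coherent mass-flux term plus down-gradient environmental diffusion."""
    # Mass-flux part: area-weighted updraft/environment contrast.
    mass_flux = a_up * (1.0 - a_up) * (w_up - w_env) * (phi_up - phi_env)
    # Eddy-diffusivity part: isotropic turbulence in the environment.
    eddy_diff = -(1.0 - a_up) * k_eddy * dphi_dz
    return mass_flux + eddy_diff

# A shallow-convective-looking example (invented numbers): a 10% updraft
# area carrying air 2 K warmer and ~1 m/s faster than its environment.
flux = edmf_flux(a_up=0.1, w_up=1.0, phi_up=302.0,
                 w_env=-0.1, phi_env=300.0,
                 k_eddy=10.0, dphi_dz=0.003)
print(f"total flux: {flux:.3f} K m/s")
```

With these numbers the coherent plumes dominate the flux even at 10% area fraction, which is why treating them separately from the diffusive environment pays off.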
Ca2+/calmodulin binding to PSD-95 mediates homeostatic synaptic scaling down.
Chowdhury, Dhrubajyoti; Turner, Matthew; Patriarchi, Tommaso; Hergarden, Anne C; Anderson, David; Zhang, Yonghong; Sun, Junqing; Chen, Chao-Yin; Ames, James B; Hell, Johannes W
2018-01-04
Postsynaptic density protein-95 (PSD-95) localizes AMPA-type glutamate receptors (AMPARs) to postsynaptic sites of glutamatergic synapses. Its postsynaptic displacement is necessary for loss of AMPARs during homeostatic scaling down of synapses. Here, we demonstrate that upon Ca2+ influx, Ca2+/calmodulin (Ca2+/CaM) binding to the N-terminus of PSD-95 mediates postsynaptic loss of PSD-95 and AMPARs during homeostatic scaling down. Our NMR structural analysis identified E17 within the PSD-95 N-terminus as important for binding to Ca2+/CaM by interacting with R126 on CaM. Mutating E17 to R prevented homeostatic scaling down in primary hippocampal neurons, which is rescued via charge inversion by ectopic expression of CaM R126E, as determined by analysis of miniature excitatory postsynaptic currents. Accordingly, increased binding of Ca2+/CaM to PSD-95 induced by a chronic increase in Ca2+ influx is a critical molecular event in homeostatic downscaling of glutamatergic synaptic transmission. © 2017 The Authors.
Blood Flow: Multi-scale Modeling and Visualization (July 2011)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-01-01
Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualizations. This animation presents early results of two studies used in the development of a multi-scale visualization methodology. The first illustrates a flow of healthy (red) and diseased (blue) blood cells with a Dissipative Particle Dynamics (DPD) method. Each blood cell is represented by a mesh, small spheres show a sub-set of particles representing the blood plasma, while instantaneous streamlines and slices represent the ensemble average velocity. In the second we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing as they aggregate on the wall of an aneurysm. Simulation was performed on Kraken at the National Institute for Computational Sciences. Visualization was produced using resources of the Argonne Leadership Computing Facility at Argonne National Laboratory.
Andrews, Elisabeth; Balkanski, Yves; Boucher, Olivier; Myhre, Gunnar; Samset, Bjørn Hallvard; Schulz, Michael; Schuster, Gregory L.; Valari, Myrto; Tao, Shu
2018-01-01
Abstract There is high uncertainty in the direct radiative forcing of black carbon (BC), an aerosol that strongly absorbs solar radiation. The observation‐constrained estimate, which is several times larger than the bottom‐up estimate, is influenced by the spatial representativeness error due to the mesoscale inhomogeneity of the aerosol fields and the relatively low resolution of global chemistry‐transport models. Here we evaluated the spatial representativeness error for two widely used observational networks (AErosol RObotic NETwork and Global Atmosphere Watch) by downscaling the geospatial grid in a global model of BC aerosol absorption optical depth to 0.1° × 0.1°. Comparing the models at a spatial resolution of 2° × 2° with BC aerosol absorption at AErosol RObotic NETwork sites (which are commonly located near emission hot spots) tends to cause a global spatial representativeness error of 30%, as a positive bias for the current top‐down estimate of global BC direct radiative forcing. By contrast, the global spatial representativeness error will be 7% for the Global Atmosphere Watch network, because the sites are located in such a way that there are almost an equal number of sites with positive or negative representativeness error. PMID:29937603
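The hot-spot sampling bias described in this abstract can be illustrated with a toy calculation (all numbers and grid sizes below are synthetic, not the study's data): a site located on a fine-scale emission hot spot reads higher than the mean of the coarse model cell containing it, yielding a positive representativeness error.

```python
# Illustrative sketch (synthetic data): why comparing a coarse-grid model value
# with a point measurement at an emission hot spot biases the comparison.
import numpy as np

rng = np.random.default_rng(0)
fine = rng.uniform(0.001, 0.003, size=(20, 20))  # background AAOD on fine cells
fine[10, 10] = 0.02                              # emission hot spot

coarse_mean = fine.mean()   # what one coarse (e.g., 2-deg) grid box "sees"
site_value = fine[10, 10]   # what a site located at the hot spot measures

# The site overestimates the cell mean -> apparent model underestimate,
# i.e., a positive representativeness error for hot-spot-sited networks.
rel_error = (site_value - coarse_mean) / site_value
```

Averaging over many sites, some positively and some negatively biased, is what lets a network like Global Atmosphere Watch partially cancel this error.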
A Mass Diffusion Model for Dry Snow Utilizing a Fabric Tensor to Characterize Anisotropy
NASA Astrophysics Data System (ADS)
Shertzer, Richard H.; Adams, Edward E.
2018-03-01
A homogenization algorithm for randomly distributed microstructures is applied to develop a mass diffusion model for dry snow. Homogenization is a multiscale approach linking constituent behavior at the microscopic level—among ice and air—to the macroscopic material—snow. Principles of continuum mechanics at the microscopic scale describe water vapor diffusion across an ice grain's surface to the air-filled pore space. Volume averaging and a localization assumption scale up and down, respectively, between microscopic and macroscopic scales. The model yields a mass diffusivity expression at the macroscopic scale that is, in general, a second-order tensor parameterized by both bulk and microstructural variables. The model predicts a mass diffusivity of water vapor through snow that is less than that through air. Mass diffusivity is expected to decrease linearly with ice volume fraction. Potential anisotropy in snow's mass diffusivity is captured due to the tensor representation. The tensor is built from directional data assigned to specific, idealized microstructural features. Such anisotropy has been observed in the field and laboratories in snow morphologies of interest such as weak layers of depth hoar and near-surface facets.
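The fabric-tensor construction that carries the directional information above can be sketched as follows. The tensor itself (the orientation average of outer products of unit direction vectors) is standard; the closure D = D_air · φ_pore · F used below is a hypothetical stand-in for illustration, not the paper's derived expression, and the direction data are synthetic.

```python
# Sketch: second-order fabric tensor from directional microstructural data
# (e.g., ice-grain bond normals), used to capture anisotropic diffusivity.
import numpy as np

def fabric_tensor(directions):
    """Fabric tensor F_ij = <n_i n_j> averaged over unit vectors n."""
    n = np.asarray(directions, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)  # normalize each row
    return np.einsum('ki,kj->ij', n, n) / len(n)      # mean outer product

# Vertically biased directions, as for depth-hoar chains (synthetic data):
dirs = [(0.1, 0.0, 1.0), (0.0, 0.1, 1.0), (-0.1, 0.0, 1.0), (1.0, 0.0, 0.2)]
F = fabric_tensor(dirs)  # symmetric, trace 1; F[2,2] > F[0,0] here

D_air, phi_pore = 2.2e-5, 0.7   # vapor-in-air diffusivity (m^2/s); pore fraction
D_eff = D_air * phi_pore * F    # hypothetical anisotropic closure, illustration only
```

Consistent with the abstract, any such closure scaled by pore fraction gives an effective diffusivity below that of water vapor through air.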
Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale
NASA Astrophysics Data System (ADS)
Barrios, M. I.
2013-12-01
Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Scaling understanding is therefore a key issue for advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at point scale to a simplified, physically meaningful modeling approach at grid-cell scale. Numerical simulations have the advantage over field experimentation of dealing with a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicated that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues.
Moreover, the implementation of this virtual lab improved the students' ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
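The point-scale side of such a virtual experiment is well defined: under the Green-Ampt model, cumulative infiltration F(t) satisfies the implicit equation F − ψΔθ ln(1 + F/(ψΔθ)) = Kt, and the infiltration rate is f = K(1 + ψΔθ/F). A minimal sketch, with hypothetical soil parameter values:

```python
# Green-Ampt point-scale infiltration (illustrative parameter values).
# Cumulative infiltration F(t): F - psi*d_theta*ln(1 + F/(psi*d_theta)) = K*t
# Infiltration rate:            f(t) = K * (1 + psi*d_theta / F)
import math

def green_ampt_F(t, K, psi, d_theta, tol=1e-10):
    """Cumulative infiltration F(t) via fixed-point iteration (units: cm, h)."""
    pd = psi * d_theta
    F = max(K * t, tol)  # initial guess
    for _ in range(200):
        F_new = K * t + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def green_ampt_rate(t, K, psi, d_theta):
    """Infiltration rate f(t); decays toward K as the wetting front advances."""
    F = green_ampt_F(t, K, psi, d_theta)
    return K * (1.0 + psi * d_theta / F)

# Hypothetical silty-soil-like values: K = 0.65 cm/h, psi = 16.7 cm, d_theta = 0.34
F2 = green_ampt_F(2.0, 0.65, 16.7, 0.34)
```

Drawing K, ψ, and Δθ from the lognormal and beta distributions mentioned above, and averaging f over many such points, is the step the inverse simulations link to the grid-cell storage model.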
Validity of thermally-driven small-scale ventilated filling box models
NASA Astrophysics Data System (ADS)
Partridge, Jamie L.; Linden, P. F.
2013-11-01
Most previous work studying building ventilation flows at laboratory scale has used saline plumes in water. The production of buoyancy forces using salinity variations in water allows dynamic similarity between the small-scale models and the full-scale flows. However, in some situations, such as including the effects of non-adiabatic boundaries, the use of a thermal plume is desirable. The efficacy of using temperature differences to produce buoyancy-driven flows representing natural ventilation of a building in a small-scale model is examined here, with comparison between previous theoretical and new, heat-based, experiments.
A Multi-Scale, Integrated Approach to Representing Watershed Systems
NASA Astrophysics Data System (ADS)
Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos
2014-05-01
Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to the fine space-time dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport, tRIBS+VEGGIE-FEaST. The model aims to take advantage of the wealth of data now available representing watershed topography, vegetation, soil, and land use, as well as to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the precipitation-runoff partitioning process to the dynamics of flow, erosion, and sedimentation, and how the soil's substrate condition can impact the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables in future climate conditions and explore the impact of climate information uncertainty.
Multiple constraint analysis of regional land-surface carbon flux
D.P. Turner; M. Göckede; B.E. Law; W.D. Ritts; W.B. Cohen; Z. Yang; T. Hudiburg; R. Kennedy; M. Duane
2011-01-01
We applied and compared bottom-up (process model-based) and top-down (atmospheric inversion-based) scaling approaches to evaluate the spatial and temporal patterns of net ecosystem production (NEP) over a 2.5 × 10^5 km^2 area (the state of Oregon) in the western United States. Both approaches indicated a carbon sink over this...
Pellatt, Marlow G; Goring, Simon J; Bodtker, Karin M; Cannon, Alex J
2012-04-01
Under the Canadian Species at Risk Act (SARA), Garry oak (Quercus garryana) ecosystems are listed as "at-risk" and act as an umbrella for over one hundred species that are endangered to some degree. Understanding Garry oak responses to future climate scenarios at scales relevant to protected area managers is essential to effectively manage existing protected area networks and to guide the selection of temporally connected migration corridors, additional protected areas, and to maintain Garry oak populations over the next century. We present Garry oak distribution scenarios using two random forest models calibrated with down-scaled bioclimatic data for British Columbia, Washington, and Oregon based on 1961-1990 climate normals. The suitability models are calibrated using either both precipitation and temperature variables or using only temperature variables. We compare suitability predictions from four General Circulation Models (GCMs) and present CGCM2 model results under two emissions scenarios. For each GCM and emissions scenario we apply the two Garry oak suitability models and use the suitability models to determine the extent and temporal connectivity of climatically suitable Garry oak habitat within protected areas from 2010 to 2099. The suitability models indicate that while 164 km(2) of the total protected area network in the region (47,990 km(2)) contains recorded Garry oak presence, 1635 and 1680 km(2) of climatically suitable Garry oak habitat is currently under some form of protection. Of this suitable protected area, only between 6.6 and 7.3% will be "temporally connected" between 2010 and 2099 based on the CGCM2 model. These results highlight the need for public and private protected area organizations to work cooperatively in the development of corridors to maintain temporal connectivity in climatically suitable areas for the future of Garry oak ecosystems.
On the Relationship Between Tooth Shape and Masticatory Efficiency: A Finite Element Study.
Berthaume, Michael A
2016-05-01
Dental topography has successfully linked disparate tooth shapes to distinct dietary categories, but not to masticatory efficiency. Here, the relationship between four dental topographic metrics and brittle food item breakdown efficiency during compressive biting was investigated using a parametric finite element model of a bunodont molar. Food item breakdown efficiency was chosen to represent masticatory efficiency as it isolated tooth-food item interactions, where most other categories of masticatory efficiency include several aspects of the masticatory process. As relative food item size may affect the presence/absence of any relationship, four isometrically scaled, hemispherical, proxy food items were considered. Topographic metrics were uncorrelated to food item breakdown efficiency irrespective of relative food item size, and dental topographic metrics were largely uncorrelated to one another. The lack of a correlation between topographic metrics and food item breakdown efficiency is not unexpected as not all food items break down in the same manner (e.g., nuts are crushed, leaves are sheared), and only one food item shape was considered. In addition, food item breakdown efficiency describes tooth-food item interactions and requires location and shape specific information, which are absent from dental topographic metrics. This makes it unlikely any one efficiency metric will be correlated to all topographic metrics. These results emphasize the need to take into account how food items break down during biting, ingestion, and mastication when investigating the mechanical relationship between food item shape, size, mechanical properties, and breakdown, and tooth shape. © 2016 Wiley Periodicals, Inc.
Gogol, Katarzyna; Brunner, Martin; Preckel, Franzis; Goetz, Thomas; Martin, Romain
2016-01-01
The present study investigated the developmental dynamics of general and subject-specific (i.e., mathematics, French, and German) components of students' academic self-concept, anxiety, and interest. To this end, the authors integrated three lines of research: (a) hierarchical and multidimensional approaches to the conceptualization of each construct, (b) longitudinal analyses of bottom-up and top-down developmental processes across hierarchical levels, and (c) developmental processes across subjects. The data stemmed from two longitudinal large-scale samples (N = 3498 and N = 3863) of students attending Grades 7 and 9 in Luxembourgish schools. Nested-factor models were applied to represent each construct at each grade level. The analyses demonstrated that several characteristics were shared across constructs. All constructs were multidimensional in nature with respect to the different subjects, showed a hierarchical organization with a general component at the apex of the hierarchy, and had a strong separation between the subject-specific components at both grade levels. Further, all constructs showed moderate differential stabilities at both the general (0.42 < r < 0.55) and subject-specific levels (0.45 < r < 0.73). Further, little evidence was found for top-down or bottom-up developmental processes. Rather, general and subject-specific components in Grade 9 proved to be primarily a function of the corresponding components in Grade 7. Finally, change in several subject-specific components could be explained by negative effects across subjects. PMID:27014162
The Rosenberg Self-Esteem Scale: a bifactor answer to a two-factor question?
McKay, Michael T; Boduszek, Daniel; Harvey, Séamus A
2014-01-01
Despite its long-standing and widespread use, disagreement remains regarding the structure of the Rosenberg Self-Esteem Scale (RSES). In particular, concern remains regarding the degree to which the scale assesses self-esteem as a unidimensional or multidimensional (positive and negative self-esteem) construct. Using a sample of 3,862 high school students in the United Kingdom, 4 models were tested: (a) a unidimensional model, (b) a correlated 2-factor model in which the 2 latent variables are represented by positive and negative self-esteem, (c) a hierarchical model, and (d) a bifactor model. The totality of results including item loadings, goodness-of-fit indexes, reliability estimates, and correlations with self-efficacy measures all supported the bifactor model, suggesting that the 2 hypothesized factors are better understood as "grouping" factors rather than as representative of latent constructs. Accordingly, this study supports the unidimensionality of the RSES and the scoring of all 10 items to produce a global self-esteem score.
Finite-size scaling above the upper critical dimension in Ising models with long-range interactions
NASA Astrophysics Data System (ADS)
Flores-Sola, Emilio J.; Berche, Bertrand; Kenna, Ralph; Weigel, Martin
2015-01-01
The correlation length plays a pivotal role in finite-size scaling and hyperscaling at continuous phase transitions. Below the upper critical dimension, where the correlation length is proportional to the system length, both finite-size scaling and hyperscaling take conventional forms. Above the upper critical dimension these forms break down and a new scaling scenario appears. Here we investigate this scaling behaviour by simulating one-dimensional Ising ferromagnets with long-range interactions. We show that the correlation length scales as a non-trivial power of the linear system size and investigate the scaling forms. For interactions of sufficiently long range, the disparity between the correlation length and the system length can be made arbitrarily large, while maintaining the new scaling scenarios. We also investigate the behavior of the correlation function above the upper critical dimension and the modifications imposed by the new scaling scenario onto the associated Fisher relation.
The global reference atmospheric model, mod 2 (with two scale perturbation model)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Hargraves, W. R.
1976-01-01
The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model, using perturbation magnitudes adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition, is described. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small-scale and large-scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary-scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation for use at low latitudes, which does not "blow up" there as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
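The idea of stochastic perturbations with prescribed magnitude and correlation can be sketched with a first-order autoregressive scheme. This is an illustration of the concept only, with made-up parameter values, not the model's actual two-scale formulation:

```python
# Sketch: vertically correlated random perturbations with fixed standard
# deviation sigma and level-to-level correlation r, via an AR(1) recursion.
import numpy as np

def correlated_perturbations(n_levels, sigma, r, seed=1):
    rng = np.random.default_rng(seed)
    p = np.empty(n_levels)
    p[0] = sigma * rng.standard_normal()
    for i in range(1, n_levels):
        # AR(1): variance stays sigma^2 at every level, correlation r per step
        p[i] = r * p[i - 1] + sigma * np.sqrt(1.0 - r**2) * rng.standard_normal()
    return p

small = correlated_perturbations(5000, sigma=0.02, r=0.8)   # short-wavelength
large = correlated_perturbations(5000, sigma=0.05, r=0.99)  # long-wavelength
total = small + large   # a two-scale perturbation, in the spirit of the model
```

The high-correlation component varies slowly (long wavelengths, tide/planetary-wave-like) while the low-correlation component varies rapidly (turbulence/gravity-wave-like).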
Ballistics and anatomical modelling - A review.
Humphrey, Caitlin; Kumaratilake, Jaliya
2016-11-01
Ballistics is the study of a projectile's motion and can be broken down into four stages: internal, intermediate, external and terminal ballistics. The study of the effects a projectile has on living tissue is referred to as wound ballistics and falls within terminal ballistics. To understand the effects a projectile has on living tissues, the mechanisms of wounding need to be understood. These include the permanent and temporary cavities, energy, yawing, tumbling and fragmenting. Much ballistics research has been conducted, including studies using cadavers, animal models and simulants such as ballistic ordnance gelatine. Further research is being conducted into developing anatomical, 3D, experimental and computational models. However, these models need to accurately represent the human body and its heterogeneous nature, which involves understanding the biomechanical properties of the different tissues and organs. Research to accurately represent human tissues with simulants is needed and is slowly being conducted. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A study of the dynamics and photochemistry of vibrationally excited ozone in the middle atmosphere
NASA Astrophysics Data System (ADS)
Kaufmann, M.; Gil-Lopez, S.; Imk-Iaa Mipas/Envisat Team
The vibrational states of ozone depart from local thermodynamic equilibrium (LTE) due to radiative absorption, chemical pumping, spontaneous emission, and photochemical reaction. The recombination reaction of O + O_2 is the most important source of highly excited ozone; however, the distribution of the energy disposal (the nascent distribution) is poorly known. In addition, the collisional relaxation scheme is another significant source of uncertainty in the modeling of ozone infrared radiation. We have built a non-LTE model, part of the Generic RAdiative traNsfer AnD non-LTE population Algorithm (GRANADA), that represents the nascent distribution by means of a linear surprisal model. For the collisional relaxation we extrapolate measured rates of the fundamental bands to hot-band transitions by using a Landau-Teller type scaling in combination with a relaxation-path-dependent energy-gap model. In this talk we present model results in terms of ozone vibrational temperatures and atmospheric limb radiances in the 4-15 μm region. The modeled limb spectra will be compared with measurements from the MIPAS (Michelson Interferometer for Passive Atmosphere Sounding) instrument on board the polar orbiter ENVISAT. The high spectral resolution of this instrument gives a unique opportunity to observe ozone non-LTE emissions down to the lower stratosphere, where ozone densities are high enough to sense even highly excited vibrational states. These give valuable information for the validation of the non-LTE scheme, and will therefore improve the quality of upper mesospheric ozone retrievals.
Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Papadopoulos, G. A.
2017-12-01
The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (~200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.
"Handling" seismic hazard: 3D printing of California Faults
NASA Astrophysics Data System (ADS)
Kyriakopoulos, C.; Potter, M.; Richards-Dinger, K. B.
2017-12-01
As earth scientists, we face the challenge of how to explain and represent our work and achievements to the general public. Nowadays, this problem is partially alleviated by the use of modern visualization tools such as advanced scientific software (Paraview.org), high resolution monitors, elaborate video simulations, and even 3D Virtual Reality goggles. However, the ability to manipulate and examine a physical object in 3D is still an important tool to connect better with the public. For that reason, we are presenting a scaled 3D printed version of the complex network of earthquake faults active in California based on that used by the Uniform California Earthquake Rupture Forecast 3 (UCERF3) (Field et al., 2013). We start from the fault geometry in the UCERF3.1 deformation model files. These files contain information such as the coordinates of the surface traces of the faults, dip angle, and depth extent. The faults specified in the above files are triangulated at 1 km resolution and exported as a facet (.fac) file. The facet file is later imported into the Trelis 15.1 mesh generator (csimsoft.com). We use Trelis to perform the following three operations. First, we scale down the model so that 100 mm corresponds to 100 km. Second, we "thicken" the walls of the faults; wall thickness of at least 1 mm is necessary in 3D printing, so we thicken the fault geometry by 1 mm on each side for a total of 2 mm thickness. Third, we break the model down into parts that will fit the printing bed size (~25 x 20 mm). Finally, each part is exported in stereolithography format (.stl). For our project, we are using the 3D printing facility within the Creat'R Lab in the UC Riverside Orbach Science Library. The 3D printer is a MakerBot Replicator Desktop, 5th Generation. The print resolution is 0.2 mm (standard quality). The printing material is MakerBot PLA filament, 1.75 mm diameter, large spool, green.
The most complex part of the display model requires approximately 17 hours to print. After assembly, the length of the display is 1.4 m. From our initial effort in printing and handling the 3D printed faults, we conclude that a physical, 3D-printed model is very efficient in eliminating common misconceptions that nonscientists have about earthquake faults, particularly their geometry, extent and orientation in space.
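The scaling and bed-fitting steps described above can be sketched numerically. This is an illustrative outline only, not the authors' Trelis workflow, and the fault coordinates are synthetic:

```python
# Sketch: map fault-mesh coordinates given in km to print millimeters so that
# 100 km corresponds to 100 mm (a 1:1,000,000 scale, i.e. 1 km -> 1 mm), then
# check which parts exceed the print bed and need splitting.
import numpy as np

KM_TO_PRINT_MM = 100.0 / 100.0  # 100 km -> 100 mm, so 1 km -> 1 mm

def to_print_coords(vertices_km):
    """Scale an (N, 3) array of fault-mesh vertices from km to print mm."""
    return np.asarray(vertices_km, dtype=float) * KM_TO_PRINT_MM

def fits_bed(vertices_mm, bed=(25.0, 20.0)):
    """True if the part's x-y footprint fits the stated ~25 x 20 mm bed."""
    span = vertices_mm[:, :2].max(axis=0) - vertices_mm[:, :2].min(axis=0)
    return bool(span[0] <= bed[0] and span[1] <= bed[1])

# A 60 km-long, 15 km-deep fault segment (synthetic coordinates, in km):
fault = np.array([[0, 0, 0], [60, 5, 0], [0, 0, -15], [60, 5, -15]])
part = to_print_coords(fault)       # now 60 mm long -> exceeds the 25 mm bed
needs_split = not fits_bed(part)    # this part must be broken into pieces
```

The 1 mm wall offset on each side would be applied along triangle normals in the mesh generator before this bed check.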
Scaling and Single Event Effects (SEE) Sensitivity
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.
2003-01-01
This paper begins by discussing the potential for scaling down transistors and other components to fit more of them on chips in order to increase computer processing speed. It also addresses technical challenges to further scaling. Components have been scaled down enough to allow single particles to have an effect, known as a Single Event Effect (SEE). This paper explores the relationship between scaling and the following SEEs: Single Event Upsets (SEU) on DRAMs and SRAMs, latch-up, snap-back, Single Event Burnout (SEB), Single Event Gate Rupture (SEGR), and ion-induced soft breakdown (SBD).
Tip vortices in the actuator line model
NASA Astrophysics Data System (ADS)
Martinez, Luis; Meneveau, Charles
2017-11-01
The actuator line model (ALM) is a widely used tool to represent wind turbine blades in computational fluid dynamics without the need to resolve the full geometry of the blades. The ALM can be optimized to represent the `correct' aerodynamics of the blades by choosing an appropriate smearing length scale ɛ. This appropriate length scale creates a tip vortex which induces a downwash near the tip of the blade. A theoretical framework is used to establish a solution for the induced velocity created by a tip vortex as a function of the smearing length scale ɛ. A correction is presented which allows the use of a non-optimal smearing length scale while still providing the downwash that would be induced by the optimal length scale. Thanks to the National Science Foundation (NSF), which provided financial support for this research via Grants IGERT 0801471, IIA-1243482 (the WINDINSPIRE project) and ECCS-1230788.
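In ALM implementations, the blade forces are commonly projected onto the flow grid with a three-dimensional Gaussian kernel of width ɛ, η(r) = exp(−(r/ɛ)²)/(ɛ³π^{3/2}). The following short sketch checks numerically that this standard kernel is properly normalized; the value of ɛ is arbitrary for illustration:

```python
# Sketch: the 3-D Gaussian smearing kernel commonly used in actuator line
# models; it must integrate to 1 so that the total blade force is conserved.
import numpy as np

def gaussian_kernel(r, eps):
    """eta(r) = exp(-(r/eps)^2) / (eps^3 * pi^(3/2)); integrates to 1."""
    return np.exp(-(r / eps) ** 2) / (eps ** 3 * np.pi ** 1.5)

eps = 2.5  # smearing scale, e.g. tied to grid spacing or blade chord
r = np.linspace(0.0, 10.0 * eps, 20001)
f = gaussian_kernel(r, eps) * 4.0 * np.pi * r**2   # radial shell integrand
# trapezoidal rule over [0, 10*eps]; the tail beyond is negligible
integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

The paper's point is that the choice of ɛ in this kernel controls how faithfully the smeared force field reproduces the tip-vortex-induced downwash.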
Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.
A Newton-Euler Description for Sediment Movement.
NASA Astrophysics Data System (ADS)
Maniatis, G.; Hoey, T.; Drysdale, T.; Hodge, R. A.; Valyrakis, M.
2015-12-01
We present progress from the development of a purpose-specific sensing system for sediment transport (Maniatis et al. 2013). This system utilises the capabilities of contemporary inertial micro-sensors (strap-down accelerometers and gyroscopes) to record fluvial transport from the moving body-frame of artificial pebbles modelled precisely to represent the motion of real, coarse sediment grains (D90 = 100 mm class). This type of measurement can be useful in the context of sediment transport only if the existing mathematical understanding of the process is updated. We test a new mathematical model which defines specifically how the data recorded in the body frame of the sensor (Lagrangian frame of reference) can be generalised to the reference frame of the flow (channel, Eulerian frame of reference). Given the association of the two most widely used models for sediment transport with those frames of reference (Shields' with the Eulerian frame and H. A. Einstein's with the Lagrangian frame), this description builds the basis for the definition of explicit incipient motion criteria (Maniatis et al. 2015) and for the upscaling from point-grain scale measurements to averaged, cross-sectional, stream-related metrics. Flume experiments were conducted in the Hydraulics Laboratory of the University of Glasgow, where a spherical sensor of 800 mm diameter, capable of recording inertial dynamics at 80 Hz, was tested under fluvial transport conditions. We measured the dynamical response of the unit during pre-entrainment/entrainment transitions, on beds both scaled and non-scaled to the sensor's diameter, and for a range of hydrodynamic conditions (slopes up to 0.02 and flow increase rates up to 0.05 m³ s⁻¹). Preliminary results from field deployment on a mixed bedrock-alluvial channel are also presented. References: Maniatis et al. 2013, J. Sens. Actuator Netw. 2(4), 761-779; Maniatis et al. 2015, "Calculation of explicit probability of entrainment based on inertial acceleration measurements", J. Hydraulic Engineering, under review.
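The Lagrangian-to-Eulerian generalisation described above amounts, at its core, to rotating body-frame accelerometer readings through the orientation obtained by integrating the gyroscope rates. A minimal dead-reckoning sketch of that idea (not the paper's actual model; the first-order orientation update and the gravity handling are illustrative assumptions):

```python
import numpy as np

def body_to_world(accel_body, gyro_body, dt, g=9.81):
    """Rotate body-frame (Lagrangian) accelerations into the world
    (Eulerian) frame using orientation integrated from gyro rates.
    accel_body, gyro_body: (N, 3) arrays; dt: sample interval (s).
    A sketch only: a real pipeline would bound gyro drift with an
    attitude filter rather than open-loop integration."""
    R = np.eye(3)                       # body-to-world rotation
    world = np.empty_like(accel_body, dtype=float)
    for i, (a, w) in enumerate(zip(accel_body, gyro_body)):
        wx, wy, wz = w * dt             # incremental rotation (rad)
        Omega = np.array([[0.0, -wz,  wy],
                          [ wz, 0.0, -wx],
                          [-wy,  wx, 0.0]])
        R = R @ (np.eye(3) + Omega)     # first-order integration
        world[i] = R @ a
    world[:, 2] -= g                    # remove gravity in world frame
    return world
```

For a stationary sensor (accelerometer reading +g on its z-axis, zero rotation rates) the world-frame output is zero, as expected.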
NASA Astrophysics Data System (ADS)
Resovsky, A.; Yang, Z. L.
2015-12-01
Methane (CH4) is an important greenhouse gas, and the predominant source of natural atmospheric CH4 globally is its production in wetland soils. Wetlands and marshes in the southeastern U.S. comprise over 40 million acres of land and thus represent a significant component of the global climate system. CH4 contributions from these and other subtropical systems remain difficult to quantify, however. Existing field measurements are lacking in both spatial and temporal coverage, inhibiting efforts to produce regional estimates through upscaling. Top-down constraints on emissions have been generated using satellite remote sensing retrievals of column CH4 (e.g., Frankenberg et al., 2005, 2008, Bergamaschi et al., 2007, 2013, Bloom et al., 2010, Wecht et al., 2014), but such approaches typically require preexisting emissions estimates to discern individual source contributions. Land Surface Models (LSMs) have the potential to produce realistic results, but such predictions rely on accurate representations of sub-grid scale processes responsible for emissions. Since net fluxes are governed by complex interactions between local environmental and biogeochemical factors including water table position, soil temperature, soil substrate availability and vegetation type, reliable flux simulations depend not only upon how such processes are resolved but how skillfully the land surface state itself is predicted by a given model. Here, we examine simulations using CLM4Me, a CH4 biogeochemistry model run within CESM, and compare results to recently compiled flux estimations from satellite remote sensing data. We then examine how seasonal CH4 flux simulations in CLM4Me are affected by alternative parameterizations of inundated land fraction. A global inundation dataset is calculated using DYPTOP, a newly-developed TOPMODEL implementation specifically designed to simulate the dynamics of wetland spatial distribution. 
We find evidence that DYPTOP may improve wetland CH4 flux predictions over subtropical regions in CLM4.5, and propose a computationally efficient framework for fine-scale tuning of this scheme to more accurately represent the role of subtropical and temperate wetlands in global climate projections.
Interactive coupling of regional climate and sulfate aerosol models over eastern Asia
NASA Astrophysics Data System (ADS)
Qian, Yun; Giorgi, Filippo
1999-03-01
The NCAR regional climate model (RegCM) is interactively coupled to a simple radiatively active sulfate aerosol model over eastern Asia. Both direct and indirect aerosol effects are represented. The coupled model system is tested for two simulation periods, November 1994 and July 1995, with aerosol sources representative of present-day anthropogenic sulfur emissions. The model sensitivity to the intensity of the aerosol source is also studied. The main conclusions from our work are as follows: (1) The aerosol distribution and cycling processes show substantial regional spatial variability, and temporal variability varying on a range of scales, from the diurnal scale of boundary layer and cumulus cloud evolution to the 3-10 day scale of synoptic scale events and the interseasonal scale of general circulation features; (2) both direct and indirect aerosol forcings have regional effects on surface climate; (3) the regional climate response to the aerosol forcing is highly nonlinear, especially during the summer, due to the interactions with cloud and precipitation processes; (4) in our simulations the role of the aerosol indirect effects is dominant over that of direct effects; (5) aerosol-induced feedback processes can affect the aerosol burdens at the subregional scale. This work constitutes the first step in a long term research project aimed at coupling a hierarchy of chemistry/aerosol models to the RegCM over the eastern Asia region.
Driscoll, Jessica; Hay, Lauren E.; Bock, Andrew R.
2017-01-01
Assessment of water resources at a national scale is critical for understanding their vulnerability to future change in policy and climate. Representation of the spatiotemporal variability in snowmelt processes in continental-scale hydrologic models is critical for assessment of water resource response to continued climate change. Continental-extent hydrologic models such as the U.S. Geological Survey National Hydrologic Model (NHM) represent snowmelt processes through the application of snow depletion curves (SDCs). SDCs relate normalized snow water equivalent (SWE) to normalized snow covered area (SCA) over a snowmelt season for a given modeling unit. SDCs were derived using output from the operational Snow Data Assimilation System (SNODAS) snow model as daily 1-km gridded SWE over the conterminous United States. Daily SNODAS output were aggregated to a predefined watershed-scale geospatial fabric and also used to calculate SCA from October 1, 2004 to September 30, 2013. The spatiotemporal variability in SNODAS output at the watershed scale was evaluated through the spatial distribution of the median and standard deviation for the time period. Representative SDCs for each watershed-scale modeling unit over the conterminous United States (n = 54,104) were selected using a consistent methodology and used to create categories of snowmelt based on SDC shape. The relation of SDC categories to topographic and climatic variables allows for national-scale categorization of snowmelt processes.
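The SDC construction described above reduces to normalizing each season's SWE and SCA series by their maxima and keeping the depletion limb after peak accumulation. A sketch of that normalization, under the assumption that one melt season of daily data is supplied (selection of a "representative" curve across years is not shown):

```python
import numpy as np

def snow_depletion_curve(swe, sca):
    """Build a snow depletion curve for one melt season:
    normalized SWE vs. normalized SCA for a modeling unit.
    swe, sca: daily series (array-like) covering the season."""
    swe = np.asarray(swe, dtype=float)
    sca = np.asarray(sca, dtype=float)
    swe_n = swe / swe.max()        # 1.0 at peak accumulation
    sca_n = sca / sca.max()
    start = int(np.argmax(swe))    # keep only the depletion limb
    return swe_n[start:], sca_n[start:]
```

The returned pair traces normalized SCA as a function of normalized SWE from peak snowpack down to snow-free conditions; the shape of that trace is what the categorization is based on.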
Foley, Kitty-Rose; Taffe, John; Bourke, Jenny; Einfeld, Stewart L; Tonge, Bruce J; Trollor, Julian; Leonard, Helen
2016-01-01
Young people with intellectual disability exhibit substantial and persistent problem behaviours compared with their non-disabled peers. The aim of this study was to compare changes in emotional and behavioural problems for young people with intellectual disability with and without Down syndrome as they transition into adulthood in two different Australian cohorts. Emotional and behavioural problems were measured over three time points using the Developmental Behaviour Checklist (DBC) for those with Down syndrome (n = 323 at wave one) and compared to those with intellectual disability of another cause (n = 466 at wave one). Outcome scores were modelled using random effects regression as linear functions of age, Down syndrome status, ability to speak and gender. DBC scores of those with Down syndrome were lower than those of people without Down syndrome indicating fewer behavioural problems on all scales except communication disturbance. For both groups disruptive, communication disturbance, anxiety and self-absorbed DBC subscales all declined on average over time. There were two important differences between changes in behaviours for these two cohorts. Depressive symptoms did not significantly decline for those with Down syndrome compared to those without Down syndrome. The trajectory of the social relating behaviours subscale differed between these two cohorts, where those with Down syndrome remained relatively steady and, for those with intellectual disability from another cause, the behaviours increased over time. These results have implications for needed supports and opportunities for engagement in society to buffer against these emotional and behavioural challenges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arcilesi, David J.; Ham, Tae Kyu; Kim, In Hun
2015-07-01
A critical event in the safety analysis of the very high-temperature gas-cooled reactor (VHTR) is an air-ingress accident. This accident is initiated, in its worst case scenario, by a double-ended guillotine break of the coaxial cross vessel, which leads to a rapid reactor vessel depressurization. In a VHTR, the reactor vessel is located within a reactor cavity that is filled with air during normal operating conditions. Following the vessel depressurization, the dominant mode of ingress of an air–helium mixture into the reactor vessel will either be molecular diffusion or density-driven stratified flow. The mode of ingress is hypothesized to depend largely on the break conditions of the cross vessel. Since the time scales of these two ingress phenomena differ by orders of magnitude, it is imperative to understand under which conditions each of these mechanisms will dominate in the air ingress process. Computer models have been developed to analyze this type of accident scenario. There are, however, limited experimental data available to understand the phenomenology of the air-ingress accident and to validate these models. Therefore, there is a need to design and construct a scaled-down experimental test facility to simulate the air-ingress accident scenarios and to collect experimental data. The current paper focuses on the analyses performed for the design and operation of a 1/8th geometric scale (by height and diameter), high-temperature test facility. A geometric scaling analysis for the VHTR, a time scale analysis of the air-ingress phenomenon, a transient depressurization analysis of the reactor vessel, a hydraulic similarity analysis of the test facility, a heat transfer characterization of the hot plenum, a power scaling analysis for the reactor system, and a design analysis of the containment vessel are discussed.
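The orders-of-magnitude separation between the two ingress time scales can be illustrated with textbook estimates: a diffusion time L²/D versus a gravity-current (lock-exchange) transit time L/u with front speed u ≈ ½√(g′H). A sketch with illustrative values only (these are generic fluid properties and geometry, not VHTR design data):

```python
import math

def ingress_time_scales(L=2.0, H=1.0, D=7e-5,
                        rho_air=1.2, rho_he=0.16, g=9.81):
    """Order-of-magnitude comparison of the two air-ingress modes.
    L: ingress path length (m); H: break height (m);
    D: air-helium binary diffusivity (m^2/s). All defaults are
    illustrative assumptions."""
    t_diffusion = L**2 / D                       # molecular diffusion
    g_reduced = g * (rho_air - rho_he) / rho_air # reduced gravity g'
    u_front = 0.5 * math.sqrt(g_reduced * H)     # gravity-current speed
    t_stratified = L / u_front                   # density-driven flow
    return t_diffusion, t_stratified
```

With these numbers the diffusion time is tens of thousands of seconds while the stratified-flow time is of order one second, which is why identifying the dominant mechanism matters so much for the accident progression.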
Photonically enabled Ka-band radar and infrared sensor subscale testbed
NASA Astrophysics Data System (ADS)
Lohr, Michele B.; Sova, Raymond M.; Funk, Kevin B.; Airola, Marc B.; Dennis, Michael L.; Pavek, Richard E.; Hollenbeck, Jennifer S.; Garrison, Sean K.; Conard, Steven J.; Terry, David H.
2014-10-01
A subscale radio frequency (RF) and infrared (IR) testbed using novel RF-photonics techniques for generating radar waveforms is currently under development at The Johns Hopkins University Applied Physics Laboratory (JHU/APL) to study target scenarios in a laboratory setting. The linearity of Maxwell's equations allows the use of millimeter wavelengths and scaled-down target models to emulate full-scale RF scene effects. Coupled with passive IR and visible sensors, target motions and heating, and a processing and algorithm development environment, this testbed provides a means to flexibly and cost-effectively generate and analyze multi-modal data for a variety of applications, including verification of digital model hypotheses, investigation of correlated phenomenology, and aiding system capabilities assessment. In this work, concept feasibility is demonstrated for simultaneous RF, IR, and visible sensor measurements of heated, precessing, conical targets and of a calibration cylinder. Initial proof-of-principle results are shown of the Ka-band subscale radar, which models S-band for 1/10th scale targets, using stretch processing and Xpatch models.
Impact of Federal Tax Policy on Utility-Scale Solar Deployment Given Financing Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mai, Trieu; Cole, Wesley; Krishnan, Venkat
In this study, the authors conducted a literature review of approaches and assumptions used by other modeling teams and consultants with respect to solar project financing; developed and incorporated an ability to model the likely financing shift away from more expensive sources of capital and toward cheaper sources as the investment tax credit declines in the ReEDS model; and used the 'before and after' versions of the ReEDS model to isolate and analyze the deployment impact of the financing shift under a range of conditions. Using ReEDS scenarios with this improved capability, we find that this 'financing' shift would soften the blow of the ITC reversion; however, the overall impacts of such a shift in capital structure are estimated to be small and near-term utility-scale PV deployment is found to be much more sensitive to other factors that might drive down utility-scale PV prices.
Ditching Tests of a 1/18-Scale Model of the Lockheed Constellation Airplane
NASA Technical Reports Server (NTRS)
Fisher, Lloyd J.; Morris, Garland J.
1948-01-01
Tests were made of a 1/18-scale dynamically similar model of the Lockheed Constellation airplane to investigate its ditching characteristics and proper ditching technique. Scale-strength bottoms were used to reproduce probable damage to the fuselage. The model was landed in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and fuselage configurations were simulated. The behavior of the model was determined from visual observations, by recording the longitudinal decelerations, and by taking motion pictures of the ditchings. Data are presented in tabular form, sequence photographs, and time-history deceleration curves. It was concluded that the airplane should be ditched at a medium nose-high landing attitude with the landing flaps full down. The airplane will probably make a deep run with heavy spray and may even dive slightly. The fuselage will be damaged and leak substantially but in calm water probably will not flood rapidly. Maximum longitudinal decelerations in a calm-water ditching will be about 4g.
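A "dynamically similar" free-flight ditching model implies Froude scaling, under which model measurements convert to full scale by fixed factors: lengths by the scale ratio λ, speeds and times by √λ, and accelerations unchanged (which is why the ~4g deceleration applies directly to the airplane). A generic conversion sketch, not taken from the report:

```python
import math

def froude_full_scale(model_value, scale=18.0, quantity="speed"):
    """Convert a 1/scale model measurement to full scale under
    Froude similarity. Supported quantities and their factors:
    length -> scale, speed/time -> sqrt(scale), acceleration -> 1."""
    factors = {
        "length": scale,
        "speed": math.sqrt(scale),
        "time": math.sqrt(scale),
        "acceleration": 1.0,  # g-loads carry over unchanged
    }
    return model_value * factors[quantity]
```

For example, a model landing speed of 10 m/s corresponds to 10·√18 ≈ 42.4 m/s at full scale, while a 4g model deceleration remains 4g.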
NASA Astrophysics Data System (ADS)
Laleian, A.; Valocchi, A. J.; Werth, C. J.
2017-12-01
Multiscale models of reactive transport in porous media are capable of capturing complex pore-scale processes while leveraging the efficiency of continuum-scale models. In particular, porosity changes caused by biofilm development yield complex feedbacks between transport and reaction that are difficult to quantify at the continuum scale. Pore-scale models, needed to accurately resolve these dynamics, are often impractical for applications due to their computational cost. To address this challenge, we are developing a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled with a mortar method providing continuity at interfaces. We explore two decompositions of coupled pore-scale and continuum-scale regions to study biofilm growth in a transverse mixing zone. In the first decomposition, all reaction is confined to a pore-scale region extending the transverse mixing zone length. Only solute transport occurs in the surrounding continuum-scale regions. Relative to a fully pore-scale result, we find the multiscale model with this decomposition has a reduced run time and consistent result in terms of biofilm growth and solute utilization. In the second decomposition, reaction occurs in both an up-gradient pore-scale region and a down-gradient continuum-scale region. To quantify clogging, the continuum-scale model implements empirical relations between porosity and continuum-scale parameters, such as permeability and the transverse dispersion coefficient. Solutes are sufficiently mixed at the end of the pore-scale region, such that the initial reaction rate is accurately computed using averaged concentrations in the continuum-scale region. Relative to a fully pore-scale result, we find accuracy of biomass growth in the multiscale model with this decomposition improves as the interface between pore-scale and continuum-scale regions moves downgradient where transverse mixing is more fully developed. 
Also, this decomposition poses additional challenges with respect to mortar coupling. We explore these challenges and potential solutions. While recent work has demonstrated growing interest in multiscale models, further development is needed for their application to field-scale subsurface contaminant transport and remediation.
NASA Astrophysics Data System (ADS)
Warner, J. C.; Armstrong, B. N.; He, R.; Zambon, J. B.; Olabarrieta, M.; Voulgaris, G.; Kumar, N.; Haas, K. A.
2012-12-01
Understanding processes responsible for coastal change is important for managing both our natural and economic coastal resources. Coastal processes respond from both local scale and larger regional scale forcings. Understanding these processes can lead to significant insight into how the coastal zone evolves. Storms are one of the primary driving forces causing coastal change from a coupling of wave and wind driven flows. Here we utilize a numerical modeling approach to investigate these dynamics of coastal storm impacts. We use the Coupled Ocean - Atmosphere - Wave - Sediment Transport (COAWST) Modeling System that utilizes the Model Coupling Toolkit to exchange prognostic variables between the ocean model ROMS, atmosphere model WRF, wave model SWAN, and the Community Sediment Transport Modeling System (CSTMS) sediment routines. The models exchange fields of sea-surface temperature, ocean currents, water levels, bathymetry, wave heights, lengths, periods, bottom orbital velocities, and atmospheric surface heat and momentum fluxes, atmospheric pressure, precipitation, and evaporation. Data fields are exchanged using regridded flux conservative sparse matrix interpolation weights computed from the SCRIP spherical coordinate remapping interpolation package. We describe the modeling components and the model field exchange methods. As part of the system, the wave and ocean models run with cascading, refined, spatial grids to provide increased resolution, scaling down to resolve nearshore wave driven flows simulated by the vortex force formulation, all within selected regions of a larger, coarser-scale coastal modeling system. The ocean and wave models are driven by the atmospheric component, which is affected by wave dependent ocean-surface roughness and sea surface temperature which modify the heat and momentum fluxes at the ocean-atmosphere interface. 
We describe the application of the modeling system to several regions of multi-scale complexity to identify the significance of larger scale forcing cascading down to smaller scales and to investigate the interactions of the coupled system with increasing degree of model-model interactions. Three examples include the impact of Hurricane Ivan in 2004 in the Gulf of Mexico, Hurricane Ida in 2009 that evolved into a tropical storm on the US East coast, and passage of strong cold fronts across the US southeast. Results identify that hurricane intensity is extremely sensitive to sea-surface temperature, with a reduction in intensity when the atmosphere is coupled to the ocean model due to rapid cooling of the ocean from the surface through the mixed layer. Coupling of the ocean to the atmosphere also results in decreased boundary layer stress and coupling of the waves to the atmosphere results in increased sea-surface stress. Wave results are sensitive to both ocean and atmospheric coupling due to wave-current interactions with the ocean and wave-growth from the atmospheric wind stress. Sediment resuspension at regional scale during the hurricane is controlled by shelf width and wave propagation during hurricane approach. Results from simulation of passage of cold fronts suggest that synoptic meteorological systems can strongly impact surf zone and inner shelf response, therefore act as a strong driver for long term littoral sediment transport. We will also present some of the challenges faced to develop the modeling system.
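At run time, the SCRIP-derived flux-conservative exchange described above reduces to a sparse matrix–vector product: each destination cell's value is a weighted sum of overlapping source cells, with first-order conservative weights summing to 1 per row. A toy illustration (a dense 2-cell-to-3-cell example with made-up weights; real weights come from the spherical grid-overlap computation):

```python
import numpy as np

# Toy remapping matrix: 2-cell source grid -> 3-cell destination grid.
W = np.array([[1.0, 0.0],     # dst cell 0 lies entirely in src cell 0
              [0.5, 0.5],     # dst cell 1 straddles both source cells
              [0.0, 1.0]])    # dst cell 2 lies entirely in src cell 1

src = np.array([10.0, 20.0])  # e.g. SST on the ocean grid
dst = W @ src                 # remapped field on the atmosphere grid
# Each row of W sums to 1, so constant fields are preserved exactly
# (the basic consistency requirement for conservative remapping).
```

In production the weight matrix is sparse and precomputed once per grid pair, so the per-timestep exchange cost is just the sparse multiply.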
NASA Technical Reports Server (NTRS)
Sheth, Rubik; Bannon, Erika; Bower, Chad
2009-01-01
In order to control system and component temperatures, many spacecraft thermal control systems use a radiator coupled with a pumped fluid loop to reject waste heat from the vehicle. Since heat loads and radiation environments can vary considerably according to mission phase, the thermal control system must be able to vary the heat rejection. The ability to "turn down" the heat rejected from the thermal control system is critically important when designing the system. Electrochromic technology as a radiator coating is being investigated to vary the amount of heat being rejected by a radiator. Coupon level tests were performed to test the feasibility of the technology. Furthermore, thermal math models were developed to better understand the turndown ratios required by full scale radiator architectures to handle the various operation scenarios during a mission profile for the Altair Lunar Lander. This paper summarizes results from coupon level tests as well as thermal math models developed to investigate how electrochromics can be used to provide the largest turndown ratio for a radiator. Data from the various design concepts of radiators and their architectures are outlined. Recommendations are made on which electrochromic radiator concept should be carried further for future thermal vacuum testing.
NASA Technical Reports Server (NTRS)
Bannon, Erika T.; Bower, Chad E.; Sheth, Rubik; Stephan, Ryan
2010-01-01
In order to control system and component temperatures, many spacecraft thermal control systems use a radiator coupled with a pumped fluid loop to reject waste heat from the vehicle. Since heat loads and radiation environments can vary considerably according to mission phase, the thermal control system must be able to vary the heat rejection. The ability to "turn down" the heat rejected from the thermal control system is critically important when designing the system. Electrochromic technology as a radiator coating is being investigated to vary the amount of heat rejected by a radiator. Coupon level tests were performed to test the feasibility of this technology. Furthermore, thermal math models were developed to better understand the turndown ratios required by full scale radiator architectures to handle the various operation scenarios encountered during a mission profile for the Altair Lunar Lander. This paper summarizes results from coupon level tests as well as the thermal math models developed to investigate how electrochromics can be used to increase turn down ratios for a radiator. Data from the various design concepts of radiators and their architectures are outlined. Recommendations are made on which electrochromic radiator concept should be carried further for future thermal vacuum testing.
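The turndown idea in the two abstracts above follows from the radiative heat rejection law Q = εσAT⁴: switching the coating's emissivity ε switches Q, and at a fixed radiator temperature the turndown ratio Q_max/Q_min reduces to the emissivity ratio. A sketch with illustrative numbers (the emissivity states, area, and temperature are assumptions, not Altair design values):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def turndown_ratio(eps_high, eps_low, area_m2=10.0, temp_k=290.0):
    """Ratio of maximum to minimum rejected heat for a radiator whose
    electrochromic coating switches between two emissivity states.
    With area and temperature fixed, the ratio is eps_high/eps_low."""
    q_high = eps_high * SIGMA * area_m2 * temp_k**4
    q_low = eps_low * SIGMA * area_m2 * temp_k**4
    return q_high / q_low
```

For instance, switching between ε = 0.9 and ε = 0.2 gives a 4.5:1 turndown at constant temperature; in practice the achievable ratio also depends on how the radiator temperature responds to the changed rejection.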
2014-01-01
This report summarizes the 7th meeting of the Global Alliance to Eliminate Lymphatic Filariasis (GAELF), Washington DC, November 18–19, 2012. The theme, “A Future Free of Lymphatic Filariasis: Reaching the Vision by Scaling Up, Scaling Down and Reaching Out”, emphasized new strategies and partnerships necessary to reach the 2020 goal of elimination of lymphatic filariasis (LF) as a public-health problem. PMID:24450283
An energy function for dynamics simulations of polypeptides in torsion angle space
NASA Astrophysics Data System (ADS)
Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.
1998-05-01
Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used, whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. Also, the same temperature dependence of the stability and build-up of α helices of 18-alanine as found in MD simulations is observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of MD trajectories are also faithfully reproduced. 
Finally, it is demonstrated that even for high temperature unfolded polypeptides the MC simulation is more efficient by a factor of 10 than conventional MD simulations.
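A standard way to derive an effective (φ,ψ)-torsion potential from MD sampling of a flexible dipeptide, in the spirit described above, is Boltzmann inversion of the sampled dihedral distribution: U(φ,ψ) = −kT ln P(φ,ψ). The paper's actual fitting procedure may differ; this sketch shows only the inversion idea:

```python
import numpy as np

def effective_torsion_potential(phi_samples, psi_samples,
                                bins=36, kT=0.593):
    """Boltzmann-invert a sampled (phi, psi) distribution into an
    effective torsion potential, U = -kT ln P, on a 2D grid.
    Angles in degrees; kT = 0.593 kcal/mol corresponds to ~300 K.
    Bins never visited are left as NaN (potential undefined there)."""
    hist, _, _ = np.histogram2d(phi_samples, psi_samples, bins=bins,
                                range=[[-180, 180], [-180, 180]],
                                density=True)
    hist = np.where(hist > 0, hist, np.nan)  # avoid log(0)
    U = -kT * np.log(hist)
    return U - np.nanmin(U)                  # zero at the minimum
```

Sampling concentrated around a single (φ,ψ) basin yields a potential whose minimum sits in that basin, as expected for the inverted distribution.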
NASA Technical Reports Server (NTRS)
Quan, M.
1975-01-01
Wind tunnel tests were conducted to determine boundary layer characteristics on the lower surface of a space shuttle orbiter. Total pressure and temperature profile data at various model stations were obtained using a movable, four-degree-of-freedom probe mechanism and static pressure taps on the model surface. During a typical run, the probe was located over a preselected model location, then driven down through the boundary layer until contact was made with the model surface.
NASA Astrophysics Data System (ADS)
Miller, James D.
2003-10-01
A spiral model of pitch interrelates tone chroma, tone height, equal temperament scales, and a cochlear map. Donkin suggested in 1870 that the pitch of tones could be well represented by an equiangular spiral. More recently, the cylindrical helix has been popular for representing tone chroma and tone height. Here it is shown that tone chroma, tone height, and cochlear position can be conveniently related to tone frequency via a planar spiral. For this "equal-temperament spiral" (ET Spiral), tone chroma is conceived as a circular array with semitones at 30° intervals. The frequency of sound on the cent scale (re 16.351 Hz) is represented by the radius of the spiral, defined by r = (1200/2π)θ, where θ is in radians. By these definitions, one revolution represents one octave (1200 cents), 30° represents a semitone, the radius relates θ to cents in accordance with equal temperament (ET) tuning, and the arclength of the spiral matches the mapping of sound frequency to the basilar membrane. Thus, the ET Spiral gives tone chroma as θ, tone height as the cent scale, and the cochlear map as the arclength. The possible implications and directions for further work are discussed.
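The ET Spiral construction is direct to compute: a frequency's cent value above the 16.351 Hz reference fixes both the angle (one revolution per 1200 cents) and the radius r = (1200/2π)θ, which therefore equals the cent value itself. A short sketch of that mapping:

```python
import math

def et_spiral_point(freq_hz, f0=16.351):
    """Map a frequency to its point on the equal-temperament spiral.
    Returns (x, y, cents): Cartesian coordinates and the cent value
    above the reference f0. One revolution = one octave = 1200 cents;
    the radius r = (1200 / 2*pi) * theta equals the cent value."""
    cents = 1200.0 * math.log2(freq_hz / f0)
    theta = 2.0 * math.pi * cents / 1200.0   # angle in radians
    r = (1200.0 / (2.0 * math.pi)) * theta   # radius = cents
    return r * math.cos(theta), r * math.sin(theta), cents
```

An octave above the reference (2 × 16.351 Hz) lands at θ = 2π and r = 1200 cents: the same chroma angle as the reference, one full turn farther out.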
The Use of Climate Projections in the Modelling of Bud Burst
NASA Astrophysics Data System (ADS)
O'Neill, Bridget F.; Caffara, Amelia; Gleeson, Emily; Semmler, Tido; McGrath, Ray; Donnelly, Alison
2010-05-01
Recent changes in global climate, such as increasing temperature, have had notable effects on the phenology (timing of biological events) of plants. The effects are variable across habitats and between species, but increasing temperatures have been shown to advance certain key phenophases of trees, such as bud burst (beginning of leaf unfolding). This project considered climate change impacts on phenology of plants at a local scale in Ireland. The output from the ENSEMBLES climate simulations were down-scaled to Ireland and utilised by a phenological model to project changes over the next 50-100 years. This project helps to showcase the potential use of climate simulations in phenological research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soria, José, E-mail: jose.soria@probien.gob.ar; Gauthier, Daniel; Flamant, Gilles
2015-09-15
Highlights: • A CFD two-scale model is formulated to simulate heavy metal vaporization from waste incineration in fluidized beds. • MSW particle is modelled with the macroscopic particle model. • Influence of bed dynamics on HM vaporization is included. • CFD-predicted results agree well with experimental data reported in literature. • This approach may be helpful for fluidized bed reactor modelling purposes. - Abstract: Municipal Solid Waste Incineration (MSWI) in fluidized bed is a very interesting technology mainly due to high combustion efficiency, great flexibility for treating several types of waste fuels and reduction in pollutants emitted with the flue gas. However, there is a great concern with respect to the fate of heavy metals (HM) contained in MSW and their environmental impact. In this study, a coupled two-scale CFD model was developed for MSWI in a bubbling fluidized bed. It presents an original scheme that combines a single particle model and a global fluidized bed model in order to represent the HM vaporization during MSW combustion. Two of the most representative HM (Cd and Pb), with bed temperatures ranging between 923 and 1073 K, have been considered. This new approach uses ANSYS FLUENT 14.0 as the modelling platform for the simulations along with a complete set of self-developed user-defined functions (UDFs). The simulation results are compared to the experimental data obtained previously by the research group in a lab-scale fluid bed incinerator. The comparison indicates that the proposed CFD model predicts well the evolution of the HM release for the bed temperatures analyzed. It shows that both bed temperature and bed dynamics have influence on the HM vaporization rate. It can be concluded that CFD is a rigorous tool that provides valuable information about HM vaporization and that the original two-scale simulation scheme adopted allows a better representation of the actual particle behavior in a fluid bed incinerator.
Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.
NASA Astrophysics Data System (ADS)
Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric
2016-04-01
SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.
NASA Astrophysics Data System (ADS)
Petrukovich, A.; Artemyev, A.; Nakamura, R.
Reconnection is the key process responsible for magnetotail dynamics. Driven reconnection in the distant tail is not sufficient to support global magnetospheric convection, and the near-Earth neutral line spontaneously forms to restore the balance. The mechanisms that initiate such near-Earth magnetotail reconnection still represent one of the major unresolved issues in space physics. We review the progress in this topic during the last decade. Recent theoretical advances suggest several variants of overcoming the famous tearing stability problem. Multipoint spacecraft observations reveal the detailed structure of the pre-onset current sheet and reconnection zone down to the ion Larmor scale, supporting the importance of unstable state development through internal magnetotail reconfiguration.
Chen, Wei; Deng, Da
2014-11-11
We report a new, low-cost, and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated, for the first time, by successfully cutting open ∼100% of carbon nanospheres into nanobowls on a large scale from Sn@C nanospheres.
Correlation between physical anomaly and behavioral abnormalities in Down syndrome
Bhattacharyya, Ranjan; Sanyal, Debasish; Roy, Krishna; Bhattacharyya, Sumita
2010-01-01
Objective: Minor physical anomalies (MPAs) are believed to reflect abnormal development of the CNS. The aim was to find the incidence of MPAs and their behavioral correlates in Down syndrome, and to compare these findings with other causes of intellectual disability and with the normal population. Materials and Methods: One hundred and forty intellectually disabled people attending a tertiary care set-up and various NGOs were included in the study. An age-matched group from the normal population was also studied for comparison. MPAs were assessed using the Modified Waldrop scale, and behavioral abnormality using the Diagnostic Assessment for the Severely Handicapped (DASH-II) scale. Results: The Down syndrome group had significantly more MPAs than the other two groups, and most MPAs were located in the global head region. There was strong correlation (P < 0.001) between the various grouped items of the Modified Waldrop scale. The depression subscale correlated with anomalies of the hands (P < 0.001), and of the feet and Waldrop total items (P < 0.005). The mania item of the DASH-II scale was related to anomalies around the eyes (P < 0.001). Self-injurious behavior and total Waldrop score were negatively correlated with the global head region. Conclusion: The Down syndrome group has significantly more MPAs, and a pattern of correlation between MPAs and behavioral abnormalities exists that warrants a large-scale study. PMID:21559153
Effects of Hot-Spot Geometry on Backscattering and Down-Scattering Neutron Spectra
NASA Astrophysics Data System (ADS)
Mohamed, Z. L.; Mannion, O. M.; Forrest, C. J.; Knauer, J. P.; Anderson, K. S.; Radha, P. B.
2017-10-01
The measured neutron spectrum produced by a fusion experiment plays a key role in inferring observable quantities. One important observable is the areal density of an implosion, which is inferred by measuring the scattering of neutrons. This project seeks to use particle-transport simulations to model the effects of hot-spot geometry on backscattering and down-scattering neutron spectra along different lines of sight. Implosions similar to those conducted at the Laboratory for Laser Energetics are modeled by neutron transport through a DT plasma and a DT ice shell using the particle-transport codes MCNP and IRIS. Effects of hot-spot geometry are obtained by "detecting" scattered neutrons along different lines of sight. This process is repeated for various hot-spot geometries representing known shape distortions between the hot spot and the shell. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Representing macropore flow at the catchment scale: a comparative modeling study
NASA Astrophysics Data System (ADS)
Liu, D.; Li, H. Y.; Tian, F.; Leung, L. R.
2017-12-01
Macropore flow is an important hydrological process that generally enhances the soil infiltration capacity and the velocity of subsurface water. To date, macropore flow has mostly been simulated with high-resolution models. One possible drawback of this modeling approach is the difficulty of effectively representing the overall topology and connectivity of the macropore networks. We hypothesize that modeling macropore flow directly at the catchment scale may complement the existing modeling strategy and offer new insights. The Tsinghua Representative Elementary Watershed (THREW) model is a semi-distributed hydrology model whose fundamental building blocks are representative elementary watersheds (REWs) linked by the river channel network. In THREW, all hydrological processes are described with constitutive relationships established directly at the REW level, i.e., the catchment scale. In this study, a constitutive relationship for macropore flow drainage is established as part of THREW. The enhanced THREW model is then applied to two catchments with deep soils but distinct climates: the humid Asu catchment in the Amazon River basin and the arid Wei catchment in the Yellow River basin. The Asu catchment has an area of 12.43 km2 with mean annual precipitation of 2442 mm; the much larger Wei catchment has an area of 24,800 km2 but mean annual precipitation of only 512 mm. The rainfall-runoff processes are simulated at an hourly time step from 2002 to 2005 in the Asu catchment and from 2001 to 2012 in the Wei catchment. The role of macropore flow in catchment hydrology is analyzed comparatively over the Asu and Wei catchments against observed streamflow, evapotranspiration, and other auxiliary data.
Miyazawa, Yasumasa; Guo, Xinyu; Varlamov, Sergey M.; Miyama, Toru; Yoda, Ken; Sato, Katsufumi; Kano, Toshiyuki; Sato, Keiji
2015-01-01
At present, ocean currents are operationally monitored mainly by the combined use of numerical ocean nowcast/forecast models and satellite remote sensing data. Improving the accuracy of ocean current nowcasts/forecasts requires additional measurements with higher spatial and temporal resolution than the existing observation network provides. Here we show the feasibility of assimilating high-resolution seabird and ship drift data into an operational ocean forecast system. Assimilating the geostrophic current contained in the observed drift refines the model's representation of gyre-mode events of the Tsugaru warm current in the sea northeast of Japan. Fitting the observed drift to the model depends on the ability of the drift to represent the geostrophic current rather than directly wind-driven components. A preferred horizontal scale of 50 km for the seabird drift data assimilation implies a capability of capturing eddies smaller than the minimum scale of 100 km resolved by satellite altimetry. The present study demonstrates that transdisciplinary approaches combining bio-/ship-logging and numerical modeling can be effective for enhancing the monitoring of ocean currents. PMID:26633309
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is inappropriate for them. Assuming that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, including both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes; they exhibit a closure problem and therefore require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit mass is used, defined as E-tilde = 0.5⟨u′_i²⟩ (summed over i), where u′_i are the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale horizontal averaging operator of the large-scale model, and the tilde indicates the corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of its terms indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major contributions to the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes relative to turbulent processes. 
This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
Dynamically consistent parameterization of mesoscale eddies. Part III: Deterministic approach
NASA Astrophysics Data System (ADS)
Berloff, Pavel
2018-07-01
This work continues the development of dynamically consistent parameterizations for representing mesoscale eddy effects in non-eddy-resolving and eddy-permitting ocean circulation models and focuses on the classical double-gyre problem, in which the main dynamic eddy effects maintain the eastward jet extension of the western boundary currents and its adjacent recirculation zones via the eddy backscatter mechanism. Despite its fundamental importance, this mechanism remains poorly understood; in this paper we first study it and then propose and test a novel parameterization of it. We start by decomposing the reference eddy-resolving flow solution into large-scale and eddy components defined by spatial filtering, rather than by the Reynolds decomposition. Next, we find that the eastward jet and its recirculations are robustly present not only in the large-scale flow itself, but also in the rectified time-mean eddies and in the transient rectified eddy component, which consists of highly anisotropic ribbons of opposite-sign potential vorticity anomalies straddling the instantaneous eastward jet core and responsible for its continuous amplification. The transient rectified component is separated from the flow by a novel remapping method. We hypothesize that the above three components of the eastward jet are ultimately driven by the small-scale transient eddy forcing via the eddy backscatter mechanism, rather than by the mean eddy forcing and large-scale nonlinearities. We verify this hypothesis by progressively turning down the backscatter and observing the induced flow anomalies. The backscatter analysis leads us to formulate the key eddy parameterization hypothesis: in an eddy-permitting model, at least partially resolved eddy backscatter can be significantly amplified to improve the flow solution. 
Such amplification is a simple and novel eddy parameterization framework, implemented here in terms of local, deterministic flow roughening controlled by a single parameter. We test the parameterization skill in a hierarchy of non-eddy-resolving and eddy-permitting modifications of the original model and demonstrate that it can indeed be highly efficient in restoring the eastward jet extension and its adjacent recirculation zones. The new deterministic parameterization framework not only combines remarkable simplicity with good performance but is also dynamically transparent; it therefore provides a powerful alternative to common eddy diffusion and emerging stochastic parameterizations.
Effective pore-scale dispersion upscaling with a correlated continuous time random walk approach
NASA Astrophysics Data System (ADS)
Le Borgne, T.; Bolster, D.; Dentz, M.; de Anna, P.; Tartakovsky, A.
2011-12-01
We investigate the upscaling of dispersion from a pore-scale analysis of Lagrangian velocities. A key challenge in the upscaling procedure is to relate the temporal evolution of spreading to the pore-scale velocity field properties. We test the hypothesis that one can represent Lagrangian velocities at the pore scale as a Markov process in space. The resulting effective transport model is a continuous time random walk (CTRW) characterized by a correlated random time increment, here denoted as correlated CTRW. We consider a simplified sinusoidal wavy channel model as well as a more complex heterogeneous pore space. For both systems, the predictions of the correlated CTRW model, with parameters defined from the velocity field properties (both distribution and correlation), are found to be in good agreement with results from direct pore-scale simulations over preasymptotic and asymptotic times. In this framework, the nontrivial dependence of dispersion on the pore boundary fluctuations is shown to be related to the competition between distribution and correlation effects. In particular, explicit inclusion of spatial velocity correlation in the effective CTRW model is found to be important to represent incomplete mixing in the pore throats.
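The correlated CTRW idea above can be illustrated with a minimal sketch: Lagrangian velocities follow a Markov chain in space, and the time increment for each fixed spatial step is set by the current velocity, so slow states generate long waiting times and spatial velocity correlation shapes the arrival-time (dispersion) statistics. The velocity classes and transition matrix below are illustrative placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated CTRW sketch: velocity is a Markov process in SPACE, not time.
# Each particle advances a fixed step dx; the elapsed time per step is
# dt = dx / v, so the velocity chain controls the waiting-time statistics.
dx = 1.0
v_states = np.array([0.1, 1.0, 10.0])   # illustrative velocity classes
P = np.array([[0.80, 0.15, 0.05],       # transition probabilities (rows
              [0.10, 0.80, 0.10],       # sum to 1) encoding spatial
              [0.05, 0.15, 0.80]])      # velocity correlation

def simulate(n_particles=2000, n_steps=200):
    x = np.zeros(n_particles)
    t = np.zeros(n_particles)
    state = rng.integers(0, 3, n_particles)
    for _ in range(n_steps):
        v = v_states[state]
        x += dx
        t += dx / v
        # next velocity class depends only on the current one (Markov)
        state = np.array([rng.choice(3, p=P[s]) for s in state])
    return x, t

x, t = simulate()
# Dispersion is reflected in the spread of arrival times at x = n_steps*dx
print(f"mean arrival time {t.mean():.2f}, variance {t.var():.2f}")
```

Strengthening the diagonal of `P` lengthens the spatial velocity correlation and broadens the arrival-time distribution, which is the competition between velocity distribution and correlation effects that the abstract describes.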
Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Wenbin
2014-08-29
This report documents the work performed by General Motors (GM) under Cooperative Agreement No. DE-EE0000470, "Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance," in collaboration with Penn State University (PSU), the University of Tennessee Knoxville (UTK), Rochester Institute of Technology (RIT), and the University of Rochester (UR) via subcontracts. The overall objectives of the project are to investigate and synthesize fundamental understanding of transport phenomena at both the macro- and micro-scales for the development of a down-the-channel model that accounts for all transport domains in a broad operating space. GM as the prime contractor focused on cell-level experiments and modeling, and the universities as subcontractors worked toward fundamental understanding of each component and associated interface.
Spherical cows in the sky with fab four
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaloper, Nemanja; Sandora, McCullen, E-mail: kaloper@physics.ucdavis.edu, E-mail: mesandora@ucdavis.edu
2014-05-01
We explore spherically symmetric static solutions in a subclass of unitary scalar-tensor theories of gravity, called the 'Fab Four' models. The weak-field, large-distance solutions may be phenomenologically viable, but only if the Gauss-Bonnet term is negligible. Only in this limit will the Vainshtein mechanism work consistently. Further, classical constraints and unitarity bounds constrain the models quite tightly. Nevertheless, in the limits where the term with the longest range at large scales is, respectively, Kinetic Braiding, Horndeski, and Gauss-Bonnet, horizon-scale effects may occur while the theory satisfies Solar system constraints and, marginally, unitarity bounds. On the other hand, bringing the cutoff down to below a millimeter constrains all the coupling scales such that 'Fab Fours' can't be heard outside of the Solar system.
Log-Normal Turbulence Dissipation in Global Ocean Models
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction scheme. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and when inferring global energy budgets from sparse observations.
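A simple numerical check of near-log-normality, in the spirit of the skewness/kurtosis diagnostics mentioned above, is to examine the moments of log-dissipation, which vanish (skewness) and equal 3 (kurtosis) for an exactly log-normal variable. The synthetic sample below merely stands in for model dissipation values; its parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dissipation-like field: exactly log-normal by construction,
# standing in for kinetic-energy dissipation values from a model.
eps = rng.lognormal(mean=-10.0, sigma=2.0, size=100_000)

# For a log-normal variable, log(eps) is Gaussian: skewness ~ 0 and
# excess kurtosis ~ 0. Departures of these moments measure how far a
# real dissipation field is from log-normality.
log_eps = np.log(eps)
m, s = log_eps.mean(), log_eps.std()
skew = np.mean(((log_eps - m) / s) ** 3)
excess_kurt = np.mean(((log_eps - m) / s) ** 4) - 3.0
print(f"skewness of log(eps): {skew:.3f}, excess kurtosis: {excess_kurt:.3f}")
```

Applied to actual model output, nonzero values of these two moments would quantify the "small systematic departures" with depth and subgrid scheme that the abstract reports.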
Part 2 of a Computational Study of a Drop-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
2004-01-01
This second of three reports on a computational study of a mixing layer laden with evaporating liquid drops presents the evaluation of Large Eddy Simulation (LES) models. The LES models were evaluated on an existing database that had been generated using Direct Numerical Simulation (DNS). The DNS method and the database are described in the first report of this series, Part 1 of a Computational Study of a Drop-Laden Mixing Layer (NPO-30719), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 59. The LES equations, which are derived by applying a spatial filter to the DNS set, govern the evolution of the larger scales of the flow and can therefore be solved on a coarser grid. Consistent with the reduction in grid points, the DNS drops would be represented by fewer drops, called computational drops in the LES context. The LES equations contain terms that cannot be directly computed on the coarser grid and that must instead be modeled. Two types of models are necessary: (1) those for the filtered source terms representing the effects of drops on the filtered flow field and (2) those for the sub-grid scale (SGS) fluxes arising from filtering the convective terms in the DNS equations. All of the filtered-source-term models that were developed were found to overestimate the filtered source terms. For modeling the SGS fluxes, constant-coefficient Smagorinsky, gradient, and scale-similarity models were assessed and calibrated on the DNS database. The Smagorinsky model correlated poorly with the SGS fluxes, whereas the gradient and scale-similarity models were well correlated with the SGS quantities that they represented.
Sun, Jun-Hong; Zhu, Xi-Yan; Dong, Ta-Na; Zhang, Xiao-Hong; Liu, Qi-Qing; Li, San-Qiang; Du, Qiu-Xiang
2017-03-01
The combined use of multiple markers is considered a promising strategy for estimating the age of wounds. We sought to develop an "up, no change, or down" system and to explore how to combine and use various parameters. In total, 78 Sprague Dawley rats were divided randomly into a control group and contusion groups at 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, and 48 h post-injury (n=6 per group). A contusion was produced in the right limb of the rats under diethyl ether anesthesia by a drop-ball technique; the animals were sacrificed at the designated time points using a lethal dose of pentobarbital. Levels of PUM2, TAB2, GJC1, and CHRNA1 mRNAs were detected in contused muscle using real-time PCR. An up, no change, or down system was developed, with the relative quantities of the four mRNAs recorded as black, dark gray, or light gray boxes representing up-regulation, no change, or down-regulation of the gene of interest during wound repair. The four transcripts were combined and used as a marker cluster for color-model analysis of each contusion group. Levels of PUM2, TAB2, and GJC1 mRNAs decreased, whereas that of CHRNA1 increased during wound repair (P<0.05). The up, no change, or down system was adequate to distinguish most time groups with the color model. Thus, the proposed up, no change, or down system provides a means to determine the minimal periods of early wounds. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
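A minimal sketch of such an "up, no change, or down" call, assuming simple fold-change thresholds on relative quantities (control = 1.0); the thresholds, the gene values, and the helper `call` are hypothetical illustrations, not the paper's actual criteria.

```python
# Hedged sketch: classify each transcript's relative quantity (RQ) as
# up, no change, or down versus the control. Thresholds are assumed,
# illustrative cutoffs, not the study's statistical criteria.
UP, NC, DOWN = "up", "no change", "down"

def call(rq, up_thresh=2.0, down_thresh=0.5):
    """Classify a relative quantity against the control (RQ = 1.0)."""
    if rq >= up_thresh:
        return UP
    if rq <= down_thresh:
        return DOWN
    return NC

# Hypothetical marker panel at one post-injury time point
panel = {"PUM2": 0.4, "TAB2": 0.3, "GJC1": 0.45, "CHRNA1": 2.6}
profile = {gene: call(rq) for gene, rq in panel.items()}
print(profile)  # PUM2/TAB2/GJC1 -> 'down', CHRNA1 -> 'up'
```

The resulting per-gene profile at each time point is the text analogue of the paper's black/gray box color model: the combination of calls across the four markers, rather than any single marker, distinguishes the time groups.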
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. 
Although based in part on speculative relationships, this approach yielded significant predictive power. Finally, using a formal Bayesian procedure, these different sources of information (small-scale physical properties, regionalised signatures of flow, and available flow measurements) were combined with local flow data in a catchment-scale conceptual model application.
Ultrafast studies of shock-induced chemistry: scaling down the size by turning up the heat
NASA Astrophysics Data System (ADS)
McGrane, Shawn
2015-06-01
We will discuss recent progress in measuring time-dependent shock-induced chemistry on picosecond time scales. Data on the shock-induced chemistry of liquids observed through picosecond interferometric and spectroscopic measurements will be reconciled with shock-induced chemistry observed on orders-of-magnitude larger time and length scales in plate impact experiments reported in the literature. While some materials exhibit chemistry consistent with simple thermal models, other materials, like nitromethane, seem to show more complex behavior. More detailed measurements of chemistry and temperature across a broad range of shock conditions, and therefore time and length scales, will be needed to achieve a real understanding of shock-induced chemistry, and we will discuss efforts and opportunities in this direction.
Oil Spill Hydrodynamics, from Droplets to Oil Slicks
NASA Astrophysics Data System (ADS)
Moghimi, S.; Restrepo, J. M.; Venkataramani, S.
2016-02-01
A fundamental challenge in proposing a model for the fate of oil in oceans relates to the extreme spatio-temporal scales required by hazard/abatement studies. We formulate a multiscale model that takes into account droplet dynamics and its effects on submerged and surface oil. The upscaling of the microphysics, within a mass-conserving model, allows us to resolve oil mass exchanges between the oil found on the turbulent ocean surface and the ocean interior. In addition to presenting the model and the multi-scale methodology, we apply this upscaling to the evolution of oil on shelves and show how nearshore oil spills exhibit dynamics that are not easily captured by oil models based on idealized tracer dynamics. In particular, we demonstrate how oil can slow down and even park itself under certain oceanic conditions. An explanation for this phenomenon is proposed as well.
NASA Technical Reports Server (NTRS)
Re, Richard J.; Pendergraft, Odis C., Jr.; Campbell, Richard L.
2006-01-01
A 1/4-scale wind tunnel model of an airplane configuration developed for short-duration flight at subsonic speeds in the Martian atmosphere has been tested in the Langley Research Center Transonic Dynamics Tunnel. The tunnel was pumped down to extremely low pressures to represent Martian Mach/Reynolds number conditions. Aerodynamic data were obtained, and upper and lower surface wing pressures were measured at one spanwise station on some configurations. Three unswept wings of the same planform but different airfoil sections were tested. Horizontal tail incidence was varied, as was the deflection of plain and split trailing-edge flaps. One unswept wing configuration was tested with the lower part of the fuselage removed and the vertical/horizontal tail assembly inverted and mounted from beneath the fuselage. A sweptback wing was also tested. Tests were conducted at Mach numbers from 0.50 to 0.90. Wing chord Reynolds number was varied from 40,000 to 100,000, and angles of attack and sideslip were varied from -10 deg to 20 deg and -10 deg to 10 deg, respectively.
Experimental validation of solid rocket motor damping models
NASA Astrophysics Data System (ADS)
Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio
2017-12-01
In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second scope of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired to the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model. 
Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe damping properties of slender launch vehicles in payload/launcher coupled load analysis.
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have increased steadily over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine-resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine-resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine-resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
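The quoted correlation length scales can be made concrete with a small sketch, assuming a Gaussian background-error correlation model rho(r) = exp(-r^2 / (2 L^2)); the value of L and the separation grid below are illustrative, not taken from the study.

```python
import numpy as np

# Sketch: with a broad background-error correlation (here Gaussian with
# L = 75 km, of the order reported for streamfunction errors), analysis
# increments are smeared over ~100 km, so assimilation cannot constrain
# spatial scales much finer than that. L and the grid are assumptions.
L = 75.0                                  # correlation length scale, km
r = np.linspace(0.0, 300.0, 601)          # separation distances, km
rho = np.exp(-r**2 / (2.0 * L**2))

# e-folding distance: where rho first drops to 1/e (analytically L*sqrt(2))
i = int(np.argmin(np.abs(rho - np.exp(-1.0))))
print(f"e-folding distance ~ {r[i]:.1f} km")
```

The e-folding distance of roughly 106 km for L = 75 km illustrates why the abstract's theoretical analysis finds scales below ~150 km effectively unconstrained for streamfunctions.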
Dark Energy from structure: a status report
NASA Astrophysics Data System (ADS)
Buchert, Thomas
2008-02-01
The effective evolution of an inhomogeneous universe model in any theory of gravitation may be described in terms of spatially averaged variables. In Einstein’s theory, restricting attention to scalar variables, this evolution can be modeled by solutions of a set of Friedmann equations for an effective volume scale factor, with matter and backreaction source terms. The latter can be represented by an effective scalar field (“morphon field”) modeling Dark Energy. The present work provides an overview over the Dark Energy debate in connection with the impact of inhomogeneities, and formulates strategies for a comprehensive quantitative evaluation of backreaction effects both in theoretical and observational cosmology. We recall the basic steps of a description of backreaction effects in relativistic cosmology that lead to refurnishing the standard cosmological equations, but also lay down a number of challenges and unresolved issues in connection with their observational interpretation. The present status of this subject is intermediate: we have a good qualitative understanding of backreaction effects pointing to a global instability of the standard model of cosmology; exact solutions and perturbative results modeling this instability lie in the right sector to explain Dark Energy from inhomogeneities. It is fair to say that, even if backreaction effects turn out to be less important than anticipated by some researchers, the concordance high-precision cosmology, the architecture of current N-body simulations, as well as standard perturbative approaches may all fall short in correctly describing the Late Universe.
Multi-scale coupled modelling of waves and currents on the Catalan shelf.
NASA Astrophysics Data System (ADS)
Grifoll, M.; Warner, J. C.; Espino, M.; Sánchez-Arcilla, A.
2012-04-01
Catalan shelf circulation is characterized by a background along-shelf flow to the southwest (including some meso-scale features) plus episodic storm driven patterns. To investigate these dynamics, a coupled multi-scale modeling system is applied to the Catalan shelf (North-western Mediterranean Sea). The implementation consists of a set of increasing-resolution nested models, based on the circulation model ROMS and the wave model SWAN as part of the COAWST modeling system, covering from the slope and shelf region (~1 km horizontal resolution) down to a local area around Barcelona city (~40 m). The system is initialized with MyOcean products in the coarsest outer domain, and uses atmospheric forcing from other sources for the increasing resolution inner domains. Results of the finer resolution domains exhibit improved agreement with observations relative to the coarser model results. Several hydrodynamic configurations were simulated to determine dominant forcing mechanisms and hydrodynamic processes that control coastal scale processes. The numerical results reveal that the short term (hours to days) inner-shelf variability is strongly influenced by local wind variability, while sea-level slope, baroclinic effects, radiation stresses and regional circulation constitute second-order processes. Additional analysis identifies the significance of shelf/slope exchange fluxes, river discharge and the effect of the spatial resolution of the atmospheric fluxes.
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
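A minimal way to capture the daily and weekly cycles described above is harmonic regression with two sets of Fourier terms fit by least squares. This is an illustrative sketch, not the seasonal autoregressive model of the study (which additionally models autocorrelated residuals); all function names are my own.

```python
import numpy as np

def fourier_features(t, period, n_harmonics):
    """Sine/cosine columns for one seasonal cycle of the given period (hours)."""
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

def design_matrix(t, n_daily=3, n_weekly=3):
    """Intercept plus daily (24 h) and weekly (168 h) harmonics."""
    return np.column_stack([np.ones_like(t),
                            fourier_features(t, 24.0, n_daily),
                            fourier_features(t, 168.0, n_weekly)])

def fit_double_seasonal(t, y, n_daily=3, n_weekly=3):
    """Least-squares fit of the double-seasonal harmonic model to hourly demands."""
    beta, *_ = np.linalg.lstsq(design_matrix(t, n_daily, n_weekly), y, rcond=None)
    return beta

def predict(t, beta, n_daily=3, n_weekly=3):
    """Forecast demands at (possibly future) hours t."""
    return design_matrix(t, n_daily, n_weekly) @ beta
```

A seasonal ARIMA model would add an autoregressive structure on the residuals of such deterministic cycles, which is what enables the prediction intervals discussed in the abstract.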
Carroll, Sean
2018-01-09
General relativity is inconsistent with cosmological observations unless we invoke components of dark matter and dark energy that dominate the universe. While it seems likely that these exotic substances really do exist, the alternative is worth considering: that Einstein's general relativity breaks down on cosmological scales. I will discuss models of modified gravity, tests in the solar system and elsewhere, and consequences for cosmology.
A New Canopy Integration Factor
NASA Astrophysics Data System (ADS)
Badgley, G.; Anderegg, L. D. L.; Baker, I. T.; Berry, J. A.
2017-12-01
Ecosystem modelers have long debated how best to represent within-canopy heterogeneity. Can one big leaf represent the full range of canopy physiological responses? Or do you need two leaves - sun and shade - to get things right? Is it sufficient to treat the canopy as a diffuse medium? Or would it be better to explicitly represent separate canopy layers? These are open questions that have been the subject of an enormous amount of research and scrutiny. Yet regardless of how the canopy is represented, each model must grapple with correctly parameterizing its canopy in a way that properly translates leaf-level processes to the canopy and ecosystem scale. We present a new approach for integrating whole-canopy biochemistry by combining remote sensing with ecological theory. Using the Simple Biosphere model (SiB), we redefined how SiB scales photosynthetic processes from leaf to canopy as a function of satellite-derived measurements of solar-induced chlorophyll fluorescence (SIF). Across multiple long-term study sites, our approach improves the accuracy of daily modeled photosynthesis by as much as 25 percent. We share additional insights on how SIF might be more directly integrated into photosynthesis models, as well as present ideas for harnessing SIF to more accurately parameterize canopy biochemical variables.
Towards the Detection of Reflected Light from Exo-planets: a Comparison of Two Methods
NASA Astrophysics Data System (ADS)
Rodler, Florian; Kürster, Martin
For exo-planets the huge brightness contrast between the star and the planet constitutes an enormous challenge when attempting to observe some kind of direct signal from the planet. With high-resolution spectroscopy in the visual one can exploit the fact that the spectrum reflected from the planet is essentially a copy of the rich stellar absorption line spectrum. This spectrum is shifted in wavelength according to the orbital RV of the planet and strongly scaled down in brightness by a factor of a few times 10^-5, and therefore deeply buried in the noise. The S/N of the planetary signal can be increased by applying one of the following methods. The Least Squares Deconvolution Method (LSDM, e.g. Collier Cameron et al. 2002) combines the observed spectral lines into a high-S/N mean line profile (star + planet), determined by least-squares deconvolution of the observed spectrum with a template spectrum (from VALD, Kupka et al. 1999). Another approach is the Data Synthesis Method (DSM, e.g. Charbonneau et al. 1999), a forward data modelling technique in which the planetary signal is modelled as a scaled-down and RV-shifted version of the stellar spectrum.
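The core of the DSM idea, modelling the planetary signal as an RV-shifted, scaled-down copy of the stellar spectrum and searching for the shift that best matches the data, can be sketched as follows. This is a deliberately simplified illustration (a noiseless chi-square grid search of my own construction, not the published method):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def shifted_copy(wave, flux, rv_km_s, scale):
    """Planetary signal model: the stellar spectrum Doppler-shifted by rv
    and scaled down in brightness. A feature observed at wavelength w was
    reflected at w / (1 + v/c), hence the interpolation below."""
    return scale * np.interp(wave / (1.0 + rv_km_s / C_KM_S), wave, flux)

def best_rv(wave, residual, flux, rv_grid, scale):
    """Grid search: chi-square match of the residual (data minus stellar
    model) against shifted, scaled templates; returns the best-fit RV."""
    chi2 = [np.sum((residual - shifted_copy(wave, flux, rv, scale)) ** 2)
            for rv in rv_grid]
    return rv_grid[int(np.argmin(chi2))]
```

In practice the planetary signal sits far below the photon noise of a single line, which is why both LSDM and DSM gain their sensitivity by combining many lines (or the whole spectrum) coherently.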
Interwell Connectivity Evaluation Using Injection and Production Fluctuation Data
NASA Astrophysics Data System (ADS)
Shang, Barry Zhongqi
The development of multiscale methods for computational simulation of biophysical systems represents a significant challenge. Effective computational models that bridge physical insights obtained from atomistic simulations and experimental findings are lacking. An accurate passing of information between these scales would enable: (1) an improved physical understanding of structure-function relationships, and (2) enhanced rational strategies for molecular engineering and materials design. Two approaches are described in this dissertation to facilitate these multiscale goals. In Part I, we develop a lattice kinetic Monte Carlo model to simulate cellulose decomposition by cellulase enzymes and to understand the effects of spatial confinement on enzyme kinetics. An enhanced mechanistic understanding of this reaction system could improve the design of cellulose bioconversion technologies for renewable and sustainable energy. Using our model, we simulate the reaction up to experimental conversion times of days, while simultaneously capturing the microscopic kinetic behaviors. Therefore, the influence of molecular-scale kinetics on the macroscopic conversion rate is made transparent. The inclusion of spatial constraints in the kinetic model represents a significant advance over classical mass-action models commonly used to describe this reaction system. We find that restrictions due to enzyme jamming and substrate heterogeneity at the molecular level play a dominant role in limiting cellulose conversion. We identify that the key rate limitations are the slow rates of enzyme complexation with glucan chains and the competition between enzyme processivity and jamming. We show that the kinetics of complexation, which involves extraction of a glucan chain end from the cellulose surface and threading through the enzyme active site, occurs slowly on the order of hours, while intrinsic hydrolytic bond cleavage occurs on the order of seconds.
We also elucidate the subtle trade-off between processivity and jamming. Highly processive enzymes cleave a large fraction of a glucan chain during each processive run but are prone to jamming at obstacles. Less processive enzymes avoid jamming but cleave only a small fraction of a chain. Optimizing this trade-off maximizes the cellulose conversion rate. We also elucidate the molecular-scale kinetic origins for synergy among cellulases in enzyme mixtures. In contrast to the currently accepted theory, we show that the ability of an endoglucanase to increase the concentration of chain ends for exoglucanases is insufficient for synergy to occur. Rather, endoglucanases must enhance the rate of complexation between exoglucanases and the newly created chain ends. This enhancement occurs when the endoglucanase is able to partially decrystallize the cellulose surface. We show generally that the driving forces for complexation and jamming, which govern the kinetics of pure exoglucanases, also control the degree of synergy in endo-exo mixtures. In Part II, we focus our attention on a different multiscale problem. This challenge is the development of coarse-grained models from atomistic models to access larger length- and time-scales in a simulation. This problem is difficult because it requires a delicate balance between maintaining (1) physical simplicity in the coarse-grained model and (2) physical consistency with the atomistic model. To achieve these goals, we develop a scheme to coarse-grain an atomistic fluid model into a fluctuating hydrodynamics (FHD) model. The FHD model describes the solvent as a field of fluctuating mass, momentum, and energy densities. The dynamics of the fluid are governed by continuum balance equations and fluctuation-dissipation relations based on the constitutive transport laws. 
The incorporation of both macroscopic transport and microscopic fluctuation phenomena could provide richer physical insight into the behaviors of biophysical systems driven by hydrodynamic fluctuations, such as hydrophobic assembly and crystal nucleation. We further extend our coarse-graining method by developing an interfacial FHD model using information obtained from simulations of an atomistic liquid-vapor interface. We illustrate that a phenomenological Ginzburg-Landau free energy employed in the FHD model can effectively represent the attractive molecular interactions of the atomistic model, which give rise to phase separation. For argon and water, we show that the interfacial FHD model can reproduce the compressibility, surface tension, and capillary wave spectrum of the atomistic model. Via this approach, simulations that explore the coupling between hydrodynamic fluctuations and phase equilibria with molecular-scale consistency are now possible. In both Parts I and II, the emerging theme is that the combination of bottom-up coarse graining and top-down phenomenology is essential for enabling a multiscale approach to remain physically consistent with molecular-scale interactions while simultaneously capturing the collective macroscopic behaviors. This hybrid strategy enables the resulting computational models to be both physically insightful and practically meaningful.
ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities
NASA Astrophysics Data System (ADS)
Neggers, R.
2014-12-01
Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple-plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, are creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi-plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme.
The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.
Harnessing Big Data to Represent 30-meter Spatial Heterogeneity in Earth System Models
NASA Astrophysics Data System (ADS)
Chaney, N.; Shevliakova, E.; Malyshev, S.; Van Huijgevoort, M.; Milly, C.; Sulman, B. N.
2016-12-01
Terrestrial land surface processes play a critical role in the Earth system; they have a profound impact on the global climate, food and energy production, freshwater resources, and biodiversity. One of the most fascinating yet challenging aspects of characterizing terrestrial ecosystems is their field-scale (~30 m) spatial heterogeneity. It has been observed repeatedly that the water, energy, and biogeochemical cycles at multiple temporal and spatial scales have deep ties to an ecosystem's spatial structure. Current Earth system models largely disregard this important relationship, leading to an inadequate representation of ecosystem dynamics. In this presentation, we will show how existing global environmental datasets can be harnessed to explicitly represent field-scale spatial heterogeneity in Earth system models. For each macroscale grid cell, these environmental data are clustered according to their field-scale soil and topographic attributes to define unique sub-grid tiles. The state-of-the-art Geophysical Fluid Dynamics Laboratory (GFDL) land model is then used to simulate these tiles and their spatial interactions via the exchange of water, energy, and nutrients along explicit topographic gradients. Using historical simulations over the contiguous United States, we will show how a robust representation of field-scale spatial heterogeneity impacts modeled ecosystem dynamics, including the water, energy, and biogeochemical cycles as well as vegetation composition and distribution.
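The clustering step described above can be illustrated with a plain k-means over per-pixel attribute vectors (e.g. slope, soil texture). This is a toy sketch with a deterministic farthest-point seeding, not the authors' actual tiling code:

```python
import numpy as np

def cluster_tiles(X, k, n_iter=50):
    """Group field-scale pixels (rows of X: soil/topographic attributes)
    into k sub-grid tiles with plain k-means. Deterministic farthest-point
    seeding keeps the sketch reproducible."""
    centers = [X[0].astype(float)]
    for _ in range(k - 1):
        # next seed: the pixel farthest from all chosen centers
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)].astype(float))
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign each pixel to its nearest tile centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its member pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Each resulting tile (label) would then be simulated once per macroscale grid cell, with fluxes exchanged between tiles along the topographic gradients the abstract describes.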
NASA Astrophysics Data System (ADS)
Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.
2017-06-01
This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower scale physics. The fitting procedure employs concepts of machine learning—feature selection by regularized regression and cross-validation—to develop a robust, physically accurate crystal model. The work also presents a method for ensuring the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflect the crystal symmetry and slip system geometry of the DD simulations.
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. 
It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
Teachers' Perceptions of Teaching in Workplace Simulations in Vocational Education
ERIC Educational Resources Information Center
Jossberger, Helen; Brand-Gruwel, Saskia; van de Wiel, Margje W.; Boshuizen, Henny P.
2015-01-01
In a large-scale top-down innovation operation in the Netherlands, workplace simulations have been implemented in vocational schools, where students are required to work independently and self-direct their learning. However, research has shown that the success of such large-scale top-down innovations depends on how well their execution in schools…
NASA Astrophysics Data System (ADS)
Moosdorf, N.; Langlotz, S. T.
2016-02-01
Submarine groundwater discharge (SGD) has been recognized as a relevant field of coastal research in recent years. Its implications at the local scale have been documented by an increasing number of studies researching individual locations with SGD. These local studies also often emphasize its large variability. At the other end, global-scale studies try to estimate SGD-related fluxes of e.g. carbon (Cole et al., 2007) and nitrogen (Beusen et al., 2013). These studies naturally use a coarse resolution, too coarse to represent the aforementioned local variability of SGD (Moosdorf et al., 2015). A way to transfer information on the local variability of SGD to large-scale flux estimates is needed. Here we discuss the upscaling of local studies based on the definition and typology of coastal catchments. Coastal catchments are those stretches of coast that do not drain into major rivers but directly into the sea. Their attributes, e.g. climate, topography, land cover, or lithology, can be used to extrapolate from the local scale to larger scales. We present first results of a typology, compare coastal catchment attributes to SGD estimates from field studies, and discuss upscaling as well as the associated uncertainties. This study aims at bridging the gap between the scales and enabling an improved representation of local-scale variability at continental to global scales. With this, it can contribute to a recent initiative to model large-scale SGD fluxes (NExT SGD). References: Beusen, A.H.W., Slomp, C.P., Bouwman, A.F., 2013. Global land-ocean linkage: direct inputs of nitrogen to coastal waters via submarine groundwater discharge. Environmental Research Letters, 8(3): 6. Cole, J.J., Prairie, Y.T., Caraco, N.F., McDowell, W.H., Tranvik, L.J., Striegl, R.G., Duarte, C.M., Kortelainen, P., Downing, J.A., Middelburg, J.J., Melack, J., 2007. Plumbing the global carbon cycle: Integrating inland waters into the terrestrial carbon budget. Ecosystems, 10(1): 171-184.
Moosdorf, N., Stieglitz, T., Waska, H., Durr, H.H., Hartmann, J., 2015. Submarine groundwater discharge from tropical islands: a review. Grundwasser, 20(1): 53-67.
NASA Astrophysics Data System (ADS)
Xu, X.; Jain, A. K.; Calvin, K. V.
2017-12-01
Due to rapid socioeconomic development and biophysical factors, South and Southeast Asia (SSEA) has become a hotspot region of land use and land cover changes (LULCCs) in the past few decades. Uncovering the drivers of LULCC is crucial for improving the understanding of LULCC processes. Due to differences in spatiotemporal scales, methods, and data sources among previous studies, the quantitative relationships between LULCC activities and biophysical and socioeconomic drivers at the regional scale of SSEA have not been established. Here we present a comprehensive estimation of the biophysical and socioeconomic drivers of the major LULCC activities in SSEA: changes in forest and agricultural land. We used the Climate Change Initiative land cover data developed by the European Space Agency to reveal the dynamics of forest and agricultural land from 1992 to 2015. We then synthesized 200 publications about LULCC drivers at different spatial scales in SSEA to identify the major drivers of these LULCC activities, and collected corresponding representative variables for the major drivers. Geographically weighted regression was employed to assess the spatiotemporally heterogeneous drivers of LULCC. Moreover, we validated our results against national-level case studies in SSEA. The results showed that both biophysical conditions such as terrain, soil, and climate, and socioeconomic factors such as migration, poverty, and economy played important roles in driving the changes of forest and agricultural land. The major drivers varied across locations and periods. Our study integrated bottom-up knowledge from local-scale case studies with the top-down estimation of LULCC drivers, and therefore generated more accurate and credible results. The identified biophysical and socioeconomic components could be used to improve LULCC modelling and projection.
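Geographically weighted regression, the method named above, fits one weighted least-squares regression per location so that nearby observations count more, which is how it exposes spatially varying driver coefficients. A minimal numpy sketch (Gaussian kernel, fixed bandwidth; real GWR software also calibrates the bandwidth):

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """GWR sketch: for each site i, solve the locally weighted normal
    equations X' W_i X b_i = X' W_i y, with Gaussian kernel weights that
    decay with distance from site i. Returns one coefficient row per site."""
    betas = np.empty((len(y), X.shape[1]))
    for i in range(len(y)):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to site i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel
        Xw = X * w[:, None]                              # W_i X without forming W_i
        betas[i] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return betas
```

Maps of the per-site coefficients are then inspected to see where, say, poverty or terrain dominates the forest-change signal.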
A Scalar Product Model for the Multidimensional Scaling of Choice
ERIC Educational Resources Information Center
Bechtel, Gordon G.; And Others
1971-01-01
Contains a solution for the multidimensional scaling of pairwise choice when individuals are represented as dimensional weights. The analysis supplies an exact least squares solution and estimates of group unscalability parameters. (DG)
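The scalar-product idea, individuals represented as dimensional weights applied to stimulus coordinates, can be illustrated with a rank-r least-squares factorization of the data matrix via the SVD, which by the Eckart-Young theorem is the exact least-squares solution. This is a generic sketch, not Bechtel's specific choice model:

```python
import numpy as np

def scalar_product_fit(P, r):
    """Least-squares scalar-product decomposition of a subjects-by-stimuli
    matrix: P ~= W @ S.T in r dimensions. The truncated SVD gives the exact
    rank-r least-squares solution (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    W = U[:, :r] * s[:r]   # individual dimension weights
    S = Vt[:r].T           # stimulus coordinates
    return W, S
```

As with any inner-product model, the factorization is unique only up to an invertible linear transformation applied to W and its inverse to S.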
Bina, Rena; Harrington, Donna
2016-04-01
The Edinburgh Postnatal Depression Scale (EPDS) was originally created as a uni-dimensional scale to screen for postpartum depression (PPD); however, evidence from various studies suggests that it is a multi-dimensional scale measuring mainly anxiety in addition to depression. The factor structure of the EPDS seems to differ across various language translations, raising questions regarding its stability. This study examined the factor structure of the Hebrew version of the EPDS to assess whether it is uni- or multi-dimensional. Seven hundred and fifteen (n = 715) women were screened at 6 weeks postpartum using the Hebrew version of the EPDS. Confirmatory factor analysis (CFA) was used to test four models derived from the literature. Of the four CFA models tested, a 9-item two-factor model fit the data best, with one factor representing an underlying depression construct and the other representing an underlying anxiety construct. Implications for practice: The Hebrew version of the EPDS appears to consist of depression and anxiety sub-scales. Given the widespread PPD screening initiatives, anxiety symptoms should be addressed in addition to depressive symptoms, and a short scale, such as the EPDS, assessing both may be efficient.
Determination of Scaled Wind Turbine Rotor Characteristics from Three Dimensional RANS Calculations
NASA Astrophysics Data System (ADS)
Burmester, S.; Gueydon, S.; Make, M.
2016-09-01
Previous studies have shown the importance of 3D effects when calculating the performance characteristics of a scaled-down turbine rotor [1-4]. In this paper the results of 3D RANS (Reynolds-Averaged Navier-Stokes) computations by Make and Vaz [1] are taken to calculate 2D lift and drag coefficients. These coefficients are assigned to FAST (the Blade Element Momentum Theory (BEMT) tool from NREL) as input parameters. Then, the rotor characteristics (power and thrust coefficients) are calculated using BEMT. This coupling of RANS and BEMT was previously applied by other parties and is termed here the RANS-BEMT coupled approach. Here the approach is compared to measurements carried out in a wave basin at MARIN applying Froude-scaled wind, and to the direct 3D RANS computation. The data of both a model-scale and a full-scale wind turbine are used for the validation and verification. The flow around a turbine blade at full scale has a more 2D character than the flow around a turbine blade at model scale (Make and Vaz [1]). Since BEMT assumes 2D flow behaviour, the results of the RANS-BEMT coupled approach agree better with the results of the CFD (Computational Fluid Dynamics) simulation at full scale than at model scale.
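BEMT itself reduces, per blade element, to a fixed-point iteration for the axial and tangential induction factors given sectional lift and drag coefficients. Below is a textbook sketch with constant coefficients and no tip-loss or high-induction corrections; it is illustrative only, not the FAST implementation:

```python
import numpy as np

def bemt_element(r, V, omega, B, chord, cl, cd, n_iter=100):
    """Classic BEMT fixed-point iteration at one blade element.
    r: local radius, V: wind speed, omega: rotor speed, B: blade count,
    chord: local chord, cl/cd: sectional lift/drag coefficients (held
    constant here for simplicity). Returns axial induction a, tangential
    induction ap, and inflow angle phi (rad)."""
    sigma = B * chord / (2 * np.pi * r)   # local solidity
    a, ap = 0.3, 0.0                      # initial guesses
    for _ in range(n_iter):
        # inflow angle from the local velocity triangle
        phi = np.arctan2((1 - a) * V, (1 + ap) * omega * r)
        cn = cl * np.cos(phi) + cd * np.sin(phi)   # normal (thrust-wise) coeff
        ct = cl * np.sin(phi) - cd * np.cos(phi)   # tangential (torque-wise) coeff
        # momentum/blade-element balance for the induction factors
        a = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1)
        ap = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
    return a, ap, phi
```

Integrating the resulting sectional loads over the span yields the thrust and power coefficients compared in the paper; the 2D-flow assumption enters exactly through the cl/cd inputs, which is why the source of those coefficients (2D tables vs. 3D RANS extraction) matters at model scale.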
Space environment and lunar surface processes, 2
NASA Technical Reports Server (NTRS)
Comstock, G. M.
1982-01-01
The top few millimeters of a surface exposed to space represent a physically and chemically active zone with properties different from those of a surface in the environment of a planetary atmosphere. To meet the need for a quantitative synthesis of the various processes contributing to the evolution of the surfaces of the Moon, Mercury, the asteroids, and similar bodies (exposure to solar wind, solar flare particles, galactic cosmic rays, heating from solar radiation, and meteoroid bombardment), the MESS 2 computer program was developed. This program differs from earlier work in that the surface processes are broken down as a function of size scale and treated in three dimensions with good resolution on each scale. The results obtained apply to the development of soil near the surface and are based on lunar conditions. Parameters can be adjusted to describe asteroid regoliths and other space-related bodies.
Extension of Gutenberg-Richter distribution to MW -1.3, no lower limit in sight
NASA Astrophysics Data System (ADS)
Boettcher, Margaret S.; McGarr, A.; Johnston, Malcolm
2009-05-01
With twelve years of seismic data from TauTona Gold Mine, South Africa, we show that mining-induced earthquakes follow the Gutenberg-Richter relation with no scale break down to the completeness level of the catalog, at moment magnitude M W -1.3. Events recorded during relatively quiet hours in 2006 indicate that catalog detection limitations, not earthquake source physics, controlled the previously reported minimum magnitude in this mine. Within the Natural Earthquake Laboratory in South African Mines (NELSAM) experiment's dense seismic array, earthquakes that exhibit shear failure at magnitudes as small as M W -3.9 are observed, but we find no evidence that M W -3.9 represents the minimum magnitude. In contrast to previous work, our results imply small nucleation zones and that earthquake processes in the mine can readily be scaled to those in either laboratory experiments or natural faults.
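The Gutenberg-Richter relation log10 N(>=M) = a - bM can be checked on a catalog, and the b-value estimated, with the standard Aki/Utsu maximum-likelihood formula. This sketch assumes a catalog complete above a magnitude m_c (illustrative, not the authors' processing):

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    """Aki/Utsu maximum-likelihood b-value for events at or above the
    completeness magnitude m_c. dm is the optional magnitude-bin width;
    the usual half-bin correction applies for binned catalogs."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def cumulative_counts(mags, m_grid):
    """N(>=M) on a magnitude grid, for inspecting the log-linearity of the
    frequency-magnitude distribution (i.e. the absence of a scale break)."""
    m = np.asarray(mags, dtype=float)
    return np.array([(m >= mm).sum() for mm in m_grid])
```

A scale break at small magnitudes would show up as a downward bend of log10 N(>=M) below the straight Gutenberg-Richter line; the abstract's point is that, once catalog completeness is handled, no such bend is seen down to Mw -1.3.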
Structural Response and Failure of a Full-Scale Stitched Graphite-Epoxy Wing
NASA Technical Reports Server (NTRS)
Jegley, Dawn C.; Lovejoy, Andrew E.; Bush, Harold G.
2001-01-01
Analytical and experimental results of the test for an all-composite full-scale wing box are presented. The wing box is representative of a section of a 220-passenger commercial transport aircraft wing box and was designed and constructed by The Boeing Company as part of the NASA Advanced Subsonics Technology (AST) program. The semi-span wing was fabricated from a graphite-epoxy material system with cover panels and spars held together using Kevlar stitches through the thickness. No mechanical fasteners were used to hold the stiffeners to the skin of the cover panels. Tests were conducted with and without low-speed impact damage, discrete source damage and repairs. Up-bending, down-bending and brake roll loading conditions were applied. The structure with nonvisible impact damage carried 97% of Design Ultimate Load prior to failure through a lower cover panel access hole. Finite element and experimental results agree for the global response of the structure.
Evaluation of the Structural Response and Failure of a Full-Scale Stitched Graphite-Epoxy Wing
NASA Astrophysics Data System (ADS)
Jegley, Dawn C.; Bush, Harold G.; Lovejoy, Andrew E.
2001-01-01
Analytical and experimental results for an all-composite full-scale wing box are presented. The wing box is representative of a section of a 220-passenger commercial transport aircraft wing box and was designed and constructed by The Boeing Company as part of the NASA Advanced Subsonics Technology (AST) program. The semi-span wing was fabricated from a graphite-epoxy material system with cover panels and spars held together using Kevlar stitches through the thickness. No mechanical fasteners were used to hold the stiffeners to the skin of the cover panels. Tests were conducted with and without low-speed impact damage, discrete source damage and repairs. Upbending, down-bending and brake roll loading conditions were applied. The structure with nonvisible impact damage carried 97% of Design Ultimate Load prior to failure through a lower cover panel access hole. Finite element and experimental results agree for the global response of the structure.
Extension of Gutenberg-Richter distribution to Mw -1.3, no lower limit in sight
Boettcher, M.S.; McGarr, A.; Johnston, M.
2009-01-01
With twelve years of seismic data from TauTona Gold Mine, South Africa, we show that mining-induced earthquakes follow the Gutenberg-Richter relation with no scale break down to the completeness level of the catalog, at moment magnitude Mw -1.3. Events recorded during relatively quiet hours in 2006 indicate that catalog detection limitations, not earthquake source physics, controlled the previously reported minimum magnitude in this mine. Within the Natural Earthquake Laboratory in South African Mines (NELSAM) experiment's dense seismic array, earthquakes that exhibit shear failure at magnitudes as small as Mw -3.9 are observed, but we find no evidence that Mw -3.9 represents the minimum magnitude. In contrast to previous work, our results imply small nucleation zones and that earthquake processes in the mine can readily be scaled to those in either laboratory experiments or natural faults.
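The Gutenberg-Richter relation referenced above, log10 N(≥Mw) = a − b·Mw, implies that magnitudes above a completeness level are exponentially distributed, so the b-value can be estimated by maximum likelihood. A minimal illustrative sketch on a synthetic catalog (not the TauTona data; the b = 1.0 value and the use of the Aki maximum-likelihood estimator are standard assumptions, not values from this abstract):

```python
import math
import random

def aki_b_value(mags, m_min):
    """Maximum-likelihood b-value (Aki-style estimator) for magnitudes >= m_min."""
    above = [m for m in mags if m >= m_min]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_min)

# Synthetic catalog drawn from a Gutenberg-Richter law with b = 1.0,
# extending below magnitude zero as in mining-induced seismicity.
random.seed(0)
b_true = 1.0
m_min = -1.3  # catalog completeness level reported for TauTona
catalog = [m_min + random.expovariate(b_true * math.log(10)) for _ in range(100000)]

b_est = aki_b_value(catalog, m_min)
print(f"estimated b-value: {b_est:.3f}")
```

Because N(≥Mw) ∝ 10^(−b·Mw), magnitudes above completeness follow an exponential distribution with rate b·ln(10), which is what the estimator inverts.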
Knowledge environments representing molecular entities for the virtual physiological human.
Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M
2008-09-13
In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.
Reconstruction of Tissue-Specific Metabolic Networks Using CORDA
Schultz, André; Qutub, Amina A.
2016-01-01
Human metabolism involves thousands of reactions and metabolites. To interpret this complexity, computational modeling becomes an essential experimental tool. One of the most popular techniques to study human metabolism as a whole is genome scale modeling. A key challenge in applying genome scale modeling is identifying critical metabolic reactions across diverse human tissues. Here we introduce a novel algorithm called Cost Optimization Reaction Dependency Assessment (CORDA) to build genome scale models in a tissue-specific manner. CORDA is more computationally efficient, shows better agreement with experimental data, and displays better model functionality and capacity when compared to previous algorithms. CORDA also returns reaction associations that can greatly assist in any manual curation to be performed following the automated reconstruction process. Using CORDA, we developed a library of 76 healthy and 20 cancer tissue-specific reconstructions. These reconstructions identified which metabolic pathways are shared across diverse human tissues. Moreover, we identified changes in reactions and pathways that are differentially included and present different capacity profiles in cancer compared to healthy tissues, including up-regulation of folate metabolism, down-regulation of thiamine metabolism, and tight regulation of oxidative phosphorylation. PMID:26942765
The structural inventory of a small complex impact crater: Jebel Waqf as Suwwan, Jordan
NASA Astrophysics Data System (ADS)
Kenkmann, Thomas; Sturm, Sebastian; Krüger, Tim; Salameh, Elias; Al-Raggad, Marwan; Konsul, Khalil
2017-07-01
The investigation of terrestrial impact structures is crucial to gain an in-depth understanding of impact cratering processes in the solar system. Here, we use the impact structure Jebel Waqf as Suwwan, Jordan, as a representative for crater formation into a layered sedimentary target with contrasting rheology. The complex crater is moderately eroded (300-420 m) with an apparent diameter of 6.1 km and an original rim fault diameter of 7 km. Based on extensive field work, IKONOS imagery, and geophysical surveying, we present a novel geological map of the entire crater structure that provides the basis for structural analysis. Parametric scaling indicates that the structural uplift (250-350 m) and the depth of the ring syncline (<200 m) are anomalously low. The very shallow relief of the crater, along with a NE vergence of the asymmetric central uplift and the enhanced deformation in the up-range and down-range sectors of the annular moat and crater rim, suggests that the impact was most likely a very oblique one (~20°). One of the major consequences of the presence of the rheologically anisotropic target was that extensive strata buckling occurred during impact cratering on both the decameter and the hundred-meter scale. The crater rim is defined by a circumferential normal fault dipping mostly toward the crater. Footwall strata beneath the rim fault are bent up in the down-range sector but appear unaffected in the up-range sector. The hanging wall displays various synthetic and antithetic rotations in the down-range sector but always shows antithetic block rotation in the up-range sector. At greater depth, reverse faulting or folding is indicated at the rim, suggesting that the rim fault had already formed during the excavation stage.
Variable Speed Hydrodynamic Model of an Auv Utilizing Cross Tunnel Thrusters
2017-09-01
Acronyms: NED, North East Down; NPS, Naval Postgraduate School; ODE, Ordinary Differential Equation; PUC, Positional Uncertainty; REMUS, Remote Environmental Measuring Unit. … Rising autonomous systems such as the Remote Environmental Measuring Unit (REMUS) 100 vehicle represent not only a feat of … The models presented account for reduced control surface efficiency at low speeds and build an accurate representation of a REMUS AUV's behavior while operating at …
Lagos, Maureen J; Batson, Philip E
2018-06-13
We measure phonon energy gain and loss down to 20 meV in a single nanostructure using an atom-wide monochromatic electron beam. We show that the bulk and surface, energy loss and energy gain processes obey the principle of detailed balancing in nanostructured systems at thermal equilibrium. By plotting the logarithm of the ratio of the loss and gain bulk/surface scattering as a function of the excitation energy, we find a linear behavior, expected from detailed balance arguments. Since that universal linearity scales with the inverse of the nanosystem temperature only, we can measure the temperature of the probed object with precision down to about 1 K without reference to the nanomaterial. We also show that subnanometer spatial resolution (down to ∼2 Å) can be obtained using highly localized acoustic phonon scattering. The surface phonon polariton signal can also be used to measure the temperature near the nanostructure surfaces, but with unavoidable averaging over several nanometers. Comparison between transmission and aloof probe configurations suggests that our method exhibits noninvasive characteristics. Our work demonstrates the validity of the principle of detailed balancing within nanoscale materials at thermal equilibrium, and it describes a transparent method to measure nanoscale temperature, thus representing an advance in the development of a noninvasive method for measurements with angstrom resolution.
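The detailed-balance argument described above means the loss/gain ratio of phonon scattering obeys I_loss/I_gain = exp(E / k_B T), so the slope of ln(I_loss/I_gain) versus excitation energy E gives the inverse temperature directly. A minimal sketch with synthetic, exactly balanced data (the energies and the 300 K temperature are illustrative, not the paper's measurements):

```python
K_B = 8.617333e-5  # Boltzmann constant in eV/K

# Synthetic loss/gain ratios obeying detailed balance at T = 300 K:
# I_loss / I_gain = exp(E / k_B T), so ln(ratio) is linear in E.
T_true = 300.0
energies_eV = [0.020, 0.040, 0.060, 0.080]   # 20-80 meV excitation energies
log_ratios = [E / (K_B * T_true) for E in energies_eV]

# Least-squares slope through the origin: slope = sum(E*y) / sum(E^2)
slope = sum(E * y for E, y in zip(energies_eV, log_ratios)) / sum(E * E for E in energies_eV)
T_est = 1.0 / (K_B * slope)  # temperature recovered from the slope alone
print(f"estimated temperature: {T_est:.1f} K")
```

The point of the sketch is that no reference material is needed: the temperature falls out of the slope of the linear fit, which is how the thermometry described in the abstract works.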
ERIC Educational Resources Information Center
Wang, Ning; Stahl, John
2012-01-01
This article discusses the use of the Many-Facets Rasch Model, via the FACETS computer program (Linacre, 2006a), to scale job/practice analysis survey data as well as to combine multiple rating scales into single composite weights representing the tasks' relative importance. Results from the Many-Facets Rasch Model are compared with those…
Better models are more effectively connected models
NASA Astrophysics Data System (ADS)
Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John
2016-04-01
The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. 
The discussion will focus on the different approaches through which connectivity can be represented in models: either by allowing it to emerge from model behaviour or by parameterizing it inside model structures; and on the appropriate scale at which processes should be represented explicitly or implicitly. It will also explore how modellers themselves approach connectivity through the results of a community survey. Finally, it will present the outline of an international modelling exercise aimed at assessing how different modelling concepts can capture connectivity in real catchments.
NASA Astrophysics Data System (ADS)
Pouquet, A.; Marino, R.; Rosenberg, D. L.; Herbert, C.
2017-12-01
We present a simple model for the scaling properties of the flux Richardson number R_f = B/[B+ɛ_V] (the ratio of the buoyancy flux B to the total energy flux B+ɛ_V) in weakly rotating unforced stratified flows characterized by their Rossby, Froude and Reynolds numbers Ro, Fr and Re. The model is based on: (i) quasi-equipartition between kinetic and potential modes, because of gravity waves and statistical equilibria; (ii) sub-dominant vertical velocity compared to the rms value of the velocity, U, due to the dominance of two-dimensional modes and the incompressibility condition; and (iii) slowing-down and weakening of the energy transfer to small scales due to eddy-wave interactions in a weak-turbulence temporal framework where the transfer time τ_{transf} is lengthened by the inverse Froude number, namely τ_{transf}=τ_{NL}^2/τ_{w}, with τ_{NL}=L/U and τ_{w}=1/N being respectively the eddy turn-over time and the wave (Brunt-Väisälä) period, and L a characteristic scale. Three regimes in Fr, as for stratified flows, are observed using a large database: dominant waves, eddy-wave interactions and strong turbulence. In terms of the turbulence intensity (or buoyancy Reynolds number) R_I=ɛ_V/[νN^2], with ν the viscosity and ɛ_V the kinetic energy dissipation rate, these regimes are delimited by R_I˜0.1 and R_I˜280. In the intermediate regime, the phenomenology predicts, and the numerical data confirm, that a linear growth in Fr is obtained for the effective kinetic energy transfer when compared to its dimensional evaluation U^3/L. Defining the mixing efficiency as Γ_f=R_f/[1-R_f], the model allows for the prediction of the scaling Γ_f˜R_I^{-1/2}, observed previously at high Froude number, but which we also find for the intermediate regime. Thus, Γ_f is not constant, contrary to the classical Osborn model, as also found in several studies without rotation. 
As turbulence strengthens, smaller buoyancy fluxes point to a decoupling of the velocity and temperature fluctuations, the latter becoming passive and independent of U, and one can recover the same R_I^{-1/2} scaling in the strong turbulence regime as well.
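The two ratios defined above can be made concrete in a few lines: R_f = B/[B+ɛ_V] and Γ_f = R_f/[1−R_f], which algebraically reduces to B/ɛ_V. A small sketch (the numerical values of B and ɛ_V are illustrative only, not from the paper's data base):

```python
def flux_richardson(B, eps_V):
    """R_f = B / (B + eps_V): fraction of the total energy flux going into buoyancy."""
    return B / (B + eps_V)

def mixing_efficiency(R_f):
    """Gamma_f = R_f / (1 - R_f), which simplifies to B / eps_V."""
    return R_f / (1.0 - R_f)

# Illustrative values: with the predicted Gamma_f ~ R_I^(-1/2) scaling,
# quadrupling the turbulence intensity R_I halves the mixing efficiency.
B, eps_V = 0.2, 0.8
R_f = flux_richardson(B, eps_V)
print(R_f, mixing_efficiency(R_f))  # 0.2 and 0.25 (= B/eps_V)
```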
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While a lot of efforts have been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. 
By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
A multi-scale modelling procedure to quantify hydrological impacts of upland land management
NASA Astrophysics Data System (ADS)
Wheater, H. S.; Jackson, B.; Bulygina, N.; Ballard, C.; McIntyre, N.; Marshall, M.; Frogbrook, Z.; Solloway, I.; Reynolds, B.
2008-12-01
Recent UK floods have focused attention on the effects of agricultural intensification on flood risk. However, quantification of these effects raises important methodological issues. Catchment-scale data have proved inadequate to support analysis of impacts of land management change, due to climate variability, uncertainty in input and output data, spatial heterogeneity in land use and lack of data to quantify historical changes in management practices. Manipulation experiments to quantify the impacts of land management change have necessarily been limited and small scale, and in the UK mainly focused on the lowlands and arable agriculture. There is a need to develop methods to extrapolate from small scale observations to predict catchment-scale response, and to quantify impacts for upland areas. With assistance from a cooperative of Welsh farmers, a multi-scale experimental programme has been established at Pontbren, in mid-Wales, an area of intensive sheep production. The data have been used to support development of a multi-scale modelling methodology to assess impacts of agricultural intensification and the potential for mitigation of flood risk through land use management. Data are available from replicated experimental plots under different land management treatments, from instrumented field and hillslope sites, including tree shelter belts, and from first and second order catchments. Measurements include climate variables, soil water states and hydraulic properties at multiple depths and locations, tree interception, overland flow and drainflow, groundwater levels, and streamflow from multiple locations. Fine resolution physics-based models have been developed to represent soil and runoff processes, conditioned using experimental data. The detailed models are used to calibrate simpler 'meta- models' to represent individual hydrological elements, which are then combined in a semi-distributed catchment-scale model. 
The methodology is illustrated using field and catchment-scale simulations to demonstrate the the response of improved and unimproved grassland, and the potential effects of land management interventions, including farm ponds, tree shelter belts and buffer strips. It is concluded that the methodology developed has the potential to represent and quantify catchment-scale effects of upland management; continuing research is extending the work to a wider range of upland environments and land use types, with the aim of providing generic simulation tools that can be used to provide strategic policy guidance.
On the Importance of Displacement History in Soft-Body Contact Models
2015-07-10
Plimpton, S. J., 2001. "Granular flow down an inclined plane: Bagnold scaling and rheology". Physical Review E, 64(5), p. 051302. [16] Zhang, H. P. … Performing organization: US Army RDECOM-TARDEC, 6501 E. 11 Mile Road, Warren, MI 48397-5000.
Life Times of Simulated Traffic Jams
NASA Astrophysics Data System (ADS)
Nagel, Kai
We study a model for freeway traffic which includes strong noise, taking into account the fluctuations of individual driving behavior. The model shows emergent traffic jams with a self-similar appearance near the throughput maximum of the traffic. The lifetime distribution of these jams shows a short scaling regime, which gets considerably longer if one reduces the fluctuations when driving at maximum speed but leaves the fluctuations for slowing down or accelerating unchanged. The outflow from a traffic jam self-organizes into this state of maximum throughput.
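The abstract does not spell out the update rules, but freeway-traffic models of this type are commonly formulated as a Nagel-Schreckenberg-style cellular automaton in which the driver fluctuations enter as a random-slowdown step. A hedged sketch of such an automaton on a ring road (the values of v_max and p_slow are illustrative assumptions, not parameters from the paper):

```python
import random

def nasch_step(pos, vel, L, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of a Nagel-Schreckenberg-type cellular automaton on a ring."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for k, i in enumerate(order):
        j = order[(k + 1) % n]                # car ahead on the ring
        gap = (pos[j] - pos[i] - 1) % L       # empty cells in between
        v = min(vel[i] + 1, v_max, gap)       # accelerate, but never into the car ahead
        if v > 0 and rng.random() < p_slow:   # random slowdown: driver fluctuations
            v -= 1
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % L
    return new_pos, new_vel

# Ring of 100 cells, 20 cars, all initially stopped.
random.seed(1)
L, n = 100, 20
pos, vel = list(range(0, 2 * n, 2)), [0] * n
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L)
print("mean speed:", sum(vel) / n)
```

Suppressing the slowdown at v = v_max while keeping it for accelerating/braking cars is the modification the abstract describes as lengthening the jam-lifetime scaling regime.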
Small-scale filament eruptions as the driver of X-ray jets in solar coronal holes.
Sterling, Alphonse C; Moore, Ronald L; Falconer, David A; Adams, Mitzi
2015-07-23
Solar X-ray jets are thought to be made by a burst of reconnection of closed magnetic field at the base of a jet with ambient open field. In the accepted version of the 'emerging-flux' model, such a reconnection occurs at a plasma current sheet between the open field and the emerging closed field, and also forms a localized X-ray brightening that is usually observed at the edge of the jet's base. Here we report high-resolution X-ray and extreme-ultraviolet observations of 20 randomly selected X-ray jets that form in coronal holes at the Sun's poles. In each jet, contrary to the emerging-flux model, a miniature version of the filament eruptions that initiate coronal mass ejections drives the jet-producing reconnection. The X-ray bright point occurs by reconnection of the 'legs' of the minifilament-carrying erupting closed field, analogous to the formation of solar flares in larger-scale eruptions. Previous observations have found that some jets are driven by base-field eruptions, but only one such study, of only one jet, provisionally questioned the emerging-flux model. Our observations support the view that solar filament eruptions are formed by a fundamental explosive magnetic process that occurs on a vast range of scales, from the biggest mass ejections and flare eruptions down to X-ray jets, and perhaps even down to smaller jets that may power coronal heating. A similar scenario has previously been suggested, but was inferred from different observations and based on a different origin of the erupting minifilament.
Simulation of Anomalous Regional Climate Events with a Variable Resolution Stretched Grid GCM
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.
1999-01-01
The stretched-grid approach provides efficient down-scaling and consistent interactions between global and regional scales by using one variable-resolution model for integrations. It is a workable alternative to the widely used nested-grid approach introduced over a decade ago as a pioneering step in regional climate modeling. A variable-resolution General Circulation Model (GCM) employing a stretched grid, with enhanced resolution over the US as the area of interest, is used for simulating two anomalous regional climate events, the US summer drought of 1988 and flood of 1993. A special mode of integration using a stretched-grid GCM and data assimilation system is developed that allows the nested-grid framework to be imitated. The mode is useful for inter-comparison purposes and for underlining the differences between the two approaches. The 1988 and 1993 integrations are performed for the two-month period starting from mid-May. The regional resolution used in most of the experiments is 60 km. The major goal and the result of the study is obtaining efficient down-scaling over the area of interest. The monthly mean prognostic regional fields for the stretched-grid integrations are remarkably close to those of the verifying analyses. Simulated precipitation patterns are successfully verified against gauge precipitation observations. The impact of a finer 40 km regional resolution is investigated for the 1993 integration, and an example of recovering subregional precipitation is presented. The results show that the global variable-resolution stretched-grid approach is a viable candidate for regional and subregional climate studies and applications.
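The stretched-grid idea can be illustrated with a toy one-dimensional grid: uniform fine spacing over the area of interest, with geometrically stretched spacing outside it. This is only a schematic of the concept, not the GCM's actual grid mapping (the 5% per-cell stretching ratio and cell counts are assumptions for illustration):

```python
def stretched_spacings(n_fine, dx_fine, n_coarse, ratio):
    """Grid spacings: a uniform fine region followed by geometric stretching."""
    spacings = [dx_fine] * n_fine   # enhanced resolution over the area of interest
    dx = dx_fine
    for _ in range(n_coarse):       # resolution degrades smoothly away from it
        dx *= ratio
        spacings.append(dx)
    return spacings

# 60 km resolution over the region of interest, stretching outward by 5% per cell.
dx = stretched_spacings(n_fine=40, dx_fine=60.0, n_coarse=30, ratio=1.05)
print(f"finest {dx[0]:.0f} km, coarsest {dx[-1]:.0f} km")
```

The gradual transition is the point: one model covers the globe, so no lateral boundary conditions are needed, unlike the nested-grid approach.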
Soria, José; Gauthier, Daniel; Flamant, Gilles; Rodriguez, Rosa; Mazza, Germán
2015-09-01
Municipal Solid Waste Incineration (MSWI) in fluidized bed is a very interesting technology, mainly due to its high combustion efficiency, great flexibility for treating several types of waste fuels, and reduction in pollutants emitted with the flue gas. However, there is great concern with respect to the fate of heavy metals (HM) contained in MSW and their environmental impact. In this study, a coupled two-scale CFD model was developed for MSWI in a bubbling fluidized bed. It presents an original scheme that combines a single particle model and a global fluidized bed model in order to represent the HM vaporization during MSW combustion. Two of the most representative HM (Cd and Pb) with bed temperatures ranging between 923 and 1073 K have been considered. This new approach uses ANSYS FLUENT 14.0 as the modelling platform for the simulations, along with a complete set of self-developed user-defined functions (UDFs). The simulation results are compared to the experimental data obtained previously by the research group in a lab-scale fluid bed incinerator. The comparison indicates that the proposed CFD model predicts well the evolution of the HM release for the bed temperatures analyzed. It shows that both bed temperature and bed dynamics have an influence on the HM vaporization rate. It can be concluded that CFD is a rigorous tool that provides valuable information about HM vaporization and that the original two-scale simulation scheme adopted allows a better representation of the actual particle behavior in a fluid bed incinerator.
A Harder Rain is Going to Fall: Challenges for Actionable Projections of Extremes
NASA Astrophysics Data System (ADS)
Collins, W.
2014-12-01
Hydrometeorological extremes are projected to increase in both severity and frequency as the Earth's surface continues to warm in response to anthropogenic emissions of greenhouse gases. These extremes will directly affect the availability and reliability of water and other critical resources. The most comprehensive suite of multi-model projections has been assembled under the Coupled Model Intercomparison Project version 5 (CMIP5) and assessed in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). For these projections to be actionable, they should exhibit consistency and fidelity down to the local length and timescales required for operational resource planning, for example the scales relevant for water allocations from a major watershed. In this presentation, we summarize the length and timescales relevant for resource planning and then use downscaled versions of the IPCC simulations over the contiguous United States to address three questions. First, over what range of scales is there quantitative agreement between the simulated historical extremes and in situ measurements? Second, does this range of scales in the historical and future simulations overlap with the scales relevant for resource management and adaptation? Third, does downscaling enhance the degree of multi-model consistency at scales smaller than the typical global model resolution? We conclude by using these results to highlight requirements for further model development to make the next generation of models more useful for planning purposes.
NASA Astrophysics Data System (ADS)
Barros, A. P.; Prat, O. P.; Sun, X.; Shrestha, P.; Miller, D.
2009-04-01
The classic conceptual model of orographic rainfall depicts strong stationary horizontal gradients in rainfall accumulations and landcover contrasts across topographic divides (i.e. the rainshadow) at the broad scale of mountain ranges or isolated orographic features. Whereas this model is sufficient to fingerprint the land-modulation of precipitation at the macroscale in climate studies, and can be useful to force geological models of land evolution for example, it fails to describe the active 4D space-time gradients that are critical at the fundamental scale of mountain hydrometeorology and hydrology: the headwater catchment, the scale at which flash-floods are generated and landslides are triggered. Our work surveying the spatial and temporal habits of clouds and rainfall for some of the world's major mountain ranges from remotely-sensed data shows a close alignment of spatial scaling behavior with landform down to the mountain-fold scale, that is, the ridge-valley scale. Likewise, we find that diurnal and seasonal cycles are organized and constrained by topography from the macro- to the meso- to the alpha-scale of individual basins, varying with synoptic weather conditions. At the catchment scale, the diurnal cycle exhibits an oscillatory behavior, with storm features moving up and down from the ridge crests to the valley floor and back and forth from head to mouth along the valley, with strong variations in rainfall intensity and duration. Direct observations to provide quantitative estimates of precipitation at this scale are beyond the capability of satellite-based observations present and anticipated in the next 10-20 years. This limitation can be addressed by assimilating the space-time modes of variability of rainfall into satellite observations at coarser scale using multiscale blending algorithms. 
The challenge is to characterize the modes of space-time variability of precipitation in a systematic, quantitative fashion that can be generalized. It requires understanding the physical controls that govern the diurnal cycle and how these physical controls translate into spatial and temporal variability of the dynamics and microphysics of precipitation in headwater catchments, especially in the context of extreme events for natural hazards assessments. Toward this goal, we have initiated a sequence of intense observing period (IOP) campaigns in the Great Smoky Mountains National Park using radiosondes, tethersondes, microrain radars, and a high-resolution raingauge network that for the first time monitors rainfall systematically along ridges in the Appalachians. Along with the field observations, a high-resolution coupled model has been implemented to diagnose the evolution of the 4D structure of regional circulations and associated precipitation for IOP conditions and for reconstructing historical extremes associated with the interaction of tropical cyclones with the mountains. A synthesis of data analysis and model simulations will be presented.
Status of DSMT research program
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Javeed, Mehzad; Edighoffer, Harold H.
1991-01-01
The status of the Dynamic Scale Model Technology (DSMT) research program is presented. DSMT is developing scale model technology for large space structures as part of the Control Structure Interaction (CSI) program at NASA Langley Research Center (LaRC). Under DSMT, a hybrid-scale structural dynamics model of Space Station Freedom was developed. Space Station Freedom was selected as the focus structure for DSMT since the station represents the first opportunity to obtain flight data on a complex, three-dimensional space structure. Included is an overview of DSMT, including the development of the space station scale model and the resulting hardware. Scaling technology was developed for this model to achieve a ground test article which existing test facilities can accommodate while employing realistically scaled hardware. The model was designed and fabricated by the Lockheed Missiles and Space Co., and is assembled at LaRC for dynamic testing. Also, results from ground tests and analyses of the various model components are presented along with plans for future subassembly and mated model tests. Finally, utilization of the scale model for enhancing analysis verification of the full-scale space station is also considered.
NASA Astrophysics Data System (ADS)
Kwon, Sungchul; Kim, Jin Min
2015-01-01
For a fixed-energy (FE) Manna sandpile model in one dimension, we investigate the effects of random initial conditions on the dynamical scaling behavior of an order parameter. In the FE Manna model, the density ρ of total particles is conserved, and an absorbing phase transition occurs at ρc as ρ varies. In this work, we show that, for a given ρ , random initial distributions of particles lead to the domain structure in which domains with particle densities higher and lower than ρc alternate with each other. In the domain structure, the dominant length scale is the average domain length, which increases via the coalescence of adjacent domains. At ρc, the domain structure slows down the decay of an order parameter and also causes anomalous finite-size effects, i.e., power-law decay followed by an exponential one before the quasisteady state. As a result, the interplay of particle conservation and random initial conditions causes the domain structure, which is the origin of the anomalous dynamical scaling behaviors for random initial conditions.
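A minimal simulation sketch of a 1D fixed-energy Manna model can make the setup concrete. This is an illustrative variant in which an active site redistributes all of its particles to randomly chosen nearest neighbours; the system size, density and step count are assumptions for the example, not the paper's parameters:

```python
import random

def fe_manna_step(grid, rng=random):
    """One parallel update of a 1D fixed-energy Manna model on a periodic ring.

    Every active site (>= 2 particles) topples, sending each of its particles
    independently to a randomly chosen nearest neighbour. The total particle
    number is conserved. Returns the new grid and the number of active sites
    (the order parameter before the update).
    """
    L = len(grid)
    new = [0] * L
    active = 0
    for i, n in enumerate(grid):
        if n >= 2:
            active += 1
            for _ in range(n):  # redistribute all particles of the active site
                new[(i + rng.choice((-1, 1))) % L] += 1
        else:
            new[i] += n
    return new, active

# Random initial condition at density rho = 0.92; particles land on
# uniformly random sites, producing the kind of density fluctuations
# that seed the domain structure discussed above.
random.seed(2)
L, rho = 200, 0.92
grid = [0] * L
for _ in range(int(rho * L)):
    grid[random.randrange(L)] += 1

total = sum(grid)
for _ in range(500):
    grid, n_active = fe_manna_step(grid)
assert sum(grid) == total  # particle conservation, as in the FE model
```

Tracking n_active over time (and averaging over initial conditions) is how the dynamical scaling of the order parameter described in the abstract would be measured.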
Ion transport in complex layered graphene-based membranes with tuneable interlayer spacing.
Cheng, Chi; Jiang, Gengping; Garvey, Christopher J; Wang, Yuanyuan; Simon, George P; Liu, Jefferson Z; Li, Dan
2016-02-01
Investigation of the transport properties of ions confined in nanoporous carbon is generally difficult because of the stochastic nature and distribution of multiscale complex and imperfect pore structures within the bulk material. We demonstrate a combined approach of experiment and simulation to describe the structure of complex layered graphene-based membranes, which allows their use as a unique porous platform to gain unprecedented insights into nanoconfined transport phenomena across the entire sub-10-nm scales. By correlation of experimental results with simulation of concentration-driven ion diffusion through the cascading layered graphene structure with sub-10-nm tuneable interlayer spacing, we are able to construct a robust, representative structural model that allows the establishment of a quantitative relationship among the nanoconfined ion transport properties in relation to the complex nanoporous structure of the layered membrane. This correlation reveals the remarkable effect of the structural imperfections of the membranes on ion transport and particularly the scaling behaviors of both diffusive and electrokinetic ion transport in graphene-based cascading nanochannels as a function of channel size from 10 nm down to subnanometer. Our analysis shows that the range of ion transport effects previously observed in simple one-dimensional nanofluidic systems will translate themselves into bulk, complex nanoslit porous systems in a very different manner, and the complex cascading porous circuities can enable new transport phenomena that are unattainable in simple fluidic systems.
On the biomechanical function of scaffolds for engineering load-bearing soft tissues.
Stella, John A; D'Amore, Antonio; Wagner, William R; Sacks, Michael S
2010-07-01
Replacement or regeneration of load-bearing soft tissues has long been the impetus for the development of bioactive materials. While maturing, current efforts continue to be confounded by our lack of understanding of the intricate multi-scale hierarchical arrangements and interactions typically found in native tissues. The current state of the art in biomaterial processing enables a degree of controllable microstructure that can be used for the development of model systems to deduce fundamental biological implications of matrix morphologies on cell function. Furthermore, the development of computational frameworks which allow for the simulation of experimentally derived observations represents a positive departure from what has mostly been an empirically driven field, enabling a deeper understanding of the highly complex biological mechanisms we wish to ultimately emulate. Ongoing research is actively pursuing new materials and processing methods to control material structure down to the micro-scale to sustain or improve cell viability, guide tissue growth, and provide mechanical integrity, all while exhibiting the capacity to degrade in a controlled manner. The purpose of this review is not to focus solely on material processing but to assess the ability of these techniques to produce mechanically sound tissue surrogates, highlight the unique structural characteristics produced in these materials, and discuss how this translates to distinct macroscopic biomechanical behaviors. Copyright 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Transition from Sensorimotor Stage 5 to Stage 6 by Down Syndrome Children: A Response to Gibson.
ERIC Educational Resources Information Center
Mervis, Carolyn B.; Cardoso-Martins, Claudia
1984-01-01
The study compared longitudinally the performance of six Down syndrome and six nonretarded children on the Object Permanence and Means-Ends Relations Scales. Results indicated that Down syndrome children progressed from Sensorimotor Stage 5 to Stage 6 at the same rate as nonretarded children, once the Down syndrome subjects' slower developmental pace was taken into account.…
Dynamics of the cosmological relaxation after reheating
NASA Astrophysics Data System (ADS)
Choi, Kiwoon; Kim, Hyungjin; Sekiguchi, Toyokazu
2017-04-01
We examine if the cosmological relaxation mechanism, which was proposed recently as a new solution to the hierarchy problem, can be compatible with high reheating temperature well above the weak scale. As the barrier potential disappears at high temperature, the relaxion rolls down further after the reheating, which may ruin the successful implementation of the relaxation mechanism. It is noted that if the relaxion is coupled to a dark gauge boson, the new frictional force arising from dark gauge boson production can efficiently slow down the relaxion motion, which allows the relaxion to be stabilized after the electroweak phase transition for a wide range of model parameters, while satisfying the known observational constraints.
The orbitofrontal cortex and beyond: from affect to decision-making.
Rolls, Edmund T; Grabenhorst, Fabian
2008-11-01
The orbitofrontal cortex represents the reward or affective value of primary reinforcers including taste, touch, texture, and face expression. It learns to associate other stimuli with these to produce representations of the expected reward value for visual, auditory, and abstract stimuli including monetary reward value. The orbitofrontal cortex thus plays a key role in emotion, by representing the goals for action. The learning process is stimulus-reinforcer association learning. Negative reward prediction error neurons are related to this affective learning. Activations in the orbitofrontal cortex correlate with the subjective emotional experience of affective stimuli, and damage to the orbitofrontal cortex impairs emotion-related learning, emotional behaviour, and subjective affective state. With an origin from beyond the orbitofrontal cortex, top-down attention to affect modulates orbitofrontal cortex representations, and attention to intensity modulates representations in earlier cortical areas of the physical properties of stimuli. Top-down word-level cognitive inputs can bias affective representations in the orbitofrontal cortex, providing a mechanism for cognition to influence emotion. Whereas the orbitofrontal cortex provides a representation of reward or affective value on a continuous scale, areas beyond the orbitofrontal cortex such as the medial prefrontal cortex area 10 are involved in binary decision-making when a choice must be made. For this decision-making, the orbitofrontal cortex provides a representation of each specific reward in a common currency.
Godoy, Oscar; Castro-Díez, Pilar; Van Logtestijn, Richard S P; Cornelissen, Johannes H C; Valladares, Fernando
2010-03-01
Leaf traits related to the performance of invasive alien species can influence nutrient cycling through litter decomposition. However, there is no consensus yet about whether there are consistent differences in functional leaf traits between invasive and native species that also manifest themselves through their "after life" effects on litter decomposition. When addressing this question it is important to avoid confounding effects of other plant traits related to early phylogenetic divergences and to understand the mechanism underlying the observed results to predict which invasive species will exert larger effects on nutrient cycling. We compared initial leaf litter traits, and their effect on decomposability as tested in standardized incubations, in 19 invasive-native pairs of co-familial species from Spain. They included 12 woody and seven herbaceous alien species representative of the Spanish invasive flora. The predictive power of leaf litter decomposition rates followed the order: growth form > family > status (invasive vs. native) > leaf type. Within species pairs litter decomposition tended to be slower and more dependent on N and P in invaders than in natives. This difference was likely driven by the higher lignin content of invader leaves. Although our study has the limitation of not representing the natural conditions from each invaded community, it suggests a potential slowing down of the nutrient cycle at ecosystem scale upon invasion.
NASA Astrophysics Data System (ADS)
Saksena, S.; Merwade, V.; Singhofen, P.
2017-12-01
There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes at work during a flood event and thereby provide accurate flood outputs. Even though the advantages of integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks the watershed down into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing its performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model (NSE=0.87) performs similarly to the 2D integrated model (NSE=0.88) while the computational time is cut in half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
2014-01-01
Background Logos are commonly used in molecular biology to provide a compact graphical representation of the conservation pattern of a set of sequences. They render the information contained in sequence alignments or profile hidden Markov models by drawing a stack of letters for each position, where the height of the stack corresponds to the conservation at that position, and the height of each letter within a stack depends on the frequency of that letter at that position. Results We present a new tool and web server, called Skylign, which provides a unified framework for creating logos for both sequence alignments and profile hidden Markov models. In addition to static image files, Skylign creates a novel interactive logo plot for inclusion in web pages. These interactive logos enable scrolling, zooming, and inspection of underlying values. Skylign can avoid sampling bias in sequence alignments by down-weighting redundant sequences and by combining observed counts with informed priors. It also simplifies the representation of gap parameters, and can optionally scale letter heights based on alternate calculations of the conservation of a position. Conclusion Skylign is available as a website, a scriptable web service with a RESTful interface, and as a software package for download. Skylign’s interactive logos are easily incorporated into a web page with just a few lines of HTML markup. Skylign may be found at http://skylign.org. PMID:24410852
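The stack-height rule described here is, in its textbook form, the classic information-content calculation for sequence logos: stack height equals the column's information content, and each letter's height is its frequency times that value. A minimal sketch for a DNA column follows; Skylign's own sequence weighting, informed priors, and gap handling are more elaborate, so treat this as the baseline calculation rather than Skylign's exact method.

```python
from collections import Counter
from math import log2

def column_heights(column, alphabet="ACGT"):
    """Stack height = information content R = log2(|alphabet|) - H(column);
    each observed letter's height = its frequency times R."""
    counts = Counter(column)
    n = len(column)
    freqs = {a: counts.get(a, 0) / n for a in alphabet}
    entropy = -sum(p * log2(p) for p in freqs.values() if p > 0)
    r = log2(len(alphabet)) - entropy      # bits of information
    return {a: p * r for a, p in freqs.items() if p > 0}
```

A fully conserved DNA column yields a single letter of height 2 bits, while a uniform column carries no information and draws an empty stack.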
Foley, Kitty-Rose; Taffe, John; Bourke, Jenny; Einfeld, Stewart L.; Tonge, Bruce J.; Trollor, Julian; Leonard, Helen
2016-01-01
Background Young people with intellectual disability exhibit substantial and persistent problem behaviours compared with their non-disabled peers. The aim of this study was to compare changes in emotional and behavioural problems for young people with intellectual disability with and without Down syndrome as they transition into adulthood in two different Australian cohorts. Methods Emotional and behavioural problems were measured over three time points using the Developmental Behaviour Checklist (DBC) for those with Down syndrome (n = 323 at wave one) and compared to those with intellectual disability of another cause (n = 466 at wave one). Outcome scores were modelled using random effects regression as linear functions of age, Down syndrome status, ability to speak and gender. Results DBC scores of those with Down syndrome were lower than those of people without Down syndrome indicating fewer behavioural problems on all scales except communication disturbance. For both groups disruptive, communication disturbance, anxiety and self-absorbed DBC subscales all declined on average over time. There were two important differences between changes in behaviours for these two cohorts. Depressive symptoms did not significantly decline for those with Down syndrome compared to those without Down syndrome. The trajectory of the social relating behaviours subscale differed between these two cohorts, where those with Down syndrome remained relatively steady and, for those with intellectual disability from another cause, the behaviours increased over time. Conclusions These results have implications for needed supports and opportunities for engagement in society to buffer against these emotional and behavioural challenges. PMID:27391326
Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation
Balaguer-Ballester, Emili; Clark, Nicholas R.; Coath, Martin; Krumbholz, Katrin; Denham, Susan L.
2009-01-01
Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing. PMID:19266015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Danni; Zhang, Jun, E-mail: zhangjun@nudt.edu.cn; Zhong, Huihuang
2016-03-15
The expansion of cathode plasma in a magnetically insulated coaxial diode (MICD) is investigated in theory and in particle-in-cell (PIC) simulation. The temperature and density of the cathode plasma are about several eV and 10^13–10^16 cm^-3, respectively, and its expansion velocity is on the level of a few cm/μs. Through hydrodynamic theory analysis, expressions for the expansion velocities in the axial and radial directions are obtained. The characteristics of cathode plasma expansion have been simulated through scaled-down PIC models. Simulation results indicate that the expansion velocity is dominated by the plasma density ratio rather than by the static electric field; the electric field counteracts the plasma expansion rather than driving it. The axial guiding magnetic field only reduces the radial transport coefficients by a correction factor, not the axial ones. Both the outward and inward radial expansions of a MICD are suppressed by a much stronger guiding magnetic field and even cease.
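As a plausibility check, the quoted few-cm/μs expansion velocity is on the order of the ion sound speed for an eV-scale plasma. A back-of-envelope sketch, assuming a hydrogen plasma at T_e = 5 eV (the abstract does not state the ion species or exact temperature, so both are illustrative):

```python
from math import sqrt

E_CHARGE = 1.602e-19   # elementary charge, C
M_PROTON = 1.673e-27   # proton mass, kg

def ion_sound_speed(te_ev, mi=M_PROTON, z=1):
    """c_s = sqrt(Z * k_B * T_e / m_i), with T_e given in eV."""
    return sqrt(z * te_ev * E_CHARGE / mi)

v = ion_sound_speed(5.0)          # m/s, assumed 5 eV hydrogen plasma
v_cm_per_us = v * 100 / 1e6       # convert m/s -> cm/us
```

For these assumed values the result lands at roughly 2 cm/μs, consistent with the "few cm/μs" expansion velocity the abstract reports.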
NASA Astrophysics Data System (ADS)
Rehman, Khalil Ur; Malik, Aneeqa Ashfaq; Malik, M. Y.; Tahir, M.; Zehra, Iffat
2018-03-01
This short communication offers a set of scaling group transformations for Prandtl-Eyring fluid flow generated by a stretching flat porous surface. The flow regime carries both heat and mass transfer characteristics. To solve the flow problem, a set of scaling group transformations is constructed using the Lie approach. These transformations reduce the governing partial differential equations to ordinary differential equations. The reduced system is solved by a numerical technique known as the shooting method, implemented in a self-coded algorithm. The results obtained are presented by means of figures and tables.
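The shooting method used in such studies turns a two-point boundary-value problem into root-finding on an unknown initial slope. A minimal self-contained sketch on a toy problem y'' = 6x, y(0) = 0, y(1) = 1 (the actual reduced Prandtl-Eyring system is far more involved), using RK4 integration and secant iteration; all names and tolerances here are illustrative:

```python
def rk4(f, y, x, h):
    """One RK4 step for a first-order system y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(x + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def shoot(slope, n=100):
    """Integrate y'' = 6x from x=0 with y(0)=0, y'(0)=slope; return y(1)."""
    f = lambda x, y: [y[1], 6*x]       # first-order form: (y, y')
    y, h = [0.0, slope], 1.0 / n
    for i in range(n):
        y = rk4(f, y, i*h, h)
    return y[0]

def solve_bvp(target=1.0, s0=0.0, s1=1.0, tol=1e-10):
    """Secant iteration on the initial slope until y(1) hits the target."""
    f0, f1 = shoot(s0) - target, shoot(s1) - target
    while abs(f1) > tol:
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, shoot(s1) - target
    return s1
```

For this toy problem the exact solution is y = x^3, so the iteration should converge to an initial slope of zero.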
A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume
NASA Astrophysics Data System (ADS)
Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration
2017-11-01
An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
Quasispherical subsonic accretion in X-ray pulsars
NASA Astrophysics Data System (ADS)
Shakura, Nikolai I.; Postnov, Konstantin A.; Kochetkova, A. Yu; Hjalmarsdotter, L.
2013-04-01
A theoretical model is considered for quasispherical subsonic accretion onto slowly rotating magnetized neutron stars. In this regime, the accreting matter settles down subsonically onto the rotating magnetosphere, forming an extended quasistatic shell. Angular momentum transfer in the shell occurs via large-scale convective motions resulting, for observed pulsars, in an almost iso-angular-momentum ω ∼ 1/R² rotation law inside the shell. The accretion rate through the shell is determined by the ability of the plasma to enter the magnetosphere due to Rayleigh-Taylor instabilities, with allowance for cooling. A settling accretion regime is possible for moderate accretion rates Ṁ ≲ Ṁ* ≃ 4×10^16 g s^-1. At higher accretion rates, a free-fall gap above the neutron star magnetosphere appears due to rapid Compton cooling, and the accretion becomes highly nonstationary. Observations of spin-up/spin-down rates of quasispherically wind-accreting equilibrium X-ray pulsars with known orbital periods (e.g., GX 301-2 and Vela X-1) enable us to determine the main dimensionless parameters of the model, as well as to estimate the surface magnetic field of the neutron star. For equilibrium pulsars, independent measurements of the neutron star magnetic field allow for an estimate of the stellar wind velocity of the optical companion without using complicated spectroscopic measurements. For nonequilibrium pulsars, a maximum value is shown to exist for the spin-down rate of the accreting neutron star. From observations of the spin-down rate and the X-ray luminosity in such pulsars (e.g., GX 1+4, SXP 1062, and 4U 2206+54), a lower limit can be put on the neutron star magnetic field, which in all cases turns out to be close to the standard value and which agrees with cyclotron line measurements.
Furthermore, the model explains both the spin-up/spin-down of the pulsar frequency on large time-scales and the irregular short-term frequency fluctuations, which may correlate or anticorrelate with the observed X-ray luminosity fluctuations.
NASA Astrophysics Data System (ADS)
Goldenson, Naomi L.
Uncertainties in climate projections at the regional scale are inevitably larger than those for global mean quantities. Here, focusing on western North American regional climate, several approaches are taken to quantifying uncertainties starting with the output of global climate model projections. Internal variance is found to be an important component of the projection uncertainty up and down the west coast. To quantify internal variance and other projection uncertainties in existing climate models, we evaluate different ensemble configurations. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter offers the advantage of also producing estimates of uncertainty due to model differences. We conclude that climate projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible. We then conduct a small single-model ensemble of simulations using the Model for Prediction Across Scales with physics from the Community Atmosphere Model Version 5 (MPAS-CAM5) and prescribed historical sea surface temperatures. In the global variable resolution domain, the finest resolution (at 30 km) is in our region of interest over western North America and upwind over the northeast Pacific. In the finer-scale region, extreme precipitation from atmospheric rivers (ARs) is connected to tendencies in seasonal snowpack in mountains of the Northwest United States and California. In most of the Cascade Mountains, winters with more AR days are associated with less snowpack, in contrast to the northern Rockies and California's Sierra Nevada.
In snowpack observations and reanalysis of the atmospheric circulation, we find similar relationships between the frequency of AR events and winter season snowpack in the western United States. In spring, however, there is not a clear relationship between the number of AR days and seasonal mean snowpack across the model ensemble, so caution is urged in interpreting the historical record in the spring season. Finally, the representation of the El Niño-Southern Oscillation (ENSO), an important source of interannual climate predictability in some regions, is explored in a large single-model ensemble using ensemble Empirical Orthogonal Functions (EOFs) to find modes of variance across the entire ensemble at once. The leading EOF is ENSO. The principal components (PCs) of the next three EOFs exhibit a lead-lag relationship with the ENSO signal captured in the first PC. The second PC, with most of its variance in the summer season, is the most strongly cross-correlated with the first. This approach offers insight into how the model considered represents this important atmosphere-ocean interaction. Taken together, these varied approaches quantify the implications of climate projections regionally, identify processes that make snowpack water resources vulnerable, and seek insight into how to better simulate the large-scale climate modes controlling regional variability.
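EOF analysis of the kind used here reduces, at its core, to a singular value decomposition of the anomaly matrix; the ensemble variant stacks all ensemble members along the time axis before decomposing. A minimal single-field NumPy sketch follows; the synthetic field (one standing oscillation plus noise) is purely illustrative.

```python
import numpy as np

def eof_analysis(field):
    """field: (time, space) array. Returns EOF spatial patterns, principal
    components, and the fraction of variance explained by each mode."""
    anom = field - field.mean(axis=0)            # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    eofs = vt                                    # rows: spatial patterns
    pcs = u * s                                  # columns: PC time series
    var_frac = s**2 / np.sum(s**2)               # explained-variance fraction
    return eofs, pcs, var_frac

# Synthetic example: one dominant standing oscillation plus weak noise
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)
pattern = np.sin(np.linspace(0, np.pi, 50))
data = np.outer(np.sin(t), pattern) + 0.05 * rng.standard_normal((200, 50))
eofs, pcs, var_frac = eof_analysis(data)
```

With a single planted mode, the leading EOF should recover the spatial pattern and absorb nearly all of the variance, mirroring how a strong mode such as ENSO dominates the leading EOF of a climate ensemble.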
NASA Astrophysics Data System (ADS)
Mccoll, K. A.; Van Heerwaarden, C.; Katul, G. G.; Gentine, P.; Entekhabi, D.
2016-12-01
While the break-down in similarity between turbulent transport of heat and momentum (or Reynolds analogy) is not disputed in the atmospheric surface layer (ASL) under unstably stratified conditions, the causes of this breakdown remain the subject of some debate. One reason for the break-down is hypothesized to be due to a change in the topology of the coherent structures and how they differently transport heat and momentum. As instability increases, coherent structures that are confined to the near-wall region transition to thermal plumes, spanning the entire boundary layer depth. Monin-Obukhov Similarity Theory (MOST), which hypothesizes that only local length scales play a role in ASL turbulent transport, implicitly assumes that thermal plumes and other large-scale structures are inactive (i.e., they do not contribute to turbulent transport despite their large energy content). Widely adopted mixing-length models for the ASL also rest on this assumption. The difficulty of characterizing low-wavenumber turbulent motions with field observations motivates the use of high-resolution Direct Numerical Simulations (DNS) that are free from sub-grid scale parameterizations and ad-hoc assumptions near the boundary. Despite the low Reynolds number, mild stratification and idealized geometry, DNS-estimated MOST functions are consistent with field experiments as are key low-frequency features of the vertical velocity variance and buoyancy spectra. Parsimonious spectral models for MOST stability correction functions for momentum (φm) and heat (φh) are derived based on idealized vertical velocity variance and buoyancy spectra fit to the corresponding DNS spectra. For φm, a spectral model requiring a local length scale (evolving with local stability conditions) that matches DNS and field observations is derived. In contrast, for φh, the aforementioned model is substantially biased unless contributions from larger length scales are also included. 
These results suggest that ASL heat transport cannot be precisely MO-similar, and that the breakdown of the Reynolds analogy is at least partially caused by the influence of large eddies on turbulent heat transport.
NASA Astrophysics Data System (ADS)
Guenther, A. B.; Duhl, T.
2011-12-01
Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than typically used in these models. Satellite observations have played a major role in providing high resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale depends on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1,000,000 meters). The implications for predictions of regional climate and air quality will be discussed along with recommendations for regional and global earth system modeling.
Make dark matter charged again
NASA Astrophysics Data System (ADS)
Agrawal, Prateek; Cyr-Racine, Francis-Yan; Randall, Lisa; Scholtz, Jakub
2017-05-01
We revisit constraints on dark matter that is charged under a U(1) gauge group in the dark sector, decoupled from Standard Model forces. We find that the strongest constraints in the literature are subject to a number of mitigating factors. For instance, the naive dark matter thermalization timescale in halos is corrected by saturation effects that slow down isotropization for modest ellipticities. The weakened bounds uncover interesting parameter space, making models with weak-scale charged dark matter viable, even with electromagnetic strength interaction. This also leads to the intriguing possibility that dark matter self-interactions within small dwarf galaxies are extremely large, a relatively unexplored regime in current simulations. Such strong interactions suppress heat transfer over scales larger than the dark matter mean free path, inducing a dynamical cutoff length scale above which the system appears to have only feeble interactions. These effects must be taken into account to assess the viability of darkly-charged dark matter. Future analyses and measurements should probe a promising region of parameter space for this model.
NASA Astrophysics Data System (ADS)
Sylvia, R. T.; Kincaid, C. R.; Behn, M. D.; Zhang, N.
2014-12-01
Circulation in subduction zones involves large-scale forced convection driven by the motion of the down-going slab and small-scale buoyant diapirs of hydrated mantle or subducted sediments. Models of subduction-diapir interaction often neglect large-scale flow patterns induced by rollback, back-arc extension and slab morphology. We present results from laboratory experiments relating these parameters to styles of 4-D wedge circulation and diapir ascent. A glucose fluid is used to represent the mantle. Subducting lithosphere is modeled with continuous rubber belts moving with prescribed velocities, capable of reproducing a large range of relative downdip and rollback plate rates. Differential steepening of distinct plate segments simulates the evolution of slab gaps. Back-arc extension is produced using Mylar sheeting in contact with the fluid beneath the overriding plate, moving relative to the slab rollback rate. Diapirs are introduced at the slab-wedge interface in two modes: (1) distributions of low-density rigid spheres and (2) injection of low-viscosity, low-density fluid at the base of the wedge. Results from 30 experiments with imposed along-trench (y) distributions of buoyancy show near-vertical ascent paths only in cases with simple downdip subduction and ratios (W*) of diapir rise velocity to downdip plate rate of W* > 1. For W* = 0.2-1, diapir ascent paths are complex, with large (400 km) lateral offsets between source and surfacing locations. Rollback and back-arc extension enhance these offsets, occasionally aligning diapirs from different along-trench locations into trench-normal, age-progressive linear chains beneath the overriding plate. Diapirs from different y-locations may surface beneath the same volcanic center, despite following ascent paths of very different lengths and transit times. In cases with slab gaps, diapirs from the outside edge of the steep plate move 1000 km parallel to the trench before surfacing above the shallow-dipping plate. "Dead zones" resulting from lateral and vertical shear in the wedge above the slab gap produce slow transit times. These 4-D ascent pathways are being incorporated into numerical models of the thermal and melting evolution of diapirs. Models show subduction-induced circulation significantly alters diapir ascent beneath arcs.
Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station
NASA Technical Reports Server (NTRS)
Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.
1987-01-01
The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.
Coarse-Grained Models for Automated Fragmentation and Parametrization of Molecular Databases.
Fraaije, Johannes G E M; van Male, Jan; Becherer, Paul; Serral Gracià, Rubèn
2016-12-27
We calibrate coarse-grained interaction potentials suitable for screening large data sets in top-down fashion. Three new algorithms are introduced: (i) automated decomposition of molecules into coarse-grained units (fragmentation); (ii) Coarse-Grained Reference Interaction Site Model-Hypernetted Chain (CG RISM-HNC) as an intermediate proxy for dissipative particle dynamics (DPD); and (iii) a simple top-down coarse-grained interaction potential/model based on activity coefficient theories from engineering (using COSMO-RS). We find that the fragment distribution follows Zipf and Heaps scaling laws. The accuracy in Gibbs energy of mixing calculations is a few tenths of a kilocalorie per mole. As a final proof of principle, we use full coarse-grained sampling through DPD thermodynamic integration to calculate log P_OW for 4627 compounds with an average error of 0.84 log unit. The computational speeds per calculation are a few seconds for CG RISM-HNC and a few minutes for DPD thermodynamic integration.
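The Zipf and Heaps scaling behavior reported for the fragment distribution can be checked on any fragment corpus with a few lines of analysis. A minimal sketch using a synthetic ideal-Zipf fragment stream (not the authors' molecular database), assuming only NumPy:

```python
import numpy as np

def rank_frequency(tokens):
    # Count occurrences of each fragment type, sorted in descending order.
    _, counts = np.unique(tokens, return_counts=True)
    return np.sort(counts)[::-1]

def zipf_slope(freqs):
    # Slope of log(frequency) vs log(rank); ~ -1 for Zipfian data.
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope

def heaps_exponent(tokens, n_points=10):
    # Heaps' law: vocabulary size V(n) ~ K * n^beta with beta < 1.
    ns = np.linspace(len(tokens) // n_points, len(tokens), n_points, dtype=int)
    vs = [len(set(tokens[:n])) for n in ns]
    beta, _ = np.polyfit(np.log(ns), np.log(vs), 1)
    return beta

# Synthetic "fragment corpus": type i occurs floor(1000/i) times (ideal Zipf).
tokens = np.concatenate([np.full(1000 // i, i) for i in range(1, 201)])
rng = np.random.default_rng(0)
rng.shuffle(tokens)

freqs = rank_frequency(tokens)
print(round(zipf_slope(freqs), 2))   # close to -1 for this construction
print(round(heaps_exponent(tokens), 2))  # sublinear growth: beta < 1
```

The same two diagnostics, applied to real fragment counts, would reproduce the scaling-law check the abstract reports.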
Direct numerical simulations of three-dimensional electrokinetic flows
NASA Astrophysics Data System (ADS)
Chiam, Keng-Hwee
2006-11-01
We discuss direct numerical simulations of three-dimensional electrokinetic flows in microfluidic devices. In particular, we focus on the study of the electrokinetic instability that develops when two solutions with different electrical conductivities are coupled to an external electric field. We characterize this "mixing" instability as a function of the parameters of the model, namely the Reynolds number of the flow, the electric Peclet number of the electrolyte solution, and the ratio of the electroosmotic to the electroviscous time scales. Finally, we describe how this model breaks down when the length scale of the device approaches the nanoscale, where the width of the electric Debye layer is comparable to the width of the channel, and discuss approaches to overcoming this breakdown.
Two-dimensional quasineutral description of particles and fields above discrete auroral arcs
NASA Technical Reports Server (NTRS)
Newman, A. L.; Chiu, Y. T.; Cornwall, J. M.
1985-01-01
Stationary hot and cool particle distributions in the auroral magnetosphere are modelled using adiabatic assumptions of particle motion in the presence of broad-scale electrostatic potential structure. The study has identified geometrical restrictions on the type of broadscale potential structure which can be supported by a multispecies plasma having specified sources and energies. Without energization of cool thermal ionospheric electrons, a substantial parallel potential drop cannot be supported down to altitudes of 2000 km or less. Observed upward-directed field-aligned currents must be closed by return currents along field lines which support little net potential drop. In such regions the plasma density appears significantly enhanced. Model details agree well with recent broad-scale implications of satellite observations.
Numerical study of dynamo action at low magnetic Prandtl numbers.
Ponty, Y; Mininni, P D; Montgomery, D C; Pinton, J-F; Politano, H; Pouquet, A
2005-04-29
We present a three-pronged numerical approach to the dynamo problem at low magnetic Prandtl numbers P_M. The difficulty of resolving a large range of scales is circumvented by combining direct numerical simulations, a Lagrangian-averaged model and large-eddy simulations. The flow is generated by the Taylor-Green forcing; it combines a well defined structure at large scales and turbulent fluctuations at small scales. Our main findings are (i) dynamos are observed from P_M = 1 down to P_M = 10^-2, (ii) the critical magnetic Reynolds number increases sharply with P_M^-1 as turbulence sets in and then it saturates, and (iii) in the linear growth phase, unstable magnetic modes move to smaller scales as P_M is decreased. Then the dynamo grows at large scales and modifies the turbulent velocity fluctuations.
Neutron star dynamics under time-dependent external torques
NASA Astrophysics Data System (ADS)
Gügercinoǧlu, Erbil; Alpar, M. Ali
2017-11-01
The two-component model describes neutron star dynamics incorporating the response of the superfluid interior. Conventional solutions and applications involve constant external torques, as appropriate for radio pulsars on dynamical time-scales. We present the general solution of two-component dynamics under arbitrary time-dependent external torques, with internal torques that are linear in the rotation rates, or with the extremely non-linear internal torques due to vortex creep. The two-component model incorporating the response of linear or non-linear internal torques can now be applied not only to radio pulsars but also to magnetars and to neutron stars in binary systems, with strong observed variability and noise in the spin-down or spin-up rates. Our results allow the extraction of the time-dependent external torques from the observed spin-down (or spin-up) time series, Ω̇(t). Applications are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Guang J.; Zurovac-Jevtic, Dance; Boer, Erwin R.
1999-10-01
A Lagrangian cloud classification algorithm is applied to the cloud fields in the tropical Pacific simulated by a high-resolution regional atmospheric model. The purpose of this work is to assess the model's ability to reproduce the observed spatial characteristics of the tropical cloud systems. The cloud systems are broadly grouped into three categories: deep clouds, mid-level clouds and low clouds. The deep clouds are further divided into mesoscale convective systems and non-mesoscale convective systems. It is shown that the model is able to simulate the total cloud cover for each category reasonably well. However, when the cloud cover is broken down into contributions from cloud systems of different sizes, it is shown that the simulated cloud size distribution is biased toward large cloud systems, with the contribution from relatively small cloud systems significantly under-represented in the model for both deep and mid-level clouds. The number distribution and area contribution to the cloud cover from mesoscale convective systems are very well simulated compared to the satellite observations, as are low clouds. The dependence of the cloud physical properties on cloud scale is examined. It is found that cloud liquid water path, rainfall, and ocean surface sensible and latent heat fluxes have a clear dependence on cloud type and scale. This is of particular interest to studies of the cloud effects on the surface energy budget and hydrological cycle. The diurnal variation of the cloud population and area is also examined. The model exhibits a varying degree of success in simulating the diurnal variation of the cloud number and area. The observed early morning maximum cloud cover in deep convective cloud systems is qualitatively simulated. However, the afternoon secondary maximum is missing in the model simulation. The diurnal variation of the tropospheric temperature is well reproduced by the model, while simulation of the diurnal variation of the moisture field is poor. The implications of this comparison between model simulation and observations for cloud parameterization are discussed.
NASA Astrophysics Data System (ADS)
Sklar, L. S.; Mahmoudi, M.
2016-12-01
Landscape evolution models rarely represent sediment size explicitly, despite the importance of sediment size in regulating rates of bedload sediment transport, river incision into bedrock, and many other processes in channels and on hillslopes. A key limitation has been the lack of a general model for predicting the size of sediments produced on hillslopes and supplied to channels. Here we present a framework for such a model, as a first step toward building a `geomorphic transport law' that balances mechanistic realism with computational simplicity and is widely applicable across diverse landscapes. The goal is to take as inputs landscape-scale boundary conditions such as lithology, climate and tectonics, and predict the spatial variation in the size distribution of sediments supplied to channels across catchments. The model framework has two components. The first predicts the initial size distribution of particles produced by erosion of bedrock underlying hillslopes, while the second accounts for the effects of physical and chemical weathering during transport down slopes and delivery to channels. The initial size distribution can be related to the spacing and orientation of fractures within bedrock, which depend on the stresses and deformation experienced during exhumation and on rock resistance to fracture propagation. Other controls on initial size include the sizes of mineral grains in crystalline rocks, the sizes of cemented particles in clastic sedimentary rocks, and the potential for characteristic size distributions produced by tree throw, frost cracking, and other erosional processes. To model how weathering processes transform the initial size distribution we consider the effects of erosion rate and the thickness of soil and weathered bedrock on hillslope residence time. Residence time determines the extent of size reduction, for given values of model terms that represent the potential for chemical and physical weathering. 
Chemical weathering potential is parameterized in terms of mean annual precipitation and temperature, and the fraction of soluble minerals. Physical weathering potential can be parameterized in terms of topographic attributes, including slope, curvature and aspect. Finally, we compare model predictions with field data from Inyo Creek in the Sierra Nevada Mountains, USA.
Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...
2017-09-14
Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.
Challenges in Modeling of the Global Atmosphere
NASA Astrophysics Data System (ADS)
Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom
2015-04-01
The massively parallel computer architectures require that some widely adopted modeling paradigms be reconsidered in order to utilize the power of parallel processing more productively. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain, and exchange only a few rows of halo data with the neighbouring cores. However, the described scenario implies that the discretization used in the model is horizontally local. The spherical geometry further complicates the problem. Various grid topologies will be discussed and examples will be shown. The latitude-longitude grid with local-in-space and explicit-in-time differencing was an early choice and has remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of reasonable size. However, the polar filtering requires transpositions involving extra communications. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for a wide application of the spectral representation. With some variations, these techniques are used in most major centers. However, the horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with a fast Fourier transform represents a significant step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago.
However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Because their scales are comparable to those of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Having in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB) that is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS) will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models. The computational efficiency of the global NMMB on parallel computers is good.
Establishment and assessment of code scaling capability
NASA Astrophysics Data System (ADS)
Lim, Jaehyok
In this thesis, a method for using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that has been used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small break loss of coolant (SB LOCA) accident scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS as an emergency core cooling system provided adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by the Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system that is composed of Drywell (DW) and Wetwell (WW) below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactor (LWR). 
The major components of the CSAU methodology that were highlighted particularly focused on the scaling issues of experiments and models and their applicability to the nuclear power plant transient and accidents. The major thermal-hydraulic phenomena to be analyzed were identified and the predictive models adopted in RELAP5/MOD3.3 (Patch03) code were briefly reviewed.
NASA Technical Reports Server (NTRS)
Daileda, J. J.; Marroquin, J.; Rogers, C. E.
1976-01-01
A hypersonic shock tunnel test on a 0.010 scale SSV orbital configuration was performed to determine the effects of RCS jet/flow field interactions on SSV aerodynamic stability and control characteristics at various hypersonic Mach and Reynolds numbers. Flow field interaction data were obtained using pitch and roll jets. In addition, direct impingement data were obtained at a Mach number of zero with the test section pumped down to below 10 microns of mercury pressure.
Concepts and models of coupled systems
NASA Astrophysics Data System (ADS)
Ertsen, Maurits
2017-04-01
In this paper, I will especially focus on the question of the position of human agency, social networks and complex co-evolutionary interactions in socio-hydrological models. The long-term perspective of complex systems' modeling typically focuses on regional or global spatial scales and century/millennium time scales. It is still a challenge to relate correlations in outcomes defined at those longer and larger scales to the causalities at the shorter and smaller scales. How do we move today to the next 1000 years in the same way that our ancestors moved from their today to our present, in the small steps that produce reality? Please note, I am not arguing that long-term work is uninteresting or the like. I just pose the question of how to deal with the problem that we employ relations with hindsight that matter to us, but not necessarily to the agents that produced the relations we think we have observed. I would like to push the socio-hydrological community a little into rethinking how to deal with complexity, with the aim of bringing together the timescales of humans and complexity. I will provide one or two examples of how larger-scale and longer-term observations on water flows and environmental loads can be broken down into smaller-scale and shorter-term production processes of these same loads.
NASA Astrophysics Data System (ADS)
Jones, D. B. A.; Deng, F.; Walker, T. W.; Keller, M.; Bowman, K. W.; Nassar, R.
2014-12-01
The upper troposphere and lower stratosphere (UTLS) represents a transition region between the more dynamically active troposphere and more stably stratified stratosphere. The processes that influence the distribution of atmospheric constituents in the UTLS occur on small vertical scales that are a challenge for models to reliably capture. As a consequence, models typically underestimate the mean age of air in the lowermost stratosphere, reflecting excessive vertical transport and/or mixing in the region. Using the GEOS-Chem global chemical transport model, we quantify the potential impact of discrepancies in vertical transport in the UTLS on inferred sources and sinks of atmospheric CO2. Comparisons of the modeled CO2 and O3 in the polar UTLS with data from the HIAPER Pole-to-Pole Observations (HIPPO) campaign show that the model overestimates CO2 and underestimates O3 in the region. Using the observed CO2/O3 correlations in the UTLS, we correct the modeled CO2 in the Arctic UTLS (primarily between the 320 K and 360 K isentropic surfaces) and quantify the impact of the CO2 correction on the flux estimates using the GEOS-Chem data assimilation system together with XCO2 data from the Greenhouse Gases Observing Satellite (GOSAT). As a result of isentropic transport, the correction is transported down into the subtropical troposphere, where it impacts the regional flux estimates. Our results suggest that discrepancies in mixing in the UTLS could bias the latitudinal distribution of the inferred CO2 fluxes.
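The tracer-tracer correction described above can be illustrated schematically: fit the observed CO2-O3 relationship in the UTLS, then use it to replace the biased modeled CO2 in the 320-360 K layer. A hedged sketch with invented placeholder numbers (not HIPPO values, and a simplification of the actual assimilation machinery):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" UTLS samples with an anticorrelated CO2/O3 relationship.
o3_obs = rng.uniform(100.0, 400.0, 50)                    # ppbv
co2_obs = 395.0 - 0.02 * o3_obs + rng.normal(0, 0.3, 50)  # ppm, invented slope

# Fit the observed correlation: CO2 = a + b * O3.
b, a = np.polyfit(o3_obs, co2_obs, 1)

# Apply the fitted relation at hypothetical model grid points to replace
# a (biased-high) modeled CO2 field between the 320 K and 360 K surfaces.
o3_model = np.array([150.0, 250.0, 350.0])
co2_corrected = a + b * o3_model
print(np.round(co2_corrected, 1))  # corrected CO2 decreases as O3 increases
```

The corrected field would then be carried by the transport model, which is how the correction reaches the subtropical troposphere in the study.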
Development of the Ghent Multidimensional Somatic Complaints Scale
ERIC Educational Resources Information Center
Beirens, Koen; Fontaine, Johnny R. J.
2010-01-01
The present study aimed at developing a new scale that operationalizes a hierarchical model of somatic complaints. First, 63 items representing a wide range of symptoms and sensations were compiled from somatic complaints scales and emotion literature. These complaints were rated by Belgian students (n = 307) and Belgian adults (n = 603).…
Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2002-01-01
A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into ones of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft is introduced from the High Speed Research (HSR) program to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see if the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an active-controlled wind-tunnel model to demonstrate the utility of the scaling process.
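The frequency-scaling check described above can be illustrated on a toy modal system: apply mass and stiffness scale factors to full-scale matrices and verify that the natural frequencies (and hence root magnitudes) scale by sqrt(λ_K/λ_M). The 2-DOF matrices and factor values below are hypothetical, not the HSR model's:

```python
import numpy as np

# Hypothetical full-scale modal mass and stiffness matrices.
M_full = np.diag([100.0, 250.0])                 # kg
K_full = np.array([[4.0e5, -1.0e5],
                   [-1.0e5, 3.0e5]])             # N/m

# Illustrative scale factors: a 1/10-length model with mass scaling
# lam_m = lam_L**3 and stiffness scaling lam_k = lam_L**2 (assumed, not
# the paper's values).
lam_L = 0.1
lam_m, lam_k = lam_L**3, lam_L**2

M_model = lam_m * M_full
K_model = lam_k * K_full

def nat_freqs(M, K):
    # Natural frequencies (rad/s) from the generalized eigenproblem K x = w^2 M x.
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(w2.real))

lam_freq = np.sqrt(lam_k / lam_m)  # predicted frequency scale factor
print(nat_freqs(M_model, K_model) / nat_freqs(M_full, K_full))
# each ratio equals lam_freq = sqrt(10), since scaling M and K uniformly
# rescales every eigenvalue of M^-1 K by lam_k/lam_m exactly
```

This is the matrix-operation form of the verification the paper performs on root-locus patterns: if the model is scaled consistently, every root magnitude moves by the same frequency scale factor.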
Depression, mood state, and back pain during microgravity simulated by bed rest
NASA Technical Reports Server (NTRS)
Styf, J. R.; Hutchinson, K.; Carlsson, S. G.; Hargens, A. R.
2001-01-01
OBJECTIVE: The objective of this study was to develop a ground-based model for spinal adaptation to microgravity and to study the effects of spinal adaptation on depression, mood state, and pain intensity. METHODS: We investigated back pain, mood state, and depression in six subjects, all of whom were exposed to microgravity, simulated by two forms of bed rest, for 3 days. One form consisted of bed rest with 6 degrees of head-down tilt and balanced traction, and the other consisted of horizontal bed rest. Subjects had a 2-week period of recovery between the studies. The effects of bed rest on pain intensity in the lower back, depression, and mood state were investigated. RESULTS: Subjects experienced significantly more intense lower back pain, lower hemisphere abdominal pain, headache, and leg pain during head-down tilt bed rest. They had higher scores on the Beck Depression Inventory (ie, were more depressed) and significantly lower scores on the activity scale of the Bond-Lader questionnaire. CONCLUSIONS: Bed rest with 6 degrees of head-down tilt may be a better experimental model than horizontal bed rest for inducing the pain and psychosomatic reactions experienced in microgravity. Head-down tilt with balanced traction may be a useful method to induce low back pain, mood changes, and altered self-rated activity level in bed rest studies.
NASA Astrophysics Data System (ADS)
Medici, Giacomo; West, L. J.; Mountney, N. P.
2018-03-01
Fluvial sedimentary successions represent porous media that host groundwater and geothermal resources. Additionally, they overlie crystalline rocks hosting nuclear waste repositories in rift settings. The permeability characteristics of an arenaceous fluvial succession, the Triassic St Bees Sandstone Formation in England (UK), are described, from core-plug to well-test scale up to 1 km depth. Within such lithified successions, dissolution associated with the circulation of meteoric water results in increased permeability (K ≈ 10^-1-10^0 m/day) to depths of at least 150 m below ground level (BGL) in aquifer systems that are subject to rapid groundwater circulation. Thus, contaminant transport is likely to occur at relatively high rates. In a deeper investigation (> 150 m depth), where the aquifer has not been subjected to rapid groundwater circulation, well-test-scale hydraulic conductivity is lower, decreasing from K ≈ 10^-2 m/day at 150-400 m BGL to ≈ 10^-3 m/day down-dip at 1 km BGL, where the pore fluid is hypersaline. Here, pore-scale permeability becomes progressively dominant with increasing lithostatic load. Notably, this work investigates a sandstone aquifer of fluvial origin at investigation depths consistent with high-enthalpy geothermal reservoirs (~0.7-1.1 km). At such depths, intergranular flow dominates in unfaulted areas, with only a minor contribution from bedding-plane fractures. However, extensional faults represent preferential flow pathways, due to the presence of highly connective open fractures. Therefore, such faults may (1) drive nuclear waste contaminants towards the highly permeable shallow (< 150 m BGL) zone of the aquifer, and (2) influence fluid recovery in geothermal fields.
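The depth trend of hydraulic conductivity reported above can be captured by a simple log-linear interpolation. The sketch below uses only the order-of-magnitude values from the abstract; the node depths, the shallow-zone value, and the interpolation itself are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

# Order-of-magnitude well-test conductivities (m/day) vs depth (m BGL),
# read off the abstract: dissolution-enhanced above ~150 m, then
# declining to ~1e-3 m/day at 1 km.
depth_pts = np.array([0.0, 150.0, 400.0, 1000.0])
log10_K = np.array([-0.5, -0.5, -2.0, -3.0])  # shallow value assumed mid-range

def K_of_depth(z):
    """Log-linear interpolation of hydraulic conductivity with depth.

    The shallow (< 150 m BGL) dissolution-enhanced zone is held at a
    single representative value; the 150-400 m segment is a smooth
    transition. Illustrative only.
    """
    return 10.0 ** np.interp(z, depth_pts, log10_K)

for z in (100, 150, 400, 700, 1000):
    print(f"{z:5d} m BGL  K ~ {K_of_depth(z):.1e} m/day")
```

Such a depth-decay profile is the kind of input a flow model would need to represent the transition from the rapidly circulating shallow aquifer to the hypersaline deep zone.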
MacLeod, Miles; Nersessian, Nancy J
2015-02-01
In this paper we draw upon rich ethnographic data of two systems biology labs to explore the roles of explanation and understanding in large-scale systems modeling. We illustrate practices that depart from the goal of dynamic mechanistic explanation for the sake of more limited modeling goals. These processes use abstract mathematical formulations of bio-molecular interactions and data fitting techniques which we call top-down abstraction to trade away accurate mechanistic accounts of large-scale systems for specific information about aspects of those systems. We characterize these practices as pragmatic responses to the constraints many modelers of large-scale systems face, which in turn generate more limited pragmatic non-mechanistic forms of understanding of systems. These forms aim at knowledge of how to predict system responses in order to manipulate and control some aspects of them. We propose that this analysis of understanding provides a way to interpret what many systems biologists are aiming for in practice when they talk about the objective of a "systems-level understanding." Copyright © 2014 Elsevier Ltd. All rights reserved.
The relationship between happiness and health: evidence from Italy.
Sabatini, Fabio
2014-08-01
We test the relationship between happiness and self-rated health in Italy. The analysis relies on a unique dataset collected through the administration of a questionnaire to a representative sample (n = 817) of the population of the Italian Province of Trento in March 2011. Based on probit regressions and instrumental variables estimates, we find that happiness is strongly correlated with perceived good health, after controlling for a number of relevant socio-economic phenomena. Health inequalities based on income, work status and education are relatively contained with respect to the rest of Italy. As expected, this scales down the role of social relationships. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sculpting Mountains: Interactive Terrain Modeling Based on Subsurface Geology.
Cordonnier, Guillaume; Cani, Marie-Paule; Benes, Bedrich; Braun, Jean; Galin, Eric
2018-05-01
Most mountain ranges are formed by the compression and folding of colliding tectonic plates. Subduction of one plate causes large-scale asymmetry, while their layered composition (or stratigraphy) explains the multi-scale folded strata observed on real terrains. We introduce a novel interactive modeling technique to generate visually plausible, large-scale terrains that capture these phenomena. Our method draws both on geological knowledge for consistency and on sculpting systems for user interaction. The user is given hands-on control over the shape and motion of tectonic plates, represented using a new geologically inspired model of the Earth's crust. The model captures their volume-preserving and complex folding behaviors under collision, causing mountains to grow. It generates a volumetric uplift map representing the growth rate of subsurface layers. Erosion and uplift movement are jointly simulated to generate the terrain. The stratigraphy allows us to render folded strata on eroded cliffs. We validated the usability of our sculpting interface through a user study, and compared the visual consistency of the Earth-crust model with geological simulation results and real terrains.
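The joint uplift-and-erosion time stepping the abstract describes can be illustrated with a minimal 1-D sketch (a bell-shaped uplift-rate map combined with diffusive hillslope erosion; the grid size, rates, and the diffusion erosion law are assumptions for illustration, standing in for the paper's full volumetric simulation):

```python
import numpy as np

def grow_range(n=128, steps=2000, dt=1.0, diffusivity=0.2):
    """Jointly apply a fixed uplift-rate map and diffusive erosion to a 1-D profile."""
    x = np.arange(n)
    uplift = 1e-3 * np.exp(-((x - n / 2) / (n / 8)) ** 2)  # bell-shaped uplift rate
    h = np.zeros(n)
    for _ in range(steps):
        lap = np.roll(h, 1) - 2 * h + np.roll(h, -1)  # discrete Laplacian
        h += dt * (uplift + diffusivity * lap)        # uplift vs. hillslope erosion
        h[0] = h[-1] = 0.0                            # fixed base level at the edges
    return h
```

The explicit diffusion step is stable here because `diffusivity * dt < 0.5`; the profile grows fastest where the uplift map peaks while erosion smooths the flanks, which is the same competition the paper resolves (with a far richer crust and erosion model) at every simulation step.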
Learning potential and cognitive abilities in preschool boys with fragile X and Down syndrome.
Valencia-Naranjo, Nieves; Robles-Bello, Mª Auxiliadora
2017-01-01
Enhancing cognitive abilities is relevant when devising treatment plans. This study examined the performance of preschool boys with Down syndrome and fragile X syndrome on cognitive tasks (e.g., nonverbal reasoning and short-term memory), as well as their capacity to improve these functions by means of a learning-potential methodology. The basic scales of the Skills and Learning Potential Preschool Scale were administered to children with Down syndrome and to children with fragile X syndrome, matched for chronological age and nonverbal cognitive development level. The fragile X syndrome group showed stronger performance on short-term memory tasks than the Down syndrome group prior to intervention, with no differences recorded on nonverbal reasoning tasks. In addition, both groups' cognitive performance improved significantly between pre- and post-intervention. However, learning potential relative to auditory memory was limited in both groups, as it was for rule-based categorization in the Down syndrome children. The scale offered the opportunity to assess young children's abilities and identify the degree of cognitive modifiability. Furthermore, factors that may potentially affect the children's performance before and during learning-potential assessment are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Doherty, Paul; Rathjen, Don
This book contains scaled-down versions of Exploratorium exhibits that teachers can make using common, inexpensive, easily available materials. Each topic begins with a drawing of the original full-sized exhibit on the museum floor, a photograph of the scaled-down version which contains an introduction to the exhibit, a list of materials needed…
Coarse-grained Brownian ratchet model of membrane protrusion on cellular scale.
Inoue, Yasuhiro; Adachi, Taiji
2011-07-01
Membrane protrusion is a mechanochemical process of active membrane deformation driven by actin polymerization. Previously, the Brownian ratchet (BR) was modeled on the basis of the underlying molecular mechanism. However, because the BR requires the load as an a priori input, which cannot be determined without knowledge of the cell shape, it is not suited to studies in which the resultant shapes themselves are to be solved. Other cellular-scale models describing protrusion have also been suggested for modeling a whole cell; however, these models were not developed on the basis of coarse-grained physics representing the underlying molecular mechanism. Therefore, to express membrane protrusion on the cellular scale, we propose a novel mathematical model, the coarse-grained BR (CBR), derived on the basis of nonequilibrium thermodynamics. The CBR reproduces the BR in the limit of a quasistatic protrusion process and can estimate the protrusion velocity consistently, using an effective elastic constant that represents the energy state of the membrane. Finally, to demonstrate the applicability of the CBR, we perform a cellular-scale simulation of a migrating keratocyte in which the proposed CBR serves as the membrane protrusion model. The results show that the experimentally observed shapes of the leading edge are well reproduced by the simulation. In addition, the trends in the dependence of the protrusion velocity on the curvature of the leading edge, the temperature, and the substrate stiffness also agreed with other experimental results. Thus, the CBR can be considered an appropriate cellular-scale model to express membrane protrusion on the basis of its underlying molecular mechanism.
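For context, the molecular-scale BR that the CBR coarse-grains has a standard closed form for the load-velocity relation; a sketch of the classical polymerization-ratchet formula (Peskin, Odell & Oster, 1993) follows, with illustrative rate constants and monomer size (this is the classical BR, not the authors' CBR):

```python
import math

KB_T = 4.11e-21  # thermal energy k_B * T at ~298 K, in joules

def ratchet_velocity(force, k_on=100.0, k_off=1.0, monomer_conc=10.0, delta=2.7e-9):
    """Classical polymerization-ratchet velocity under a load opposing protrusion.

    force: load on the membrane (N); delta: filament length added per monomer (m);
    k_on * monomer_conc and k_off: assumed on/off rates (1/s). Velocity falls
    exponentially with load and reaches stall when on- and off-rates balance.
    """
    return delta * (k_on * monomer_conc * math.exp(-force * delta / KB_T) - k_off)
```

At zero load the filament grows at its free polymerization speed; the velocity decays exponentially as the load rises and crosses zero at the stall force, the same qualitative load dependence the CBR must recover in its quasistatic limit.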