NASA Technical Reports Server (NTRS)
Metschan, Stephen L.; Wilden, Kurtis S.; Sharpless, Garrett C.; Andelman, Rich M.
1993-01-01
Textile manufacturing processes offer potential cost and weight advantages over traditional composite materials and processes for transport fuselage elements. In the current study, design cost modeling relationships between textile processes and element design details were developed. Such relationships are expected to help future aircraft designers to make timely decisions on the effect of design details and overall configurations on textile fabrication costs. The fundamental advantage of a design cost model is to ensure that the element design is cost effective for the intended process. Trade studies on the effects of processing parameters also help to optimize the manufacturing steps for a particular structural element. Two methods of analyzing design detail/process cost relationships developed for the design cost model were pursued in the current study. The first makes use of existing databases and alternative cost modeling methods (e.g. detailed estimating). The second compares design cost model predictions with data collected during the fabrication of seven-foot circumferential frames for ATCAS crown test panels. The process used in this case involves 2D dry braiding and resin transfer molding of curved 'J' cross-section frame members having design details characteristic of the baseline ATCAS crown design.
Derivation of Markov processes that violate detailed balance
NASA Astrophysics Data System (ADS)
Lee, Julian
2018-03-01
Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
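For reference, the detailed balance condition and the widely accepted entropy-production expression that the paper reinterprets can be written as follows; this is the standard (Schnakenberg) formulation in our own notation, not an excerpt from the paper:

```latex
% Detailed balance: stationary distribution \pi_i and transition rates w_{ij}
\pi_i \, w_{ij} = \pi_j \, w_{ji} \qquad \text{for all pairs } (i,j)

% Widely used entropy-production rate for a Markov process with
% instantaneous distribution p_i(t); it vanishes exactly when detailed
% balance holds:
\dot{S}(t) = \frac{1}{2} \sum_{i,j}
  \bigl[ p_i(t)\, w_{ij} - p_j(t)\, w_{ji} \bigr]
  \ln \frac{p_i(t)\, w_{ij}}{p_j(t)\, w_{ji}} \;\ge\; 0
```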
Humbird, David; Trendewicz, Anna; Braun, Robert; ...
2017-01-12
A biomass fast pyrolysis reactor model with detailed reaction kinetics and one-dimensional fluid dynamics was implemented in an equation-oriented modeling environment (Aspen Custom Modeler). Portions of this work were detailed in previous publications; further modifications have been made here to improve stability and reduce execution time of the model to make it compatible for use in large process flowsheets. The detailed reactor model was integrated into a larger process simulation in Aspen Plus and was stable for different feedstocks over a range of reactor temperatures. Sample results are presented that indicate general agreement with experimental results, but with higher gas losses caused by stripping of the bio-oil by the fluidizing gas in the simulated absorber/condenser. Lastly, this integrated modeling approach can be extended to other well-defined, predictive reactor models for fast pyrolysis, catalytic fast pyrolysis, as well as other processes.
Detailed Modeling of Distillation Technologies for Closed-Loop Water Recovery Systems
NASA Technical Reports Server (NTRS)
Allada, Rama Kumar; Lange, Kevin E.; Anderson, Molly S.
2011-01-01
Detailed chemical process simulations are a useful tool in designing and optimizing complex systems and architectures for human life support. Dynamic and steady-state models of these systems help contrast the interactions of various operating parameters and hardware designs, which become extremely useful in trade-study analyses. NASA's Exploration Life Support technology development project recently made use of such models to complement a series of tests on different waste water distillation systems. This paper presents efforts to develop chemical process simulations for three technologies: the Cascade Distillation System (CDS), the Vapor Compression Distillation (VCD) system, and the Wiped-Film Rotating Disk (WFRD), using the Aspen Custom Modeler and Aspen Plus process simulation tools. The paper discusses system design, modeling details, and modeling results for each technology and presents some comparisons between the model results and recent test data. Following these initial comparisons, some general conclusions and forward work are discussed.
Idealized simulation of the Colorado hailstorm case: comparison of bulk and detailed microphysics
NASA Astrophysics Data System (ADS)
Geresdi, I.
One of the purposes of the Fourth Cloud Modeling Workshop was to compare different microphysical treatments. In this paper, the results of a widely used bulk treatment and five versions of a detailed microphysical model are presented. A sensitivity analysis was performed to investigate the effects of bulk parametrization, ice initiation technique, CCN concentration, and the collision efficiency of rimed ice crystal-drop collisions. The results show that: (i) The mixing ratios of different species of hydrometeors calculated by the bulk model and one of the detailed models show some similarity; however, the processes of hail/graupel formation are different in the bulk and detailed models. (ii) With different ice initiation techniques in the detailed models, different processes became important in hail and graupel formation. (iii) In the case of higher CCN concentration, the mixing ratios of liquid water, hail and graupel were more sensitive to the value of the collision efficiency of rimed ice crystal-drop collisions. (iv) The Bergeron-Findeisen process does not work in the updraft core of a convective cloud. The vapor content was always over water saturation; moreover, the supersaturation gradually increased after the appearance of precipitation ice particles.
Recognition Errors Suggest Fast Familiarity and Slow Recollection in Rhesus Monkeys
ERIC Educational Resources Information Center
Basile, Benjamin M.; Hampton, Robert R.
2013-01-01
One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses…
DOT National Transportation Integrated Search
1981-07-01
The Detailed Station Model (DSM) is a discrete event model representing the interrelated queueing processes associated with vehicle and passenger activities in an AGT station. The DSM will provide operational and performance measures of alternative s...
The calculation of theoretical chromospheric models and the interpretation of the solar spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1994-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for nonradiative heating, and for solar activity in general.
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1993-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the Sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for non-radiative heating, and for solar activity in general.
Using artificial neural networks to model aluminium based sheet forming processes and tools details
NASA Astrophysics Data System (ADS)
Mekras, N.
2017-09-01
In this paper, a methodology and a software system are presented for using Artificial Neural Networks (ANNs) to model aluminium-based sheet forming processes. ANN models are created by training the networks on experimental, trial and historical records of process inputs and outputs. Such models are useful when mathematical models of a process are insufficiently accurate, ill defined or missing, e.g. for complex product shapes, new material alloys, new process requirements or micro-scale products. Usually, after the forming tools (die, punch, etc.) have been designed and modeled, and before mass production, a set of shop-floor trials takes place to finalize process and tool details such as minimum tool radii, die/punch clearance, press speed and process temperature, in relation to the material type, the sheet thickness and the quality achieved in the trials. Using data from the shop-floor trials together with forming-theory data, ANN models can be trained and then used to estimate the final process and tool details, thereby supporting efficient set-up of processes and tools before mass production starts. The proposed ANN methodology and the respective software system are implemented within the EU H2020 project LoCoMaTech for the aluminium-based sheet forming process HFQ (solution Heat treatment, cold die Forming and Quenching).
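As a sketch of the kind of ANN surrogate described above, the snippet below trains a small feed-forward network on invented trial records (sheet thickness, die temperature, press speed, punch radius mapped to a quality score). All variable names, ranges and data are hypothetical placeholders, not the LoCoMaTech system:

```python
# Minimal sketch: ANN surrogate for a sheet-forming process,
# trained on (hypothetical) shop-floor trial records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical trials: [sheet_thickness_mm, die_temp_C, press_speed_mm_s, punch_radius_mm]
X = rng.uniform([1.0, 350.0, 5.0, 2.0], [3.0, 500.0, 50.0, 10.0], size=(200, 4))
# Invented quality score standing in for measured part quality.
y = 1.0 / (1.0 + np.exp(-(X[:, 1] / 100.0 - X[:, 2] / 25.0 - 1.0 / X[:, 3])))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

# Query the surrogate for a candidate tool/process set-up.
candidate = np.array([[2.0, 450.0, 20.0, 5.0]])
print("predicted quality:", model.predict(candidate)[0])
```

In practice the training set would come from the shop-floor trials and forming-theory data the abstract describes, and the trained surrogate would be queried when finalizing tool details before mass production.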
Direct Scaling of Leaf-Resolving Biophysical Models from Leaves to Canopies
NASA Astrophysics Data System (ADS)
Bailey, B.; Mahaffee, W.; Hernandez Ochoa, M.
2017-12-01
Recent advances in the development of biophysical models and high-performance computing have enabled rapid increases in the level of detail that can be represented by simulations of plant systems. However, increasingly detailed models typically require increasingly detailed inputs, which can be a challenge to accurately specify. In this work, we explore the use of terrestrial LiDAR scanning data to accurately specify geometric inputs for high-resolution biophysical models that enables direct up-scaling of leaf-level biophysical processes. Terrestrial LiDAR scans generate "clouds" of millions of points that map out the geometric structure of the area of interest. However, points alone are often not particularly useful in generating geometric model inputs, as additional data processing techniques are required to provide necessary information regarding vegetation structure. A new method was developed that directly reconstructs as many leaves as possible that are in view of the LiDAR instrument, and uses a statistical backfilling technique to ensure that the overall leaf area and orientation distribution matches that of the actual vegetation being measured. This detailed structural data is used to provide inputs for leaf-resolving models of radiation, microclimate, evapotranspiration, and photosynthesis. Model complexity is afforded by utilizing graphics processing units (GPUs), which allows for simulations that resolve scales ranging from leaves to canopies. The model system was used to explore how heterogeneity in canopy architecture at various scales affects scaling of biophysical processes from leaves to canopies.
Development and testing of a fast conceptual river water quality model.
Keupers, Ingrid; Willems, Patrick
2017-04-15
Modern, model-based river water quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is computationally too demanding, especially when simulating the long time periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFRs) and Continuously Stirred Tank Reactors (CSTRs) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs. Copyright © 2017 Elsevier Ltd. All rights reserved.
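A minimal sketch of the reservoir idea, assuming a branch represented by three CSTRs in series with first-order decay; residence times, rate constant and inflow concentration are invented, and the paper's structure identification and PFR/CSTR combination are richer than this:

```python
# Minimal sketch: river branch as CSTRs in series with first-order decay.
import numpy as np
from scipy.integrate import solve_ivp

tau = np.array([0.5, 1.0, 0.8])   # reservoir residence times (days), assumed
k = 0.3                            # first-order decay rate (1/day), assumed
c_in = 10.0                        # upstream concentration (mg/L), assumed

def rhs(t, c):
    # dC_i/dt = (C_{i-1} - C_i)/tau_i - k*C_i, with C_0 = c_in
    upstream = np.concatenate(([c_in], c[:-1]))
    return (upstream - c) / tau - k * c

sol = solve_ivp(rhs, (0.0, 10.0), y0=np.zeros(3))
print("downstream concentration after 10 days:", sol.y[-1, -1])
```

In the actual method the residence times would be derived from the hydrodynamic simulation and calibrated against the detailed water quality model, rather than chosen by hand as here.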
The Role of Empirical Evidence in Modeling Speech Segmentation
ERIC Educational Resources Information Center
Phillips, Lawrence
2015-01-01
Choosing specific implementational details is one of the most important aspects of creating and evaluating a model. In order to properly model cognitive processes, choices for these details must be made based on empirical research. Unfortunately, modelers are often forced to make decisions in the absence of relevant data. My work investigates the…
Devil is in the details: Using logic models to investigate program process.
Peyton, David J; Scicchitano, Michael
2017-12-01
Theory-based logic models are commonly developed as part of requirements for grant funding. As a tool to communicate complex social programs, theory-based logic models are an effective form of visual communication. However, after initial development, theory-based logic models are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory-driven logic model and developing detailed logic models that describe key activities, so as to accurately reflect the program and assist in effective program management. The authors use a funded special education teacher preparation program to exemplify the utility of drill-down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dynamic Modeling of Process Technologies for Closed-Loop Water Recovery Systems
NASA Technical Reports Server (NTRS)
Allada, Rama Kumar; Lange, Kevin; Anderson, Molly
2011-01-01
Detailed chemical process simulations are a useful tool in designing and optimizing complex systems and architectures for human life support. Dynamic and steady-state models of these systems help contrast the interactions of various operating parameters and hardware designs, which become extremely useful in trade-study analyses. NASA's Exploration Life Support technology development project recently made use of such models to complement a series of tests on different waste water distillation systems. This paper presents dynamic chemical process simulations for primary processor technologies, including the Cascade Distillation System (CDS), the Vapor Compression Distillation (VCD) system, and the Wiped-Film Rotating Disk (WFRD), as well as post-distillation water polishing processes such as the Volatiles Removal Assembly (VRA), that were developed using the Aspen Custom Modeler and Aspen Plus process simulation tools. The results expand upon previous work for water recovery technology models and emphasize dynamic process modeling and results. The paper discusses system design, modeling details, and model results for each technology and presents some comparisons between the model results and available test data. Following these initial comparisons, some general conclusions and forward work are discussed.
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
NASA Astrophysics Data System (ADS)
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
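As a concrete illustration of one of the listed processes, the sketch below simulates a random telegraph signal and estimates its power spectral density with Welch's method; the sample rate and switching rate are arbitrary choices for illustration:

```python
# Minimal sketch: random telegraph signal and its estimated power spectral density.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 1000.0     # sample rate (Hz), assumed
lam = 20.0      # mean switching rate (flips/s), assumed
n = 200_000

# Poisson switching: the state flips in each sample with probability lam/fs.
flips = rng.random(n) < lam / fs
x = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)   # +/-1 telegraph signal

f, pxx = welch(x, fs=fs, nperseg=4096)
# Theory for a +/-1 telegraph wave with flip rate lam:
# S(f) = 4*lam / ((2*lam)**2 + (2*pi*f)**2), a Lorentzian spectrum.
print("estimated PSD at the lowest frequency bins:", pxx[:3])
```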
Using a 3D CAD plant model to simplify process hazard reviews
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolpa, G.
A Hazard and Operability (HAZOP) review is a formal predictive procedure used to identify potential hazard and operability problems associated with certain processes and facilities. The HAZOP procedure takes place several times during the life cycle of the facility. Replacing plastic models, layout and detail drawings with a 3D CAD electronic model provides access to process safety information and a detailed level of plant topology that approaches the visualization capability of the imagination. This paper describes the process used for adding a 3D CAD model to flowsheets and proven computer programs for the conduct of hazard and operability reviews. Using flowsheets and study nodes as a road map for the review, the need for layout and other detail drawings is all but eliminated. Using the 3D CAD model again for a post-P&ID HAZOP supports conformance to layout and safety requirements, provides superior visualization of the plant configuration, and preserves the owner's equity in the design. The responses from the review teams are overwhelmingly in favor of this type of review over a review that uses only drawings. Over the long term, the plant model serves more than just process hazards analysis. Ongoing use of the model can satisfy the required access to process safety information, OSHA documentation, and other legal requirements. In this paper, extensive instructions address the logic for the process hazards analysis and the preparation required to assist anyone who wishes to add the use of a 3D model to their review.
Integration of snow management practices into a detailed snow pack model
NASA Astrophysics Data System (ADS)
Spandre, Pierre; Morin, Samuel; Lafaysse, Matthieu; Lejeune, Yves; François, Hugues; George-Marcelpoil, Emmanuelle
2016-04-01
The management of snow on ski slopes is a key socio-economic and environmental issue in mountain regions. Indeed the winter sports industry has become a very competitive global market, although this economy remains particularly sensitive to weather and snow conditions. The understanding and implementation of snow management in detailed snowpack models is a major step towards a more realistic assessment of the evolution of snow conditions in ski resorts under past, present and future climate conditions. Here we describe in detail the integration of snow management processes (grooming, snowmaking) into the snowpack model Crocus (Spandre et al., Cold Reg. Sci. Technol., in press). The effect of the tiller is explicitly taken into account and its effects on snow properties (density, snow microstructure) are simulated in addition to the compaction induced by the weight of the grooming machine. The production of snow in Crocus is carried out with respect to specific rules and current meteorological conditions. Model configurations and results are described in detail through sensitivity tests on all parameters related to snow management processes. In-situ observations were carried out in four resorts in the French Alps during the 2014-2015 winter season, considering for each resort natural, groomed-only, and groomed plus snowmaking conditions. The model provides realistic simulations of the snowpack properties with respect to these observations. The main uncertainty pertains to the efficiency of the snowmaking process: the observed ratio between the mass of machine-made snow on ski slopes and the water mass used for production was found to be lower than expected from the literature, in every resort. The model, now referred to as "Crocus-Resort", has been proven to provide realistic simulations of snow conditions on ski slopes and may be used for further investigations. Spandre, P., S. Morin, M. Lafaysse, Y. Lejeune, H. François and E. George-Marcelpoil, Integration of snow management processes into a detailed snowpack model, Cold Reg. Sci. Technol., in press.
Dynamic Modeling of Process Technologies for Closed-Loop Water Recovery Systems
NASA Technical Reports Server (NTRS)
Allada, Rama Kumar; Lange, Kevin E.; Anderson, Molly S.
2012-01-01
Detailed chemical process simulations are a useful tool in designing and optimizing complex systems and architectures for human life support. Dynamic and steady-state models of these systems help contrast the interactions of various operating parameters and hardware designs, which become extremely useful in trade-study analyses. NASA's Exploration Life Support technology development project recently made use of such models to complement a series of tests on different waste water distillation systems. This paper presents dynamic chemical process simulations for primary processor technologies, including the Cascade Distillation System (CDS), the Vapor Compression Distillation (VCD) system, and the Wiped-Film Rotating Disk (WFRD), as well as post-distillation water polishing processes such as the Volatiles Removal Assembly (VRA). These dynamic models were developed using the Aspen Custom Modeler® and Aspen Plus® process simulation tools. The results expand upon previous work for water recovery technology models and emphasize dynamic process modeling and results. The paper discusses system design, modeling details, and model results for each technology and presents some comparisons between the model results and available test data. Following these initial comparisons, some general conclusions and forward work are discussed.
Clinical professional governance for detailed clinical models.
Goossen, William; Goossen-Baremans, Anneke
2013-01-01
This chapter describes the need for Detailed Clinical Models for contemporary electronic health systems, data exchange and data reuse. It starts with an explanation of the components related to Detailed Clinical Models, with a brief summary of knowledge representation, including terminologies representing clinically relevant "things" in the real world, and information models that abstract these in order to let computers process data about them. Next, Detailed Clinical Models are defined and their purpose is described. The work builds on existing developments around the world and culminates in current efforts to create a technical specification at the level of the International Organization for Standardization. The core components of properly expressed Detailed Clinical Models are illustrated, including clinical knowledge and context, data element specification, code bindings to terminologies, and meta-information about authors and versioning, among others. Detailed Clinical Models to date are heavily based on user requirements and specify the conceptual and logical levels of modelling. They are not precise enough for specific implementations, which require an additional step; however, this allows Detailed Clinical Models to serve as specifications for many different kinds of implementations. Examples of Detailed Clinical Models are presented both in text and in Unified Modelling Language. Detailed Clinical Models can be positioned in health information architectures, where they serve at the most detailed, granular level. The chapter ends with examples of projects that create and deploy Detailed Clinical Models. All have in common that they can often reuse materials from earlier projects, and that strict governance of these models is essential to use them safely in health care information and communication technology. Clinical validation is one point of such governance, and model testing another. The Plan-Do-Check-Act cycle can be applied for governance of Detailed Clinical Models. Finally, collections of clinical models do require a repository in which they can be stored, searched, and maintained. Governance of Detailed Clinical Models is required at local, national, and international levels.
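To make "data element specification with code bindings to terminologies" concrete, here is one hypothetical rendering of a single Detailed Clinical Model element as a small data structure; the field names and layout are ours, not from any published specification:

```python
# Hypothetical rendering of one Detailed Clinical Model data element.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str          # clinical concept, e.g. "body temperature"
    value_type: str    # e.g. "PQ" (physical quantity)
    units: str         # UCUM unit string
    code_system: str   # terminology the concept is bound to
    code: str          # concept code in that terminology
    version: str = "1.0"   # meta-information: model versioning

body_temp = DataElement(
    name="body temperature",
    value_type="PQ",
    units="Cel",               # UCUM code for degrees Celsius
    code_system="SNOMED CT",
    code="386725007",          # SNOMED CT concept for body temperature
)
print(body_temp)
```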
NASA Technical Reports Server (NTRS)
Mock, W. D.; Latham, R. A.
1982-01-01
The NASTRAN model plan for the wing structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The wing substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.
NASA Technical Reports Server (NTRS)
Mock, W. D.; Latham, R. A.; Tisher, E. D.
1982-01-01
The NASTRAN model plans for the horizontal stabilizer, vertical stabilizer, and nacelle structure were expanded in detail to generate the NASTRAN model for each of these substructures. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. Each substructure model was thoroughly checked out for continuity, connectivity, and constraints. These substructures were processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail models. Finally, a demonstration and validation processing of these substructures was accomplished using the NASTRAN finite element program installed at NASA/DFRC facility.
NASA Technical Reports Server (NTRS)
Mock, W. D.; Latham, R. A.
1982-01-01
The NASTRAN model plan for the fuselage structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The fuselage substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.
van der Wegen, M.; Jaffe, B.E.; Roelvink, J.A.
2011-01-01
This study investigates the possibility of hindcasting observed decadal-scale morphologic change in San Pablo Bay, a subembayment of the San Francisco Estuary, California, USA, by means of a 3-D numerical model (Delft3D). The hindcast period, 1856-1887, is characterized by upstream hydraulic mining that resulted in a high sediment input to the estuary. The model includes wind waves, salt water and fresh water interactions, and graded sediment transport, among others. Simplified initial conditions and hydrodynamic forcing were necessary because detailed historic descriptions were lacking. Model results show significant skill. The river discharge and sediment concentration have a strong positive influence on deposition volumes. Waves decrease deposition rates and have, together with tidal movement, the greatest effect on sediment distribution within San Pablo Bay. The applied process-based (or reductionist) modeling approach is valuable once reasonable values for model parameters and hydrodynamic forcing are obtained. Sensitivity analysis reveals the dominant forcing of the system and suggests that the model planform plays a dominant role in the morphodynamic development. A detailed physical explanation of the model outcomes is difficult because of the high nonlinearity of the processes. Process formulation refinement, a more detailed description of the forcing, or further model parameter variations may lead to an enhanced model performance, albeit to a limited extent. The approach potentially provides a sound basis for prediction of future developments. Parallel use of highly schematized box models and a process-based approach as described in the present work is probably the most valuable method to assess decadal morphodynamic development. Copyright © 2011 by the American Geophysical Union.
ERIC Educational Resources Information Center
Churchman, Kris
2002-01-01
Explains how students can be guided to model the invention process using potatoes. Details the steps and the materials used in the modeling, including the phases of the invention process. Presents this activity as preparation for the Invent America program. (DDR)
Exploring the Processes of Generating LOD (0-2) CityGML Models in the Greater Municipality of Istanbul
NASA Astrophysics Data System (ADS)
Buyuksalih, I.; Isikdag, U.; Zlatanova, S.
2013-08-01
3D models of cities, visualised and explored in 3D virtual environments, have been available for several years. Currently a large number of impressive, realistic 3D models are regularly presented at scientific, professional and commercial events. One of the most promising developments is the OGC standard CityGML. CityGML is an object-oriented model that supports 3D geometry and thematic semantics, attributes and relationships, and offers advanced options for realistic visualization. One of the very attractive characteristics of the model is the support of five levels of detail (LOD), starting from a less accurate 2.5D model (LOD0) and ending with a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e. the tasks and activities) for generating the models differ depending on the utilization purpose, there are also some common tasks (i.e. common-denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD0-LOD2 CityGML models and representing them in a formal way with process modeling diagrams.
NASA Astrophysics Data System (ADS)
Yu, Yang; Zeng, Zheng
2009-10-01
By examining the causes of the high amendment ratio in the implementation of urban regulatory detailed plans in China despite their law-ensured status, the study aims to reconcile the conflict between the legal authority of regulatory detailed planning and the insufficient scientific support for its decision-making and compilation. It does so by introducing GIS-based spatial analysis and 3D modeling into the process, thus presenting a more scientific and flexible approach to regulatory detailed planning in China. The study first points out that the current compilation process of urban regulatory detailed plans in China is mainly empirical, which leaves plans constantly subject to amendment. It then discusses the need for, and current utilization of, GIS in the Chinese system and proposes the framework of a GIS-assisted 3D spatial analysis process from the designer's perspective, which can be regarded as an alternation between descriptive codes and physical design in the compilation of regulatory detailed plans. With a case study of the processes and results from applying the framework, the paper concludes that the proposed framework can be an effective instrument that brings more rationality, flexibility and thus more efficiency to the compilation and decision-making process of urban regulatory detailed plans in China.
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
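As a flavour of the techniques such a textbook covers, here is a minimal explicit finite-difference solution of the classic hillslope diffusion equation ∂z/∂t = D ∂²z/∂x²; the parameter values are arbitrary, and the book's own codes are at the URL above:

```python
# Minimal sketch: explicit finite differences for hillslope diffusion
# dz/dt = D * d2z/dx2, with fixed-elevation boundaries.
import numpy as np

D = 0.01                        # diffusivity (m^2/yr), assumed
dx, dt = 1.0, 10.0              # grid spacing (m) and time step (yr)
assert D * dt / dx**2 <= 0.5    # stability condition for the explicit scheme

x = np.arange(0.0, 101.0, dx)
z = np.where(np.abs(x - 50.0) < 10.0, 5.0, 0.0)   # initial scarp-like bump (m)

for _ in range(5000):           # 50 kyr of landscape relaxation
    z[1:-1] += D * dt / dx**2 * (z[2:] - 2.0 * z[1:-1] + z[:-2])
    z[0], z[-1] = 0.0, 0.0      # fixed-elevation boundaries

print("max elevation after 50 kyr:", z.max())
```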
Klein, Michael T; Hou, Gang; Quann, Richard J; Wei, Wei; Liao, Kai H; Yang, Raymond S H; Campain, Julie A; Mazurek, Monica A; Broadbelt, Linda J
2002-01-01
A chemical engineering approach for the rigorous construction, solution, and optimization of detailed kinetic models for biological processes is described. This modeling capability addresses the required technical components of detailed kinetic modeling, namely, the modeling of reactant structure and composition, the building of the reaction network, the organization of model parameters, the solution of the kinetic model, and the optimization of the model. Even though this modeling approach has enjoyed successful application in the petroleum industry, its application to biomedical research has just begun. We propose to expand the horizons on classic pharmacokinetics and physiologically based pharmacokinetics (PBPK), where human or animal bodies were often described by a few compartments, by integrating PBPK with reaction network modeling described in this article. If one draws a parallel between an oil refinery, where the application of this modeling approach has been very successful, and a human body, the individual processing units in the oil refinery may be considered equivalent to the vital organs of the human body. Even though the cell or organ may be much more complicated, the complex biochemical reaction networks in each organ may be similarly modeled and linked in much the same way as the modeling of the entire oil refinery through linkage of the individual processing units. The integrated chemical engineering software package described in this article, BioMOL, denotes the biological application of molecular-oriented lumping. BioMOL can build a detailed model in 1-1,000 CPU sec using standard desktop hardware. The models solve and optimize using standard and widely available hardware and software and can be presented in the context of a user-friendly interface. We believe this is an engineering tool with great promise in its application to complex biological reaction networks. PMID:12634134
Modeling of the HiPco process for carbon nanotube production. I. Chemical kinetics
NASA Technical Reports Server (NTRS)
Dateo, Christopher E.; Gokcen, Tahir; Meyyappan, M.
2002-01-01
A chemical kinetic model is developed to help understand and optimize the production of single-walled carbon nanotubes via the high-pressure carbon monoxide (HiPco) process, which employs iron pentacarbonyl as the catalyst precursor and carbon monoxide as the carbon feedstock. The model separates the HiPco process into three steps: precursor decomposition, catalyst growth and evaporation, and carbon nanotube production resulting from the catalyst-enhanced disproportionation of carbon monoxide, known as the Boudouard reaction: 2 CO(g) → C(s) + CO2(g). The resulting detailed model contains 971 species and 1948 chemical reactions. A second model with a reduced reaction set containing 14 species and 22 chemical reactions is developed on the basis of the detailed model and reproduces the chemistry of the major species. Results showing the parametric dependence on temperature, total pressure, and initial precursor partial pressures are presented, with comparison between the two models. The reduced model is more amenable to coupled reacting flow-field simulations, presented in the following article.
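The 22-reaction reduced set is not reproduced in the abstract, but the overall catalyst-enhanced Boudouard step can be illustrated with a toy rate model; the rate constant and the assumed linear dependence on catalyst amount below are invented placeholders, not the paper's mechanism:

```python
# Toy sketch of catalyst-enhanced CO disproportionation: 2 CO -> C(s) + CO2.
import numpy as np
from scipy.integrate import solve_ivp

k_b = 1.0e-3   # effective Boudouard rate constant (arbitrary units), assumed

def rhs(t, y):
    co, co2, c_s, fe = y           # CO, CO2, deposited carbon, active Fe clusters
    r = k_b * fe * co**2           # second order in CO, proportional to catalyst
    return [-2.0 * r, r, r, 0.0]   # catalyst amount held fixed in this sketch

y0 = [1.0, 0.0, 0.0, 0.1]          # initial mole amounts, assumed
sol = solve_ivp(rhs, (0.0, 5000.0), y0, rtol=1e-8)
print("CO conversion:", 1.0 - sol.y[0, -1] / y0[0])
```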
USDA-ARS?s Scientific Manuscript database
Process-based modeling provides detailed spatial and temporal information of the soil environment in the shallow seedling recruitment zone across field topography where measurements of soil temperature and water may not sufficiently describe the zone. Hourly temperature and water profiles within the...
Computer model for economic study of unbleached kraft paperboard production
Peter J. Ince
1984-01-01
Unbleached kraft paperboard is produced from wood fiber in an industrial papermaking process. A highly specific and detailed model of the process is presented. The model is also presented as a working computer program. A user of the computer program will provide data on physical parameters of the process and on prices of material inputs and outputs. The program is then...
Three dimensional modeling of cirrus during the 1991 FIRE IFO 2: Detailed process study
NASA Technical Reports Server (NTRS)
Jensen, Eric J.; Toon, Owen B.; Westphal, Douglas L.
1993-01-01
A three-dimensional model of cirrus cloud formation and evolution, including microphysical, dynamical, and radiative processes, was used to simulate cirrus observed in the FIRE Phase 2 Cirrus field program (13 Nov. - 7 Dec. 1991). Sulfate aerosols, solution drops, ice crystals, and water vapor are all treated as interactive elements in the model. Ice crystal size distributions are fully resolved based on calculations of homogeneous freezing of solution drops, growth by water vapor deposition, evaporation, aggregation, and vertical transport. Visible and infrared radiative fluxes, and radiative heating rates are calculated using the two-stream algorithm described by Toon et al. Wind velocities, diffusion coefficients, and temperatures were taken from the MAPS analyses and the MM4 mesoscale model simulations. Within the model, moisture is transported and converted to liquid or vapor by the microphysical processes. The simulated cloud bulk and microphysical properties are shown in detail for the Nov. 26 and Dec. 5 case studies. Comparisons with lidar, radar, and in situ data are used to determine how well the simulations reproduced the observed cirrus. The roles played by various processes in the model are described in detail. The potential modes of nucleation are evaluated, and the importance of small-scale variations in temperature and humidity are discussed. The importance of competing ice crystal growth mechanisms (water vapor deposition and aggregation) are evaluated based on model simulations. Finally, the importance of ice crystal shape for crystal growth and vertical transport of ice are discussed.
Simple model of inhibition of chain-branching combustion processes
NASA Astrophysics Data System (ADS)
Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.
2017-11-01
A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction, combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and leads to a change of reaction mode from the chain-branching reaction to a thermal mode of flame propagation. With increasing inhibitor concentration, a transition from the chain-branching mode of reaction to a straight-chain (non-branching) reaction is observed. The inhibition part of the model includes a block of three reactions describing the influence of the inhibitor. The heat losses are incorporated into the model via Newtonian cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to a thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with inhibitor addition observed using detailed kinetic models.
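A zero-dimensional sketch of the competition the model captures, with branching, termination and inhibition steps and arbitrary rate constants; the actual model additionally includes the thermal one-stage reaction, transport and Newtonian heat loss:

```python
# Toy 0-D sketch: radical growth by chain branching vs. removal by an inhibitor.
import numpy as np
from scipy.integrate import solve_ivp

k_branch, k_term, k_inh = 5.0, 1.0, 50.0   # arbitrary rate constants

def rhs(t, y):
    fuel, radical, inhibitor = y
    branch = k_branch * fuel * radical     # F + R -> 2R  (chain branching)
    term = k_term * radical                # R -> products (termination)
    inhib = k_inh * inhibitor * radical    # R + In -> products (inhibition)
    return [-branch, branch - term - inhib, -inhib]

for in0 in (0.0, 0.02, 0.1):               # increasing inhibitor loading
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1e-6, in0], rtol=1e-9)
    print(f"inhibitor={in0:5.2f}  peak radical={sol.y[1].max():.3e}")
```

Increasing the inhibitor loading suppresses the radical peak, mirroring the suppression of radical overshoot and the transition away from the chain-branching mode described above.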
To acquire more detailed radiation drive by use of "quasi-steady" approximation in atomic kinetics
NASA Astrophysics Data System (ADS)
Ren, Guoli; Pei, Wenbing; Lan, Ke; Gu, Peijun; Li, Xin
2012-10-01
In current routine 2D simulations of hohlraum physics, we adopt the principal-quantum-number (n-level) average atom model (AAM) for the NLTE plasma description. However, the detailed experimental frequency-dependent radiative drive differs from our n-level simulated drive, which points to the need for a more detailed atomic kinetics description. The orbital-quantum-number (nl-level) average atom model is a natural candidate, but nl-level in-line calculation requires much more computational resource. By distinguishing the rapid bound-bound atomic processes from the relatively slow bound-free atomic processes, we found a method to build a more detailed bound-electron distribution (nl-level or even nlm-level) from the in-line n-level calculated plasma conditions (temperature, density, and average ionization degree). We name this method the "quasi-steady approximation" in atomic kinetics. Using this method, we rebuild the nl-level bound-electron distribution (Pnl) and acquire a new hohlraum radiative drive by post-processing. Comparison with the n-level post-processed hohlraum drive shows an almost identical radiation flux but with finer frequency-dependent spectral structure, which appears only in nl-level transitions within the same n (Δn = 0).
State of the art in pathology business process analysis, modeling, design and optimization.
Schrader, Thomas; Blobel, Bernd; García-Rojo, Marcial; Daniel, Christel; Słodkowska, Janina
2012-01-01
For analyzing current workflows and processes, for improving them, for quality management and quality assurance, for integrating hardware and software components, but also for education, training and communication between experts from different domains, modeling business processes in a pathology department is indispensable. The authors highlight three main processes in pathology: general diagnostics, cytology diagnostics, and autopsy. In this chapter, those processes are formally modeled and described in detail. Finally, specialized processes such as immunohistochemistry and frozen section are also considered.
Toward a Stress Process Model of Children's Exposure to Physical Family and Community Violence
ERIC Educational Resources Information Center
Foster, Holly; Brooks-Gunn, Jeanne
2009-01-01
Theoretically informed models are required to further the comprehensive understanding of children's exposure to violence (ETV). We draw on the stress process paradigm to forward an overall conceptual model of ETV in childhood and adolescence. Around this conceptual model, we synthesize research in four dominant areas of the literature which are detailed but often…
Review of modelling air pollution from traffic at street-level - The state of the science.
Forehead, H; Huynh, N
2018-06-13
Traffic emissions are a complex and variable cocktail of toxic chemicals. They are the major source of atmospheric pollution in the parts of cities where people live, commute and work. Reducing exposure requires information about the distribution and nature of emissions. Spatially and temporally detailed data are required, because both the rate of production and the composition of emissions vary significantly with time of day and with local changes in wind, traffic composition and flow. Increasing computer processing power means that models can accept highly detailed inputs on fleets, fuels and road networks. State-of-the-science models can simulate the behaviour and emissions of all the individual vehicles on a road network, with a resolution of one second and tens of metres. The chemistry of the simulated emissions is also highly resolved, owing to the consideration of multiple engine processes, fuel evaporation and tyre wear. Good results can be achieved with both commercially available and open-source models. The extent of a simulation is usually limited by processing capacity; the accuracy, by the quality of traffic data. Recent studies have generated detailed emissions data in real time by using inputs from novel traffic-sensing technologies and data from intelligent traffic systems (ITS). Increasingly, detailed pollution data are being combined with spatially resolved demographic or epidemiological data for targeted risk analyses. Copyright © 2018 Elsevier Ltd. All rights reserved.
Baseline process description for simulating plutonium oxide production for precalc project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, J. A.
Savannah River National Laboratory (SRNL) started a multi-year project, the PreCalc Project, to develop a computational simulation of a plutonium oxide (PuO2) production facility, with the objective of studying the fundamental relationships between morphological and physicochemical properties. This report provides a detailed baseline process description to be used by SRNL personnel and collaborators to facilitate the initial design and construction of the simulation. The PreCalc Project team selected the HB-Line Plutonium Finishing Facility as the basis for a nominal baseline process, since the facility is operational and significant model validation data can be obtained. The process boundary, as well as process and facility design details necessary for multi-scale, multi-physics models, are provided.
The Cognitive Spiral: Creative Thinking and Cognitive Processing.
ERIC Educational Resources Information Center
Ebert, Edward S., II
1994-01-01
The lack of a common understanding of the construct of creative thinking is noted, and the cognitive spiral model is presented, which conceptualizes creative thinking as an integral component of all cognitive processing. This article details the synthesis of a definition and the structure of a model of cognitive processing. (Author/DB)
A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model
ERIC Educational Resources Information Center
Sadoski, Mark; McTigue, Erin M.; Paivio, Allan
2012-01-01
In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…
A Measurable Model of the Creative Process in the Context of a Learning Process
ERIC Educational Resources Information Center
Ma, Min; Van Oystaeyen, Fred
2016-01-01
The authors' aim was to arrive at a measurable model of the creative process by putting creativity in the context of a learning process. The authors aimed to provide a rather detailed description of how creative thinking fits in a general description of the learning process without trying to go into an analysis of a biological description of the…
Recognition errors suggest fast familiarity and slow recollection in rhesus monkeys
Basile, Benjamin M.; Hampton, Robert R.
2013-01-01
One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses can demonstrate dual processes has been repeatedly challenged. Here, we present independent converging evidence for the dual-process model from analyses of recognition errors made by rhesus monkeys. Recognition choices were made in three different ways depending on processing duration. Short-latency errors were disproportionately false alarms to familiar lures, suggesting control by familiarity. Medium-latency responses were less likely to be false alarms and were more accurate, suggesting onset of a recollective process that could correctly reject familiar lures. Long-latency responses were guesses. A response deadline increased false alarms, suggesting that limiting processing time weakened the contribution of recollection and strengthened the contribution of familiarity. Together, these findings suggest fast familiarity and slow recollection in monkeys, that monkeys use a “recollect to reject” strategy to countermand false familiarity, and that primate recognition performance is well-characterized by a dual-process model consisting of recollection and familiarity. PMID:23864646
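The latency-binned error analysis described above can be sketched as follows; the trial data, bin boundaries and false-alarm probabilities are simulated placeholders, purely to show the style of analysis:

```python
# Sketch: split recognition responses into latency bins and compare
# false-alarm rates to familiar lures across bins.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
latency = rng.gamma(shape=2.0, scale=0.4, size=n)   # response latencies (s), simulated
is_lure = rng.random(n) < 0.5                       # familiar-lure trials

# Simulated behaviour: fast responses false-alarm to familiar lures more often.
p_fa = np.where(latency < 0.5, 0.45, np.where(latency < 1.0, 0.25, 0.15))
false_alarm = is_lure & (rng.random(n) < p_fa)

for lo, hi, label in [(0.0, 0.5, "short"), (0.5, 1.0, "medium"), (1.0, np.inf, "long")]:
    sel = is_lure & (latency >= lo) & (latency < hi)
    rate = false_alarm[sel].mean() if sel.any() else float("nan")
    print(f"{label:6s} latency: false-alarm rate to familiar lures = {rate:.2f}")
```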
Chapter 8: Planning Tools to Simulate and Optimize Neighborhood Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhivov, Alexander Michael; Case, Michael Patrick; Jank, Reinhard
This section introduces different energy modeling tools available in Europe and the USA for the community energy master planning process, varying from strategic Urban Energy Planning to more detailed Local Energy Planning. Two modeling tools used for Energy Master Planning of primarily residential communities, the 3D city model with CityGML and the Net Zero Planner tool developed for US Department of Defense installations, are described in more detail.
Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory
2017-05-01
The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs, and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gaia DR2 documentation Chapter 3: Astrometry
NASA Astrophysics Data System (ADS)
Hobbs, D.; Lindegren, L.; Bastian, U.; Klioner, S.; Butkevich, A.; Stephenson, C.; Hernandez, J.; Lammers, U.; Bombrun, A.; Mignard, F.; Altmann, M.; Davidson, M.; de Bruijne, J. H. J.; Fernández-Hernández, J.; Siddiqui, H.; Utrilla Molina, E.
2018-04-01
This chapter of the Gaia DR2 documentation describes the models and processing steps used for the astrometric core solution, namely the Astrometric Global Iterative Solution (AGIS). The inputs to this solution rely heavily on the basic observables (or astrometric elementaries), which were pre-processed and discussed in Chapter 2 and published in Fabricius et al. (2016). The models consist of reference systems and time scales; assumed linear stellar motion and relativistic light deflection; and fundamental constants and the transformation of coordinate systems. Higher-level inputs, such as planetary and solar system ephemerides, Gaia tracking and orbit information, initial quasar catalogues, and BAM data, are all needed for the processing described here. The astrometric calibration models are outlined, followed by the detailed processing steps that give AGIS its name. We also present a basic quality assessment and validation of the scientific results (for details, see Lindegren et al. 2018).
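The "assumed linear stellar motion" mentioned above corresponds to the standard five-parameter astrometric source model; a common first-order form (our notation, not quoted from the documentation) is:

```latex
% Five parameters: position (\alpha_0, \delta_0) at reference epoch t_0,
% parallax \varpi, and proper motion (\mu_{\alpha*}, \mu_{\delta}).
% Apparent displacement of the source at time t, to first order:
\begin{aligned}
\Delta\alpha\cos\delta &\simeq \mu_{\alpha*}\,(t - t_0) + \varpi\, f_{\alpha}(t)\\
\Delta\delta &\simeq \mu_{\delta}\,(t - t_0) + \varpi\, f_{\delta}(t)
\end{aligned}
% f_\alpha, f_\delta are parallax factors set by the observer's barycentric
% position (here, Gaia's orbit from the ephemeris and tracking inputs).
```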
Techno-economic analysis: process model development for existing and conceptual processes; detailed heat integration; economic analysis of integrated processes; integration of process simulation learnings into control. Conceptual Process Design and Techno-Economic Assessment of Ex Situ Catalytic Fast Pyrolysis of Biomass: A
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet resulting in a gradual and stable damage growth process in the skin. This enables real-time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit™ 6.13 that used continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
Multi Sensor Data Integration for AN Accurate 3d Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces overall decent 3D city models and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imagery in many cases also suffers from undulated road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or final 3D model, was generally noise-free and without unnecessary details.
Pascual, Carlos; Luján, Marcos; Mora, José Ramón; Chiva, Vicente; Gamarra, Manuela
2015-01-01
The implantation of total quality management models in clinical departments adapts best to the ISO 9004:2009 model. An essential part of the implantation of these models is the establishment of processes and their stabilization. There are four types of processes: key, management, support and operative (clinical). Management processes have four parts: a process stabilization form, a process procedures form, a medical activities cost estimation form and a process flow chart. In this paper we detail the creation of an essential process in a surgical department: the management of the surgery waiting list.
A statistical approach to develop a detailed soot growth model using PAH characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael
A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, and additional reactions obtained from quantum chemistry calculations, are used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles of PAHs with the computed ensembles for a C2H2 and a C6H6 flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments which describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented.
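The kinetic Monte Carlo core of such a model can be illustrated compactly. The sketch below implements a generic Gillespie-style KMC step and applies it to a toy set of site-resolved growth rates; the site names and rate values are invented placeholders, not the KMC-ARS chemistry.

    import numpy as np

    rng = np.random.default_rng(2)

    def kmc_step(rates):
        """One kinetic Monte Carlo (Gillespie) step: pick an event and a waiting time."""
        total = rates.sum()
        dt = rng.exponential(1.0 / total)             # time to the next event
        event = rng.choice(len(rates), p=rates / total)
        return event, dt

    # Toy site-resolved growth: each 'site type' on a PAH edge gets an invented rate,
    # loosely labeled free-edge, zigzag, and bay for illustration only.
    rates = np.array([1.0, 0.4, 0.1])
    t, counts = 0.0, np.zeros(3, dtype=int)
    while t < 50.0:
        e, dt = kmc_step(rates)
        counts[e] += 1
        t += dt
    print(counts / counts.sum())                      # ~ rates / rates.sum()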
Soft 3D-Printed Phantom of the Human Kidney with Collecting System.
Adams, Fabian; Qiu, Tian; Mark, Andrew; Fritz, Benjamin; Kramer, Lena; Schlager, Daniel; Wetterauer, Ulrich; Miernik, Arkadiusz; Fischer, Peer
2017-04-01
Organ models are used for planning and simulation of operations, developing new surgical instruments, and training purposes. There is a substantial demand for in vitro organ phantoms, especially in urological surgery. Animal models and existing simulator systems poorly mimic the detailed morphology and the physical properties of human organs. In this paper, we report a novel fabrication process to make a human kidney phantom with realistic anatomical structures and physical properties. The detailed anatomical structure was directly acquired from high resolution CT data sets of human cadaveric kidneys. The soft phantoms were constructed using a novel technique that combines 3D wax printing and polymer molding. Anatomical details and material properties of the phantoms were validated in detail by CT scan, ultrasound, and endoscopy. CT reconstruction, ultrasound examination, and endoscopy showed that the designed phantom mimics a real kidney's detailed anatomy and correctly corresponds to the targeted human cadaver's upper urinary tract. Soft materials with a tensile modulus of 0.8-1.5 MPa as well as biocompatible hydrogels were used to mimic human kidney tissues. We developed a method of constructing 3D organ models from medical imaging data using a 3D wax printing and molding process. This method is a cost-effective means of obtaining a reproducible and robust model suitable for surgical simulation and training purposes.
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model brought about a reduction in run time, a large time savings was not observed, nor was there a linear relationship between the percentage of nodes reduced and the time saved. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction where, once exceeded, the loss in accuracy outweighs the benefit of reduced model run time.
Evaluating crown fire rate of spread predictions from physics-based models
C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont
2015-01-01
Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...
Simulation Framework for Teaching in Modeling and Simulation Areas
ERIC Educational Resources Information Center
De Giusti, Marisa Raquel; Lira, Ariel Jorge; Villarreal, Gonzalo Lujan
2008-01-01
Simulation is the process of executing a model that describes a system with enough detail; this model has its entities, an internal state, some input and output variables and a list of processes bound to these variables. Teaching a simulation language such as general purpose simulation system (GPSS) is always a challenge, because of the way it…
Allen, Johnie J; Anderson, Craig A; Bushman, Brad J
2018-02-01
The General Aggression Model (GAM) is a comprehensive, integrative framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors on aggression. Proximate processes of GAM detail how person and situation factors influence cognitions, feelings, and arousal, which in turn affect appraisal and decision processes, which in turn influence aggressive or nonaggressive behavioral outcomes. Each cycle of the proximate processes serves as a learning trial that affects the development and accessibility of aggressive knowledge structures. Distal processes of GAM detail how biological and persistent environmental factors can influence personality through changes in knowledge structures. GAM has been applied to understand aggression in many contexts including media violence effects, domestic violence, intergroup violence, temperature effects, pain effects, and the effects of global climate change. Copyright © 2017 Elsevier Ltd. All rights reserved.
Citygml and the Streets of New York - a Proposal for Detailed Street Space Modelling
NASA Astrophysics Data System (ADS)
Beil, C.; Kolbe, T. H.
2017-10-01
Three-dimensional semantic city models are increasingly used for the analysis of large urban areas. Until now the focus has mostly been on buildings. Nonetheless, many applications could also benefit from detailed models of public street space for further analysis. However, there are only a few guidelines for representing roads within city models. Therefore, related standards dealing with street modelling are examined and discussed. Nearly all street representations are based on linear abstractions. However, there are many use cases that require or would benefit from the detailed geometrical and semantic representation of street space. A variety of potential applications for detailed street space models are presented. Subsequently, based on related standards as well as on user requirements, a concept for a CityGML-compliant representation of street space in multiple levels of detail is developed. In the course of this process, the CityGML Transportation model of the currently valid OGC standard CityGML 2.0 is examined to discover possibilities for further developments. Moreover, a number of improvements are presented. Finally, based on open data sources, the proposed concept is implemented within a semantic 3D city model of New York City, generating a detailed 3D street space model for the entire city. As a result, 11 thematic classes, such as roadbeds, sidewalks or traffic islands, are generated and enriched with a large number of thematic attributes.
Flight crew aiding for recovery from subsystem failures
NASA Technical Reports Server (NTRS)
Hudlicka, E.; Corker, K.; Schudy, R.; Baron, Sheldon
1990-01-01
Some of the conceptual issues associated with pilot aiding systems are discussed and an implementation of one component of such an aiding system is described. It is essential that the format and content of the information the aiding system presents to the crew be compatible with the crew's mental models of the task. It is proposed that in order to cooperate effectively, both the aiding system and the flight crew should have consistent information processing models, especially at the point of interface. A general information processing strategy, developed by Rasmussen, was selected to serve as the bridge between the human and aiding system's information processes. The development and implementation of a model-based situation assessment and response generation system for commercial transport aircraft are described. The current implementation is a prototype which concentrates on engine and control surface failure situations and consequent flight emergencies. The aiding system, termed Recovery Recommendation System (RECORS), uses a causal model of the relevant subset of the flight domain to simulate the effects of these failures and to generate appropriate responses, given the current aircraft state and the constraints of the current flight phase. Since detailed information about the aircraft state may not always be available, the model represents the domain at varying levels of abstraction and uses the less detailed abstraction levels to make inferences when exact information is not available. The structure of this model is described in detail.
PID-based error signal modeling
NASA Astrophysics Data System (ADS)
Yohannes, Tesfay
1997-10-01
This paper introduces PID-based error-signal modeling. The error modeling is based on the betterment process. The resulting iterative learning algorithm is introduced, and a detailed proof is provided for both linear and nonlinear systems.
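For orientation, a betterment-type iterative learning scheme updates the whole control signal between repeated trials using PID-like terms on the previous trial's error. Below is a minimal sketch on a toy first-order plant; the gains, plant, and reference are all invented for illustration and are not taken from the paper.

    import numpy as np

    def ilc_update(u, e, dt, kp=0.5, ki=0.1, kd=0.05):
        """PID-type betterment update: u_{k+1}(t) = u_k(t) + kp*e + ki*int(e) + kd*de/dt."""
        integ = np.cumsum(e) * dt
        deriv = np.gradient(e, dt)
        return u + kp * e + ki * integ + kd * deriv

    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    ref = np.ones_like(t)                    # step reference to track
    u = np.zeros_like(t)
    for k in range(20):                      # repeated trials of the same task
        y = np.zeros_like(t)
        for i in range(1, len(t)):           # forward-Euler simulation of y' = -y + u
            y[i] = y[i - 1] + dt * (-y[i - 1] + u[i - 1])
        e = ref - y
        u = ilc_update(u, e, dt)             # learn a better input for the next trial
    print(f"max |error| after 20 trials (t > 0.5 s): {np.abs(e[t > 0.5]).max():.3f}")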
Modeling of the HiPco process for carbon nanotube production. II. Reactor-scale analysis
NASA Technical Reports Server (NTRS)
Gokcen, Tahir; Dateo, Christopher E.; Meyyappan, M.
2002-01-01
The high-pressure carbon monoxide (HiPco) process, developed at Rice University, has been reported to produce single-walled carbon nanotubes from gas-phase reactions of iron carbonyl in carbon monoxide at high pressures (10-100 atm). Computational modeling is used here to develop an understanding of the HiPco process. A detailed kinetic model of the HiPco process that includes precursor decomposition, metal cluster formation and growth, and carbon nanotube growth was developed in the previous article (Part I). Decomposition of precursor molecules is necessary to initiate metal cluster formation. The metal clusters serve as catalysts for carbon nanotube growth. The diameter of the metal clusters and the number of atoms in these clusters are some of the essential inputs for predicting carbon nanotube formation and growth, which is then modeled by the Boudouard reaction with metal catalysts. Based on the detailed model simulations, a reduced kinetic model was also developed in Part I for use in reactor-scale flowfield calculations. Here this reduced kinetic model is integrated with a two-dimensional axisymmetric reactor flow model to predict reactor performance. Carbon nanotube growth is examined with respect to several process variables (peripheral jet temperature, reactor pressure, and Fe(CO)5 concentration) with the use of the axisymmetric model, and the computed results are compared with existing experimental data. The model yields most of the qualitative trends observed in the experiments and helps in understanding the fundamental processes in HiPco carbon nanotube production.
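A reduced kinetic model of this kind is typically a small ODE system cheap enough to embed in a flow solver. The sketch below integrates an invented two-step surrogate (precursor decomposition followed by catalysed Boudouard carbon deposition); the rate constants and structure are placeholders, not the Part I mechanism.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Invented reduced kinetics: precursor P -> metal cluster M (decomposition),
    # then 2 CO -> C(nanotube) + CO2 (Boudouard) at a rate proportional to M.
    k_dec, k_boud = 5.0, 0.2

    def rhs(t, y):
        P, M, C = y
        dP = -k_dec * P                 # precursor decomposition
        dM = k_dec * P                  # clusters formed from decomposed precursor
        dC = k_boud * M                 # carbon deposition, rate ~ catalyst amount
        return [dP, dM, dC]

    sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0, 0.0])
    print(sol.y[:, -1])                 # [P, M, C] at t = 2 (nondimensional units)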
Basic Modeling of the Solar Atmosphere and Spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.; Wagner, William J. (Technical Monitor)
2000-01-01
During the last three years we have continued the development of extensive computer programs for constructing realistic models of the solar atmosphere and for calculating detailed spectra to use in the interpretation of solar observations. This research involves two major interrelated efforts: work by Avrett and Loeser on the Pandora computer program for optically thick non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed high-resolution synthesis of the solar spectrum using data for over 58 million atomic and molecular lines. Our objective is to construct atmospheric models from which the calculated spectra agree as well as possible with high- and low-resolution observations over a wide wavelength range. Such modeling leads to an improved understanding of the physical processes responsible for the structure and behavior of the atmosphere.
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid
2017-07-01
The VIMAP model presented in this review [1] is an interesting and detailed model of the neural mechanisms of aesthetic perception. In this Comment I address one deficiency of this model: it does not address in detail the fundamental notions of beauty and the sublime. In this regard VIMAP is similar to other publications on aesthetics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dverstorp, B.; Andersson, J.
1995-12-01
Performance Assessment of a nuclear waste repository implies an analysis of a complex system with many interacting processes. Even if some of these processes are known in great detail, problems arise when combining all information, and means are needed of abstracting information from complex detailed models into models that couple different processes. Clearly, one of the major objectives of performance assessment, to calculate doses or other performance indicators, implies an enormous abstraction of information compared with all the information used as input. Other problems are that the knowledge of different parts or processes varies strongly, and adjustments and interpretations are needed when combining models from different disciplines. In addition, people as well as computers, even today, have a limited capacity to process information, and choices have to be made. However, because abstraction of information is clearly unavoidable in performance assessment, the validity of the choices made always needs to be scrutinized, and judgements made need to be updated in an iterative process.
Toward mechanistic models of action-oriented and detached cognition.
Pezzulo, Giovanni
2016-01-01
To be successful, the research agenda for a novel control view of cognition should foresee more detailed, computationally specified process models of cognitive operations including higher cognition. These models should cover all domains of cognition, including those cognitive abilities that can be characterized as online interactive loops and detached forms of cognition that depend on internally generated neuronal processing.
Review of the Global Models Used Within Phase 1 of the Chemistry-Climate Model Initiative (CCMI)
NASA Technical Reports Server (NTRS)
Morgenstern, Olaf; Hegglin, Michaela I.; Rozanov, Eugene; O’Connor, Fiona M.; Abraham, N. Luke; Akiyoshi, Hideharu; Archibald, Alexander T.; Bekki, Slimane; Butchart, Neal; Chipperfield, Martyn P.;
2017-01-01
We present an overview of state-of-the-art chemistry-climate and chemistry transport models that are used within phase 1 of the Chemistry-Climate Model Initiative (CCMI-1). The CCMI aims to conduct a detailed evaluation of participating models using process-oriented diagnostics derived from observations in order to gain confidence in the models' projections of the stratospheric ozone layer, tropospheric composition, air quality, where applicable global climate change, and the interactions between them. Interpretation of these diagnostics requires detailed knowledge of the radiative, chemical, dynamical, and physical processes incorporated in the models. Also an understanding of the degree to which CCMI-1 recommendations for simulations have been followed is necessary to understand model responses to anthropogenic and natural forcing and also to explain inter-model differences. This becomes even more important given the ongoing development and the ever-growing complexity of these models. This paper also provides an overview of the available CCMI-1 simulations with the aim of informing CCMI data users.
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
Physics of Accretion in X-Ray Binaries
NASA Technical Reports Server (NTRS)
Vrtilek, Saeqa D.
2004-01-01
This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.
The Physics of Accretion in X-Ray Binaries
NASA Technical Reports Server (NTRS)
Vrtilek, S.; Oliversen, Ronald (Technical Monitor)
2001-01-01
This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.
J-2X Test Articles Using FDM Process
NASA Technical Reports Server (NTRS)
Anderson, Ted; Ruf, Joe; Steele, Phil
2010-01-01
This viewgraph presentation gives a brief history of the J-2X engine, along with detailed description of the material demonstrator and test articles that were created using Fused Deposition Modeling (FDM) process.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and the ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, yielding insights into the costs of model simplification and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which the predictive bias of a simplified model can be detected and corrected, and post-calibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
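The paired-usage idea reduces to a few lines: run the expensive model at a handful of points, estimate the simple model's predictive bias from the mismatch, and apply that correction to the simple model's cheap predictions. A toy sketch follows; both 'models' below are invented one-line functions standing in for real simulators, not the paper's groundwater example.

    import numpy as np

    def complex_model(x):                    # detailed process included (toy stand-in)
        return np.sin(x) + 0.1 * x**2

    def simple_model(x):                     # same system with the process omitted
        return np.sin(x)

    x_cal = np.linspace(0.0, 3.0, 8)         # a few affordable complex-model runs
    bias = np.mean(complex_model(x_cal) - simple_model(x_cal))

    x_pred = 2.5                             # prediction point of management interest
    print(simple_model(x_pred) + bias, complex_model(x_pred))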
NASA Technical Reports Server (NTRS)
Solloway, C. B.; Wakeland, W.
1976-01-01
First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.
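As a reminder of what a first-order Markov population model computes, the sketch below evolves expected state occupancies with a single transition matrix; the states and probabilities are invented for illustration and are not the population characteristics in the report.

    import numpy as np

    # Each column of P gives the one-step transition probabilities out of one state.
    states = ["juvenile", "adult", "senescent"]
    P = np.array([[0.6, 0.0, 0.0],
                  [0.4, 0.8, 0.0],
                  [0.0, 0.2, 1.0]])

    n = np.array([1000.0, 0.0, 0.0])    # initial population by state
    for step in range(20):
        n = P @ n                       # expected occupancy evolves linearly
    print(dict(zip(states, n.round(1))))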
Analyzing Single-Molecule Protein Transportation Experiments via Hierarchical Hidden Markov Models
Chen, Yang; Shen, Kuang
2017-01-01
To maintain proper cellular functions, over 50% of proteins encoded in the genome need to be transported to cellular membranes. The molecular mechanism behind such a process, often referred to as protein targeting, is not well understood. Single-molecule experiments are designed to unveil the detailed mechanisms and reveal the functions of different molecular machineries involved in the process. The experimental data consist of hundreds of stochastic time traces from the fluorescence recordings of the experimental system. We introduce a Bayesian hierarchical model on top of hidden Markov models (HMMs) to analyze these data and use the statistical results to answer the biological questions. In addition to resolving the biological puzzles and delineating the regulating roles of different molecular complexes, our statistical results enable us to propose a more detailed mechanism for the late stages of the protein targeting process. PMID:28943680
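At the bottom of such a hierarchy sits an ordinary HMM likelihood for each fluorescence trace. Below is a minimal sketch of the scaled forward algorithm for a discrete-output HMM; the two states and the transition and emission probabilities are invented, and the hierarchical Bayesian layer of the paper is omitted.

    import numpy as np

    def forward_loglik(obs, pi, A, B):
        """Log-likelihood of a discrete-output HMM via the scaled forward algorithm.
        pi: initial state probs, A[i, j]: P(state j | state i), B[i, o]: P(obs o | state i)."""
        alpha = pi * B[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return loglik

    # Two hidden conformational states emitting low/high fluorescence (toy numbers).
    pi = np.array([0.5, 0.5])
    A = np.array([[0.95, 0.05], [0.10, 0.90]])
    B = np.array([[0.9, 0.1], [0.2, 0.8]])
    trace = [0, 0, 1, 1, 1, 0, 1, 1]
    print(forward_loglik(trace, pi, A, B))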
Hudjetz, Silvana; Lennartz, Gottfried; Krämer, Klara; Roß-Nickoll, Martina; Gergs, André; Preuss, Thomas G.
2014-01-01
The degradation of natural and semi-natural landscapes has become a matter of global concern. In Germany, semi-natural grasslands belong to the most species-rich habitat types but have suffered heavily from changes in land use. After abandonment, the course of succession at a specific site is often difficult to predict because many processes interact. In order to support decision making when managing semi-natural grasslands in the Eifel National Park, we built the WoodS-Model (Woodland Succession Model). A multimodeling approach was used to integrate vegetation dynamics in both the herbaceous and shrub/tree layer. The cover of grasses and herbs was simulated in a compartment model, whereas bushes and trees were modelled in an individual-based manner. Both models worked and interacted in a spatially explicit, raster-based landscape. We present here the model description, parameterization and testing. We show highly detailed projections of the succession of a semi-natural grassland including the influence of initial vegetation composition, neighborhood interactions and ungulate browsing. We carefully weighted the single processes against each other and their relevance for landscape development under different scenarios, while explicitly considering specific site conditions. Model evaluation revealed that the model is able to emulate successional patterns as observed in the field as well as plausible results for different population densities of red deer. Important neighborhood interactions such as seed dispersal, the protection of seedlings from browsing ungulates by thorny bushes, and the inhibition of wood encroachment by the herbaceous layer, have been successfully reproduced. Therefore, not only a detailed model but also detailed initialization turned out to be important for spatially explicit projections of a given site. The advantage of the WoodS-Model is that it integrates these many mutually interacting processes of succession. PMID:25494057
An assembly process model based on object-oriented hierarchical time Petri Nets
NASA Astrophysics Data System (ADS)
Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui
2017-04-01
In order to improve the versatility, accuracy and integrity of the assembly process model of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, through the local, to the details, and subnet models of object-oriented Petri Nets are established at the different levels. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan and select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features from MBF will suggest domain adaptation, which is changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index. PMID:28686634
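A compact sketch of the core pipeline: fit a Gaussian mixture to each partial image, then compare data under the competing mixtures with a likelihood-ratio score standing in for the paper's modified Bayes factor; a widening gap flags novel local features and hence domain adaptation. All data, component counts, and thresholds below are invented.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)

    patch_a = rng.normal([0, 0], 1.0, size=(500, 2))
    patch_b = np.vstack([rng.normal([0, 0], 1.0, size=(400, 2)),
                         rng.normal([5, 5], 0.5, size=(100, 2))])   # a novel detail

    gmm_a = GaussianMixture(n_components=2, random_state=0).fit(patch_a)
    gmm_b = GaussianMixture(n_components=2, random_state=0).fit(patch_b)

    # Score patch_b under both models; a large gap flags novel local features,
    # suggesting domain adaptation (changing the number of mixture components).
    gap = gmm_b.score(patch_b) - gmm_a.score(patch_b)
    print(f"mean log-likelihood gap = {gap:.2f}")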
Kachalo, Sëma; Naveed, Hammad; Cao, Youfang; Zhao, Jieling; Liang, Jie
2015-01-01
Geometric and mechanical properties of individual cells and interactions among neighboring cells are the basis of the formation of tissue patterns. Understanding the complex interplay of cells is essential for gaining insight into embryogenesis, tissue development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating properties of individual cells as well as a detailed description of the dynamic growth process, with all topological changes accounted for. Cell size, shape, and division plane orientation are modeled realistically. In addition, cell birth, cell growth, cell shrinkage, cell death, cell division, cell collision, and cell rearrangements are now fully accounted for. Different models of cell-cell interactions, such as lateral inhibition during the process of growth, can be studied in detail. Cellular pattern formation for monolayered tissues from arbitrary initial conditions, including that of a single cell, can also be studied in detail. Computational efficiency is achieved through the employment of a special data structure that ensures access to neighboring cells in constant time, without additional space requirements. We have successfully generated tissues consisting of more than 20,000 cells starting from 2 cells within 1 hour. We show that our model can be used to study embryogenesis, tissue fusion, and cell apoptosis. We give a detailed study of the classical developmental process of bristle formation on the epidermis of D. melanogaster and the fundamental problem of homeostatic size control in epithelial tissues. Simulation results reveal significant roles of solubility of secreted factors in both the bristle formation and the homeostatic control of tissue size. Our method can be used to study broad problems in monolayered tissue formation. Our software is publicly available. PMID:25974182
TLS for generating multi-LOD of 3D building model
NASA Astrophysics Data System (ADS)
Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.
2014-02-01
The popularity of Terrestrial Laser Scanners (TLS) for capturing three-dimensional (3D) objects has spread widely across various applications. Development in 3D modelling has also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications. However, different applications require different kinds of 3D models. Since a building is an important object, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the resulting point cloud are explored. TLS is used to capture all the building details needed to generate multiple LODs. In previous works, this task usually involves the integration of several sensors. However, in this research, the point cloud from TLS is processed to generate the LOD3 model. LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating the multiple LODs of a 3D building, starting from LOD3, using TLS. Lastly, the visualization of the multi-LOD model is also shown.
A Combined Experimental and Analytical Modeling Approach to Understanding Friction Stir Welding
NASA Technical Reports Server (NTRS)
Nunes, Arthur C., Jr.; Stewart, Michael B.; Adams, Glynn P.; Romine, Peter
1998-01-01
In the Friction Stir Welding (FSW) process a rotating pin tool joins the sides of a seam by stirring them together. This solid-state welding process avoids problems with melting and hot-shortness presented by some difficult-to-weld high-performance light alloys. The details of the plastic flow during the process are not well understood and are currently a subject of research. Two candidate models of the FSW process, the Mixed Zone (MZ) and the Single Slip Surface (S3) model, are presented and their predictions compared to experimental data.
NASA Technical Reports Server (NTRS)
Poole, L. R.; Huckins, E. K., III
1972-01-01
A general theory on mathematical modeling of elastic parachute suspension lines during the unfurling process was developed. Massless-spring modeling of suspension-line elasticity was evaluated in detail. For this simple model, equations which govern the motion were developed and numerically integrated. The results were compared with flight test data. In most regions, agreement was satisfactory. However, poor agreement was obtained during periods of rapid fluctuations in line tension.
The Interaction of Global Biochemical Cycles
NASA Technical Reports Server (NTRS)
Moore, B., III; Dastoor, M. N.
1984-01-01
The global biosphere is an exceedingly complex system. To gain an understanding of its structure and dynamic features, it is necessary not only to increase knowledge about the detailed processes but also to develop models of how global interactions take place. Attempts to analyze the detailed physical, chemical and biological processes in this context need to be guided by an advancing understanding of the latter. It is necessary to develop a strategy of data gathering that serves both of these purposes simultaneously. The following papers deal with critical aspects of the global cycles of carbon, nitrogen, phosphorus and sulfur in detail, as well as the cycle of water and the flow of energy in the Earth's environment. The objective is to lay part of the foundation for the development of mathematical models that allow exploration of the coupled dynamics of the global cycles of carbon, nitrogen, phosphorus and sulfur, as well as energy and water flux.
Model-based pH monitor for sensor assessment.
van Schagen, Kim; Rietveld, Luuk; Veersma, Alex; Babuska, Robert
2009-01-01
Owing to the nature of the treatment processes, monitoring the processes based on individual online measurements is difficult or even impossible. However, the measurements (online and laboratory) can be combined with a priori process knowledge, using mathematical models, to objectively monitor the treatment processes and measurement devices. The pH measurement is commonly used at different stages in the drinking water treatment plant, although it is an unreliable instrument requiring significant maintenance. It is shown that, using a grey-box model, it is possible to assess the measurement devices effectively, even if detailed information on the specific processes is unknown.
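The grey-box idea can be illustrated in a few lines: predict the measurement from prior process knowledge, track the residual between sensor and model, and raise a maintenance flag when the residual drifts beyond tolerance. Everything below (the process signal, drift shape, and thresholds) is invented for illustration, not the paper's plant model.

    import numpy as np

    rng = np.random.default_rng(6)

    t = np.arange(500)
    ph_model = 7.4 + 0.1 * np.sin(t / 50.0)          # a priori process knowledge
    drift = np.where(t > 300, 0.002 * (t - 300), 0)  # sensor slowly drifting after t=300
    ph_meas = ph_model + 0.02 * rng.normal(size=t.size) + drift

    resid = ph_meas - ph_model                       # model-based residual
    window = 50
    rolling = np.convolve(resid, np.ones(window) / window, mode="valid")
    alarm = np.argmax(np.abs(rolling) > 0.1)         # first window exceeding tolerance
    print(f"maintenance flag raised near t = {alarm + window - 1}")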
Modeling greenhouse gas emissions from dairy farms.
Rotz, C Alan
2017-11-15
Dairy farms have been identified as an important source of greenhouse gas emissions. Within the farm, important emissions include enteric CH4 from the animals, CH4 and N2O from manure in housing facilities, during long-term storage and during field application, and N2O from nitrification and denitrification processes in the soil used to produce feed crops and pasture. Models using a wide range of levels of detail have been developed to represent or predict these emissions. They include constant emission factors, variable process-related emission factors, empirical or statistical models, mechanistic process simulations, and life cycle assessment. To fully represent farm emissions, models representing the various emission sources must be integrated to capture the combined effects and interactions of all important components. Farm models have been developed using relationships across the full scale of detail, from constant emission factors to detailed mechanistic simulations. Simpler models, based upon emission factors and empirical relationships, tend to provide better tools for decision support, whereas more complex farm simulations provide better tools for research and education. To look beyond the farm boundaries, life cycle assessment provides an environmental accounting tool for quantifying and evaluating emissions over the full cycle, from producing the resources used on the farm through processing, distribution, consumption, and waste handling of the milk and dairy products produced. Models are useful for improving our understanding of farm processes and their interacting effects on greenhouse gas emissions. Through better understanding, they assist in the development and evaluation of mitigation strategies for reducing emissions and improving the overall sustainability of dairy farms. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
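At the simplest level of detail named above, emissions are just activity data multiplied by constant emission factors and summed as CO2 equivalents. A sketch with placeholder numbers follows; the GWP values are AR5 100-year figures, but the emission factors and farm data themselves are invented, not IPCC defaults.

    # Constant-emission-factor sketch of annual farm CH4/N2O (placeholder values).
    cows = 120
    ef_enteric_ch4 = 120.0        # kg CH4 per cow per year (placeholder)
    ef_manure_ch4 = 25.0          # kg CH4 per cow per year (placeholder)
    n_applied = 9000.0            # kg fertiliser N applied to feed crops
    ef_soil_n2o = 0.01            # fraction of applied N emitted as N2O-N (placeholder)

    gwp_ch4, gwp_n2o = 28.0, 265.0                  # 100-year GWPs (AR5 values)
    ch4 = cows * (ef_enteric_ch4 + ef_manure_ch4)
    n2o = n_applied * ef_soil_n2o * 44.0 / 28.0     # convert N2O-N to N2O mass
    co2e = ch4 * gwp_ch4 + n2o * gwp_n2o
    print(f"{co2e / 1000:.1f} t CO2e per year")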
NASA Technical Reports Server (NTRS)
Rohatgi, Naresh K.; Ingham, John D.
1992-01-01
An assessment approach for accurate evaluation of bioprocesses for large-scale production of industrial chemicals is presented. Detailed energy-economic assessments of a potential esterification process were performed, where ethanol vapor in the presence of water from a bioreactor is catalytically converted to ethyl acetate. Results show that such processes are likely to become more competitive as the cost of substrates decreases relative to petroleum costs. A commercial ASPEN process simulation provided a reasonably consistent comparison with energy economics calculated using JPL-developed software. Detailed evaluations of the sensitivity of production cost to material costs and annual production rates are discussed.
Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchheit, Thomas E.; Wilcox, Ian Zachary; Sandoval, Andrew J
This report presents a detailed process of compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach to Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experiment setups for generating optimal datasets for model extraction and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
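The optimization step amounts to fitting a nonlinear I-V model to measured data by least squares. Below is a minimal sketch for just two forward-region parameters (saturation current and ideality factor) on synthetic data; the report's full 9-parameter Zener model and the Xyce-Dakota coupling are well beyond this illustration.

    import numpy as np
    from scipy.optimize import least_squares

    VT = 0.02585                                   # thermal voltage at ~300 K, volts
    v = np.linspace(0.3, 0.7, 40)
    i_meas = 1e-12 * np.expm1(v / (1.8 * VT))      # synthetic 'measurement'

    def residual(p):
        log10_isat, n = p                          # fit log10(i_sat) for conditioning
        i_model = 10.0**log10_isat * np.expm1(v / (n * VT))
        return np.log(i_meas) - np.log(i_model)    # compare in log space across decades

    fit = least_squares(residual, x0=[-10.0, 1.0], bounds=([-18, 0.5], [-6, 3.0]))
    print(fit.x)    # ~[-12, 1.8]: i_sat ~ 1e-12 A, ideality ~ 1.8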
Wallas' Four-Stage Model of the Creative Process: More than Meets the Eye?
ERIC Educational Resources Information Center
Sadler-Smith, Eugene
2015-01-01
Based on a detailed reading of Graham Wallas' "Art of Thought" (1926) it is argued that his four-stage model of the creative process (Preparation, Incubation, Illumination, Verification), in spite of holding sway as a conceptual anchor for many creativity researchers, does not reflect accurately Wallas' full account of the creative…
The Use of AMET & Automated Scripts for Model Evaluation
Brief overview of EPA’s new CMAQ website to be launched publicly in June 2017. Details on the upcoming release of the Atmospheric Model Evaluation Tool (AMET) and the creation of automated scripts for post-processing and evaluating air quality model data.
NASA Astrophysics Data System (ADS)
Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard
2018-07-01
This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.
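A common simple analytical form for such threshold-voltage variability is Pelgrom-style area scaling, in which the mismatch standard deviation falls as 1/sqrt(WL). The sketch below evaluates that form; the matching constant is a placeholder, not a value extracted in this work.

    import numpy as np

    # Pelgrom-style scaling: sigma(dVT) = A_VT / sqrt(W * L).
    A_VT = 1.5                                  # mV*um, placeholder matching constant
    for W, L in [(0.08, 0.03), (0.3, 0.03), (1.0, 0.1)]:   # device sizes in um
        sigma = A_VT / np.sqrt(W * L)
        print(f"W={W} um, L={L} um -> sigma(dVT) ~ {sigma:.1f} mV")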
Multiscale Numerical Methods for Non-Equilibrium Plasma
2015-08-01
current paper reports on the implementation of a numerical solver on the Graphic Processing Units (GPUs) to model reactive gas mixtures with detailed...Governing equations The flow ismodeled as amixture of gas specieswhile neglecting viscous effects. The chemical reactions taken place between the gas ...components are to be modeled in great detail. The set of the Euler equations for a reactive gas mixture can be written as: ∂Q ∂t + ∇ · F̄ = Ω̇ (1) where Q
High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.
2015-01-01
Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and the surface characteristics of the area. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases as, in many parts of the globe, accurate land-use information is generally lacking because detailed image data is unavailable. Modern unmanned air vehicles (UAVs) allow acquiring high-resolution images on a local level at comparably lower cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison in which they are used as an input for an urban drainage model. Then, we evaluate the influence that different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated using UAV imagery processed with modern classification methods achieve accuracy comparable with standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps have only a limited influence on modelled surface runoff and pipe flows. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications due to the possibility of flexibly acquiring up-to-date aerial images at a superior quality and a competitive price. Our analyses furthermore suggest that spatially more detailed urban drainage models can benefit even more from the full detail of UAV imagery.
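Once a classified land-cover raster is available, turning it into the model input is a per-subcatchment aggregation. A toy sketch follows; the class labels, subcatchment IDs, and raster contents are invented, and a real workflow would read georeferenced rasters (e.g. with rasterio) rather than random arrays.

    import numpy as np

    rng = np.random.default_rng(7)

    IMPERVIOUS = {1, 2}                       # e.g. 1=roof, 2=asphalt; 0=pervious
    landcover = rng.integers(0, 3, size=(100, 100))     # classified UAV raster (toy)
    subcatch = rng.integers(1, 5, size=(100, 100))      # subcatchment ID per pixel

    for sid in np.unique(subcatch):
        mask = subcatch == sid
        frac = np.isin(landcover[mask], list(IMPERVIOUS)).mean()
        print(f"subcatchment {sid}: imperviousness = {frac:.0%}")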
Signatures of Heavy Element Production in Neutron Star Mergers
NASA Astrophysics Data System (ADS)
Barnes, Jennifer
2018-06-01
Compact object mergers involving at least one neutron star have long been theorized to be sites of astrophysical nucleosynthesis via rapid neutron capture (the r-process). The observation in light and gravitational waves of the first neutron star merger (GW170817) this past summer provided a stunning confirmation of this theory. Electromagnetic emission powered by the radioactive decay of freshly synthesized nuclei from mergers encodes information about the composition burned by the r-process, including whether a particular merger event synthesized the heaviest nuclei along the r-process path or froze out at lower mass number. However, efforts to model the emission in detail must still contend with many uncertainties. For instance, the uncertain nuclear masses far from the valley of stability influence the final composition burned by the r-process, as will weak interactions operating in the merger’s immediate aftermath. This in turn can affect the color of the electromagnetic emission. Understanding the details of these transients’ spectra will also require a detailed accounting of the electronic transitions of r-process elements and ions, in order to identify the strong transitions that underlie spectral formation. This talk will provide an overview of our current understanding of radioactive transients from mergers, with an emphasis on the role of experiment in providing critical inputs for models and reducing uncertainty.
Computational Biochemistry-Enzyme Mechanisms Explored.
Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias
2017-01-01
Understanding enzyme mechanisms is a major task to achieve in order to comprehend how living cells work. Recent advances in biomolecular research provide huge amounts of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of the enzymatic processes are often anticipated based on several hints from macroscopic experimental data. Computational biochemistry aims at the creation of a computational model of an enzyme in order to explain microscopic details of the catalytic process and to reproduce or predict macroscopic experimental findings. Results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. In order to evaluate the mechanism of an enzyme, a structural model is constructed which can be analyzed by several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed by combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models to simulate enzyme catalysis. Furthermore, we review various approaches to characterizing the enzyme mechanism both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.
The Interpersonal Conflict Episode: A Systems Model.
ERIC Educational Resources Information Center
Slawski, Carl
A detailed systems diagram elaborates the process of dealing with a single conflict episode between two parties or persons. Hypotheses are fully stated to lead the reader through the flow diagram. A concrete example illustrates its use. Detail is provided in an accounting scheme of virtually all possible variables to consider in analyzing a…
NASA Technical Reports Server (NTRS)
Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.
1974-01-01
A review is given of future power processing systems planned for the next 20 years, and of the state-of-the-art power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detailed design and analysis tasks.
Numerical simulation of turbulent combustion: Scientific challenges
NASA Astrophysics Data System (ADS)
Ren, ZhuYin; Lu, Zhen; Hou, LingYun; Lu, LiuYan
2014-08-01
Predictive simulation of engine combustion is key to understanding the underlying complicated physicochemical processes, improving engine performance, and reducing pollutant emissions. Critical issues such as turbulence modeling, turbulence-chemistry interaction, and the accommodation of detailed chemical kinetics in complex flows remain challenging and essential for high-fidelity combustion simulation. This paper reviews the current status of the state-of-the-art large eddy simulation (LES)/probability density function (PDF)/detailed chemistry approach that can address these three challenging modelling issues. PDF as a subgrid model for LES is formulated, and the hybrid mesh-particle method for LES/PDF simulations is described. Then the development needs in micro-mixing models for PDF simulations of turbulent premixed combustion are identified. Finally, the different acceleration methods for detailed chemistry are reviewed and a combined strategy is proposed for further development.
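[Editor's note: to make the micro-mixing step concrete, the sketch below implements the classical IEM ("interaction by exchange with the mean") closure commonly used in PDF methods. It is a generic illustration, not code from the paper; the mixing frequency and model constant are assumed values.]

```python
import numpy as np

# IEM micro-mixing sketch: each notional particle's scalar phi relaxes
# toward the ensemble mean at a rate set by the mixing frequency omega
# and the model constant C_phi (commonly ~2.0). The mean is preserved
# while the scalar variance decays as exp(-C_phi * omega * t).
rng = np.random.default_rng(0)
phi = rng.choice([0.0, 1.0], size=1000)   # initially unmixed scalar
c_phi, omega, dt = 2.0, 10.0, 1e-3        # assumed values

for _ in range(2000):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

print("scalar mean:", round(float(phi.mean()), 3))      # unchanged
print("scalar variance:", float(phi.var()))             # decays to ~0
```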
Developing a model for the adequate description of electronic communication in hospitals.
Saboor, Samrend; Ammenwerth, Elske
2011-01-01
Adequate information and communication systems (ICT) can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication, comprising 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach.
NASA Technical Reports Server (NTRS)
Bose, Deepak
2012-01-01
The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas-phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state-of-the-art in the modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.
The cost of uniqueness in groundwater model calibration
NASA Astrophysics Data System (ADS)
Moore, Catherine; Doherty, John
2006-04-01
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
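[Editor's note: the statement that each estimated value is a weighted average of the true field can be made concrete for a linear model. Under Tikhonov regularization the noise-free estimate is p_hat = R @ p_true, where each row of the resolution matrix R holds the averaging weights. The sketch below is a generic illustration, not the paper's code; matrix sizes and the regularization weight are assumed.]

```python
import numpy as np

# For a linear forward model d = J @ p, the Tikhonov-regularized
# calibration gives p_hat = (J^T J + lam^2 I)^-1 J^T d, so with
# noise-free data d = J @ p_true the estimate is p_hat = R @ p_true.
rng = np.random.default_rng(1)
n_obs, n_par, lam = 10, 40, 0.5                  # underdetermined: 10 << 40
J = rng.standard_normal((n_obs, n_par))          # sensitivity matrix
G = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_par), J.T)
R = G @ J                                        # resolution matrix

p_true = rng.standard_normal(n_par)
p_hat = R @ p_true                               # blurred, low-resolution field
print("sum of averaging weights for parameter 0:",
      round(float(R[0].sum()), 3))               # typically != 1: biased average
```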
Modeling of Electrochemical Process for the Treatment of Wastewater Containing Organic Pollutants
NASA Astrophysics Data System (ADS)
Rodrigo, Manuel A.; Cañizares, Pablo; Lobato, Justo; Sáez, Cristina
Electrocoagulation and electrooxidation are promising electrochemical technologies that can be used to remove organic pollutants contained in wastewaters. To make these technologies competitive with the conventional technologies in use today, a better understanding of the processes involved must be achieved. In this context, the development of mathematical models that are consistent with the processes occurring in a physical system is a relevant advance, because such models can help in understanding what is happening in the treatment process. In turn, a more detailed knowledge of the physical system can be obtained, and tools for a proper design of the processes, or for the analysis of operating problems, are attained. The modeling of these technologies can be carried out using single-variable or multivariable models. Likewise, the position dependence of the model species can be described with different approaches. In this work, a review of the basics of the modeling of these processes and a description of several representative models for electrochemical oxidation and coagulation are carried out. Regarding electrooxidation, two models are described: one that lumps the pollution of a wastewater into a single model species and considers a macroscopic approach to formulate the mass balances, and another that considers a more detailed concentration profile to describe the time course of pollutants and intermediates through a mixed maximum gradient/macroscopic approach. On the topic of electrochemical coagulation, two different approaches are also described in this work: one that considers the hydrodynamic conditions as the main factor responsible for the electrochemical coagulation processes, and another that considers the chemical interaction of the reagents and the pollutants as the more significant process in the description of the electrochemical coagulation of organic compounds. In addition, this work also describes a multivariable model for the electrodissolution of anodes (the first stage in electrocoagulation processes). This latter model uses a mixed macroscopic/maximum gradient approach to describe the chemical and electrochemical processes; it also assumes that the rates of all processes are very high and that they can be successfully modeled using pseudoequilibrium approaches.
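[Editor's note: the single-species macroscopic approach can be illustrated with the standard mass-transfer-controlled batch balance, V dC/dt = -km * A * C, which yields exponential decay of the lumped pollutant. The sketch below is a generic illustration with assumed parameter values, not one of the reviewed models.]

```python
import numpy as np

# Batch electrooxidation, mass-transfer controlled: the lumped pollutant
# (e.g. COD) decays exponentially with rate km * A / V.
V  = 1.0e-3    # reactor volume, m^3 (assumed)
A  = 1.0e-2    # electrode area, m^2 (assumed)
km = 2.0e-5    # mass-transfer coefficient, m/s (assumed)
C0 = 1000.0    # initial lumped concentration, g/m^3

t = np.linspace(0.0, 8 * 3600.0, 200)            # 8 h of treatment
C = C0 * np.exp(-km * A / V * t)
print(f"lumped pollutant after 8 h: {C[-1]:.1f} g/m^3")
```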
HESS Opinions: The complementary merits of competing modelling philosophies in hydrology
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Clark, Martyn P.
2017-08-01
In hydrology, two somewhat competing philosophies form the basis of most process-based models. At one endpoint of this continuum are detailed, high-resolution descriptions of small-scale processes that are numerically integrated to larger scales (e.g. catchments). At the other endpoint of the continuum are spatially lumped representations of the system that express the hydrological response via, in the extreme case, a single linear transfer function. Many other models, developed starting from these two contrasting endpoints, plot along this continuum with different degrees of spatial resolutions and process complexities. A better understanding of the respective basis as well as the respective shortcomings of different modelling philosophies has the potential to improve our models. In this paper we analyse several frequently communicated beliefs and assumptions to identify, discuss and emphasize the functional similarity of the seemingly competing modelling philosophies. We argue that deficiencies in model applications largely do not depend on the modelling philosophy, although some models may be more suitable for specific applications than others and vice versa, but rather on the way a model is implemented. Based on the premises that any model can be implemented at any desired degree of detail and that any type of model remains to some degree conceptual, we argue that a convergence of modelling strategies may hold some value for advancing the development of hydrological models.
WEST-3 wind turbine simulator development
NASA Technical Reports Server (NTRS)
Hoffman, J. A.; Sridhar, S.
1985-01-01
The software developed for WEST-3, a new, all-digital, and fully programmable wind turbine simulator, is given. The process of wind turbine simulation on WEST-3 is described in detail. The major steps are the processing of the mathematical models, the preparation of the constant data, and the use of system-software-generated executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models is discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given of the preprocessor computer programs which are used to prepare the constant data needed in the simulation. These programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Also given are brief descriptions of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are details of the aeroelastic rotor analysis, which is the center of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology, and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk; rather than depending on weight as an input, it actually estimates weight along with cost and schedule.
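[Editor's note: the AHP step named above can be illustrated with the standard eigenvector method — criterion weights are derived from a reciprocal pairwise-comparison matrix. This is a generic sketch of the textbook technique, not the NASA GRC/Boeing model; the comparison values are illustrative.]

```python
import numpy as np

# Analytic hierarchy process (AHP): priorities are the normalized
# principal eigenvector of the pairwise-comparison matrix.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])     # e.g. cost vs. performance vs. schedule

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # normalized priority weights
print("priorities:", w.round(3))

# Saaty consistency check: CI = (lambda_max - n) / (n - 1); CR = CI / RI.
n, RI = 3, 0.58
CI = (eigvals[k].real - n) / (n - 1)
print("consistency ratio:", round(CI / RI, 3))   # < 0.1 is acceptable
```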
NASA Astrophysics Data System (ADS)
Zachary, Wayne; Eggleston, Robert; Donmoyer, Jason; Schremmer, Serge
2003-09-01
Decision-making is strongly shaped and influenced by the work context in which decisions are embedded. This suggests that decision support needs to be anchored by a model (implicit or explicit) of the work process, in contrast to traditional approaches that anchor decision support either to context-free decision models (e.g., utility theory) or to detailed models of the external (e.g., battlespace) environment. An architecture for cognitively based, work-centered decision support called the Work-centered Informediary Layer (WIL) is presented. WIL separates decision support into three overall processes that build and dynamically maintain an explicit context model, use the context model to identify opportunities for decision support, and tailor generic decision-support strategies to the current context and offer them to the system user/decision-maker. The generic decision-support strategies include such things as activity/attention aiding, decision process structuring, work performance support (selective, contextual automation), explanation/elaboration, infosphere data retrieval, and what-if/action-projection and visualization. A WIL-based application is a work-centered decision support layer that provides active support without intent inferencing, and that is cognitively based without requiring classical cognitive task analyses. Example WIL applications are detailed and discussed.
Measuring health care process quality with software quality measures.
Yildiz, Ozkan; Demirörs, Onur
2012-01-01
Existing quality models focus on specific diseases, clinics, or clinical areas. Although they contain structure, process, or output type measures, there is no model which measures the quality of health care processes comprehensively. In addition, because overall process quality is not measured, hospitals cannot compare the quality of processes internally and externally. To bring a solution to the above problems, a new model was developed from software quality measures. We adopted the ISO/IEC 9126 software quality standard for health care processes. Then, JCIAS (Joint Commission International Accreditation Standards for Hospitals) measurable elements were added to the model scope to unify functional requirements. Measurement results for the assessment (diagnosing) process are provided in this paper. After the application, it was concluded that the model determines weak and strong aspects of the processes, gives a more detailed picture of process quality, and provides quantifiable information that hospitals can use to compare their processes with those of multiple organizations.
Models and signal processing for an implanted ethanol bio-sensor.
Han, Jae-Joon; Doerschuk, Peter C; Gelfand, Saul B; O'Connor, Sean J
2008-02-01
The understanding of drinking patterns leading to alcoholism has been hindered by an inability to unobtrusively measure ethanol consumption over periods of weeks to months in the community environment. An implantable ethanol sensor is under development using microelectromechanical systems technology. For safety and user-acceptability reasons, the sensor will be implanted subcutaneously and will, therefore, measure peripheral-tissue ethanol concentration. Determining ethanol consumption and kinetics in other compartments from the time course of peripheral-tissue ethanol concentration requires sophisticated signal processing based on detailed descriptions of the relevant physiology. A statistical signal processing system based on detailed models of the physiology and using extended Kalman filtering and dynamic programming tools is described, which can estimate the time series of ethanol concentration in blood, liver, and peripheral tissue and the time series of ethanol consumption based on peripheral-tissue ethanol concentration measurements.
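[Editor's note: the estimation idea can be illustrated with a linear-Gaussian toy version of the problem — blood ethanol evolving as a random walk, peripheral tissue relaxing toward it, and a Kalman filter recovering the unobserved blood level from noisy tissue readings. All parameters are hypothetical; the paper itself uses an extended Kalman filter on a detailed physiological model.]

```python
import numpy as np

# State x = [blood, tissue]; only tissue is observed.
dt, k = 60.0, 1.0 / 1800.0                 # step (s), tissue relaxation rate
F = np.array([[1.0, 0.0],
              [k * dt, 1.0 - k * dt]])     # tissue relaxes toward blood
H = np.array([[0.0, 1.0]])                 # measure tissue only
Q = np.diag([1e-4, 1e-6]); R = np.array([[1e-3]])

rng = np.random.default_rng(2)
x, P, true = np.zeros(2), np.eye(2), np.zeros(2)
for _ in range(200):
    true = F @ true + rng.multivariate_normal([0.0, 0.0], Q)   # simulate
    z = H @ true + rng.normal(0.0, R[0, 0] ** 0.5, 1)          # noisy reading
    x = F @ x; P = F @ P @ F.T + Q                             # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                             # Kalman gain
    x = x + K @ (z - H @ x); P = (np.eye(2) - K @ H) @ P       # update
print("estimated blood ethanol (arbitrary units):", round(float(x[0]), 4))
```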
Analysis of the packet formation process in packet-switched networks
NASA Astrophysics Data System (ADS)
Meditch, J. S.
Two new queueing system models for the packet formation process in packet-switched telecommunication networks are developed, and their applications in process stability, performance analysis, and optimization studies are illustrated. The first, an M/M/1 queueing system characterization of the process, is a highly aggregated model which is useful for preliminary studies. The second, a marked extension of an earlier M/G/1 model, permits one to investigate the stability, performance characteristics, and design of the packet formation process in terms of the details of the processor architecture and its hardware and software implementations, with the processor structure and as many parameters as desired as variables. The two new models, together with the earlier M/G/1 characterization, span the spectrum of modeling complexity for the packet formation process from basic to advanced.
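[Editor's note: the aggregated M/M/1 characterization reduces to a handful of closed-form quantities. The sketch below applies the standard M/M/1 formulas with illustrative arrival and service rates; it is generic queueing theory, not the paper's parameterization.]

```python
# Standard M/M/1 results: utilization rho, mean number in system L,
# and mean time in system W (via Little's law), stable only for rho < 1.
lam, mu = 800.0, 1000.0          # packets/s arriving, packets/s served
rho = lam / mu                   # processor utilization
L = rho / (1.0 - rho)            # mean number of packets in system
W = 1.0 / (mu - lam)             # mean time a packet spends in system
print(f"utilization={rho:.2f}, mean in system={L:.2f}, delay={W*1e3:.2f} ms")
```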
DOT National Transportation Integrated Search
2014-05-12
This document details the process that the Volpe National Transportation Systems Center (Volpe) used to develop travel forecasting models for the Federal Highway Administration (FHWA). The purpose of these models is to allow FHWA to forecast future c...
Modeling of Inelastic Collisions in a Multifluid Plasma: Excitation and Deexcitation
2016-05-31
For publication in Physics of Plasmas, Vol. 22, Issue... (Abstract excerpts:) ...the fundamental physical processes may be individually known, it is not always clear how their combination affects the overall operation, or at what... ...arises from the complexity of the physical processes needed to be captured in the model. The required level of detail of the CR model is typically not...
Modeling of Inelastic Collisions in a Multifluid Plasma: Excitation and Deexcitation (Preprint)
2015-06-01
For publication in Physics of Plasmas, PA Case... (Abstract excerpts:) ...the fundamental physical processes may be individually known, it is not always clear how their combination affects the overall operation, or at what... ...arises from the complexity of the physical processes needed to be captured in the model. The required level of detail of the CR model is typically not...
Accurate 3d Scanning of Damaged Ancient Greek Inscriptions for Revealing Weathered Letters
NASA Astrophysics Data System (ADS)
Papadaki, A. I.; Agrafiotis, P.; Georgopoulos, A.; Prignitz, S.
2015-02-01
In this paper, two non-invasive, non-destructive alternatives to the traditional and invasive technique of squeezes are presented, alongside specially developed processing methods, aiming to help epigraphists reveal and analyse weathered letters in ancient Greek inscriptions carved in masonry or marble. The resulting 3D model serves as a detailed basis for the epigraphists to try to decipher the inscription. The data were collected using a structured-light scanner. The creation of the final accurate three-dimensional model is a complicated procedure requiring large computational cost and human effort. It includes the collection of geometric data in limited space and time, the creation of the surface, noise filtering, and the merging of individual surfaces. The use of structured-light scanners is time consuming and requires costly hardware and software. Therefore, an alternative methodology for collecting 3D data of the inscriptions was also implemented for reasons of comparison. Hence, image sequences from varying distances were collected using a calibrated DSLR camera, aiming to reconstruct the 3D scene through SfM techniques in order to evaluate the efficiency and the level of precision and detail of the obtained reconstructed inscriptions. Problems in the acquisition processes as well as difficulties in the alignment step and mesh optimization are also discussed. A meta-processing framework is proposed and analysed. Finally, the results of processing and analysis and the different 3D models are critically inspected and then evaluated by a specialist in terms of accuracy, quality, and detail of the model and the capability of revealing damaged and "hidden" letters.
Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...
2017-08-05
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on the detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of the unresolved details in the coarser mesh, allowing simulations with reasonable accuracy and manageable computational effort. Previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches reasonably well with the process model while providing additional information about the flow field that is not available with the process model.
Nonlinear Growth Curves in Developmental Research
Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki
2011-01-01
Developmentalists are often interested in understanding change processes, and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process, such as initial levels, rates of change during growth spurts, and asymptotic levels, can be estimated. A variety of growth models are described, beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models is fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131
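[Editor's note: a common nonlinear growth curve of the kind discussed is the logistic, whose parameters map directly onto interpretable quantities such as lower/upper asymptotes and the timing and rate of the growth spurt. The sketch below fits one to synthetic height data — not the Berkeley data — using SciPy.]

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic growth curve: lower asymptote, upper asymptote,
# midpoint (age of fastest growth), and rate (steepness of the spurt).
def growth(t, lower, upper, midpoint, rate):
    return lower + (upper - lower) / (1.0 + np.exp(-rate * (t - midpoint)))

rng = np.random.default_rng(3)
age = np.linspace(2, 20, 40)                              # years
height = growth(age, 85.0, 170.0, 11.0, 0.5) + rng.normal(0, 2, age.size)

params, _ = curve_fit(growth, age, height, p0=[80, 160, 10, 0.5])
print("lower, upper, midpoint, rate:", np.round(params, 2))
```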
A modeling analysis program for the JPL table mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, W. H.; Goldberg, B. A.
1984-01-01
A detailed review of 110 of the 263 Region B/C images of the 1981 data set is undertaken and a preliminary assessment of 39 images of the 1976-79 data set is presented. The basic spatial characteristics of these images are discussed. Modeling analysis of these images after further data processing will provide useful information about Io and the planetary magnetosphere. Plans for data processing and modeling analysis are outlined. Results of very preliminary modeling activities are presented.
The Use of Modeling Approach for Teaching Exponential Functions
NASA Astrophysics Data System (ADS)
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work presents a discussion related to the teaching and learning of mathematical content related to the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational teaching tool to produce contextualization in the teaching-learning process of exponential functions for these students. In this sense, some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and its results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed, and described.
NASA Astrophysics Data System (ADS)
Ribeiro, José B.; Silva, Cristóvão; Mendes, Ricardo; Plaksin, I.; Campos, Jose
2012-03-01
The use of emulsion explosives [EEx] for processing materials (compaction, welding, and forming) requires the ability to perform detailed simulations of their detonation process [DP]. Detailed numerical simulations of the DP of this kind of explosive, characterized by having a finite reaction zone thickness, are thought to be suitably performed using the Lee-Tarver reactive flow model. In this work, a real-coded genetic algorithm methodology was used to estimate the 15 parameters of the reaction rate equation [RRE] of that model for a particular EEx. This methodology allows one, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to seek the 15 parameters of the RRE that fit the numerical results to the experimental ones. Mass averaging and the Plate-Gap Model have been used for the determination of the shock data used in the unreacted explosive JWL EoS assessment, and the thermochemical code THOR retrieved the data used in the detonation products JWL EoS assessment. The obtained parameters allow a reasonable description of the experimental data.
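[Editor's note: the sketch below shows the skeleton of a real-coded genetic algorithm of the kind used above — selection, blend crossover, Gaussian mutation — on a 15-dimensional parameter vector. The objective here is a placeholder; the actual misfit would compare simulated and measured detonation records, which is outside this sketch, and the bounds are assumed.]

```python
import numpy as np

rng = np.random.default_rng(4)
n_par, pop_size, n_gen = 15, 60, 200
lo, hi = np.zeros(n_par), np.ones(n_par)         # assumed parameter bounds

def misfit(p):                                    # placeholder objective
    return float(np.sum((p - 0.3) ** 2))

pop = rng.uniform(lo, hi, (pop_size, n_par))
for _ in range(n_gen):
    scores = np.apply_along_axis(misfit, 1, pop)
    elite = pop[np.argsort(scores)[: pop_size // 2]]      # truncation selection
    pa = elite[rng.integers(0, len(elite), pop_size)]     # random elite parents
    pb = elite[rng.integers(0, len(elite), pop_size)]
    w = rng.random((pop_size, 1))
    children = w * pa + (1 - w) * pb                      # blend crossover
    children += rng.normal(0.0, 0.02, children.shape)     # Gaussian mutation
    pop = np.clip(children, lo, hi)

best = pop[np.argmin(np.apply_along_axis(misfit, 1, pop))]
print("best misfit:", round(misfit(best), 6))
```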
Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Richard Yorg
2011-03-01
The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided, with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project, using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in two phases, with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less on scoping fire modeling. This was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario. Therefore, dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges, which will be discussed in the full paper.
Influence of heat transfer rates on pressurization of liquid/slush hydrogen propellant tanks
NASA Technical Reports Server (NTRS)
Sasmal, G. P.; Hochstein, J. I.; Hardy, T. L.
1993-01-01
A multi-dimensional computational model of the pressurization process in a liquid/slush hydrogen tank is developed and used to study the influence of heat flux rates at the ullage boundaries on the process. The new model computes these rates and performs an energy balance for the tank wall, whereas previous multi-dimensional models required a priori specification of the boundary heat flux rates. Analyses of both liquid hydrogen and slush hydrogen pressurization were performed to expose differences between the two processes. Graphical displays are presented to establish the dependence of pressurization time, pressurant mass required, and other parameters of interest on ullage boundary heat flux rates and pressurant mass flow rate. Detailed velocity fields and temperature distributions are presented for selected cases to further illuminate the details of the pressurization process. It is demonstrated that ullage boundary heat flux rates significantly affect the pressurization process and that minimizing heat loss from the ullage and maximizing pressurant flow rate minimizes the mass of pressurant gas required to pressurize the tank. It is further demonstrated that proper dimensionless scaling of pressure and time permits all the pressure histories examined during this study to be displayed as a single curve.
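[Editor's note: the competing roles of pressurant inflow and ullage heat loss can be seen even in a zero-dimensional sketch. The lumped model below — a constant-volume ullage filled with warm hydrogen gas while losing heat — is only an illustration with assumed properties and rates; the paper's model is multi-dimensional and includes the wall energy balance.]

```python
# Lumped ullage energy balance: d(m*cv*T)/dt = mdot*cp*Tin - Qloss,
# with ideal gas P = m*R*T/V. Larger Qloss or smaller mdot means more
# pressurant mass is needed to reach a target pressure.
R, cp = 4124.0, 14300.0            # H2 gas constant, cp [J/(kg K)] (approx.)
cv = cp - R
V, Tin, mdot, Qloss = 0.5, 250.0, 5e-3, 200.0  # m^3, K, kg/s, W (assumed)

m, T, dt = 0.485, 25.0, 0.1        # initial ullage mass (kg), temperature (K)
for _ in range(int(60.0 / dt)):    # one minute of pressurization
    dU = (mdot * cp * Tin - Qloss) * dt
    m_new = m + mdot * dt
    T = (m * cv * T + dU) / (m_new * cv)
    m = m_new
print(f"ullage pressure after 60 s: {m * R * T / V / 1e3:.0f} kPa")
```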
Petroleum Market Model of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-01-01
The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs, including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level, and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation Methodologies; Appendix G, Matrix Generator Documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.
An Allocation Model for Teaching and Nonteaching Staff in a Decentralized Institution.
ERIC Educational Resources Information Center
Dijkman, Frank G
1985-01-01
An allocation model for teaching and nonteaching staff developed at the University of Utrecht is characterized as highly normative, leading to lump sums to be allocated to academic departments. Details are given regarding the reasons for designing the new model and the process of implementation. (Author/MLW)
High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Leitao, J. P.; Rieckermann, J.; Schindler, K.; Blumensaat, F.
2015-10-01
Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and an increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the catchment area as model input. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases as in many parts of the globe, accurate land-use information is generally lacking, because detailed image data are often unavailable. Modern unmanned aerial vehicles (UAVs) allow one to acquire high-resolution images on a local level at comparably lower cost, performing on-demand repetitive measurements and obtaining a degree of detail tailored for the purpose of the study. In this study, we investigate for the first time the possibility of deriving high-resolution imperviousness maps for urban areas from UAV imagery and of using this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is proposed and evaluated in a state-of-the-art urban drainage modelling exercise. In a real-life case study (Lucerne, Switzerland), we compare imperviousness maps generated using a fixed-wing consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their overall accuracy, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence which different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyse the surface runoff of the 307 individual subcatchments regarding relevant attributes, such as peak runoff and runoff volume. Finally, we evaluate the model's channel flow prediction performance through a cross-comparison with reference flow measured at the catchment outlet. We show that imperviousness maps generated from UAV images processed with modern classification methods achieve an accuracy comparable to standard, off-the-shelf aerial imagery. In the examined case study, we find that the different imperviousness maps only have a limited influence on predicted surface runoff and pipe flows, when traditional workflows are used. We expect that they will have a substantial influence when more detailed modelling approaches are employed to characterize land use and to predict surface runoff. We conclude that UAV imagery represents a valuable alternative data source for urban drainage model applications due to the possibility of flexibly acquiring up-to-date aerial images at a quality comparable with off-the-shelf image products and a competitive price at the same time. We believe that in the future, urban drainage models representing a higher degree of spatial detail will fully benefit from the strengths of UAV imagery.
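[Editor's note: the core classification step — labelling each image pixel impervious or pervious and aggregating the result into a subcatchment imperviousness fraction — can be sketched generically with a supervised classifier. The example below uses synthetic pixels and scikit-learn's RandomForestClassifier; the paper's own features and classifier differ.]

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic per-pixel features (e.g. R, G, B reflectance) and a toy
# labelling rule standing in for manually digitized training data.
rng = np.random.default_rng(5)
n = 5000
X = rng.random((n, 3))
y = (X.sum(axis=1) > 1.6).astype(int)        # 1 = impervious, 0 = pervious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:4000], y[:4000])
print(f"held-out pixel accuracy: {clf.score(X[4000:], y[4000:]):.3f}")

# Fraction of impervious pixels -> imperviousness input for the
# corresponding subcatchment of the drainage model.
print("imperviousness fraction:", round(float(clf.predict(X).mean()), 3))
```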
NASA Astrophysics Data System (ADS)
Bell, Kevin D.; Dafesh, Philip A.; Hsu, L. A.; Tsuda, A. S.
1995-12-01
Current architectural and design trade techniques often carry unaffordable alternatives late into the decision process. Early decisions made during the concept exploration and development (CE&D) phase will drive the cost of a program more than any other phase of development; thus, designers must be able to assess both the performance and cost impacts of their early choices. The Space Based Infrared System (SBIRS) cost engineering model (CEM) described in this paper is an end-to-end process integrating engineering and cost expertise through commonly available spreadsheet software, allowing for concurrent design engineering and cost estimation to identify and balance system drivers to reduce acquisition costs. The automated interconnectivity between subsystem models using spreadsheet software allows for the quick and consistent assessment of the system design impacts and relative cost impacts due to requirement changes. It is different from most CEM efforts attempted in the past as it incorporates more detailed spacecraft and sensor payload models, and it has been applied to determine the cost drivers for an advanced infrared satellite system acquisition. The CEM is comprised of integrated detailed engineering and cost estimating relationships describing performance, design, and cost parameters. Detailed models have been developed to evaluate design parameters for the spacecraft bus and sensor; both step-stare and scanning sensor types incorporate models of the focal plane array, optics, processing, thermal control, communications, and mission performance. The current CEM effort has provided visibility into requirements, design, and cost drivers for system architects and decision makers to determine the configuration of an infrared satellite architecture that meets essential requirements cost-effectively. In general, the methodology described in this paper consists of process building blocks that can be tailored to the needs of many applications. Descriptions of the spacecraft and payload subsystem models provide insight into The Aerospace Corporation's expertise and the scope of the SBIRS concept development effort.
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system, while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale in some components of the system. It is also a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. This paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined which codify the differences between the two strategies: the representation of surface, groundwater, and water management processes; the schematization and parameterization concepts; and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization, and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexity, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea, and anthropogenic intervention with unmeasured abstractions from both surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort, and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where balanced attention is not paid to all components of human-modified hydrosystems and the related information. Sources of errors are also identified and their combined effect is evaluated.
SCIENCE VERSION OF PM CHEMISTRY MODEL
PM chemistry models containing detailed treatments of key chemical processes controlling ambient concentrations of inorganic and organic compounds in PM2.5 are needed to develop strategies for reducing PM2.5 concentrations. This task, that builds on previous research conducted i...
Nutrient Dynamics In Flooded Wetlands. I: Model Development
Wetlands are rich ecosystems recognized for ameliorating floods, improving water quality and providing other ecosystem benefits. In this part of a two-paper sequel, we present a relatively detailed process-based model for nitrogen and phosphorus retention, cycling and removal in...
Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).
Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco
2013-10-01
In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes it inappropriate to attempt rigid standardisation, but greater detail is required concerning workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded on international guidelines and refined following the clinical pathway adopted at the local level by a specialized rehabilitation centre. The model describes the organisation of rehabilitation delivery and facilitates the monitoring of recovery during the process. Indeed, system software was developed and tested to support clinicians in the digital administration of clinical scales. The model's flexibility assures easy updating as the process evolves. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fisk, J.; Hurtt, G. C.; le page, Y.; Patel, P. L.; Chini, L. P.; Sahajpal, R.; Dubayah, R.; Thomson, A. M.; Edmonds, J.; Janetos, A. C.
2013-12-01
Integrated assessment models (IAMs) simulate the interactions between human and natural systems at a global scale, representing a broad suite of phenomena across the global economy, energy system, land use, and carbon cycling. Most proposed climate mitigation strategies rely on maintaining or enhancing the terrestrial carbon sink as a substantial contribution to restraining the concentration of greenhouse gases in the atmosphere; however, most IAMs rely on simplified regional representations of terrestrial carbon dynamics. Our research aims to reduce uncertainties associated with forest modeling within integrated assessments, and to quantify the impacts of climate change on forest growth and productivity for integrated assessments of terrestrial carbon management. We developed the new Integrated Ecosystem Demography model (iED) to increase terrestrial ecosystem process detail, resolution, and the utilization of remote sensing in integrated assessments. iED brings together state-of-the-art models of human society (GCAM), spatial land-use patterns (GLM), and terrestrial ecosystems (ED) in a fully coupled framework. The major innovative feature of iED is a consistent, process-based representation of ecosystem dynamics and the carbon cycle throughout the human, terrestrial, land-use, and atmospheric components. One of the most challenging aspects of ecosystem modeling is to provide accurate initialization of land surface conditions to reflect non-equilibrium conditions, i.e., the actual successional state of the forest. As all plants in ED have an explicit height, it is one of the few ecosystem models that can be initialized directly with vegetation height data. Previous work has demonstrated that ecosystem model resolution and initialization data quality have a large effect on flux predictions at continental scales. Here we use a factorial modeling experiment to quantify the impacts of model integration, process detail, model resolution, and initialization data on projections of future climate mitigation strategies. We find substantial effects on key integrated assessment projections, including the magnitude of emissions to mitigate, the economic value of ecosystem carbon storage, future land-use patterns, food prices, and energy technology.
Modelling non-hydrostatic processes in sill regions
NASA Astrophysics Data System (ADS)
Souza, A.; Xing, J.; Davies, A.; Berntsen, J.
2007-12-01
We use a non-hydrostatic model to compute tidally induced flow and mixing in a region of bottom topography representing the sill at the entrance to Loch Etive (Scotland). This site was chosen because detailed measurements were recently made there. With non-hydrostatic dynamics in the model, our results showed that the model could reproduce the observed flow characteristics, e.g., hydraulic transition, flow separation, and internal waves. However, when calculations were performed using the model in its hydrostatic form, significant artificial convective mixing occurred. This influenced the computed temperature and flow fields. We discuss in detail the effects of non-hydrostatic dynamics on flow over the sill, and in particular investigate the non-linear and non-hydrostatic contributions to modelled internal waves and internal-wave energy fluxes.
Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner
2015-01-01
Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...
Modeling Post-Accident Vehicle Egress
2013-01-01
interest for military situations may involve rolled-over vehicles for which detailed movement data are not available. In the current design process...test trials. These evaluations are expensive and time-consuming, and are often performed late in the design process when it is too difficult to...alter the design if weaknesses are discovered. Yet, due to the limitations of current software tools, digital human models (DHMs) are not yet widely
Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Briggs, Jeffery L.
2008-01-01
The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so, ROSE frees the modeler to develop a library of standard modeling processes, such as Design of Experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
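[Editor's note: the separation of model from execution process can be sketched in a few classes. The interface below is hypothetical, not the actual ROSE API; it only illustrates the idea that a reusable process, such as a parameter study, can drive any object exposing a common model interface.]

```python
# A model is any object with named inputs/outputs and an execute() method;
# processes operate on that interface and know nothing about the model's internals.
class Model:
    def __init__(self):
        self.inputs, self.outputs = {}, {}
    def execute(self):
        raise NotImplementedError

class Paraboloid(Model):
    def execute(self):
        x, y = self.inputs["x"], self.inputs["y"]
        self.outputs["f"] = (x - 3.0) ** 2 + x * y + (y + 4.0) ** 2

class ParameterStudy:
    """A standard, reusable process that can run against any Model."""
    def __init__(self, var, values):
        self.var, self.values = var, values
    def run(self, model):
        results = []
        for v in self.values:
            model.inputs[self.var] = v
            model.execute()
            results.append((v, model.outputs["f"]))
        return results

model = Paraboloid()
model.inputs["y"] = 0.0
for v, f in ParameterStudy("x", [0, 1, 2, 3]).run(model):
    print(f"x={v}: f={f}")
```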
eSPEM - A SPEM Extension for Enactable Behavior Modeling
NASA Astrophysics Data System (ADS)
Ellner, Ralf; Al-Hilank, Samir; Drexler, Johannes; Jung, Martin; Kips, Detlef; Philippsen, Michael
OMG's SPEM - by means of its (semi-)formal notation - allows for a detailed description of development processes and methodologies, but can only be used for a rather coarse description of their behavior. Concepts for a more fine-grained behavior model are considered out of scope of the SPEM standard and have to be provided by other standards like BPDM/BPMN or UML. However, a coarse granularity of the behavior model often impedes a computer-aided enactment of a process model. Therefore, in this paper we present eSPEM, an extension of SPEM, that is based on the UML meta-model and focused on fine-grained behavior and life-cycle modeling and thereby supports automated enactment of development processes.
NASA Astrophysics Data System (ADS)
Druzhinin, O.; Troitskaya, Yu; Zilitinkevich, S.
2018-01-01
Detailed knowledge of the turbulent exchange processes occurring in the atmospheric marine boundary layer is of primary importance for their correct parameterization in large-scale prognostic models. These processes are complicated, especially under sufficiently strong wind forcing, by the presence of sea-spray drops which are torn off the crests of sufficiently steep surface waves by wind gusts. Natural observations indicate that the mass fraction of sea-spray drops increases with wind speed, and their impact on the dynamics of the air in the vicinity of the sea surface can become quite significant. Field experiments, however, are limited by the insufficient accuracy of the acquired data and are in general costly and difficult. Laboratory modeling presents another route to investigate the spray-mediated exchange processes in much more detail compared with natural experiments. However, laboratory measurements, contact as well as Particle Image Velocimetry (PIV) methods, also suffer from an inability to resolve the dynamics of the near-surface air flow, especially in the surface wave troughs. In this report, we present a first attempt to use Direct Numerical Simulation (DNS) as a tool for investigating the drop-mediated momentum, heat, and moisture transfer in a turbulent, droplet-laden air flow over a wavy water surface. DNS is capable of resolving the details of the transfer processes and does not involve any closure assumptions typical of Large-Eddy and Reynolds-Averaged Navier-Stokes (LES and RANS) simulations. Thus DNS provides a basis for improving parameterizations in LES and RANS closure models and for further development of large-scale prognostic models. In particular, we discuss numerical results showing the details of the modification of the air flow velocity, temperature, and relative humidity fields by multidisperse, evaporating drops. We use a Eulerian-Lagrangian approach where the equations for the air-flow fields are solved in a Eulerian frame whereas the drop dynamics equations are solved in a Lagrangian frame. The effects of the air flow and drops on the water surface wave are neglected. A point-force approximation is employed to model the feedback contributions of the drops to the air momentum, heat, and moisture transfer.
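[Editor's note: the Lagrangian side of such a Eulerian-Lagrangian scheme can be sketched with a single drop advanced by Stokes drag in a prescribed air velocity field; under the point-force approximation the negative of that drag force is what would be fed back to the gas. The flow field and drop response time below are toy values, not the DNS of the paper.]

```python
import numpy as np

tau_p = 5e-3                        # drop aerodynamic response time, s (assumed)

def air_velocity(x, t):
    # Stand-in for the resolved DNS velocity field at the drop position.
    return np.array([5.0 + 0.5 * np.sin(2.0 * np.pi * x[1]), 0.0])

x = np.array([0.0, 0.1])            # drop position (m)
v = np.array([0.0, 0.0])            # drop velocity (m/s)
dt = 1e-4
for step in range(2000):            # 0.2 s of flight
    u = air_velocity(x, step * dt)
    a = (u - v) / tau_p             # Stokes drag acceleration on the drop
    v = v + a * dt                  # point force on the gas would be -m_p * a
    x = x + v * dt
print("drop velocity after 0.2 s:", v.round(3))
```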
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A Model Description Document for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware, SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from testing. Slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is the creation of two system simulations using these models. The first system presented consists of one air and one water processing system. The second consists of a potential air revitalization system.
Branching processes in disease epidemics
NASA Astrophysics Data System (ADS)
Singh, Sarabjeet
Branching processes have served as a model for chemical reactions, biological growth processes, and contagion (of disease, information, or fads). Through this connection, these seemingly different physical processes share some common universalities that can be elucidated by analyzing the underlying branching process. In this thesis, we focus on branching processes as a model for infectious diseases spreading between individuals belonging to different populations. The distinction between populations can arise from species separation (as in the case of diseases which jump across species) or spatial separation (as in the case of disease spreading between farms, cities, urban centers, etc.). A prominent example of the former is zoonoses -- infectious diseases that spill from animals to humans -- whose specific examples include Nipah virus, monkeypox, HIV, and avian influenza. A prominent example of the latter is infectious diseases of animals, such as foot and mouth disease and bovine tuberculosis, that spread between farms or cattle herds. Another example of the latter is infectious diseases of humans, such as H1N1, that spread from one city to another through the migration of infectious hosts. This thesis consists of three main chapters, an introduction, and an appendix. The introduction gives a brief history of mathematics in modeling the spread of infectious diseases, along with a detailed description of the most commonly used disease model -- the Susceptible-Infectious-Recovered (SIR) model. The introduction also describes how the stochastic formulation of the model reduces to a branching process in the limit of large population size, which is analyzed in detail. The second chapter describes a two-species model of zoonoses with coupled SIR processes and proceeds to the calculation of statistics pertinent to cross-species infection using multitype branching processes. The third chapter describes an SIR process driven by a Poisson process of infection spillovers. This is posed as a model of infectious diseases where a 'reservoir' of infection exists that infects a susceptible host population at a constant rate. The final chapter of the thesis describes a general framework for modeling infectious diseases in a network of populations using multitype branching processes.
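[Editor's note: a standard single-type result illustrates the branching-process machinery the thesis builds on. For Poisson offspring with mean R0, the extinction probability q of an outbreak started by one infective is the smallest fixed point of the generating function, q = exp(R0 * (q - 1)); the sketch below finds it by iteration. This is textbook theory, not code from the thesis.]

```python
import numpy as np

def extinction_probability(r0, n_iter=200):
    # Iterating q <- G(q) from q = 0 converges to the smallest fixed
    # point of the offspring generating function G(s) = exp(r0 * (s - 1)).
    q = 0.0
    for _ in range(n_iter):
        q = np.exp(r0 * (q - 1.0))
    return float(q)

for r0 in [0.8, 1.5, 3.0]:
    print(f"R0={r0}: extinction probability = {extinction_probability(r0):.3f}")
# Subcritical outbreaks (R0 <= 1) die out with probability 1; supercritical
# ones still go extinct with probability q < 1 due to early stochasticity.
```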
Quantification of transendothelial migration using three-dimensional confocal microscopy.
Cain, Robert J; d'Água, Bárbara Borda; Ridley, Anne J
2011-01-01
Migration of cells across endothelial barriers, termed transendothelial migration (TEM), is an important cellular process that underpins the pathology of many disease states including chronic inflammation and cancer metastasis. While this process can be modeled in vitro using cultured cells, many model systems are unable to provide detailed visual information of cell morphologies and distribution of proteins such as junctional markers, as well as quantitative data on the rate of TEM. Improvements in imaging techniques have made microscopy-based assays an invaluable tool for studying this type of detailed cell movement in physiological processes. In this chapter, we describe a confocal microscopy-based method that can be used to assess TEM of both leukocytes and cancer cells across endothelial barriers in response to a chemotactic gradient, as well as providing information on their migration into a subendothelial extracellular matrix, designed to mimic that found in vivo.
Toth, Tibor Istvan; Grabowska, Martyna; Schmidt, Joachim; Büschges, Ansgar; Daun-Gruhn, Silvia
2013-01-01
Stop and start of stepping are two basic actions of the musculo-skeletal system of a leg. Although they are basic phenomena, they require the coordinated activities of the leg muscles. However, little is known of the details of how these activities are generated by the interactions between the local neuronal networks controlling the fast and slow muscle fibres at the individual leg joints. In the present work, we aim at uncovering some of those details using a suitable neuro-mechanical model. It is an extension of the model in the accompanying paper and now includes all three antagonistic muscle pairs of the main joints of an insect leg, together with their dedicated neuronal control, as well as common inhibitory motoneurons and the residual stiffness of the slow muscles. This model enabled us to study putative processes of intra-leg coordination during stop and start of stepping. We also made use of the effects of sensory signals encoding the position and velocity of the leg joints. Where experimental observations are available, the corresponding simulation results are in good agreement with them. Our model makes detailed predictions as to the coordination processes of the individual muscle systems both at stop and start of stepping. In particular, it reveals a possible role of the slow muscle fibres at stop in accelerating the convergence of the leg to its steady-state position. These findings lend our model physiological relevance and can therefore be used to elucidate details of the stop and start of stepping in insects, and perhaps in other animals, too. PMID:24278108
NASA Astrophysics Data System (ADS)
Bach, T.; Pallesen, T. M.; Jensen, N. P.; Mielby, S.; Sandersen, P.; Kristensen, M.
2015-12-01
This case demonstrates a practical example from the city of Odense (DK), where new geological modeling techniques have been developed and used in the software GeoScene3D to create a detailed voxel model of the anthropogenic layer. The voxel model has been combined with a regional hydrostratigraphic layer model. The case is part of a pilot project partly financed by VTU (Foundation for Development of Technology in the Danish Water Sector) and involves many different data types, such as borehole information, geophysical data and human-related elements (landfill, pipelines, basements, roadbeds, etc.). In the last few years, there has been increased focus on detailed geological modeling in urban areas. The models serve as important input to hydrological models. This focus is partly due to climate change, as high-intensity rainfalls are seen more often than in the past, and water recharge is a topic as well. In urban areas, this raises new challenges. A high level of detailed geological knowledge is needed for the uppermost zone of the soil, which is typically problematic to obtain due to practical limitations, especially when using geological layer models. Furthermore, to accommodate the need for high detail, all relevant available data have to be used in the modeling process. Human activity has deeply changed the soil layers, e.g. through constructions such as roadbeds, buildings with basements, pipelines and landfill. These elements can act as barriers or pathways for near-surface groundwater flow and can contribute to local flooding or to the mobilization and transport of contaminants. A geological voxel model is built from small boxes (voxels), each of which can hold several parameters, e.g. lithology, transmissivity or contaminant concentration. Human-related elements can be implemented using dedicated tools, which give the modeler advanced options for building detailed small-scale models. This case demonstrates the workflow and the resulting geological model for the pilot area.
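As a rough illustration of the voxel concept described above (GeoScene3D's internal format is not public, so all names, codes and values below are assumptions), a grid of boxes carrying several attributes per cell can be sketched as:

```python
import numpy as np

# One record per voxel; fields are illustrative, not GeoScene3D's schema.
voxel = np.dtype([("lithology", "i1"),        # coded lithology class
                  ("transmissivity", "f4"),   # m^2/s
                  ("concentration", "f4")])   # contaminant, mg/L

nx, ny, nz = 200, 150, 40                     # 200 x 150 x 40 boxes
grid = np.zeros((nx, ny, nz), dtype=voxel)

# Bury a hypothetical pipeline trench as an anthropogenic element:
grid["lithology"][50:60, :, 35:38] = 9        # 9 = man-made fill (assumed code)
grid["transmissivity"][50:60, :, 35:38] = 1e-3

# Query: fraction of the uppermost 5 layers that is man-made fill.
top = grid["lithology"][:, :, -5:]
print(f"Anthropogenic fraction near surface: {(top == 9).mean():.3%}")
```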
Process Modeling and Dynamic Simulation for EAST Helium Refrigerator
NASA Astrophysics Data System (ADS)
Lu, Xiaofei; Fu, Peng; Zhuang, Ming; Qiu, Lilong; Hu, Liangbing
2016-06-01
In this paper, process modeling and dynamic simulation for the EAST helium refrigerator have been completed. The cryogenic process model is described and the main components are customized in detail. The process model is controlled by a PLC simulator, and real-time communication between the process model and the controllers is achieved by a customized interface. The process model has been validated against EAST experimental data from the 300-80 K cooldown. Simulation results indicate that the process simulator reproduces the dynamic behavior of the EAST helium refrigerator very well for the operation of long-pulsed plasma discharge. The cryogenic process simulator, based on the control architecture, is available for operation optimization and control design of EAST cryogenic systems to cope with long-pulsed heat loads in the future. Supported by the National Natural Science Foundation of China (No. 51306195) and the Key Laboratory of Cryogenics, Technical Institute of Physics and Chemistry, CAS (No. CRYO201408).
The Modular Modeling System (MMS): User's Manual
Leavesley, G.H.; Restrepo, Pedro J.; Markstrom, S.L.; Dixon, M.; Stannard, L.G.
1996-01-01
The Modular Modeling System (MMS) is an integrated system of computer software that has been developed to provide the research and operational framework needed to support development, testing, and evaluation of physical-process algorithms and to facilitate integration of user-selected sets of algorithms into operational physical-process models. MMS uses a module library that contains modules for simulating a variety of water, energy, and biogeochemical processes. A model is created by selectively coupling the most appropriate modules from the library to create a 'suitable' model for the desired application. Where existing modules do not provide appropriate process algorithms, new modules can be developed. The MMS user's manual provides installation instructions and a detailed discussion of system concepts, module development, and model development and application using the MMS graphical user interface.
Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro
2012-06-01
We developed a quantum-like model describing the gene regulation of glucose/lactose metabolism in the bacterium Escherichia coli. Our quantum-like model can be considered a kind of operational formalism for microbiology and genetics. Instead of trying to describe the processes in a cell in full detail, we propose a formal operator description. Such a description may be very useful in situations in which a detailed description of the processes is impossible or extremely complicated. We analyze statistical data obtained from experiments and compute the degree of E. coli's preference within adaptive dynamics. It is known that there are several types of E. coli characterized by their metabolic systems. We demonstrate that each type of E. coli can be described by well-determined operators; we find invariant operator quantities characterizing each type. Such invariant quantities can be calculated from the obtained statistical data.
The weak coherence account: detail-focused cognitive style in autism spectrum disorders.
Happé, Francesca; Frith, Uta
2006-01-01
"Weak central coherence" refers to the detail-focused processing style proposed to characterise autism spectrum disorders (ASD). The original suggestion of a core deficit in central processing resulting in failure to extract global form/meaning, has been challenged in three ways. First, it may represent an outcome of superiority in local processing. Second, it may be a processing bias, rather than deficit. Third, weak coherence may occur alongside, rather than explain, deficits in social cognition. A review of over 50 empirical studies of coherence suggests robust findings of local bias in ASD, with mixed findings regarding weak global processing. Local bias appears not to be a mere side-effect of executive dysfunction, and may be independent of theory of mind deficits. Possible computational and neural models are discussed.
Leveraging the UML Metamodel: Expressing ORM Semantics Using a UML Profile
DOE Office of Scientific and Technical Information (OSTI.GOV)
CUYLER,DAVID S.
2000-11-01
Object Role Modeling (ORM) techniques produce a detailed domain model from the perspective of the business owner/customer. The typical process begins with a set of simple sentences reflecting facts about the business. The output of the process is a single model representing primarily the persistent information needs of the business. This type of model contains little, if any, reference to a targeted computerized implementation. It is a model of business entities, not of software classes. Through well-defined procedures, an ORM model can be transformed into a high-quality object or relational schema.
Ice phase in altocumulus clouds over Leipzig: remote sensing observations and detailed modeling
NASA Astrophysics Data System (ADS)
Simmel, M.; Bühl, J.; Ansmann, A.; Tegen, I.
2015-09-01
The present work combines remote sensing observations and detailed cloud modeling to investigate two altocumulus cloud cases observed over Leipzig, Germany. A suite of remote sensing instruments was able to detect primary ice at rather high temperatures of -6 °C. For comparison, a second mixed phase case at about -25 °C is introduced. To further look into the details of cloud microphysical processes, a simple dynamics model of the Asai-Kasahara (AK) type is combined with detailed spectral microphysics (SPECS) forming the model system AK-SPECS. Vertical velocities are prescribed to force the dynamics, as well as main cloud features, to be close to the observations. Subsequently, sensitivity studies with respect to ice microphysical parameters are carried out with the aim to quantify the most important sensitivities for the cases investigated. For the cases selected, the liquid phase is mainly determined by the model dynamics (location and strength of vertical velocity), whereas the ice phase is much more sensitive to the microphysical parameters (ice nucleating particle (INP) number, ice particle shape). The choice of ice particle shape may induce large uncertainties that are on the same order as those for the temperature-dependent INP number distribution.
The Specific Features of design and process engineering in branch of industrial enterprise
NASA Astrophysics Data System (ADS)
Sosedko, V. V.; Yanishevskaya, A. G.
2017-06-01
The production output of an industrial enterprise is organized through well-debugged working mechanisms at each stage of the product life cycle, from the initial design documentation through manufacture to disposal. The subject of this article is a mathematical model of the design and process engineering system in a branch of an industrial enterprise, the statistical processing of the estimated results of implementing the developed mathematical model in the branch, and a demonstration of the advantages of applying it at this enterprise. While creating the model, the flows of information, orders, parts and modules among the branch's groups of divisions were classified. Proceeding from the analysis of the divisions' activities, data flows, parts and documents, a state graph of design and process engineering was constructed, its transitions were described and coefficients were assigned. For each state of the constructed state graph, the corresponding limiting state probabilities were defined and Kolmogorov's equations were worked out. By integrating the set of Kolmogorov equations, the probability of the system being in each activity state of the specified divisions and of production is obtained as a function of time at every instant. On the basis of the developed mathematical model of a unified system of design, process engineering and manufacture, and of the state graph, the authors carried out statistical processing of the results of applying the mathematical model and demonstrated the advantage of its application at this enterprise. Studies of the loading probability of the branch's services and of third-party contractors (orders received from the branch within a month) were conducted. The developed mathematical model of the design, process engineering and manufacture system can be applied to determine the probability of the activity states of divisions and manufacture as a function of time at every instant, which allows the workload in the branches of the enterprise to be tracked.
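The abstract does not reproduce the state graph itself, so the following sketch uses a hypothetical three-state design/engineering/manufacture chain to illustrate the machinery described: integrating Kolmogorov's forward equations dp/dt = pQ for the time-dependent state probabilities, and solving piQ = 0 for the limiting ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical generator matrix Q (rates in 1/day, rows sum to zero):
# states 0 = design, 1 = process engineering, 2 = manufacture.
Q = np.array([[-0.5,  0.4,  0.1],
              [ 0.2, -0.6,  0.4],
              [ 0.1,  0.2, -0.3]])

# Kolmogorov forward equations dp/dt = p @ Q, starting in the design state.
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 30.0), [1.0, 0.0, 0.0])
print("p(t = 30):", sol.y[:, -1].round(4))

# Limiting (stationary) probabilities: solve pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("limiting state probabilities:", pi.round(4))
```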
NASA Astrophysics Data System (ADS)
Frankl, Amaury; Stal, Cornelis; Abraha, Amanuel; De Wulf, Alain; Poesen, Jean
2014-05-01
Taking climate change scenarios into account, rainfall patterns are likely to change over the coming decades in eastern Africa. In brief, large parts of eastern Africa are expected to experience wetting, including changes in seasonality. Gullies are threshold phenomena that accomplish most of their geomorphic change during short periods of strong rainfall. Understanding the links between geomorphic change and rainfall characteristics in detail is thus crucial to ensure the sustainability of future land management. In this study, we present image-based 3D modelling as a low-cost, flexible and rapid method to quantify gully morphology from terrestrial photographs. The methodology was tested on two gully heads in Northern Ethiopia. Ground photographs (n = 88-235) were taken on days with cloud cover. The photographs were processed in PhotoScan software using a semi-automated Structure from Motion-Multi View Stereo (SfM-MVS) workflow. As a result, full 3D models were created, accurate at the cm level. These models make it possible to quantify gully morphology in detail, including information on undercut walls and soil pipe inlets. Such information is crucial for understanding the hydrogeomorphic processes involved. Producing accurate 3D models after each rainfall event makes it possible to model the interrelations between rainfall, land management, runoff and erosion. Expected outcomes are detailed vulnerability maps that allow soil and water conservation measures to be designed in a cost-effective way. Keywords: 3D model, Ethiopia, Image-based 3D modelling, Gully, PhotoScan, Rainfall.
Spacecraft Thermal and Optical Modeling Impacts on Estimation of the GRAIL Lunar Gravity Field
NASA Technical Reports Server (NTRS)
Fahnestock, Eugene G.; Park, Ryan S.; Yuan, Dah-Ning; Konopliv, Alex S.
2012-01-01
We summarize work performed involving thermo-optical modeling of the two Gravity Recovery And Interior Laboratory (GRAIL) spacecraft. We derived several reconciled spacecraft thermo-optical models of varying detail. We used the simplest in calculating SRP acceleration, and the most detailed to calculate the acceleration due to thermal re-radiation. For the latter, we used both the output of pre-launch finite-element-based thermal simulations and downlinked temperature sensor telemetry. The estimation process to recover the lunar gravity field utilizes both a nominal thermal re-radiation acceleration history and an a priori error model derived from that history plus an off-nominal one, which bounds parameter uncertainties as informed by sensitivity studies.
On the use of tower-flux measurements to assess the performance of global ecosystem models
NASA Astrophysics Data System (ADS)
El Maayar, M.; Kucharik, C.
2003-04-01
Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge gained from local observations into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content, ...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics, ...). In global simulations, however, the Earth's vegetation is typically represented by a limited number of plant functional types (PFTs; groups of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees, ...), which can cover a very large area, a set of typical physiological and physical parameters is assigned. Thus, a legitimate question arises: How does the performance of a global ecosystem model run with detailed site-specific parameters compare with that of a less detailed global version in which generic parameters are attributed to the group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare the seasonal and interannual variability of the surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were thus performed: a) detailed runs, in which observed vegetation characteristics (leaf area index, vegetation height, ...) and soil carbon content, in addition to climate and soil type, are specified for the model run; and b) generic runs, in which only the observed climates and soil types at the measurement sites are used to run the model. The generic runs were performed for a number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density equal to zero.
Behavior of the gypsy moth life system model and development of synoptic model formulations
J. J. Colbert; Xu Rumei
1991-01-01
Aims of the research: The gypsy moth life system model (GMLSM) is a complex model which incorporates numerous components (both biotic and abiotic) and ecological processes. It is a detailed simulation model which has much biological reality. However, it has not yet been tested with life system data. For such complex models, evaluation and testing cannot be adequately...
Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A
2015-01-01
Characterization and modelling of primary settlers have been largely neglected to date. However, whole-plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on the particle settling velocity distribution is proposed, which is then applied to the development of an improved primary settler model as well as to its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck), and is shown to be better than the first and statistically comparable to the second, but with easier calibration thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes induced by primary settling in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet are investigated, showing that typical wastewater fractions are modified by primary treatment. As these fractions clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole-plant modelling.
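The calibrated model itself is not given in the abstract; the following sketch only illustrates the underlying settling-velocity-distribution idea with assumed velocity classes, using the classical ideal-settler (Hazen) removal rule min(1, v/q):

```python
import numpy as np

# Assumed settling velocity classes (m/h) and their mass fractions (sum to 1).
v_classes = np.array([0.1, 0.5, 1.0, 2.5, 5.0, 10.0])
mass_frac = np.array([0.15, 0.20, 0.20, 0.20, 0.15, 0.10])

def removal(q):
    """Overall TSS removal for surface overflow rate q (m/h):
    a class settling at v is captured with efficiency min(1, v/q)."""
    return float(np.sum(mass_frac * np.minimum(1.0, v_classes / q)))

for q in (1.0, 2.0, 4.0):
    print(f"q = {q:.1f} m/h -> removal = {removal(q):.1%}")
```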
Revitalizing Adversary Evaluation: Deep Dark Deficits or Muddled Mistaken Musings
ERIC Educational Resources Information Center
Thurston, Paul
1978-01-01
The adversary evaluation model consists of utilizing the judicial process as a metaphor for educational evaluation. In this article, previous criticism of the model is addressed and its fundamental problems are detailed. It is speculated that the model could be improved by borrowing ideas from other legal forms of inquiry. (Author/GC)
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A user's manual for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from the test. In addition, slight changes were made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is the creation of two system simulations using these models. The first system presented consists of one air and one water processing system, the second of a potential Space Station air revitalization system.
Thermochemical Conversion Techno-Economic Analysis | Bioenergy | NREL
NREL's Thermochemical Conversion Analysis team focuses on conceptual process design and techno-economic analysis. The detailed process models and TEA developed under this project provide insights into the potential economic…
A multi-objective programming model for assessment the GHG emissions in MSW management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavrotas, George, E-mail: mavrotas@chemeng.ntua.gr; Skoulaxinou, Sotiria; Gakis, Nikos
2013-09-15
Highlights: • A multi-objective, multi-period optimization model. • The solution approach for generating the Pareto front with mathematical programming. • A very detailed description of the model (decision variables, parameters, equations). • The use of the IPCC 2006 guidelines for landfill emissions (first-order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed to take GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structural, design and operational optimization of various systems (energy, supply chains, processes, etc.). Over the last twenty years they have been used all the more often in Municipal Solid Waste (MSW) management to provide optimal solutions, with the cost objective being the usual driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs along the Pareto curve and select the most preferred among the Pareto optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on the detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH4 generated in landfills (first-order decay model). The equations of the model are described in full detail. Finally, the whole approach is illustrated with a case study referring to the application of the model in a Greek region.
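For readers unfamiliar with innovation (3), a minimal sketch of the IPCC 2006 first-order decay recursion for landfill CH4 follows; the parameter values are illustrative, as the decay constant k, the CH4 fraction F and the decomposable-carbon inputs vary by waste type and climate:

```python
from math import exp

def ch4_generated(ddocm_deposited, k=0.09, F=0.5):
    """ddocm_deposited: decomposable degradable organic carbon landfilled per
    year (t). Returns CH4 generated per year (t) via the FOD recursion:
    what decomposes in year T comes from the carbon accumulated up to T-1."""
    accumulated, out = 0.0, []
    for d in ddocm_deposited:
        decomposed = accumulated * (1.0 - exp(-k))
        accumulated = d + accumulated * exp(-k)
        out.append(decomposed * F * 16.0 / 12.0)  # C -> CH4 mass conversion
    return out

# Constant deposition of 1000 t DDOCm/yr over 20 years (illustrative):
for year, ch4 in enumerate(ch4_generated([1000.0] * 20), start=1):
    if year % 5 == 0:
        print(f"year {year:2d}: CH4 = {ch4:7.1f} t")
```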
ERIC Educational Resources Information Center
Meehan, Peter M.; Beal, George M.
The objective of this monograph is to contribute to the further understanding of the knowledge-production-and-utilization process. Its primary focus is on a model both general and detailed enough to provide a comprehensive overview of the diverse functions, roles, and processes required to understand the flow of knowledge from its point of origin…
NASA Astrophysics Data System (ADS)
Zubanov, V. M.; Stepanov, D. V.; Shabliy, L. S.
2017-01-01
The article describes a method for simulating transient combustion processes in a rocket engine. The engine operates on gaseous propellants: oxygen and hydrogen. Combustion simulation was performed using the ANSYS CFX software. Three reaction mechanisms for the stationary mode were considered and described in detail. The reaction mechanisms were taken from several sources and verified. The method for converting ozone properties from the Shomate equation to the NASA-polynomial format is described in detail. A way of obtaining quick CFD results with intermediate combustion components using an EDM model was found. Difficulties with the Finite Rate Chemistry combustion model, associated with a large scatter in the reference data, were identified and described. The way the Flamelet library is generated with CFX-RIF is described. The reaction mechanisms formulated and verified at steady state were also tested in transient simulation. The Flamelet combustion model was found to be adequate for the transient mode, with the integral parameters varying around the values obtained in the stationary simulation. A cyclic irregularity of the temperature field, caused by precession of the vortex core, was detected in the chamber with the proposed simulation technique. The investigation of unsteady processes in rocket engines, including ignition, is proposed as the area of application of the described simulation technique.
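One plausible way to carry out the Shomate-to-NASA-polynomial conversion mentioned above is a least-squares fit of cp/R over the temperature range of interest; the Shomate coefficients below are placeholders to be replaced by the NIST values for ozone, and the enthalpy/entropy constants a6/a7 (fixed afterwards from H and S at a reference temperature) are omitted:

```python
import numpy as np

R = 8.31446  # J/(mol K)
A, B, C, D, E = 21.66, 79.86, -66.03, 19.58, -0.079  # placeholder Shomate set

def cp_shomate(T):
    """Shomate heat capacity, J/(mol K), with t = T/1000."""
    t = T / 1000.0
    return A + B*t + C*t**2 + D*t**3 + E/t**2

T = np.linspace(300.0, 1500.0, 200)
# NASA-7 form: cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
M = np.vander(T, 5, increasing=True)            # columns 1, T, ..., T^4
a, *_ = np.linalg.lstsq(M, cp_shomate(T) / R, rcond=None)
print("NASA-7 cp coefficients a1..a5:", a)
print(f"max cp fit error: {np.max(np.abs(M @ a * R - cp_shomate(T))):.3e} J/(mol K)")
```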
Knowledge sifters in MDA technologies
NASA Astrophysics Data System (ADS)
Kravchenko, Yuri; Kursitys, Ilona; Bova, Victoria
2018-05-01
The article considers a new approach to the efficient management of information processes on the basis of object models. With the help of special design tools, a generic, application-independent model of the application is created, and the program is then implemented in a specific development environment. At the same time, the development process is based entirely on a model that must contain all the information necessary for programming. The presence of a detailed model enables the automatic creation of the typical parts of the application whose development is amenable to automation.
Software Engineering Program: Software Process Improvement Guidebook
NASA Technical Reports Server (NTRS)
1996-01-01
The purpose of this document is to provide experience-based guidance in implementing a software process improvement program in any NASA software development or maintenance community. This guidebook details how to define, operate, and implement a working software process improvement program. It describes the concept of the software process improvement program and its basic organizational components. It then describes the structure, organization, and operation of the software process improvement program, illustrating all these concepts with specific NASA examples. The information presented in the document is derived from the experiences of several NASA software organizations, including the SEL, the SEAL, and the SORCE. Their experiences reflect many of the elements of software process improvement within NASA. This guidebook presents lessons learned in a form usable by anyone considering establishing a software process improvement program within his or her own environment. This guidebook attempts to balance general and detailed information. It provides material general enough to be usable by NASA organizations whose characteristics do not directly match those of the sources of the information and models presented herein. It also keeps the ideas sufficiently close to the sources of the practical experiences that have generated the models and information.
Integration of remote sensing based surface information into a three-dimensional microclimate model
NASA Astrophysics Data System (ADS)
Heldens, Wieke; Heiden, Uta; Esch, Thomas; Mueller, Andreas; Dech, Stefan
2017-03-01
Climate change urges cities to consider the urban climate as part of sustainable planning. Urban microclimate models can provide knowledge of the climate at the building block level, but they require very detailed information on the area of interest. Most microclimate studies therefore make use of assumptions and generalizations to describe the model area. Remote sensing data with area-wide coverage provide a means to derive many parameters at the detailed spatial and thematic scale required by urban climate models. This study shows how microclimate simulations for a series of real-world urban areas can be supported by remote sensing data. In an automated process, surface materials, albedo, LAI/LAD and object height have been derived and integrated into the urban microclimate model ENVI-met. Multiple microclimate simulations have been carried out, both with the dynamic remote sensing based input data and with manual, static input data, to analyze the impact of the remote-sensing-based surface information and the suitability of the applied data and techniques. The integration of the remote sensing based input data into ENVI-met is greatly aided by an automated processing chain, which saves tedious manual editing and allows fast, area-wide generation of simulation areas. The analysis of the different modes shows the importance of high-quality height data, detailed surface material information and albedo.
NASA Astrophysics Data System (ADS)
Fuchs, Erica R. H.; Bruce, E. J.; Ram, R. J.; Kirchain, Randolph E.
2006-08-01
The monolithic integration of components holds promise to increase network functionality and reduce packaging expense. Integration also drives down yield due to manufacturing complexity and the compounding of failures across devices. Consensus is lacking on the economically preferred extent of integration. Previous studies on the cost feasibility of integration have used high-level estimation methods. This study instead focuses on accurate-to-industry detail, basing a process-based cost model of device manufacture on data collected from 20 firms across the optoelectronics supply chain. The model presented allows for the definition of process organization, including testing, as well as processing conditions, operational characteristics, and level of automation at each step. This study focuses on the cost implications of integration of a 1550-nm DFB laser with an electroabsorptive modulator on an InP platform. Results show the monolithically integrated design to be more cost competitive over discrete component options regardless of production scale. Dominant cost drivers are packaging, testing, and assembly. Leveraging the technical detail underlying model projections, component alignment, bonding, and metal-organic chemical vapor deposition (MOCVD) are identified as processes where technical improvements are most critical to lowering costs. Such results should encourage exploration of the cost advantages of further integration and focus cost-driven technology development.
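The study's calibrated data are not reproduced in the abstract, but the yield-compounding logic at the heart of process-based cost modeling can be sketched generically (all step names, costs and yields below are made up):

```python
steps = [  # (name, cost per unit processed [$], step yield) -- illustrative only
    ("epitaxy (MOCVD)", 12.0, 0.95),
    ("lithography",      6.0, 0.97),
    ("cleave/facet",     3.0, 0.90),
    ("align + bond",    18.0, 0.85),
    ("test",             4.0, 0.98),
    ("package",         25.0, 0.95),
]

cum_yield = 1.0
for _, _, y in steps:
    cum_yield *= y

# Each good unit shipped requires 1/prod(yields from step i onward) units to
# enter step i, so scrap at late steps is the most expensive.
cost_per_good, downstream = 0.0, cum_yield
for name, cost, y in steps:
    cost_per_good += cost / downstream
    downstream /= y  # the next step excludes this step's yield

print(f"cumulative yield: {cum_yield:.1%}")
print(f"cost per good device: ${cost_per_good:.2f}")
```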
NASA Technical Reports Server (NTRS)
Wang, Yansen; Tao, W.-K.; Lau, K.-M.; Wetzel, Peter J.
2003-01-01
The onset of the southeast Asian monsoon during 1997 and 1998 was simulated with a coupled mesoscale atmospheric model (MM5) and a detailed land surface model. The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The simulation with the land surface model captured the basic signatures of the monsoon onset processes and the associated rainfall statistics. The sensitivity tests indicated that land surface processes had a greater impact on the simulated rainfall results than a small sea surface temperature change during the onset period. In both the 1997 and 1998 cases, the simulations were significantly improved by including the land surface processes. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern cold front intrusion from southern China. The surface sensible and latent heat exchange between the land and atmosphere modified the low-level temperature distribution and gradient, and therefore the low-level wind. The more realistic forcing of the sensible and latent heat from the detailed land surface model improved the simulation of the monsoon rainfall and the associated wind.
NASA Technical Reports Server (NTRS)
1981-01-01
The development of a coal gasification system design and mass and energy balance simulation program for the TVA and other similar facilities is described. The materials-process-product model (MPPM) and the advanced system for process engineering (ASPEN) computer program were selected from available steady state and dynamic models. The MPPM was selected to serve as the basis for development of system level design model structure because it provided the capability for process block material and energy balance and high-level systems sizing and costing. The ASPEN simulation serves as the basis for assessing detailed component models for the system design modeling program. The ASPEN components were analyzed to identify particular process blocks and data packages (physical properties) which could be extracted and used in the system design modeling program. While ASPEN physical properties calculation routines are capable of generating physical properties required for process simulation, not all required physical property data are available, and must be user-entered.
Problems in Catalytic Oxidation of Hydrocarbons and Detailed Simulation of Combustion Processes
NASA Astrophysics Data System (ADS)
Xin, Yuxuan
This dissertation research consists of two parts, with Part I on the kinetics of catalytic oxidation of hydrocarbons and Part II on aspects of the detailed simulation of combustion processes. In Part I, the catalytic oxidation of C1--C3 hydrocarbons, namely methane, ethane, propane and ethylene, was investigated for lean hydrocarbon-air mixtures over an unsupported Pd-based catalyst, from 600 to 800 K and under atmospheric pressure. In Chapter 2, the experimental facility of wire microcalorimetry and the simulation configuration are described in detail. In Chapters 3 and 4, the oxidation rate of C1--C3 hydrocarbons is demonstrated to be determined by the dissociative adsorption of hydrocarbons. A detailed surface kinetics model is proposed, with the rate coefficient of hydrocarbon dissociative adsorption derived from the wire microcalorimetry data. In Part II, four fundamental studies were conducted through detailed combustion simulations. In Chapter 5, self-accelerating hydrogen-air flames are studied via two-dimensional detailed numerical simulation (DNS). The increase in the global flame velocity is shown to be caused by the increase of the flame surface area, and the fractal structure of the flame front is demonstrated by the box-counting method. In Chapter 6, skeletal reaction models for butane combustion are derived using the directed relation graph (DRG), DRG-aided sensitivity analysis (DRGASA) and uncertainty minimization by polynomial chaos expansion (MUM-PCE) methods. The model uncertainty is shown to depend on the completeness of the model. In Chapter 7, a systematic strategy is proposed to reduce the cost of the multicomponent diffusion model by accurately accounting for the species whose diffusivities are important to the global responses of the combustion systems and approximating those of lesser importance by the mixture-averaged model. The reduced model is validated on an n-heptane mechanism with 88 species. In Chapter 8, the influence of Soret diffusion on n-heptane/air flames is investigated numerically. In the unstretched flames, Soret diffusion primarily affects the chemical kinetics embedded in the flame structure and the net effect is small; in the stretched flames, its impact is mainly through n-heptane and the secondary fuel, H2, in modifying the flame temperature, with substantial effects.
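As an aside on the Chapter 5 analysis, the box-counting method for estimating the fractal dimension of a wrinkled front can be sketched as follows (a generic illustration on a synthetic curve, not the DNS data):

```python
import numpy as np

def box_count_dimension(points, sizes):
    """Count occupied boxes N(eps) at several box sizes eps and return the
    slope of log N versus log(1/eps), i.e. the box-counting dimension."""
    counts = []
    for eps in sizes:
        cells = np.unique(np.floor(points / eps).astype(int), axis=0)
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Synthetic "flame front": a sinusoidally wrinkled curve; dimension ~ 1.
t = np.linspace(0.0, 1.0, 20000)
front = np.column_stack([t, 0.1 * np.sin(40 * np.pi * t)])
sizes = [0.02, 0.01, 0.005, 0.0025]
print(f"box-counting dimension ~ {box_count_dimension(front, sizes):.2f}")
```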
On the Modeling of Thermal Radiation at the Top Surface of a Vacuum Arc Remelting Ingot
NASA Astrophysics Data System (ADS)
Delzant, P.-O.; Baqué, B.; Chapelle, P.; Jardy, A.
2018-02-01
Two models have been implemented for calculating the thermal radiation emitted at the ingot top in the VAR process, namely, a crude model that considers only radiative heat transfer between the free surface and electrode tip and a more detailed model that describes all radiative exchanges between the ingot, electrode, and crucible wall using a radiosity method. From the results of the second model, it is found that the radiative heat flux at the ingot top may depend heavily on the arc gap length and the electrode radius, but remains almost unaffected by variations of the electrode height. Both radiation models have been integrated into a CFD numerical code that simulates the growth and solidification of a VAR ingot. The simulation of a Ti-6-4 alloy melt shows that use of the detailed radiation model leads to some significant modification of the simulation results compared with the simple model. This is especially true during the hot-topping phase, where the top radiation plays an increasingly important role compared with the arc energy input. Thus, while the crude model has the advantage of its simplicity, use of the detailed model should be preferred.
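The paper's geometry is not reproduced here, but the gray-diffuse radiosity system that such a detailed model solves can be sketched for a toy three-surface enclosure with assumed emissivities, temperatures and view factors:

```python
import numpy as np

sigma = 5.670e-8                          # W/(m^2 K^4)
eps = np.array([0.4, 0.4, 0.7])           # ingot top, electrode tip, crucible wall
T   = np.array([1900.0, 1950.0, 800.0])   # K, illustrative values

# View factor matrix F[i, j]; rows sum to 1 for an enclosure (assumed values).
F = np.array([[0.00, 0.60, 0.40],
              [0.60, 0.00, 0.40],
              [0.30, 0.30, 0.40]])

# Radiosity: J_i = eps_i*sigma*T_i^4 + (1 - eps_i) * sum_j F_ij * J_j,
# rearranged to the linear system (I - diag(1 - eps) F) J = eps * Eb.
Eb = sigma * T**4
J = np.linalg.solve(np.eye(3) - (1.0 - eps)[:, None] * F, eps * Eb)
G = F @ J                                 # irradiation on each surface
q_net = J - G                             # net radiative flux leaving, W/m^2
print(f"net flux at ingot top: {q_net[0] / 1e3:.1f} kW/m^2")
```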
Software Assurance Curriculum Project Volume 1: Master of Software Assurance Reference Curriculum
2010-08-01
activity by providing a check on the relevance and currency of the process used to develop the MSwA2010 curriculum content. Figure 2 is an expansion of… random oracle model, symmetric crypto primitives, modes of operation, asymmetric crypto primitives (Chapter 5) [16]. Detailed design… encryption, public key encryption, digital signatures, message authentication codes, crypto protocols, cryptanalysis, and further detailed crypto…
Evidence from mixed hydrate nucleation for a funnel model of crystallization.
Hall, Kyle Wm; Carpendale, Sheelagh; Kusalik, Peter G
2016-10-25
The molecular-level details of crystallization remain unclear for many systems. Previous work has speculated on the phenomenological similarities between molecular crystallization and protein folding. Here we demonstrate that molecular crystallization can involve funnel-shaped potential energy landscapes through a detailed analysis of mixed gas hydrate nucleation, a prototypical multicomponent crystallization process. Through this, we contribute both: (i) a powerful conceptual framework for exploring and rationalizing molecular crystallization, and (ii) an explanation of phenomenological similarities between protein folding and crystallization. Such funnel-shaped potential energy landscapes may be typical of broad classes of molecular ordering processes, and can provide a new perspective for both studying and understanding these processes.
Climbing the ladder: capability maturity model integration level 3
NASA Astrophysics Data System (ADS)
Day, Bryce; Lutteroth, Christof
2011-02-01
This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and The Open Group Architecture Framework (TOGAF) methodology. Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies and the integration of the Rational Unified Process development process framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.
ERIC Educational Resources Information Center
Bodroza-Pantic, O.; Matic-Kekic, Snezana; Jakovljev, Bogdanka; Markovic, Doko
2008-01-01
In this paper the didactically-methodological procedure named the MTE-model of mathematics teaching (Motivation test-Teaching-Examination test) is suggested and recommended when the teacher has subsequent lessons. This model is presented in detail through the processing of a nonstandard theme--the theme of decomposition of planes. Its efficiency…
User Manual for SAHM package for VisTrails
Talbert, C.B.; Talbert, M.K.
2012-01-01
The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model. The four main advantages to using the combined VisTrails:SAHM package for species distribution modeling are: 1. formalization and tractable recording of the entire modeling process; 2. easier collaboration through a common modeling framework; 3. a user-friendly graphical interface to manage file input, model runs, and output; 4. extensibility to incorporate future and additional modeling routines and tools. This user manual provides detailed information on each module within the SAHM package, their inputs, outputs, common connections, optional arguments, and default settings. This information can also be accessed for individual modules by right-clicking on the documentation button for any module in VisTrails, or by right-clicking on any input or output for a module and selecting view documentation. This user manual is intended to accompany the user guide, which provides detailed instructions on how to install the SAHM package within VisTrails and presents information on the use of the package.
Stepwise construction of a metabolic network in Event-B: The heat shock response.
Sanwal, Usman; Petre, Luigia; Petre, Ion
2017-12-01
There is a high interest in constructing large, detailed computational models for biological processes. This is often done by putting together existing submodels and adding to them extra details/knowledge. The result of such approaches is usually a model that can only answer questions on a very specific level of detail, and thus, ultimately, is of limited use. We focus instead on an approach to systematically add details to a model, with formal verification of its consistency at each step. In this way, one obtains a set of reusable models, at different levels of abstraction, to be used for different purposes depending on the question to address. We demonstrate this approach using Event-B, a computational framework introduced to develop formal specifications of distributed software systems. We first describe how to model generic metabolic networks in Event-B. Then, we apply this method for modeling the biological heat shock response in eukaryotic cells, using Event-B refinement techniques. The advantage of using Event-B consists in having refinement as an intrinsic feature; this provides as a final result not only a correct model, but a chain of models automatically linked by refinement, each of which is provably correct and reusable. This is a proof-of-concept that refinement in Event-B is suitable for biomodeling, serving for mastering biological complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ribeiro, Jose; Silva, Cristovao; Mendes, Ricardo; Plaksin, Igor; Campos, Jose
2011-06-01
The use of emulsion explosives [EEx] for processing materials (compaction, welding and forming) requires the ability to perform detailed simulations of their detonation process [DP]. Detailed numerical simulations of the DP of this kind of explosive, characterized by a finite reaction zone thickness, are thought to be suitably performed using the Lee-Tarver reactive flow model. In this work a real-coded genetic algorithm methodology was used to estimate the 15 parameters of the reaction rate equation [RRE] of that model for a particular EEx. This methodology makes it possible, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to seek the 15 parameters of the RRE that fit the numerical results to the experimental ones. Mass averaging and the Plate-Gap Model were used to determine the shock data employed in the assessment of the unreacted explosive JWL EoS, and the thermochemical code THOR supplied the data used in the assessment of the detonation products JWL EoS. The obtained parameters allow a good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
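A skeleton of a real-coded genetic algorithm of the kind described is sketched below; the true objective would run the hydrocode and compare simulated with measured records, so the misfit function here is only a stand-in, and the bounds are placeholders:

```python
import random

BOUNDS = [(0.0, 1.0)] * 15          # placeholder bounds for the 15 RRE parameters

def misfit(params):
    """Stand-in objective; replace with simulated-vs-experimental error."""
    return sum((p - 0.3) ** 2 for p in params)

def ga(pop_size=60, generations=200, pm=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 5]            # keep the best fifth
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [x + random.random() * (y - x) for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(BOUNDS):                          # Gaussian mutation
                if random.random() < pm:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0.0, 0.05 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=misfit)

best = ga()
print("best misfit:", round(misfit(best), 6))
```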
Energy efficiency of batch and semi-batch (CCRO) reverse osmosis desalination.
Warsinger, David M; Tow, Emily W; Nayar, Kishor G; Maswadeh, Laith A; Lienhard V, John H
2016-12-01
As reverse osmosis (RO) desalination capacity increases worldwide, the need to reduce its specific energy consumption becomes more urgent. In addition to the incremental changes attainable with improved components such as membranes and pumps, more significant reduction of energy consumption can be achieved through time-varying RO processes including semi-batch processes such as closed-circuit reverse osmosis (CCRO) and fully-batch processes that have not yet been commercialized or modelled in detail. In this study, numerical models of the energy consumption of batch RO (BRO), CCRO, and the standard continuous RO process are detailed. Two new energy-efficient configurations of batch RO are analyzed. Batch systems use significantly less energy than continuous RO over a wide range of recovery ratios and source water salinities. Relative to continuous RO, models predict that CCRO and batch RO demonstrate up to 37% and 64% energy savings, respectively, for brackish water desalination at high water recovery. For batch RO and CCRO, the primary reductions in energy use stem from atmospheric pressure brine discharge and reduced streamwise variation in driving pressure. Fully-batch systems further reduce energy consumption by not mixing streams of different concentrations, which CCRO does. These results demonstrate that time-varying processes can significantly raise RO energy efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
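A back-of-envelope comparison helps explain why batch operation saves energy. In the reversible limit with zero salt permeation and a linear (van 't Hoff) osmotic pressure, the specific energy consumption of ideal single-stage continuous RO and ideal batch RO are (this idealization is ours, not the paper's full numerical model):

```latex
\[
  \mathrm{SEC}_{\mathrm{continuous}} = \frac{\pi_0}{1-r},
  \qquad
  \mathrm{SEC}_{\mathrm{batch}} = \frac{\pi_0}{r}\,\ln\frac{1}{1-r},
\]
```

where π0 is the feed osmotic pressure and r the water recovery ratio; at r = 0.8 the batch limit is roughly 2.0 π0 versus 5 π0 for single-stage continuous RO, because the batch driving pressure tracks the rising brine osmotic pressure instead of exceeding its final value throughout.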
Nie, Yifan; Liang, Chaoping; Cha, Pil-Ryung; Colombo, Luigi; Wallace, Robert M; Cho, Kyeongjae
2017-06-07
Controlled growth of crystalline solids is critical for device applications, and atomistic modeling methods have been developed for bulk crystalline solids. The kinetic Monte Carlo (KMC) simulation method provides detailed atomic-scale processes during solid growth over realistic time scales, but its application to the growth modeling of van der Waals (vdW) heterostructures has not yet been developed. Specifically, the growth of single-layered transition metal dichalcogenides (TMDs) currently faces tremendous challenges, and a detailed understanding based on KMC simulations would provide critical guidance for enabling controlled growth of vdW heterostructures. In this work, a KMC simulation method is developed for growth modeling of the vdW epitaxy of TMDs. The KMC method introduces the full set of material parameters for TMDs in bottom-up synthesis: metal and chalcogen adsorption/desorption/diffusion on the substrate and on the grown TMD surface, TMD stacking sequence, chalcogen/metal ratio, flake edge diffusion and vacancy diffusion. The KMC processes result in multiple kinetic behaviors associated with various growth behaviors observed in experiments. Different phenomena observed during the vdW epitaxy process are analysed in terms of complex competitions among multiple kinetic processes. The KMC method is used in the investigation and prediction of growth mechanisms, providing qualitative suggestions to guide experimental study.
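At its core, such a KMC model repeatedly selects one event with probability proportional to its rate and advances the clock by an exponential increment; a minimal sketch (with made-up rates, far simpler than the paper's event catalog) is:

```python
import math, random

def kmc_step(event_rates, rng=random.random):
    """Pick one event with probability proportional to its rate and return
    (event label, exponential time increment) -- the Gillespie/BKL scheme."""
    total = sum(r for _, r in event_rates)
    x, acc = rng() * total, 0.0
    for label, rate in event_rates:
        acc += rate
        if x <= acc:
            break
    dt = -math.log(1.0 - rng()) / total   # exponential waiting time
    return label, dt

# Illustrative event catalog (rates in 1/s are made up; the paper's catalog
# also covers stacking sequence, flake edge and vacancy diffusion in detail).
events = [("adsorb_metal", 2.0), ("adsorb_chalcogen", 6.0),
          ("desorb_chalcogen", 1.5), ("edge_diffusion", 40.0),
          ("vacancy_hop", 0.8)]
t = 0.0
for _ in range(5):
    ev, dt = kmc_step(events)
    t += dt
    print(f"t = {t:.4f} s: {ev}")
```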
Evaluation of the flame propagation within an SI engine using flame imaging and LES
NASA Astrophysics Data System (ADS)
He, Chao; Kuenne, Guido; Yildar, Esra; van Oijen, Jeroen; di Mare, Francesca; Sadiki, Amsini; Ding, Carl-Philipp; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Janicka, Johannes
2017-11-01
This work shows experiments and simulations of the fired operation of a spark ignition engine with port-fuelled injection. The test rig considered is an optically accessible single cylinder engine specifically designed at TU Darmstadt for the detailed investigation of in-cylinder processes and model validation. The engine was operated under lean conditions using iso-octane as a substitute for gasoline. Experiments have been conducted to provide a sound database of the combustion process. A planar flame imaging technique has been applied within the swirl- and tumble-planes to provide statistical information on the combustion process to complement a pressure-based comparison between simulation and experiments. This data is then analysed and used to assess the large eddy simulation performed within this work. For the simulation, the engine code KIVA has been extended by the dynamically thickened flame model combined with chemistry reduction by means of pressure dependent tabulation. Sixty cycles have been simulated to perform a statistical evaluation. Based on a detailed comparison with the experimental data, a systematic study has been conducted to obtain insight into the most crucial modelling uncertainties.
Detailed Modeling of Physical Processes in Electron Sources for Accelerator Applications
NASA Astrophysics Data System (ADS)
Chubenko, Oksana; Afanasev, Andrei
2017-01-01
At present, electron sources are essential in a wide range of applications - from common technical use to exploring the nature of matter. Depending on the application requirements, different methods and materials are used to generate electrons. State-of-the-art accelerator applications set a number of often-conflicting requirements for electron sources (e.g., quantum efficiency vs. polarization, current density vs. lifetime, etc). Development of advanced electron sources includes modeling and design of cathodes, material growth, fabrication of cathodes, and cathode testing. The detailed simulation and modeling of physical processes is required in order to shed light on the exact mechanisms of electron emission and to develop new-generation electron sources with optimized efficiency. The purpose of the present work is to study physical processes in advanced electron sources and develop scientific tools, which could be used to predict electron emission from novel nano-structured materials. In particular, the area of interest includes bulk/superlattice gallium arsenide (bulk/SL GaAs) photo-emitters and nitrogen-incorporated ultrananocrystalline diamond ((N)UNCD) photo/field-emitters. Work supported by The George Washington University and Euclid TechLabs LLC.
Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O
2017-08-01
Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Model and Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions, which is a significant advantage for modeling perioperative processes.
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
DOT National Transportation Integrated Search
2012-09-01
The final report for the Model Orlando Regionally Efficient Travel Management Coordination Center (MORE TMCC) presents the details of the 2-year process of the partial deployment of the original MORE TMCC design created in Phase I of this project...
Ponce, Carlos; Bravo, Carolina; Alonso, Juan Carlos
2014-01-01
Studies evaluating agri-environmental schemes (AES) usually focus on responses of single species or functional groups. Analyses are generally based on simple habitat measurements but ignore food availability and other important factors. This can limit our understanding of the ultimate causes determining the reactions of birds to AES. We investigated these issues in detail and throughout the main seasons of a bird's annual cycle (mating, postfledging and wintering) in a dry cereal farmland in a Special Protection Area for farmland birds in central Spain. First, we modeled four bird response parameters (abundance, species richness, diversity and “Species of European Conservation Concern” [SPEC]-score), using detailed food availability and vegetation structure measurements (food models). Second, we fitted new models, built using only substrate composition variables (habitat models). Whereas habitat models revealed that both fields included in the AES and fields not included benefited birds, food models went a step further, including seed biomass and arthropod biomass as important predictors in winter and in the postfledging season, respectively. The validation process showed that food models were on average 13% better (up to 20% in some variables) in predicting bird responses. However, the cost of obtaining data for food models was five times higher than for habitat models. This novel approach highlighted the importance of food availability-related causal processes involved in bird responses to AES, which remained undetected when using conventional substrate composition assessment models. Despite their higher costs, measurements of food availability add important details to interpret the reactions of the bird community to AES interventions and thus facilitate evaluating the real efficiency of AES programs. PMID:25165523
Distributed parameterization of complex terrain
NASA Astrophysics Data System (ADS)
Band, Lawrence E.
1991-03-01
This paper addresses the incorporation of high resolution topography, soils and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACM). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACM limit the number of independent model runs and parameterizations that are feasible to accomplish for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear that the parameterization process for this approach must be automated such that large spatial databases collected from remotely sensed images, digital terrain models and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Wang, Y.; Lau, W.; Baker, R. D.
2004-01-01
The onset of the southeast Asian monsoon during 1997 and 1998 was simulated with a coupled mesoscale atmospheric model (MM5) and a detailed land surface model. The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The simulation with the land surface model captured basic signatures of the monsoon onset processes and associated rainfall statistics. The sensitivity tests indicated that land surface processes had a greater impact on the simulated rainfall results than that of a small sea surface temperature change during the onset period. In both the 1997 and 1998 cases, the simulations were significantly improved by including the land surface processes. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern cold front intrusion from southern China. The surface sensible and latent heat exchange between the land and atmosphere modified the low-level temperature distribution and gradient, and therefore the low-level wind. The more realistic forcing of the sensible and latent heat from the detailed land surface model improved the monsoon rainfall and associated wind simulation. The model results will be compared to the simulation of the 6-7 May 2000 Missouri flash flood event. In addition, the impact of model initialization and land surface treatment on timing, intensity, and location of extreme precipitation will be examined.
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services are using images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web-services operate completely automatically. An indisputable advantage of image based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models is lower than those of laser scanning data. Within this contribution, we investigate the results of automated web-services for image based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series of different digital cameras. Two different web-services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of model generation is given, considering the interactive and processing time costs.
Numerical models of salt marsh evolution: ecological, geomorphic, and climatic factors
Fagherazzi, Sergio; Kirwan, Matthew L.; Mudd, Simon M.; Guntenspergen, Glenn R.; Temmerman, Stijn; D'Alpaos, Andrea; van de Koppel, Johan; Rybczyk, John; Reyes, Enrique; Craft, Chris; Clough, Jonathan
2012-01-01
Salt marshes are delicate landforms at the boundary between the sea and land. These ecosystems support a diverse biota that modifies the erosive characteristics of the substrate and mediates sediment transport processes. Here we present a broad overview of recent numerical models that quantify the formation and evolution of salt marshes under different physical and ecological drivers. In particular, we focus on the coupling between geomorphological and ecological processes and on how these feedbacks are included in predictive models of landform evolution. We describe in detail models that simulate fluxes of water, organic matter, and sediments in salt marshes. The interplay between biological and morphological processes often produces a distinct scarp between salt marshes and tidal flats. Numerical models can capture the dynamics of this boundary and the progradation or regression of the marsh in time. Tidal channels are also key features of the marsh landscape, flooding and draining the marsh platform and providing a source of sediments and nutrients to the marsh ecosystem. In recent years, several numerical models have been developed to describe the morphogenesis and long-term dynamics of salt marsh channels. Finally, salt marshes are highly sensitive to the effects of long-term climatic change. We therefore discuss in detail how numerical models have been used to determine salt marsh survival under different scenarios of sea level rise.
Quantifying the Global Nitrous Oxide Emissions Using a Trait-based Biogeochemistry Model
NASA Astrophysics Data System (ADS)
Zhuang, Q.; Yu, T.
2017-12-01
Nitrogen is an essential element for the global biogeochemical cycle. It is a key nutrient for organisms, and N compounds including nitrous oxide significantly influence the global climate. The activities of bacteria and archaea are responsible for nitrification and denitrification in a wide variety of environments, so microbes play an important role in the nitrogen cycle in soils. To date, most existing process-based models have treated nitrification and denitrification as chemical reactions driven by soil physical variables including soil temperature and moisture. In general, the effect of microbes on N cycling has not been modeled in sufficient detail. Soil organic carbon also affects the N cycle because it supplies energy to microbes. In this study, a trait-based biogeochemistry model quantifying N2O emissions from terrestrial ecosystems is developed based on an extant process-based model, TEM (Terrestrial Ecosystem Model). Specifically, the improvements to TEM include: 1) Incorporating the N fixation process to account for the inflow of N from the atmosphere to the biosphere; 2) Implementing the effects of microbial dynamics on the nitrification process; 3) Fully considering the effects of carbon cycling on nitrogen cycling following the principles of carbon-nitrogen stoichiometry in soils, plants, and microbes. The difference between simulations with and without the consideration of bacterial activity lies between 5% and 25%, depending on climate conditions and vegetation types. The trait-based module allows a more detailed estimation of global N2O emissions.
Comparing fire spread algorithms using equivalence testing and neutral landscape models
Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson
2009-01-01
We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...
Modelling of and Conjecturing on a Soccer Ball in a Korean Eighth Grade Mathematics Classroom
ERIC Educational Resources Information Center
Lee, Kyeong-Hwa
2011-01-01
The purpose of this article was to describe the task design and implementation of cultural artefacts in a mathematics lesson based on the integration of modelling and conjecturing perspectives. The conceived process of integrating a soccer ball into mathematics lessons via modelling- and conjecturing-based instruction was first detailed. Next, the…
ERIC Educational Resources Information Center
Bhola, H. S.
The definitional and conceptual structure of the Esman model of institution building is described in great detail, emphasizing its philosophic and process assumptions and its latent dynamics. The author systematically critiques the Esman model in terms of its (1) specificity to the universe of institution building, (2) generalizability across…
ERIC Educational Resources Information Center
Kenny, John; Fluck, Andrew; Jetson, Tim
2012-01-01
This paper presents a detailed case study of the development and implementation of a quantifiable academic workload model in the education faculty of an Australian university. Flowing from the enterprise bargaining process, the Academic Staff Agreement required the implementation of a workload allocation model for academics that was quantifiable…
NASA Technical Reports Server (NTRS)
Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.
2007-01-01
We present the new version of the ALI-ARMS (for Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows simultaneous, self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting for the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows reevaluating the thermalization efficiency of the absorbed solar ultraviolet energy and the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.
Efficient Use of Video for 3d Modelling of Cultural Heritage Objects
NASA Astrophysics Data System (ADS)
Alsadik, B.; Gerke, M.; Vosselman, G.
2015-03-01
Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques, because the latter needs thorough planning and proficiency. However, one faces mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model comparable with models produced by still imaging. Two experiments for modelling a building and a monument are conducted using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
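One plausible way to implement the blur-screening step described above is to score frame sharpness by the variance of the Laplacian, a common proxy; the abstract does not specify the measure, and the file names, subsampling stride and threshold below are illustrative assumptions only:

```python
# Hedged sketch: rank subsampled video frames by a Laplacian-variance
# sharpness score and keep only frames above an assumed threshold.
import cv2

def sharpness(path):
    """Variance of the Laplacian of the grayscale frame; higher = sharper."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Hypothetical frames extracted from a 1920x1080 video; keeping every 25th
# frame thins the short-baseline sequence before blur screening.
frames = [f"frame_{i:04d}.png" for i in range(0, 3000, 25)]
sharp_frames = [f for f in frames if sharpness(f) > 100.0]  # assumed cutoff
```

Frames surviving such a filter would then be checked for object coverage before being passed to the SFM pipeline.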
Shrestha, Badri Man; Haylor, John
2017-11-15
Rat models of renal transplant are used to investigate immunologic processes and responses to therapeutic agents before their translation into routine clinical practice. In this study, we have described details of rat surgical anatomy and our experiences with the microvascular surgical technique relevant to renal transplant by employing donor inferior vena cava and aortic conduits. For this study, 175 rats (151 Lewis and 24 Fisher) were used to establish the Fisher-Lewis rat model of chronic allograft injury at our institution. Anatomic and technical details were recorded during the period of training and establishment of the model. A final group of 12 transplanted rats were studied for an average duration of 51 weeks for the Lewis-to-Lewis isografts (5 rats) and 42 weeks for the Fisher-to-Lewis allografts (7 rats). Functional measurements and histology confirmed the diagnosis of chronic allograft injury. Mastering the anatomic details and microvascular surgical techniques can lead to the successful establishment of an experimental renal transplant model.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
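As an illustration of the global-optimization idea only (not the authors' Ferret/Locust code, which interfaces with the XSPEC libraries), a minimal sketch using SciPy's differential evolution to fit a toy absorbed power-law spectrum; the model form, bounds and synthetic data are invented for the example:

```python
# Toy sketch of global optimization for X-ray spectral fitting: minimize a
# chi-squared statistic over a bounded parameter box instead of iterating
# from a single starting guess.
import numpy as np
from scipy.optimize import differential_evolution

E = np.linspace(0.5, 10.0, 100)  # energy grid (keV), illustrative

def absorbed_powerlaw(E, norm, gamma, nH):
    # Crude E^-3 absorption proxy; real fits would use XSPEC's phabs etc.
    return norm * E**(-gamma) * np.exp(-nH * E**-3)

true = (5.0, 1.7, 0.4)
counts = np.random.default_rng(1).poisson(absorbed_powerlaw(E, *true) * 50) / 50

def chi2(params):
    model = absorbed_powerlaw(E, *params)
    return np.sum((counts - model) ** 2 / np.maximum(model, 1e-9))

# Bounds replace starting guesses: the evolutionary search explores the
# whole box, reducing the risk of trapping in a local minimum.
result = differential_evolution(chi2, bounds=[(0.1, 20), (0, 4), (0, 2)], seed=1)
print(result.x)  # recovered (norm, gamma, nH)
```

Because the search samples the entire bounded parameter space, families of near-equivalent solutions and parameter degeneracies become visible, which is the property the abstract highlights.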
Current State of the Art Historic Building Information Modelling
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2017-08-01
In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.
NASA Technical Reports Server (NTRS)
Mock, W. D.; Latham, R. A.
1982-01-01
The NASTRAN model plan for the fairing structure was expanded in detail to generate the NASTRAN model of this substructure. The grid point coordinates, element definitions, material properties, and sizing data for each element were specified. The fairing model was thoroughly checked out for continuity, connectivity, and constraints. The substructure was processed for structural influence coefficients (SIC) point loadings to determine the deflection characteristics of the fairing model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.
NASA Astrophysics Data System (ADS)
Frederick, B. C.; Gooch, B. T.; Richter, T.; Young, D. A.; Blankenship, D. D.; Aitken, A.; Siegert, M. J.
2013-12-01
Topography, sediment distribution and heat flux are all key boundary conditions governing the stability of the East Antarctic ice sheet (EAIS). Recent scientific scrutiny has been focused on several large, deep, interior EAIS basins including the submarine basal topography characterizing the Aurora Subglacial Basin (ASB). Numerical ice sheet models require accurate deformable sediment distribution and lithologic character constraints to estimate overall flow velocities and potential instability. To date, such estimates across the ASB have been derived from low-resolution satellite data or historic aerogeophysical surveys conducted prior to the advent of GPS. These rough basal condition estimates have led to poorly constrained ice sheet stability models for this remote 200,000 sq km expanse of the ASB. Here we present a significantly improved quantitative model characterizing the subglacial lithology and sediment in the ASB region. The product of comprehensive ICECAP (2008-2013) aerogeophysical data processing, this sedimentary basin model details the expanse and thickness of probable Wilkes Land subglacial sedimentary deposits and density contrast boundaries indicative of distinct subglacial lithologic units. As part of the process, BEDMAP2 subglacial topographic results were improved through the additional incorporation of ice-penetrating radar data collected during ICECAP field seasons 2010-2013. Detailed potential-field data pre-processing was completed, as well as a comprehensive evaluation of crustal density contrasts based on the gravity power spectrum; a high-pass filter was then applied to remove longer crustal wavelengths from the gravity dataset prior to inversion. Gridded BEDMAP2+ ice and bed radar surfaces were then utilized to establish bounding density models for the 3D gravity inversion process to yield probable sedimentary basin anomalies. Gravity inversion results were iteratively evaluated against radar along-track RMS deviation and gravity and magnetic depth-to-basement results. This geophysical data processing methodology provides a substantial improvement over prior Wilkes Land sedimentary basin estimates, yielding a higher resolution model based upon concurrent iteration of several aerogeophysical datasets. This more detailed subglacial sedimentary basin model for Wilkes Land, East Antarctica will not only contribute to vast improvements in EAIS ice sheet model constraints, but will also provide significant quantifiable controls for subglacial hydrologic and geothermal flux estimates that are sizable contributors to the cold-based, deep interior basal ice dynamics dominant in the Wilkes Land region.
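For orientation, the simplest end-member of such a gravity-to-sediment calculation is the infinite Bouguer slab, sketched below; the density contrast is an assumed illustrative value, and the actual study uses an iterative 3D inversion constrained by radar and magnetics rather than this closed form:

```python
# First-order sketch: invert a gravity anomaly for sediment thickness using
# the infinite Bouguer slab approximation, g = 2*pi*G*drho*h.
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
delta_rho = -400.0     # assumed sediment-basement density contrast, kg/m^3

def slab_thickness(anomaly_mgal):
    """Thickness (m) of an infinite slab producing the given anomaly."""
    anomaly_si = anomaly_mgal * 1e-5   # 1 mGal = 1e-5 m/s^2
    return anomaly_si / (2 * np.pi * G * delta_rho)

print(slab_thickness(-16.8))  # approx. 1000 m of low-density sediment
```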
Using Agile Project Management to Enhance the Performance of Instructional Design Teams
ERIC Educational Resources Information Center
Sweeney, David S.; Cifuentes, Lauren
2010-01-01
Instructional design models describe in detail methodologies for designing effective instruction. Several widely adopted models include suggestions for managing instructional design projects. However, these suggestions focus on how to manage the instructional design steps rather than the instructional design and development team process. The…
Goal Structuring Notation in a Radiation Hardening Assurance Case for COTS-Based Spacecraft
NASA Technical Reports Server (NTRS)
Witulski, A.; Austin, R.; Evans, J.; Mahadevan, N.; Karsai, G.; Sierawski, B.; LaBel, K.; Reed, R.
2016-01-01
The attached presentation summarizes how mission assurance is supported by model-based representations of spacecraft systems that can define sub-system functionality, interfacing, and reliability parameters, and it details a new paradigm for assurance: a model-centric, rather than document-centric, process.
Aspects of the Cognitive Model of Physics Problem Solving.
ERIC Educational Resources Information Center
Brekke, Stewart E.
Various aspects of the cognitive model of physics problem solving are discussed in detail including relevant cues, encoding, memory, and input stimuli. The learning process involved in the recognition of familiar and non-familiar sensory stimuli is highlighted. Its four components include selection, acquisition, construction, and integration. The…
Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking
ERIC Educational Resources Information Center
Rash, Agnes M.; Winkel, Brian J.
2009-01-01
This paper describes the development of the general birth and death process, from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and units within courses and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…
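As a minimal sketch of the special case the abstract refers to: for a pure birth process with constant rate lambda (all death rates set to zero), the forward equations and their solution are

```latex
% Pure-birth special case of the birth-and-death forward equations
\begin{align}
  P_0'(t) &= -\lambda P_0(t),\\
  P_n'(t) &= \lambda P_{n-1}(t) - \lambda P_n(t), \qquad n \ge 1,
\end{align}
% with P_0(0) = 1; solving successively (or by induction) gives the Poisson law
\begin{equation}
  P_n(t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}.
\end{equation}
```

Holding the birth rate constant and suppressing deaths is exactly the reduction that turns the general birth-and-death model into the Poisson process.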
Modeling microcirculatory blood flow: current state and future perspectives.
Gompper, Gerhard; Fedosov, Dmitry A
2016-01-01
Microvascular blood flow determines a number of important physiological processes of an organism in health and disease. Therefore, a detailed understanding of microvascular blood flow would significantly advance biophysical and biomedical research and its applications. Current developments in modeling of microcirculatory blood flow already make it possible to go beyond available experimental measurements and have a large potential to elucidate blood flow behavior in normal and diseased microvascular networks. There exist detailed models of blood flow at the single-cell level as well as simplified models of the flow through microcirculatory networks, which are reviewed and discussed here. The combination of these models provides promising prospects for better understanding of blood flow behavior and transport properties locally as well as globally within large microvascular networks.
Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan
2013-04-01
Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following a description of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.
Strategic Project Management at the NASA Kennedy Space Center
NASA Technical Reports Server (NTRS)
Lavelle, Jerome P.
2000-01-01
This paper describes Project Management at NASA's Kennedy Space Center (KSC) from a strategic perspective. It develops the historical context of the agency's and center's strategic planning process and argues that now is the time for KSC to become a center of excellence in project management. The author describes project management activities at the center and details observations on those efforts. Finally, the author describes the Strategic Project Management Process Model as a conceptual model which could assist KSC in defining an appropriate project management process system at the center.
Evolutionary biology through the lens of budding yeast comparative genomics.
Marsit, Souhir; Leducq, Jean-Baptiste; Durand, Éléonore; Marchant, Axelle; Filteau, Marie; Landry, Christian R
2017-10-01
The budding yeast Saccharomyces cerevisiae is a highly advanced model system for studying genetics, cell biology and systems biology. Over the past decade, the application of high-throughput sequencing technologies to this species has contributed to this yeast also becoming an important model for evolutionary genomics. Indeed, comparative genomic analyses of laboratory, wild and domesticated yeast populations are providing unprecedented detail about many of the processes that govern evolution, including long-term processes, such as reproductive isolation and speciation, and short-term processes, such as adaptation to natural and domestication-related environments.
The Morphology and Uniformity of Circumstellar OH/H2O Masers around OH/IR Stars
NASA Astrophysics Data System (ADS)
Felli, Derek Sean
Even though low mass stars (< 8 solar masses) are far more numerous, the more massive stars drive the chemical evolution of galaxies from which the next generation of stars and planets can form. Understanding mass loss of asymptotic giant branch stars contributes to our understanding of the chemical evolution of the galaxy, stellar populations, and star formation history. Stars with mass > 8 solar masses go supernova. In both cases, these stars enrich their environments with elements heavier than simple hydrogen and helium molecules. While some general information about how stars die and form planetary nebulae is known, specific details are missing due to a lack of high-resolution observations and analysis of the intermediate stages. For example, we know that mass loss in stars creates morphologically diverse planetary nebulae, but we do not know the uniformity of these processes, and therefore lack detailed models to better predict how spherically symmetric stars form asymmetric nebulae. We have selected a specific group of late-stage stars and observed them at different scales to reveal the uniformity of mass loss through different layers close to the star. This includes observing nearby masers that trace the molecular shell structure around these stars. This study revealed detailed structure that was analyzed for uniformity to place constraints on how the mass loss processes behave in models. These results will feed into our ability to create more detailed models to better predict the chemical evolution of the next generation of stars and planets.
ERIC Educational Resources Information Center
Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.
2017-01-01
Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…
ERIC Educational Resources Information Center
Xiang, Lin
2011-01-01
This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on…
Modelling tidewater glacier calving: from detailed process models to simple calving laws
NASA Astrophysics Data System (ADS)
Benn, Doug; Åström, Jan; Zwinger, Thomas; Todd, Joe; Nick, Faezeh
2017-04-01
The simple calving laws currently used in ice sheet models do not adequately reflect the complexity and diversity of calving processes. To be effective, calving laws must be grounded in a sound understanding of how calving actually works. We have developed a new approach to formulating calving laws, using a) the Helsinki Discrete Element Model (HiDEM) to explicitly model fracture and calving processes, and b) the full-Stokes continuum model Elmer/Ice to identify critical stress states associated with HiDEM calving events. A range of observed calving processes emerges spontaneously from HiDEM in response to variations in ice-front buoyancy and the size of subaqueous undercuts, and we show that HiDEM calving events are associated with characteristic stress patterns simulated in Elmer/Ice. Our results open the way to developing calving laws that properly reflect the diversity of calving processes, and provide a framework for a unified theory of the calving process continuum.
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is highly complex and non-linear, and many parameters are not easy to measure directly on-line, so soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine that can control the fitting accuracy and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and good generalization for an appropriate choice of v.
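A minimal sketch of a v-SVR soft sensor using scikit-learn's NuSVR, with synthetic stand-in data; the feature set and parameter values are illustrative assumptions, not the paper's setup:

```python
# Hedged sketch: nu-SVR as a soft sensor mapping easily measured on-line
# variables to a hard-to-measure target on synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
# Hypothetical on-line measurements: temperature, pH, dissolved O2, feed rate
X = rng.normal(size=(200, 4))
# Hypothetical soft-sensor target, e.g. biomass concentration
y = X @ np.array([0.8, -0.3, 0.5, 1.2]) + 0.1 * rng.normal(size=200)

# nu (the "v" parameter) bounds the fraction of margin errors from above and
# the fraction of support vectors from below, trading accuracy vs. sparsity.
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```

In NuSVR, tuning nu is the control knob the abstract refers to: larger values admit more training errors but use more support vectors, and vice versa.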
Modeling fMRI signals can provide insights into neural processing in the cerebral cortex
Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo
2015-01-01
Every stimulus or task activates multiple areas in the mammalian cortex. These distributed activations can be measured with functional magnetic resonance imaging (fMRI), which has the best spatial resolution among the noninvasive brain imaging methods. Unfortunately, the relationship between the fMRI activations and distributed cortical processing has remained unclear, both because the coupling between neural and fMRI activations has remained poorly understood and because fMRI voxels are too large to directly sense the local neural events. To get an idea of the local processing given the macroscopic data, we need models to simulate the neural activity and to provide output that can be compared with fMRI data. Such models can describe neural mechanisms as mathematical functions between input and output in a specific system, with little correspondence to physiological mechanisms. Alternatively, models can be biomimetic, including biological details with straightforward correspondence to experimental data. After careful balancing between complexity, computational efficiency, and realism, a biomimetic simulation should be able to provide insight into how biological structures or functions contribute to actual data processing as well as to promote theory-driven neuroscience experiments. This review analyzes the requirements for validating system-level computational models with fMRI. In particular, we study mesoscopic biomimetic models, which include a limited set of details from real-life networks and enable system-level simulations of neural mass action. In addition, we discuss how recent developments in neurophysiology and biophysics may significantly advance the modelling of fMRI signals. PMID:25972586
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC.
To compile its projections of future employment levels, the Bureau of Labor Statistics (BLS) combines the following five interlinked models in a six-step process: a labor force model, an econometric model of the U.S. economy, an industry activity model, an industry labor demand model, and an occupational labor demand model. The BLS was asked to…
22 CFR 124.2 - Exemptions for training and military service.
Code of Federal Regulations, 2010 CFR
2010-04-01
... methods and tools include the development and/or use of mockups, computer models and simulations, and test facilities. (iii) Manufacturing know-how, such as: Information that provides detailed manufacturing processes...
An Activation-Based Model of Sentence Processing as Skilled Memory Retrieval
ERIC Educational Resources Information Center
Lewis, Richard L.; Vasishth, Shravan
2005-01-01
We present a detailed process theory of the moment-by-moment working-memory retrievals and associated control structure that subserve sentence comprehension. The theory is derived from the application of independently motivated principles of memory and cognitive skill to the specialized task of sentence parsing. The resulting theory construes…
ERIC Educational Resources Information Center
Watson, George; Crossley, Michael
2001-01-01
Examines the introduction and evolution of the Strategic Management Process in England's further education sector. Critiques the transfer of business-sector management models to postsecondary education, reviews related policy literature, and summarizes a detailed longitudinal study of cultural change in one college embarking upon incorporation.…
The Four Elements: New Models for a Subversive Dramaturgy.
ERIC Educational Resources Information Center
Rudakoff, Judith
2003-01-01
Characterizes dramaturgy as the space in which an artist conceives and germinates individualized artistic processes to facilitate and instigate the transmission of creativity. Explains a process, beginning with a detailed examination of the Four Elements--Air, Earth, Water, and Fire--which can be used to create a new work or to analyze existing plays. Notes that…
The mathematical modeling of rapid solidification processing. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Gutierrez-Miravete, E.
1986-01-01
The detailed formulation of and the results obtained from a continuum mechanics-based mathematical model of the planar flow melt spinning (PFMS) rapid solidification system are presented and discussed. The numerical algorithm proposed is capable of computing the cooling and freezing rates as well as the fluid flow and capillary phenomena which take place inside the molten puddle formed in the PFMS process. The FORTRAN listings of some of the most useful computer programs and a collection of appendices describing the basic equations used for the modeling are included.
NASA Astrophysics Data System (ADS)
Ciminelli, Caterina; Dell'Olio, Francesco; Armenise, Mario N.; Iacomacci, Francesco; Pasquali, Franca; Formaro, Roberto
2017-11-01
A fiber optic digital link for on-board data handling is modeled, designed and optimized in this paper. Design requirements and constraints relevant to the link, which is conceived within novel on-board processing architectures, are discussed. Two possible link configurations are investigated, showing their advantages and disadvantages. An accurate mathematical model of each link component and of the entire system is reported, and results of link simulations based on those models are presented. Finally, some details on the optimized design are provided.
Simulations of Neon Pellets for Plasma Disruption Mitigation in Tokamaks
NASA Astrophysics Data System (ADS)
Bosviel, Nicolas; Samulyak, Roman; Parks, Paul
2017-10-01
Numerical studies of the ablation of neon pellets in tokamaks in the plasma disruption mitigation parameter space have been performed using a time-dependent pellet ablation model based on the front tracking code FronTier-MHD. The main features of the model include the explicit tracking of the solid pellet/ablated gas interface, a self-consistent evolving potential distribution in the ablation cloud, JxB forces, atomic processes, and an improved electrical conductivity model. The equation of state model accounts for atomic processes in the ablation cloud as well as deviations from the ideal gas law in the dense, cold layers of neon gas near the pellet surface. Simulations predict processes in the ablation cloud and pellet ablation rates and address the sensitivity of pellet ablation processes to details of physics models, in particular the equation of state.
Classification of processes involved in sharing individual participant data from clinical trials
Ohmann, Christian; Canham, Steve; Banzi, Rita; Kuchinke, Wolfgang; Battaglia, Serena
2018-01-01
Background: In recent years, a cultural change in the handling of data from research has resulted in the strong promotion of a culture of openness and increased sharing of data. In the area of clinical trials, sharing of individual participant data involves a complex set of processes and the interaction of many actors and actions. Individual services/tools to support data sharing are available, but what is missing is a detailed, structured and comprehensive list of processes/subprocesses involved and tools/services needed. Methods: Principles and recommendations from a published data sharing consensus document are analysed in detail by a small expert group. Processes/subprocesses involved in data sharing are identified and linked to actors and possible services/tools. Definitions are adapted from the business process model and notation (BPMN) and applied in the analysis. Results: A detailed and comprehensive list of individual processes/subprocesses involved in data sharing, structured according to 9 main processes, is provided. Possible tools/services to support these processes/subprocesses are identified and grouped according to major type of support. Conclusions: The list of individual processes/subprocesses and tools/services identified is a first step towards development of a generic framework or architecture for sharing of data from clinical trials. Such a framework is strongly needed to give an overview of how various actors, research processes and services could form an interoperable system for data sharing. PMID:29623192
Online Deviation Detection for Medical Processes
Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.
2014-01-01
Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343
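To make the idea concrete, here is a toy sketch of on-line deviation detection against a drastically simplified process model; the step names and allowed transitions are invented for illustration and are not the authors' blood-transfusion model:

```python
# Hedged sketch: compare an observed event stream against the transitions
# allowed by a (much simplified) process model, reporting deviations as
# they occur rather than after the fact.
ALLOWED = {
    "start": {"verify_order"},
    "verify_order": {"check_patient_id"},
    "check_patient_id": {"check_blood_unit"},
    "check_blood_unit": {"transfuse"},
    "transfuse": {"monitor", "end"},
    "monitor": {"end"},
}

def detect_deviations(trace):
    state = "start"
    for event in trace:
        if event not in ALLOWED.get(state, set()):
            yield (state, event)  # deviation: event not allowed here
        state = event

# A performer skips the patient-ID check:
trace = ["verify_order", "check_blood_unit", "transfuse", "end"]
print(list(detect_deviations(trace)))  # -> [('verify_order', 'check_blood_unit')]
```

A real detector would track the coordinated activities of multiple performers and exceptional paths, but the core check, observed event versus model-permitted next steps, has this shape.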
NASA Astrophysics Data System (ADS)
Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.
2015-12-01
A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.
Data assimilation and model evaluation experiment datasets
NASA Technical Reports Server (NTRS)
Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.
1994-01-01
The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.
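The Optimum Thermal Interpolation System step is, at heart, an optimal-interpolation analysis; as a hedged sketch of the standard form (the package's exact weighting may differ):

```latex
% Classical optimal-interpolation (OI) update: analysis = background + gain * innovation
\begin{align}
  \mathbf{x}_a &= \mathbf{x}_b + \mathbf{K}\,(\mathbf{y} - \mathbf{H}\mathbf{x}_b),\\
  \mathbf{K} &= \mathbf{B}\mathbf{H}^{\mathsf{T}}\,(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R})^{-1},
\end{align}
```

where x_b is the background field (e.g., a temperature first guess), y the observations, H the observation operator, and B and R the background and observation error covariances.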
Customer-centered careflow modeling based on guidelines.
Huang, Biqing; Zhu, Peng; Wu, Cheng
2012-10-01
In contemporary society, customer-centered health care, which stresses customer participation and long-term tailored care, is inevitably becoming a trend. Compared with the hospital- or physician-centered healthcare process, the customer-centered healthcare process requires more knowledge, and modeling such a process is extremely complex. Thus, building a care process model for a specific customer is cost prohibitive. In addition, during the execution of a care process model, the information system should have the flexibility to modify the model so that it adapts to changes in the healthcare process. Therefore, supporting the process in a flexible, cost-effective way is a key challenge for information technology. To meet this challenge, we first analyze the various kinds of knowledge used in process modeling, illustrate their characteristics, and detail their roles and effects in careflow modeling. Second, we propose a methodology to manage the lifecycle of healthcare process modeling, with which models can be built gradually, conveniently, and efficiently. In this lifecycle, different levels of process models are established based on the kinds of knowledge involved, and a diffusion strategy for these process models is designed. Third, the architecture and a prototype of the system supporting the process modeling and its lifecycle are given. This careflow system also considers compatibility with legacy systems and authority problems. Finally, an example is provided to demonstrate the implementation of the careflow system.
ERIC Educational Resources Information Center
Coltheart, Max; Tree, Jeremy J.; Saunders, Steven J.
2010-01-01
Woollams, Lambon Ralph, Plaut, and Patterson (see record 2007-05396-004) reported detailed data on reading in 51 cases of semantic dementia. They simulated some aspects of these data using a connectionist parallel distributed processing (PDP) triangle model of reading. We argue here that a different model of reading, the dual route cascaded (DRC)…
NASA Technical Reports Server (NTRS)
Wang, Yansen; Tao, W.-K.; Lau, K.-M.; Wetzel, Peter J.
2004-01-01
The onset of the southeast Asian monsoon during 1997 and 1998 was simulated by coupling a mesoscale atmospheric model (MM5) with a detailed land surface model, PLACE (the Parameterization for Land-Atmosphere-Cloud Exchange). The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The control simulation with the PLACE land surface model and variable sea surface temperature captured the basic signatures of the monsoon onset processes and associated rainfall statistics. Sensitivity tests indicated that simulations were significantly improved by including the PLACE land surface model. The mechanisms by which land surface processes affect the moisture transport and convection during the onset of the southeast Asian monsoon were analyzed. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern cold frontal intrusion from southern China. The surface sensible and latent heat fluxes modified the low-level temperature distribution and gradient, and therefore the low-level wind through the thermal wind effect. The more realistic forcing of the sensible and latent heat fluxes from the detailed land surface model improved the low-level wind simulation and the associated moisture transport and convection.
Titan I propulsion system modeling and possible performance improvements
NASA Astrophysics Data System (ADS)
Giusti, Oreste
This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations pertinent to rocket engine design were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters were applied to these imported models as inputs, including, for example, bi-propellant combinations, pressures, temperatures, and mass flow rates. The results were then processed with ESI VIEW, a visualization package. The output files were analyzed for forces in the nozzle, and various results were generated, including sea-level thrust and ISP. Experimental data are provided to compare the original engine configuration models with the derivative suggested-improvement models.
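The reported sea-level thrust and ISP are related by standard rocket-performance arithmetic, Isp = F / (mdot * g0); a minimal check with placeholder numbers (not Titan I data):

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(thrust_n: float, mdot_kg_s: float) -> float:
    """Isp = F / (mdot * g0), in seconds."""
    return thrust_n / (mdot_kg_s * G0)

# Placeholder values for illustration only.
print(f"Isp = {specific_impulse(667_000.0, 260.0):.1f} s")
```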
A New, Two-layer Canopy Module For The Detailed Snow Model SNOWPACK
NASA Astrophysics Data System (ADS)
Gouttevin, I.; Lehning, M.; Jonas, T.; Gustafsson, D.; Mölder, M.
2014-12-01
A new two-layer canopy module with thermal inertia for the detailed snow model SNOWPACK is presented. Compared to the old one-layer canopy formulation with no thermal mass, this module offers a level of physical detail consistent with the detailed snow and soil representation in SNOWPACK. The new canopy model is designed to reproduce the difference in thermal regimes between leafy and woody canopy elements and their impact on the energy balance of the underlying snowpack. The new model is validated against data from an Alpine and a boreal site. Comparisons of modelled sub-canopy thermal radiation with stand-scale observations at Alptal, Switzerland, demonstrate the improvements brought by the new parameterizations. The main effect is a more realistic simulation of the night-time drop in canopy temperatures; the damped drop is produced by both thermal inertia and the two-layer representation. A specific result is that such performance cannot be achieved by a single-layer canopy model. The impact of the new parameterizations on the modelled dynamics of the sub-canopy snowpack is analysed and yields consistent results, but the frequent occurrence of mixed-precipitation events at Alptal prevents a conclusive assessment of model performance against snow data. Without specific tuning, the model is also able to reproduce the measured summertime tree-trunk temperatures and biomass heat storage at the boreal site of Norunda, Sweden, with increased accuracy in amplitude and phase. Overall, the SNOWPACK model with its enhanced canopy module constitutes a modelling chain from the atmosphere to the soil, through canopy and snow, that is unique in its physical process representation.
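A minimal sketch of the idea behind the module, with invented energy-balance terms and parameters rather than the SNOWPACK formulation: two canopy layers with different heat capacities cool toward the night-time air temperature, and the high-inertia woody layer lags behind.

```python
import numpy as np

def step(T, T_air, C, k_exch, lw_loss, dt):
    """One explicit step of a toy two-layer canopy energy balance.
    T = [T_leaf, T_trunk]; each layer exchanges heat with the air and
    loses a fixed long-wave amount at night; C supplies thermal inertia."""
    return T + (k_exch * (T_air - T) - lw_loss) / C * dt

T = np.array([278.0, 278.0])          # K, leafy and woody layers
C = np.array([2.0e4, 2.0e5])          # J m-2 K-1 (woody layer much larger)
for _ in range(12 * 360):             # a 12 h night in 10 s steps
    T = step(T, T_air=272.0, C=C, k_exch=15.0, lw_loss=30.0, dt=10.0)
print(T)  # the high-inertia woody layer has cooled more slowly
```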
Environmental research program. 1995 Annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, N.J.
1996-06-01
The objective of the Environmental Research Program is to enhance the understanding of, and mitigate the effects of pollutants on health, ecological systems, global and regional climate, and air quality. The program is multidisciplinary and includes fundamental research and development in efficient and environmentally benign combustion, pollutant abatement and destruction, and novel methods of detection and analysis of criteria and noncriteria pollutants. This diverse group conducts investigations in combustion, atmospheric and marine processes, flue-gas chemistry, and ecological systems. Combustion chemistry research emphasizes modeling at microscopic and macroscopic scales. At the microscopic scale, functional sensitivity analysis is used to explore the nature of the potential-to-dynamics relationships for reacting systems. Rate coefficients are estimated using quantum dynamics and path integral approaches. At the macroscopic level, combustion processes are modelled using chemical mechanisms at the appropriate level of detail dictated by the requirements of predicting particular aspects of combustion behavior. Parallel computing has facilitated the efforts to use detailed chemistry in models of turbulent reacting flow to predict minor species concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rousseau, Aymeric
2013-02-01
Several tools already exist to develop detailed plant models, including GT-Power, AMESim, CarSim, and SimScape. The objective of Autonomie is not to provide a language to develop detailed models; rather, Autonomie supports the assembly and use of models from design to simulation to analysis with complete plug-and-play capabilities. Autonomie provides a plug-and-play architecture to support this ideal use of modeling and simulation for math-based automotive control system design. Models in the standard format create building blocks, which are assembled at runtime into a simulation model of a vehicle, system, subsystem, or component. All parts of the graphical user interface (GUI) are designed to be flexible to support architectures, systems, components, and processes not yet envisioned. This allows the software to be molded to individual uses, so it can grow as requirements and technical knowledge expand. This flexibility also allows for implementation of legacy code, including models, controller code, processes, drive cycles, and post-processing equations. A library of useful and tested models and processes is included as part of the software package to support a full range of simulation and analysis tasks immediately. Autonomie also includes a configuration and database management front end to facilitate the storage, versioning, and maintenance of all required files, such as the models themselves, the models' supporting files, test data, and reports. Over the duration of the CRADA, Argonne worked closely with GM to implement and demonstrate each of their requirements. A use case was developed by GM for every requirement and demonstrated by Argonne. Each of the new features was verified by GM experts through a series of Gate reviews. Once all the requirements were validated, they were presented to the directors as part of the GM Gate process.
Legacy model integration for enhancing hydrologic interdisciplinary research
NASA Astrophysics Data System (ADS)
Dozier, A.; Arabi, M.; David, O.
2013-12-01
Many challenges are introduced to interdisciplinary research in and around the hydrologic science community due to advances in computing technology and modeling capabilities in different programming languages, across different platforms and frameworks by researchers in a variety of fields with a variety of experience in computer programming. Many new hydrologic models as well as optimization, parameter estimation, and uncertainty characterization techniques are developed in scripting languages such as Matlab, R, Python, or in newer languages such as Java and the .Net languages, whereas many legacy models have been written in FORTRAN and C, which complicates inter-model communication for two-way feedbacks. However, most hydrologic researchers and industry personnel have little knowledge of the computing technologies that are available to address the model integration process. Therefore, the goal of this study is to address these new challenges by utilizing a novel approach based on a publish-subscribe-type system to enhance modeling capabilities of legacy socio-economic, hydrologic, and ecologic software. Enhancements include massive parallelization of executions and access to legacy model variables at any point during the simulation process by another program without having to compile all the models together into an inseparable 'super-model'. Thus, this study provides two-way feedback mechanisms between multiple different process models that can be written in various programming languages and can run on different machines and operating systems. Additionally, a level of abstraction is given to the model integration process that allows researchers and other technical personnel to perform more detailed and interactive modeling, visualization, optimization, calibration, and uncertainty analysis without requiring deep understanding of inter-process communication. To be compatible, a program must be written in a programming language with bindings to a common implementation of the message passing interface (MPI), which includes FORTRAN, C, Java, the .NET languages, Python, R, Matlab, and many others. The system is tested on a longstanding legacy hydrologic model, the Soil and Water Assessment Tool (SWAT), to observe and enhance speed-up capabilities for various optimization, parameter estimation, and model uncertainty characterization techniques, which is particularly important for computationally intensive hydrologic simulations. Initial results indicate that the legacy extension system significantly decreases developer time, computation time, and the cost of purchasing commercial parallel processing licenses, while enhancing interdisciplinary research by providing detailed two-way feedback mechanisms between various process models with minimal changes to legacy code.
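The abstract notes that a compatible program need only bind to a common MPI implementation; purely as an illustrative sketch, the following uses mpi4py (the usual Python binding) to show a minimal lock-step, two-way feedback loop between two legacy-style processes. The model roles, variable names, and feedback rule are invented placeholders, not the SWAT coupling.

```python
# Run with: mpirun -n 2 python coupled.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

state = {"flow": 1.0} if rank == 0 else {"demand": 0.4}

for t in range(5):  # lock-step time loop with two-way feedback
    if rank == 0:                       # e.g. a hydrologic model
        comm.send(state["flow"], dest=1, tag=t)
        feedback = comm.recv(source=1, tag=t)
        state["flow"] = max(0.0, state["flow"] - feedback) + 0.2
    else:                               # e.g. a socio-economic model
        inflow = comm.recv(source=0, tag=t)
        comm.send(state["demand"], dest=0, tag=t)
        state["demand"] = 0.5 * state["demand"] + 0.1 * inflow

if rank == 0:
    print("final flow:", state["flow"])
```

Because each side posts its send and receive in opposite orders, the exchange completes without deadlock; a publish-subscribe layer like the one described would hide this choreography from the model developer.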
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foley, M.G.; Petrie, G.M.; Baldwin, A.J.
1982-06-01
This report contains the input data and computer results for the Geologic Simulation Model. This model is described in detail in the following report: Petrie, G.M., et al. 1981. Geologic Simulation Model for a Hypothetical Site in the Columbia Plateau, Pacific Northwest Laboratory, Richland, Washington. The Geologic Simulation Model is a quasi-deterministic process-response model which simulates, for a million years into the future, the development of the geologic and hydrologic systems of the ground-water basin containing the Pasco Basin. Effects of natural processes on the ground-water hydrologic system are modeled principally by rate equations. The combined effects and synergistic interactions of different processes are approximated by linear superposition of their effects during discrete time intervals in a stepwise-integration approach.
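A schematic of that stepwise-integration approach, assuming invented rate equations rather than the Pasco Basin ones: each natural process contributes a rate, and the combined effect over a discrete interval is approximated by linear superposition of the individual contributions.

```python
def simulate(state, rates, dt, n_steps):
    """Advance a scalar state by linearly superposing process rates
    over discrete time intervals (forward stepwise integration)."""
    history = [state]
    for _ in range(n_steps):
        state += sum(rate(state) for rate in rates) * dt
        history.append(state)
    return history

# Placeholder processes acting on, say, a water-table elevation (m).
rates = [
    lambda h: 0.002,            # slow regional uplift effect, m/yr
    lambda h: -0.0001 * h,      # drainage, proportional to elevation
]
print(simulate(100.0, rates, dt=1000.0, n_steps=10)[-1])  # 10,000 years
```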
Space Generic Open Avionics Architecture (SGOAA) standard specification
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1994-01-01
This standard establishes the Space Generic Open Avionics Architecture (SGOAA). The SGOAA includes a generic functional model, a processing structural model, and an architecture interface model. This standard defines the requirements for applying these models to the development of spacecraft core avionics systems. The purpose of this standard is to provide an umbrella set of requirements for applying the generic architecture models to the design of a specific avionics hardware/software processing system. This standard defines a generic set of system interface points to facilitate identification of critical services and interfaces. It establishes the requirement for applying appropriate low-level detailed implementation standards to those interface points. The generic core avionics functions and processing structural models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
Discrimination of dynamical system models for biological and chemical processes.
Lorenz, Sönke; Diederichs, Elmar; Telgmann, Regina; Schütte, Christof
2007-06-01
In technical chemistry, systems biology, and biotechnology, the construction of predictive models has become an essential step in process design and product optimization. Accurate modelling of the reactions requires detailed knowledge about the processes involved. However, when one is concerned with the development of new products and production techniques, for example, this knowledge is often not available due to the lack of experimental data. Thus, when one has to work with a selection of proposed models, one of the main tasks of early development is to discriminate between these models. In this article, a new statistical approach to model discrimination is described that ranks models with respect to the probability with which they reproduce the given data. The article introduces the new approach, discusses its statistical background, presents numerical techniques for its implementation, and illustrates the application with examples from biokinetics.
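A minimal sketch of the generic idea, not the article's actual statistical machinery: candidate kinetic models are ranked by the posterior probability that they reproduce the data, here with equal priors and i.i.d. Gaussian measurement error on invented data.

```python
import numpy as np

def rank_models(models, t, y_obs, sigma):
    """Posterior probability of each model given data, assuming equal
    priors and i.i.d. Gaussian measurement error of std sigma."""
    logL = np.array([-0.5 * np.sum(((y_obs - m(t)) / sigma) ** 2)
                     for m in models])
    w = np.exp(logL - logL.max())
    return w / w.sum()

np.random.seed(0)
t = np.linspace(0.0, 5.0, 20)
y_obs = np.exp(-0.8 * t) + np.random.normal(0.0, 0.02, t.size)
models = [
    lambda t: np.exp(-0.8 * t),          # first-order decay (true here)
    lambda t: 1.0 / (1.0 + 0.8 * t),     # second-order decay
]
print(rank_models(models, t, y_obs, sigma=0.02))
```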
Get on Board the Cost Effective Way: A Tech Prep Replication Process.
ERIC Educational Resources Information Center
Moore, Wayne A.; Szul, Linda F.; Rivosecchi, Karen
1997-01-01
The Northwestern Pennsylvania Tech Prep Consortium model for replicating tech prep programs includes these steps: fact finding, local industry analysis, curriculum development, detailed description, marketing strategies, implementation, and program evaluation. (SK)
A Four-Tier Differentiation Model: Engage All Students in the Learning Process
ERIC Educational Resources Information Center
Herrelko, Janet M.
2013-01-01
This study details the creation of a four-tiered format designed to help preservice teachers write differentiated lesson plans. A short history of lesson plan differentiation models is described and how the four-tier approach was developed through collaboration with classroom teachers and university faculty. The unifying element for the format…
A Four-Stage Model for Planning Computer-Based Instruction.
ERIC Educational Resources Information Center
Morrison, Gary R.; Ross, Steven M.
1988-01-01
Describes a flexible planning process for developing computer based instruction (CBI) in which the CBI design is implemented on paper between the lesson design and the program production. A four-stage model is explained, including (1) an initial flowchart, (2) storyboards, (3) a detailed flowchart, and (4) an evaluation. (16 references)…
USDA-ARS's Scientific Manuscript database
The Agricultural Policy/Environmental eXtender (APEX) is a watershed-scale water quality model that includes detailed representation of agricultural management but currently does not have microbial fate and transport simulation capabilities. The objective of this work was to develop a process-based ...
Team Design Communication Patterns in e-Learning Design and Development
ERIC Educational Resources Information Center
Rapanta, Chrysi; Maina, Marcelo; Lotz, Nicole; Bacchelli, Alberto
2013-01-01
Prescriptive stage models have been found insufficient to describe the dynamic aspects of designing, especially in interdisciplinary e-learning design teams. There is a growing need for a systematic empirical analysis of team design processes that offer deeper and more detailed insights into instructional design (ID) than general models can offer.…
Growth in Mathematical Understanding: How Can We Characterise It and How Can We Represent It?
ERIC Educational Resources Information Center
Pirie, Susan; Kieren, Thomas
1994-01-01
Proposes a model for the growth of mathematical understanding based on the consideration of understanding as a whole, dynamic, leveled but nonlinear process. Illustrates the model using the concept of fractions. How to map the growth of understanding is explained in detail. (Contains 26 references.) (MKR)
NASA Astrophysics Data System (ADS)
McNamara, J. P.; Semenova, O.; Restrepo, P. J.
2011-12-01
Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too specific to it and not representative even of the larger watershed that contains it. Models developed from those partial observations may therefore not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is thus essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development we evaluate the scaling of model concepts and parameters in a wide range of hydrologic landscapes. All models, either lumped or distributed, are based on a discretization concept. It is common practice that watersheds are discretized into so-called hydrologic units or hydrologic landscapes possessing assumed homogeneous hydrologic functioning. If a model structure is fixed, differences in hydrologic functioning (differences in hydrologic landscapes) should be reflected by specific sets of model parameters. Research watersheds make it possible to combine processes, in reasonable detail, into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. Here, by upscaling we mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in terms of runoff and the variable states of soil and snow over the simulation period 2000-2009. The parameters of the model were hand-adjusted based on physical reasoning, observational data, and the available understanding of the underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the Hydrograph model, which requires only a modest amount of parameter calibration, can also serve as a quality control for observations. Based on the obtained parameter values and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and at Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.
NASA Astrophysics Data System (ADS)
Bonelli, Francesco; Tuttafesta, Michele; Colonna, Gianpiero; Cutrone, Luigi; Pascazio, Giuseppe
2017-10-01
This paper describes the most advanced results obtained in the context of fluid dynamic simulations of high-enthalpy flows using detailed state-to-state air kinetics. Thermochemical non-equilibrium, typical of supersonic and hypersonic flows, was modeled using both the accurate state-to-state approach and the multi-temperature model proposed by Park. The accuracy of the two thermochemical non-equilibrium models was assessed by comparing the results with experimental findings, with better predictions provided by the state-to-state approach. To overcome the huge computational cost of the state-to-state model, a multiple-node GPU implementation, based on an MPI-CUDA approach, was employed, and a comprehensive analysis of code performance is presented. Both the pure MPI-CPU and the MPI-CUDA implementations exhibit excellent scalability. GPUs outperform CPUs especially when the state-to-state approach is employed, showing speed-ups of a single GPU with respect to a single-core CPU larger than 100, for both one and multiple MPI processes.
A Framework for Distributed Problem Solving
NASA Astrophysics Data System (ADS)
Leone, Joseph; Shin, Don G.
1989-03-01
This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social-system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are two information-flow mechanisms, named amplification and aggregation. Amplification is the process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is the process of combining the results reported by multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process that relies primarily on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail, and an implementation of it is presented. The process is illustrated by an example.
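A toy sketch of amplification and aggregation, with invented task decomposition and scoring: an agenda is expanded into increasingly specific subtasks down the hierarchy, and the unit results are combined by promoting the most plausible one.

```python
def amplify(agenda, depth):
    """Expand an agenda into more specific subtasks (fan-out of 2)."""
    if depth == 0:
        return [agenda]
    subtasks = [f"{agenda}.{i}" for i in range(2)]
    return [leaf for s in subtasks for leaf in amplify(s, depth - 1)]

def aggregate(results):
    """Combine unit results into a resolution: promote the candidate
    with the highest plausibility score."""
    return max(results, key=lambda r: r[1])

leaves = amplify("recall-topic", depth=3)            # 8 processing units
# Deterministic placeholder plausibility score per subtask.
results = [(task, sum(map(ord, task)) % 100 / 100.0) for task in leaves]
print(aggregate(results))                            # promoted resolution
```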
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
Detailed Characterization of Nearshore Processes During NCEX
NASA Astrophysics Data System (ADS)
Holland, K.; Kaihatu, J. M.; Plant, N.
2004-12-01
Recent technology advances have allowed the coupling of remote sensing methods with advanced wave and circulation models to yield detailed characterizations of nearshore processes. This methodology was demonstrated as part of the Nearshore Canyon EXperiment (NCEX) in La Jolla, CA during Fall 2003. An array of high-resolution color digital cameras was installed to monitor an alongshore distance of nearly 2 km, out to depths of 25 m. This digital imagery was analyzed over the three-month period through an automated process to produce hourly estimates of wave period, wave direction, breaker height, shoreline position, sandbar location, and bathymetry at numerous locations during daylight hours. Interesting wave propagation patterns in the vicinity of the canyons were observed. In addition, directional wave spectra and swash/surf flow velocities were estimated using more computationally intensive methods. These measurements were used to provide forcing and boundary conditions for the Delft3D wave and circulation model, giving additional estimates of nearshore processes such as dissipation and rip currents. An optimal approach for coupling these remotely sensed observations to the numerical model was selected to yield accurate but also timely characterizations. This involved assimilation of directional spectral estimates near the offshore boundary to mimic the forcing conditions achieved under traditional approaches involving nested domains. Measurements of breaker heights and flow speeds were also used to adaptively tune model parameters for enhanced accuracy. Comparisons of model predictions and video observations show significant correlation. Compared with nesting within larger-scale, coarser-resolution models, the advantage of providing boundary conditions from remote sensing is much improved resolution and fidelity; for example, rip current development was both modeled and observed. These results indicate that this approach to data-model coupling is tenable and may be useful in the near-real-time characterizations required by many applied scenarios.
2018-01-01
We review key mathematical models of the South African human immunodeficiency virus (HIV) epidemic from the early 1990s onwards. In our descriptions, we sometimes differentiate between the concepts of a model world and its mathematical or computational implementation. The model world is the conceptual realm in which we explicitly declare the rules – usually some simplification of ‘real world’ processes as we understand them. Computing details of informative scenarios in these model worlds is a task requiring specialist knowledge, but all other aspects of the modelling process, from describing the model world to identifying the scenarios and interpreting model outputs, should be understandable to anyone with an interest in the epidemic. PMID:29568647
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
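The VOS itself works by compiler-based instrumentation of the model source; purely as a language-level analogy, the sketch below wraps a toy model's step function so selected state variables can be observed at run time and adjusted through a steering hook. All names and the clamping rule are hypothetical.

```python
def observe(variables, steer=None):
    """Analogy to run-time observation: after each call to the wrapped
    step function, report selected state variables, then apply an
    optional steering adjustment before the next step."""
    def wrap(step):
        def instrumented(state, *args, **kwargs):
            state = step(state, *args, **kwargs)
            print({k: state[k] for k in variables if k in state})
            if steer:
                state = steer(state)
            return state
        return instrumented
    return wrap

@observe(variables=["soil_c"],
         steer=lambda s: {**s, "soil_c": max(s["soil_c"], 0.0)})
def land_step(state):
    state["soil_c"] -= 0.3   # toy decomposition flux
    return state

state = {"soil_c": 1.0}
for _ in range(5):
    state = land_step(state)
```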
NASA Astrophysics Data System (ADS)
Insfrán, J. F.; Ubal, S.; Di Paolo, J.
2016-04-01
A simplified model of a proximal convoluted tubule of an average human nephron is presented. The model considers the 2D axisymmetric flow of the luminal solution exchanging matter with the tubule walls and the peritubular fluid by means of 0D models for the epithelial cells. The tubule radius is allowed to vary along the conduit due to the trans-epithelial pressure difference. The fate of more than ten typical solutes is tracked by the model. The Navier-Stokes and reaction-diffusion-advection equations (under the electro-neutrality principle) are solved in the lumen, giving a detailed picture of the velocity, pressure, and concentration fields, along with trans-membrane fluxes and tubule deformation, via coupling with the 0D model for the tubule wall. The calculations are carried out numerically by means of the finite element method. The results obtained show good agreement with those published by other authors using models that ignore diffusive transport and forgo a detailed calculation of velocity, pressure, and concentrations. This work should be seen as a first step towards a more comprehensive model of the filtration process taking place in the kidneys, which would ultimately help in devising a device that can mimic or complement renal function.
Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
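A toy discrete analog of the approach, with an invented two-node network and conditional probability table: a surf-zone variable is predicted from an uncertain boundary condition, and a noisy observation updates both by Bayes' rule, so uncertainty propagates in each direction.

```python
import numpy as np

# States: low / high. Prior on offshore height and CPT for surf height.
p_off = np.array([0.5, 0.5])                 # P(offshore)
p_surf_given_off = np.array([[0.8, 0.2],     # P(surf | offshore=low)
                             [0.3, 0.7]])    # P(surf | offshore=high)

# Forward prediction with an uncertain boundary condition:
p_surf = p_off @ p_surf_given_off
print("P(surf):", p_surf)

# Assimilate a noisy surf observation ("high", 90% reliable):
like = np.array([0.1, 0.9])                  # P(obs | surf)
post_surf = like * p_surf / (like * p_surf).sum()
print("P(surf | obs):", post_surf)

# The observation also updates the boundary condition itself:
joint = p_off[:, None] * p_surf_given_off * like[None, :]
post_off = joint.sum(axis=1) / joint.sum()
print("P(offshore | obs):", post_off)
```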
Hamm, V; Collon-Drouaillet, P; Fabriol, R
2008-02-19
The flooding of abandoned mines in the Lorraine Iron Basin (LIB) over the past 25 years has degraded the quality of the groundwater tapped for drinking water. High concentrations of dissolved sulphate have made the water unsuitable for human consumption. This issue has led to the development of numerical tools to support water-resource management in mining contexts. Here we examine two modelling approaches using different numerical tools, which we tested on the flooded Saizerais iron-ore mine (Lorraine, France). The first approach considers the Saizerais mine as a network of two chemical reactors (NCR). The second is based on a physically distributed pipe network model (PNM) built with the EPANET 2 software; it treats the mine as a network of pipes defined by their geometric and chemical parameters. Each reactor in the NCR model includes a detailed chemical model built to simulate the evolution of water quality in the flooded mine. To obtain a robust PNM, however, we simplified the detailed chemical model into a specific sulphate dissolution-precipitation model that is included as a sulphate source/sink in both the NCR model and the pipe network model. Both the NCR model and the PNM, based on different numerical techniques, give good post-calibration agreement between the simulated and measured sulphate concentrations in the drinking-water well and the overflow drift. The NCR model incorporating the detailed chemical model is useful when detailed chemical behaviour at the overflow is needed. The PNM incorporating the simplified dissolution-precipitation model provides better information on the physics controlling the effects of flow and low-flow zones and on the time to removal of solid sulphate, whereas the NCR model will underestimate the clean-up time because of its complete-mixing assumption. In conclusion, the detailed NCR model gives a first assessment of the chemical processes at the overflow, after which the PNM provides more detailed information on flow and chemical behaviour (dissolved sulphate concentrations, remaining mass of solid sulphate) in the network. Nevertheless, both modelling methods require hydrological and chemical parameters (recharge flow rate, outflows, volume of mine voids, mass of solids, kinetic constants of the dissolution-precipitation reactions) that are commonly not available for a mine and therefore call for calibration data.
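A minimal sketch of the NCR idea combined with the simplified dissolution-precipitation model: two well-mixed reactors in series, each with first-order sulphate dissolution that stops when the solid stock is exhausted. Flow rates, volumes, and kinetic constants are invented, not Saizerais calibration values.

```python
def simulate(n_steps, dt, Q, V, k, c_sat, c_in, solid):
    """Two completely mixed reactors in series with sulphate dissolution:
    dC/dt = Q/V (C_upstream - C) + k (c_sat - C); dissolution ceases
    once the solid sulphate stock of a reactor is exhausted."""
    c = [0.0, 0.0]
    for _ in range(n_steps):
        upstream = c_in
        for i in range(2):
            diss = k * (c_sat - c[i]) if solid[i] > 0.0 else 0.0
            solid[i] -= diss * V[i] * dt
            c[i] += (Q / V[i] * (upstream - c[i]) + diss) * dt
            upstream = c[i]
    return c, solid

c, solid = simulate(n_steps=5000, dt=0.01, Q=100.0, V=[1e4, 2e4],
                    k=0.05, c_sat=2.0, c_in=0.1, solid=[5e4, 8e4])
print("outflow concentration:", c[1], "remaining solid:", solid)
```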
An intraorganizational model for developing and spreading quality improvement innovations
Kellogg, Katherine C.; Gainer, Lindsay A.; Allen, Adrienne S.; O'Sullivan, Tatum; Singer, Sara J.
2017-01-01
Background: Recent policy reforms encourage quality improvement (QI) innovations in primary care, but practitioners lack clear guidance regarding spread inside organizations. Purpose: We designed this study to identify how large organizations can facilitate intraorganizational spread of QI innovations. Methodology/Approach: We conducted ethnographic observation and interviews in a large, multispecialty, community-based medical group that implemented three QI innovations across 10 primary care sites using a new method for intraorganizational process development and spread. We compared quantitative outcomes achieved through the group’s traditional versus new method, created a process model describing the steps in the new method, and identified barriers and facilitators at each step. Findings: The medical group achieved substantial improvement using its new method of intraorganizational process development and spread of QI innovations: standard work for rooming and depression screening, vaccine error rates and order compliance, and Pap smear error rates. Our model details nine critical steps for successful intraorganizational process development (set priorities, assess the current state, develop the new process, and measure and refine) and spread (develop support, disseminate information, facilitate peer-to-peer training, reinforce, and learn and adapt). Our results highlight the importance of utilizing preexisting organizational structures such as established communication channels, standardized roles, common workflows, formal authority, and performance measurement and feedback systems when developing and spreading QI processes inside an organization. In particular, we detail how formal process advocate positions in each site for each role can facilitate the spread of new processes. Practice Implications: Successful intraorganizational spread is possible and sustainable. Developing and spreading new QI processes across sites inside an organization requires creating a shared understanding of the necessary process steps, considering the barriers that may arise at each step, and leveraging preexisting organizational structures to facilitate intraorganizational process development and spread. PMID:27428788
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pietzcker, Robert C.; Ueckerdt, Falko; Carrara, Samuel
Mitigation-Process Integrated Assessment Models (MP-IAMs) are used to analyze long-term transformation pathways of the energy system required to achieve stringent climate change mitigation targets. Due to their substantial temporal and spatial aggregation, IAMs cannot explicitly represent all detailed challenges of integrating the variable renewable energies (VRE) wind and solar in power systems, but rather rely on parameterized modeling approaches. In the ADVANCE project, six international modeling teams have developed new approaches to improve the representation of power sector dynamics and VRE integration in IAMs. In this study, we qualitatively and quantitatively evaluate the last years' modeling progress and study the impact of VRE integration modeling on VRE deployment in IAM scenarios. For a comprehensive and transparent qualitative evaluation, we first develop a framework of 18 features of power sector dynamics and VRE integration. We then apply this framework to the newly developed modeling approaches to derive a detailed map of strengths and limitations of the different approaches. For the quantitative evaluation, we compare the IAMs to the detailed hourly-resolution power sector model REMIX. We find that the new modeling approaches manage to represent a large number of features of the power sector, and the numerical results are in reasonable agreement with those derived from the detailed power sector model. Updating the power sector representation and the cost and resources of wind and solar substantially increased wind and solar shares across models: under a carbon price of $30/tCO2 in 2020 (increasing by 5% per year), the model-average cost-minimizing VRE share over the period 2050-2100 is 62% of electricity generation, 24 percentage points higher than with the old model version.
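For concreteness, the quoted carbon-price path ($30/tCO2 in 2020, rising 5% per year) is simple compound growth:

```python
def carbon_price(year, p0=30.0, r=0.05, y0=2020):
    """Price path: p(t) = p0 * (1 + r)**(t - y0), in $/tCO2."""
    return p0 * (1.0 + r) ** (year - y0)

for year in (2020, 2050, 2100):
    print(year, round(carbon_price(year), 1))
# 2020: 30.0, 2050: ~129.7, 2100: ~1486.8
```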
A feedback model of visual attention.
Spratling, M W; Johnson, M H
2004-03-01
Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.
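A schematic of biased competition with top-down feedback, far simpler than the network model described: two inputs compete through divisive normalization, and an attentional feedback gain biases the outcome, enhancing the attended channel and suppressing the other relative to the unbiased case. All parameters are invented.

```python
import numpy as np

def compete(inputs, attention_gain, n_iter=50):
    """Iterate feedforward drive modulated by a top-down gain under
    divisive competition; the attended channel wins the competition."""
    y = np.zeros_like(inputs)
    for _ in range(n_iter):
        drive = inputs * attention_gain      # feedback biases the input
        y = drive / (1.0 + y.sum())          # divisive competition
    return y

inputs = np.array([1.0, 1.0])                # two equal stimuli
print("no bias:   ", compete(inputs, np.array([1.0, 1.0])))
print("attend #1: ", compete(inputs, np.array([1.5, 1.0])))
```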
Energy models and national energy policy
NASA Astrophysics Data System (ADS)
Bloyd, Cary N.; Streets, David G.; Fisher, Ronald E.
1990-01-01
As work begins on the development of a new National Energy Strategy (NES), the role of energy models is becoming increasingly important. Such models are needed to determine and assess both the short and long term effects of new policy initiatives on U.S. energy supply and demand. A central purpose of the model is to translate overall energy strategy goals into policy options while identifying potential costs and environmental benefits. Three models currently being utilized in the NES process are described, followed by a detailed listing of the publicly identified NES goals. These goals are then viewed in light of the basic modeling scenarios that were proposed as part of the NES development process.
NASA Technical Reports Server (NTRS)
Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.
2013-01-01
Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) presented by Dr. Stephen B. Johnson and described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper describes the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required by the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements and of functions, and to their linkage to design. As SysML is a growing standard for systems engineering, it is important to develop methods to implement the GFT in it. Proposed method of solution: Many of the central concepts of the SysML language are needed to implement a GFT for large, complex systems. In implementing those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.
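Independent of any particular SysML tooling, the GFT concept as described here reduces to a tree of goals linked to the functions that achieve them; a hypothetical data-structure sketch follows (not the authors' implementation, and the example nodes are invented).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Function:
    name: str

@dataclass
class Goal:
    statement: str
    achieved_by: List[Function] = field(default_factory=list)
    subgoals: List["Goal"] = field(default_factory=list)

    def walk(self, depth=0):
        """Depth-first traversal of the goal decomposition."""
        yield depth, self
        for g in self.subgoals:
            yield from g.walk(depth + 1)

root = Goal("Deliver crew safely to orbit",
            subgoals=[Goal("Maintain ascent trajectory",
                           achieved_by=[Function("Control thrust vector")]),
                      Goal("Maintain habitable environment",
                           achieved_by=[Function("Regulate cabin pressure")])])
for depth, goal in root.walk():
    print("  " * depth + goal.statement)
```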
NASA Astrophysics Data System (ADS)
Szajnfarber, Zoe; Weigel, Annalisa L.
2013-03-01
This paper investigates the process through which new technical concepts are matured in the NASA innovation ecosystem. We propose an "epoch-shock" conceptualization as an alternative mental model to the traditional stage-gate view. The epoch-shock model is developed inductively, based on detailed empirical observations of the process, and validated, to the extent possible, through expert review. The paper concludes by illustrating how the new epoch-shock conceptualization could provide a useful basis for rethinking feasible interventions to improve innovation management in the space agency context. Where the more traditional stage-gate model leads to an emphasis on centralized flow control, the epoch-shock model acknowledges the decentralized, probabilistic nature of key interactions and highlights which aspects may be influenced.
Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS
NASA Technical Reports Server (NTRS)
Dash, S. M.; Wolf, D. E.
1984-01-01
A computational model, SCIPVIS, is described which predicts the multiple cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs is handled in a rigorous manner. Calculations are presented exhibiting the fundamental interactive processes occurring in supersonic jets and the model is assessed via comparisons with detailed laboratory data for a variety of under- and overexpanded jets.
The Impact of Rhizosphere Processes on Water Flow and Root Water Uptake
NASA Astrophysics Data System (ADS)
Schwartz, Nimrod; Kroener, Eva; Carminati, Andrea; Javaux, Mathieu
2015-04-01
For many years, the rhizosphere, the zone of soil in the vicinity of the roots that is influenced by them, has been known as a unique soil environment with physical, biological, and chemical properties different from those of the bulk soil. Indeed, recent studies have shown that root exudates, and especially mucilage, alter the hydraulic properties of the soil, and that drying and wetting cycles of mucilage result in non-equilibrium water dynamics in the rhizosphere. While there is experimental evidence and a simplified 1D model for these concepts, an integrated model that combines rhizosphere processes with a detailed model of water and root flow has been lacking. The objective of this work is therefore to develop a 3D physical model of water flow in the soil-plant continuum that takes root architecture and rhizosphere-specific properties into consideration. Ultimately, this model will enhance our understanding of the impact of processes occurring in the rhizosphere on water flow and root water uptake. To achieve this objective, we coupled R-SWMS, a detailed 3D model of water flow in the soil and root system (Javaux et al., 2008), with the rhizosphere model developed by Kroener et al. (2014). In the new Rhizo-RSWMS model, the rhizosphere hydraulic properties differ from those of the bulk soil, and non-equilibrium dynamics between the rhizosphere water content and pressure head are also considered. We simulated a wetting scenario: the soil was initially dry and was wetted from the top at a constant flow rate. The model predicts that after infiltration the water content in the rhizosphere remained lower than in the bulk soil (non-equilibrium), but over time water infiltrated into the rhizosphere and its water content eventually became higher than that of the bulk soil. These results are in qualitative agreement with the available experimental data on water dynamics in the rhizosphere. Additionally, the results show that rhizosphere processes affect the spatial distribution of root water uptake, which suggests that they matter at the plant scale. Overall, these preliminary results demonstrate the impact of the rhizosphere on water flow and root water uptake, and the ability of Rhizo-RSWMS to simulate these processes. References: Javaux, M., Schröder, T., Vanderborght, J., & Vereecken, H. (2008). Use of a three-dimensional detailed modeling approach for predicting root water uptake. Vadose Zone Journal, 7(3), 1079-1088. Kroener, E., Zarebanadkouki, M., Kaestner, A., & Carminati, A. (2014). Nonequilibrium water dynamics in the rhizosphere: How mucilage affects water flow in soils. Water Resources Research, 50(8), 6479-6495.
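A sketch of the non-equilibrium ingredient, assuming an invented retention curve and time constant rather than the Kroener et al. parameterization: rhizosphere water content relaxes toward its equilibrium value with a lag, so it stays drier than the bulk soil during a wetting event.

```python
import numpy as np

def theta_eq(h):
    """Toy retention curve: equilibrium water content vs pressure head."""
    return 0.1 + 0.3 / (1.0 + (np.abs(h) / 100.0) ** 1.5)

def wet(h_series, tau, dt):
    """Non-equilibrium dynamics: d(theta)/dt = (theta_eq(h) - theta)/tau."""
    theta = theta_eq(h_series[0])
    out = []
    for h in h_series:
        theta += (theta_eq(h) - theta) / tau * dt
        out.append(theta)
    return np.array(out)

h = np.concatenate([np.full(50, -1000.0), np.full(200, -10.0)])  # wetting
bulk = wet(h, tau=1.0, dt=0.1)        # near-equilibrium (bulk soil)
rhizo = wet(h, tau=50.0, dt=0.1)      # lagged rhizosphere
print(bulk[60], rhizo[60])            # rhizosphere still drier after wetting
```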
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial profiles of gas-phase composition, temperature, velocity, and particle size distribution are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code, which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle-handling features of the CHEMPART code but has the virtue of running much more rapidly, while treating the phenomena occurring in the boundary layer in more detail.
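A schematic of the marching structure described, with a single invented first-order reaction and growth law standing in for CHEMPART's detailed kinetics: the solution is advanced once in the axial direction, updating gas composition and a mean particle size at each station.

```python
import math

def march(x_end, dx, T, k0, Ea, growth_rate):
    """March in x, depleting a silicon-halide mole fraction by a
    first-order reaction and growing a mean particle diameter."""
    x, y_halide, d_particle = 0.0, 0.05, 1e-9   # m, mole frac, m
    k = k0 * math.exp(-Ea / T)                  # Arrhenius rate, 1/m
    while x < x_end:
        dy = -k * y_halide * dx                 # gas-phase depletion
        y_halide += dy
        d_particle += growth_rate * (-dy)       # deposit onto particles
        x += dx
    return y_halide, d_particle

print(march(x_end=1.0, dx=1e-3, T=1500.0, k0=2e3, Ea=9000.0,
            growth_rate=2e-5))
```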
The Role of the Lateral Intraparietal Area in (the Study of) Decision Making.
Huk, Alexander C; Katz, Leor N; Yates, Jacob L
2017-07-25
Over the past two decades, neurophysiological responses in the lateral intraparietal area (LIP) have received extensive study for insight into decision making. In a parallel manner, inferred cognitive processes have enriched interpretations of LIP activity. Because of this bidirectional interplay between physiology and cognition, LIP has served as fertile ground for developing quantitative models that link neural activity with decision making. These models stand as some of the most important frameworks for linking brain and mind, and they are now mature enough to be evaluated in finer detail and integrated with other lines of investigation of LIP function. Here, we focus on the relationship between LIP responses and known sensory and motor events in perceptual decision-making tasks, as assessed by correlative and causal methods. The resulting sensorimotor-focused approach offers an account of LIP activity as a multiplexed amalgam of sensory, cognitive, and motor-related activity, with a complex and often indirect relationship to decision processes. Our data-driven focus on multiplexing (and de-multiplexing) of various response components can complement decision-focused models and provides more detailed insight into how neural signals might relate to cognitive processes such as decision making.
Protection - Principles and practice.
NASA Technical Reports Server (NTRS)
Graham, G. S.; Denning, P. J.
1972-01-01
The protection mechanisms of computer systems control the access to objects, especially information objects. The principles of protection system design are formalized as a model (theory) of protection. Each process has a unique identification number which is attached by the system to each access attempted by the process. Details of system implementation are discussed, taking into account the storing of the access matrix, aspects of efficiency, and the selection of subjects and objects. Two systems which have protection features incorporating all the elements of the model are described.
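A minimal sketch of the access-matrix idea described in this abstract; the subjects, objects, and rights are hypothetical, and real implementations store the matrix by rows (capability lists) or columns (access control lists) for efficiency.

```python
# Each (subject, object) cell of the access matrix holds a set of rights.
access_matrix = {
    ("process_17", "file_A"): {"read"},
    ("process_17", "file_B"): {"read", "write"},
}

def check_access(process_id, obj, right):
    """The system attaches the process id to every access attempt and
    grants it only if the right appears in the corresponding cell."""
    return right in access_matrix.get((process_id, obj), set())

assert check_access("process_17", "file_B", "write")
assert not check_access("process_17", "file_A", "write")
```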
Cognitive Modeling of Video Game Player User Experience
NASA Technical Reports Server (NTRS)
Bohil, Corey J.; Biocca, Frank A.
2010-01-01
This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is in stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.
Modeling the transition region
NASA Technical Reports Server (NTRS)
Singer, Bart A.
1993-01-01
The current status of transition-region models is reviewed in this report. To understand modeling problems, various flow features that influence the transition process are discussed first. Then an overview of the different approaches to transition-region modeling is given. This is followed by a detailed discussion of turbulence models and the specific modifications that are needed to predict flows undergoing laminar-turbulent transition. Methods for determining the usefulness of the models are presented, and an outlook for the future of transition-region modeling is suggested.
ReaDDy - A Software for Particle-Based Reaction-Diffusion Dynamics in Crowded Cellular Environments
Schöneberg, Johannes; Noé, Frank
2013-01-01
We introduce the software package ReaDDy for simulation of detailed spatiotemporal mechanisms of dynamical processes in the cell, based on reaction-diffusion dynamics with particle resolution. In contrast to other particle-based reaction kinetics programs, ReaDDy supports particle interaction potentials. This permits effects such as space exclusion, molecular crowding and aggregation to be modeled. The biomolecules simulated can be represented as a sphere, or as a more complex geometry such as a domain structure or polymer chain. ReaDDy bridges the gap between small-scale but highly detailed molecular dynamics or Brownian dynamics simulations and large-scale but little-detailed reaction kinetics simulations. ReaDDy has a modular design that enables the exchange of the computing core by efficient platform-specific implementations or dynamical models that are different from Brownian dynamics. PMID:24040218
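The following sketch illustrates the kind of dynamics ReaDDy simulates, not its actual API: Brownian motion of spherical particles with a soft pairwise repulsion standing in for space exclusion. All parameters are illustrative.

```python
import numpy as np

def brownian_step(pos, D, dt, k_rep, radius, rng):
    """One Brownian-dynamics step with harmonic repulsion between
    overlapping spheres (a crude stand-in for an interaction potential)."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            if 0.0 < r < 2.0 * radius:          # overlapping spheres repel
                f = k_rep * (2.0 * radius - r) * d / r
                forces[i] += f
                forces[j] -= f
    kBT = 1.0                                   # energies in units of kBT
    noise = rng.normal(scale=np.sqrt(2.0 * D * dt), size=pos.shape)
    return pos + (D * dt / kBT) * forces + noise

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(50, 3))      # 50 spheres in a 10^3 box
for _ in range(100):
    pos = brownian_step(pos, D=1.0, dt=1e-3, k_rep=100.0, radius=0.5, rng=rng)
```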
NASA Technical Reports Server (NTRS)
Hogan, John; Kang, Sukwon; Cavazzoni, Jim; Levri, Julie; Finn, Cory; Luna, Bernadette (Technical Monitor)
2000-01-01
The objective of this study is to compare incineration and composting in a Mars-based advanced life support (ALS) system. The variables explored include waste pre-processing requirements, reactor sizing and buffer capacities. The study incorporates detailed mathematical models of biomass production and waste processing into an existing dynamic ALS system model. The ALS system and incineration models (written in MATLAB/SIMULINK(c)) were developed at the NASA Ames Research Center. The composting process is modeled using first order kinetics, with different degradation rates for individual waste components (carbohydrates, proteins, fats, cellulose and lignin). The biomass waste streams are generated using modified "Energy Cascade" crop models, which use light- and dark-cycle temperatures, irradiance, photoperiod, [CO2], planting density, and relative humidity as model inputs. The study also includes an evaluation of equivalent system mass (ESM).
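A minimal sketch of the composting kinetics described: first-order decay with a separate rate constant per waste component. The rate constants and masses below are invented for illustration, not the study's values.

```python
import numpy as np

# First-order degradation with component-specific rates (illustrative, 1/day)
rates = {"carbohydrate": 0.30, "protein": 0.15, "fat": 0.10,
         "cellulose": 0.05, "lignin": 0.01}
mass = {"carbohydrate": 2.0, "protein": 1.0, "fat": 0.5,
        "cellulose": 3.0, "lignin": 1.5}      # kg dry matter, assumed

t = 30.0                                      # days of composting
remaining = {c: m * np.exp(-rates[c] * t) for c, m in mass.items()}
degraded = sum(mass.values()) - sum(remaining.values())
print(f"degraded after {t:.0f} d: {degraded:.2f} kg")
```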
Calibration of the highway safety manual for Missouri.
DOT National Transportation Integrated Search
2013-12-01
The new Highway Safety Manual (HSM) contains predictive models that need to be calibrated to local conditions. This calibration process requires detailed data types, such as crash frequencies, traffic volumes, geometrics, and land-use. The HSM do...
Anti-gravity with present technology - Implementation and theoretical foundation
NASA Astrophysics Data System (ADS)
Alzofon, F. E.
1981-07-01
This paper proposes a semi-empirical model of the processes leading to the gravitational field based on accepted features of subatomic processes. Through an analogy with methods of cryogenics, a method of decreasing (or increasing) the gravitational force on a vehicle, using presently-known technology, is suggested. Various ways of utilizing this effect in vehicle propulsion are described. A unified field theory is then detailed which provides a more formal foundation for the gravitational field model first introduced. In distinction to the general theory of relativity, it features physical processes which generate the gravitational field.
Collective neutrino oscillations and r-process nucleosynthesis in supernovae
NASA Astrophysics Data System (ADS)
Duan, Huaiyu
2012-10-01
Neutrinos can oscillate collectively in a core-collapse supernova. This phenomenon can occur much deeper inside the supernova envelope than what is predicted from the conventional matter-induced Mikheyev-Smirnov-Wolfenstein effect, and hence may have an impact on nucleosynthesis. The oscillation patterns and the r-process yields are sensitive to the details of the emitted neutrino fluxes, the sign of the neutrino mass hierarchy, the modeling of neutrino oscillations and the astrophysical conditions. The effects of collective neutrino oscillations on the r-process will be illustrated using representative late-time neutrino spectra and outflow models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Zhemin; Department of Physical Electronics, Tokyo Institute of Technology, 2-12-1 O-okayama, Meguro-ku, Tokyo 152-8552; Taguchi, Dai
The details of the turnover process of spontaneous polarization and the associated carrier motions in an indium-tin oxide/poly(vinylidene-trifluoroethylene)/pentacene/Au capacitor were analyzed by coupling displacement current measurement (DCM) and electric-field-induced optical second-harmonic generation (EFISHG) measurement. A model was set up from the DCM results to describe the relationship between the electric field in the semiconductor layer and the applied external voltage, showing that the effect of photo illumination on the spontaneous polarization process lay in the variation of the semiconductor conductivity. The EFISHG measurement directly and selectively probed the electric field distribution in the semiconductor layer, refining the model and revealing detailed carrier behaviors involving the photo illumination effect, dipole reversal, and interfacial charging in the device. A further decrease of the DCM current in the low-voltage region under illumination was found to result from the illumination effect, and this result was interpreted in terms of the change in the total capacitance of the double-layer capacitors.
Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge
1999-01-01
A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.
Synthetic aperture radar and digital processing: An introduction
NASA Technical Reports Server (NTRS)
Dicenzo, A.
1981-01-01
A tutorial on synthetic aperture radar (SAR) is presented with emphasis on digital data collection and processing. Background information on waveform frequency and phase notation, mixing, I/Q conversion, sampling and cross-correlation operations is included for clarity. The fate of a SAR signal from transmission to processed image is traced in detail, using the model of a single bright point target against a dark background. Some of the principal problems connected with SAR processing are also discussed.
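The core of the processing described, range compression of a point-target echo by cross-correlation with the transmitted chirp, can be sketched in a few lines; the pulse parameters and the target delay are illustrative.

```python
import numpy as np

# Range compression of a single bright point target by cross-correlation
# with the transmitted chirp (the matched filter).
fs, T, B = 100e6, 10e-6, 20e6                 # sample rate, pulse length, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear FM pulse

delay = 2000                                   # echo delay in samples (assumed)
echo = np.zeros(4096, dtype=complex)
echo[delay:delay + len(chirp)] = 0.1 * chirp   # weak return, dark background

compressed = np.correlate(echo, chirp, mode="same")
print("peak at sample", np.argmax(np.abs(compressed)))  # ~ delay + len(chirp)//2
```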
A new predictive multi-zone model for HCCI engine combustion
Bissoli, Mattia; Frassoldati, Alessio; Cuoci, Alberto; ...
2016-06-30
Here, this work introduces a new predictive multi-zone model for the description of combustion in Homogeneous Charge Compression Ignition (HCCI) engines. The model exploits the existing OpenSMOKE++ computational suite to handle detailed kinetic mechanisms, providing reliable predictions of the in-cylinder auto-ignition processes. All the elements with a significant impact on combustion performance and emissions, like turbulence, heat and mass exchanges, crevices, residual burned gases, thermal and feed stratification are taken into account. Compared to other computational approaches, this model improves the description of mixture stratification phenomena by coupling a wall heat transfer model derived from CFD application with a proper turbulence model. Furthermore, the calibration of this multi-zone model requires only three parameters, which can be derived from a non-reactive CFD simulation: these adaptive variables depend only on the engine geometry and remain fixed across a wide range of operating conditions, allowing the prediction of auto-ignition, pressure traces and pollutants. This computational framework enables the use of detailed kinetic mechanisms, as well as Rate of Production Analysis (RoPA) and Sensitivity Analysis (SA) to investigate the complex chemistry involved in the auto-ignition and the pollutant formation processes. In the final sections of the paper, these capabilities are demonstrated through the comparison with experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
This is an Aspen Plus process model for in situ and ex situ upgrading of fast pyrolysis vapors for the conversion of biomass to hydrocarbon fuels. It is based on conceptual designs that allow projections of future commercial implementations of the technologies based on a combination of research and existing commercial technologies. The process model was developed from the ground up at NREL. Results from the model are documented in a detailed design report NREL/TP-5100-62455 (available at http://www.nrel.gov/docs/fy15osti/62455.pdf).
Computer simulation of the metastatic progression.
Wedemann, Gero; Bethge, Anja; Haustein, Volker; Schumacher, Udo
2014-01-01
A novel computer model based on a discrete event simulation procedure describes quantitatively the processes underlying the metastatic cascade. Analytical functions describe the size of the primary tumor and the metastases, while a rate function models the intravasation events of the primary tumor and metastases. Events describe the behavior of the malignant cells until the formation of new metastases. The results of the computer simulations are in quantitative agreement with clinical data determined from a patient with hepatocellular carcinoma in the liver. The model provides a more detailed view on the process than a conventional mathematical model. In particular, the implications of interventions on metastasis formation can be calculated.
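A toy version of the discrete-event idea: primary tumour size follows an analytical growth law (Gompertz here), and intravasation events are drawn from a Poisson process whose rate scales with tumour size. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gompertz(t, n0=1.0, K=1e11, a=0.05):
    """Tumour size (cells) under Gompertz growth; parameters illustrative."""
    return K * (n0 / K) ** np.exp(-a * t)

def simulate(days=1000, dt=1.0, rate_coeff=1e-10):
    """Intravasation as an inhomogeneous Poisson process whose rate scales
    with tumour size; each event is a cell that may seed a new metastasis."""
    events = []
    for day in np.arange(0, days, dt):
        for _ in range(rng.poisson(rate_coeff * gompertz(day) * dt)):
            events.append(day)
    return events

mets = simulate()
print(f"{len(mets)} intravasation events; first at day {mets[0]:.0f}")
```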
A new computer code for discrete fracture network modelling
NASA Astrophysics Data System (ADS)
Xu, Chaoshui; Dowd, Peter
2010-03-01
The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
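A sketch of the simplest configuration the paper describes, a homogeneous Poisson marked point process for 2D fracture traces; the intensity and the mark distributions (von Mises orientation, lognormal length) are illustrative choices, not the package's defaults.

```python
import numpy as np

rng = np.random.default_rng(42)

# Locations: homogeneous Poisson in a 100 x 100 window;
# marks: orientation and trace length (all parameters assumed).
area = 100.0 * 100.0
intensity = 0.05                               # fractures per unit area
n = rng.poisson(intensity * area)

centres = rng.uniform(0, 100, size=(n, 2))
angles = rng.vonmises(mu=np.pi / 4, kappa=4.0, size=n)   # one preferred set
lengths = rng.lognormal(mean=1.0, sigma=0.5, size=n)

# Endpoints of each fracture trace
dx = 0.5 * lengths * np.cos(angles)
dy = 0.5 * lengths * np.sin(angles)
segments = np.stack([centres[:, 0] - dx, centres[:, 1] - dy,
                     centres[:, 0] + dx, centres[:, 1] + dy], axis=1)
print(n, "fracture traces generated")
```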
A cloud, precipitation and electrification modeling effort for COHMEX
NASA Technical Reports Server (NTRS)
Orville, Harold D.; Helsdon, John H.; Farley, Richard D.
1991-01-01
In mid-1987, the Modeling Group of the Institute of Atmospheric Sciences (IAS) began to simulate and analyze cloud runs that were made during the Cooperative Huntsville Meteorological Experiment (COHMEX) Project and later. The cloud model was run nearly every day during the summer 1986 COHMEX Project. The Modeling Group was then funded to analyze the results, make further modeling tests, and help explain the precipitation processes in the Southeastern United States. The main science objectives of COHMEX were: (1) to observe the prestorm environment and understand the physical mechanisms leading to the formation of small convective systems and processes controlling the production of precipitation; (2) to describe the structure of small convective systems producing precipitation including the large and small scale events in the environment surrounding the developing and mature convective system; (3) to understand the interrelationships between electrical activity within the convective system and the process of precipitation; and (4) to develop and test numerical models describing the boundary layer, tropospheric, and cloud scale thermodynamics and dynamics associated with small convective systems. The latter three of these objectives were addressed by the modeling activities of the IAS. A series of cloud models were used to simulate the clouds that formed during the operational project. The primary models used to date on the project were a two dimensional bulk water model, a two dimensional electrical model, and to a lesser extent, a two dimensional detailed microphysical cloud model. All of the models are based on fully interacting microphysics, dynamics, thermodynamics, and electrical equations. Only the 20 July 1986 case was analyzed in detail, although all of the cases run during the summer were evaluated for how well they predicted the characteristics of that day's convection.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
On the theory of coronal heating mechanisms
NASA Technical Reports Server (NTRS)
Kuperus, M.; Ionson, J. A.; Spicer, D. S.
1980-01-01
Theoretical models describing solar coronal heating mechanisms are reviewed in some detail. The requirements of chromospheric and coronal heating are discussed in the context of the fundamental constraints encountered in modelling the outer solar atmosphere. Heating by acoustic processes in the 'nonmagnetic' parts of the atmosphere is examined with particular emphasis on the shock wave theory. Also discussed are theories of heating by electrodynamic processes in the magnetic regions of the corona, either magnetohydrodynamic waves or current heating in the regions with large electric current densities (flare type heating). Problems associated with each of the models are addressed.
Bicknell, Klinton; Levy, Roger
2012-01-01
Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362
Logistics Process Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2008-03-31
LPAT is the integrated system combining the ANL-developed Enhanced Logistics Intra-Theater Support Tool (ELIST), sponsored by SDDC-TEA, and the Fort Future Virtual Installation Tool, sponsored by CERL. The Fort Future Simulation Engine was an application written in the ANL Repast Simphony framework and served as the basis for the Process Analysis Tool (PAT), which evolved into a stand-alone tool for detailed process analysis at a location. Combined with ELIST, an inter-installation logistics component was added to enable users to define large logistical agent-based models without having to program.
When do drilling alliances add value? The alliance value model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brett, J.F.; Craig, V.B.; Wadsworth, D.B.
1996-12-31
A recent GRI report details three previously unstudied aspects of alliances: specific measurable factors that improve alliance success, how a successful alliance should be structured, and when an alliance makes economic sense. The most innovative tool to emerge from the report, the Alliance Value Model, addresses the third aspect. The theory behind the Alliance Value Model is that the long-term viability of any drilling relationship hinges on its ability to create real value and achieve stability. Based upon the report's findings, the most effective way to form such an alliance is through a detailed description and integration of the technical processes involved. This new type of process-driven alliance is characterized by a value chain which links together a common set of technical processes, mutually defined bottom-line goals, and shared benefits. Building a process-driven alliance requires time and people and therefore has an associated cost. The real value generated by an alliance must exceed this start-up cost. The Alliance Value Model computes the net present value (NPV) of the cash flows for four different operating arrangements: (1) Business As Usual (conventional competitive bidding process), (2) Process-Driven Alliance (linking technical processes to accelerate production and reduce expenses), (3) Incentivized Process-Driven Alliance (linked technical processes with performance incentives to promote stability), and (4) No Drill Case (primarily used to gauge the market value of services). These arrangements test different degrees of process integration between an operator and its suppliers. They can also help determine whether the alliance can add enough value to exceed start-up costs and whether the relationship will be stable. Each partner can test the impact of the relational structure on its own profitability. When an alliance is warranted, all participants can benefit from real value generated in a stable relationship.
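The model's core comparison reduces to NPV arithmetic over the candidate arrangements, as in this sketch; the cash flows, discount rate, and start-up cost are invented for illustration.

```python
# Compare operating arrangements by the NPV of their cash flows.
def npv(cash_flows, rate=0.10):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

business_as_usual = [-100, 40, 40, 40, 40]
alliance = [-100 - 25, 55, 55, 55, 55]        # 25 = alliance start-up cost

gain = npv(alliance) - npv(business_as_usual)
print(f"value created net of start-up cost: {gain:.1f}")
# The alliance is warranted only if this gain is positive.
```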
The ability to forecast local and regional air pollution events is challenging since the processes governing the production and sustenance of atmospheric pollutants are complex and often non-linear. Comprehensive atmospheric models, by representing in as much detail as possible t...
Smartphone Apps on the Mobile Web: An Exploratory Case Study of Business Models
ERIC Educational Resources Information Center
Ford, Caroline Morgan
2012-01-01
The purpose of this research is to explore the business strategies of a firm seeking to develop and profitably market a mobile smartphone application to understand how small, digital entrepreneurships may build sustainable business models given substantial market barriers. Through a detailed examination of one firm's process to try to…
Class and Home Problems. Modeling an Explosion: The Devil Is in the Details
ERIC Educational Resources Information Center
Hart, Peter W.; Rudie, Alan W.
2011-01-01
Within the past 15 years, three North American pulp mills experienced catastrophic equipment failures while using 50 wt% hydrogen peroxide. In two cases, explosions occurred when normal pulp flow was interrupted due to other process problems. To understand the accidents, a kinetic model of alkali-catalyzed decomposition of peroxide was developed.…
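A hedged sketch of the kinetics involved: first-order peroxide decomposition with an Arrhenius rate constant, showing how a modest temperature rise sharply accelerates decomposition (the runaway at the heart of such accidents). The pre-exponential factor and activation energy are placeholders, not the paper's fitted values.

```python
import numpy as np

def peroxide_remaining(t_s, T_K, A=1e8, Ea=80e3):
    """Fraction of H2O2 left after t_s seconds at temperature T_K,
    assuming first-order decay with an Arrhenius rate (A, Ea assumed)."""
    R = 8.314                                  # J/(mol K)
    k = A * np.exp(-Ea / (R * T_K))            # 1/s
    return np.exp(-k * t_s)

# Stagnant flow lets temperature rise; the rate, and the heat released,
# grow exponentially with T.
for T in (300.0, 330.0, 360.0):
    print(T, f"{peroxide_remaining(600.0, T):.3f}")
```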
Importance of tread inertia and damping on the tyre/road contact stiffness
NASA Astrophysics Data System (ADS)
Winroth, J.; Andersson, P. B. U.; Kropp, W.
2014-10-01
Predicting tyre/road interaction processes like roughness excitation, stick-slip, stick-snap, wear and traction requires detailed information about the road surface, the tyre dynamics and the local deformation of the tread at the interface. Aspects of inertia and damping when the tread is locally deformed are often neglected in many existing tyre/road interaction models. The objective of this paper is to study how the dynamic features of the tread affect contact forces and contact stiffness during local deformation. This is done by simulating the detailed contact between an elastic layer and a rough road surface using a previously developed numerical time domain contact model. Road roughness on length scales smaller than the discretisation scale is included by the addition of nonlinear contact springs between each pair of contact elements. The dynamic case, with an elastic layer impulse response extending in time, is compared with the case where the corresponding quasi-static response is used. Results highlight the difficulty of estimating a constant contact stiffness as it increases during the indentation process between the elastic layer and the rough road surface. The stiffness-indentation relation additionally depends on how rapidly the contact develops; a faster process gives a stiffer contact. Material properties like loss factor and density also alter the contact development. This work implies that dynamic properties of the local tread deformation may be of importance when simulating contact details during normal tyre/road interaction conditions. There are however indications that the significant effect of damping could approximately be included as an increased stiffness in a quasi-static tread model.
Use of transport models for wildfire behavior simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linn, R.R.; Harlow, F.H.
1998-01-01
Investigators have attempted to describe the behavior of wildfires for over fifty years. Current models for numerical description are mainly algebraic and based on statistical or empirical ideas. The authors have developed a transport model called FIRETEC. The use of transport formulations connects the propagation rates to the full conservation equations for energy, momentum, species concentrations, mass, and turbulence. In this paper, highlights of the model formulation and results are described. The goal of the FIRETEC model is to describe the most probable average behavior of wildfires in a wide variety of conditions. FIRETEC represents the essence of the combination of many small-scale processes without resolving each process in complete detail.
Modelling the morphology of migrating bacterial colonies
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Tokihiro, T.; Badoual, M.; Grammaticos, B.
2010-08-01
We present a model which aims at describing the morphology of colonies of Proteus mirabilis and Bacillus subtilis. Our model is based on a cellular automaton which is obtained by the adequate discretisation of a diffusion-like equation, describing the migration of the bacteria, to which we have added rules simulating the consolidation process. Our basic assumption, following the findings of the group of Chuo University, is that the migration and consolidation processes are controlled by the local density of the bacteria. We show that it is possible within our model to reproduce the morphological diagrams of both bacteria species. Moreover, we model some detailed experiments done by the Chuo University group, obtaining a fine agreement.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems
Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.
2016-01-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060
Abnormalities of Object Visual Processing in Body Dysmorphic Disorder
Feusner, Jamie D.; Hembacher, Emily; Moller, Hayley; Moody, Teena D.
2013-01-01
Background Individuals with body dysmorphic disorder may have perceptual distortions for their appearance. Previous studies suggest imbalances in detailed relative to configural/holistic visual processing when viewing faces. No study has investigated the neural correlates of processing non-symptom-related stimuli. The objective of this study was to determine whether individuals with body dysmorphic disorder have abnormal patterns of brain activation when viewing non-face/non-body object stimuli. Methods Fourteen medication-free participants with DSM-IV body dysmorphic disorder and 14 healthy controls participated. We performed functional magnetic resonance imaging while participants matched photographs of houses that were unaltered, contained only high spatial frequency (high detail) information, or only low spatial frequency (low detail) information. The primary outcome was group differences in blood oxygen level-dependent signal changes. Results The body dysmorphic disorder group showed lesser activity in the parahippocampal gyrus, lingual gyrus, and precuneus for low spatial frequency images. There were greater activations in medial prefrontal regions for high spatial frequency images, although no significant differences when compared to a low-level baseline. Greater symptom severity was associated with lesser activity in dorsal occipital cortex and ventrolateral prefrontal cortex for normal and high spatial frequency images. Conclusions Individuals with body dysmorphic disorder have abnormal brain activation patterns when viewing objects. Hypoactivity in visual association areas for configural and holistic (low detail) elements and abnormal allocation of prefrontal systems for details is consistent with a model of imbalances in global vs. local processing. This may occur not only for appearance but also for general stimuli unrelated to their symptoms. PMID:21557897
Process-based modelling of the methane balance in periglacial landscapes (JSBACH-methane)
NASA Astrophysics Data System (ADS)
Kaiser, Sonja; Göckede, Mathias; Castro-Morales, Karel; Knoblauch, Christian; Ekici, Altug; Kleinen, Thomas; Zubrzycki, Sebastian; Sachs, Torsten; Wille, Christian; Beer, Christian
2017-01-01
A detailed process-based methane module for a global land surface scheme has been developed which is general enough to be applied in permafrost regions as well as wetlands outside permafrost areas. Methane production, oxidation and transport by ebullition, diffusion and plants are represented. In this model, oxygen has been explicitly incorporated into diffusion, transport by plants and two oxidation processes, of which one uses soil oxygen, while the other uses oxygen that is available via roots. Permafrost and wetland soils show special behaviour, such as variable soil pore space due to freezing and thawing or water table depths due to changing soil water content. This has been integrated directly into the methane-related processes. A detailed application at the Samoylov polygonal tundra site, Lena River Delta, Russia, is used for evaluation purposes. The application at Samoylov also shows differences in the importance of the several transport processes and in the methane dynamics under varying soil moisture, ice and temperature conditions during different seasons and on different microsites. These microsites are the elevated moist polygonal rim and the depressed wet polygonal centre. The evaluation shows sufficiently good agreement with field observations despite the fact that the module has not been specifically calibrated to these data. This methane module is designed such that the advanced land surface scheme is able to model recent and future methane fluxes from periglacial landscapes across scales. In addition, the methane contribution to carbon cycle-climate feedback mechanisms can be quantified when running coupled to an atmospheric model.
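To make the balance terms concrete, here is a minimal single-layer budget with temperature-dependent methane production and oxygen-limited oxidation; the rate constants and Q10 response are illustrative assumptions, not the JSBACH-methane parameterization.

```python
import numpy as np

def methane_step(c_ch4, c_o2, T, dt,
                 r_prod=1e-8, k_ox=5e-6, q10=2.0, Tref=278.15):
    """One explicit step of a single-layer CH4/O2 budget (all rates assumed)."""
    f_T = q10 ** ((T - Tref) / 10.0)           # Q10 temperature response
    prod = r_prod * f_T                        # methanogenesis (anaerobic)
    oxid = k_ox * c_ch4 * c_o2 * f_T           # oxidation consumes O2
    dch4 = (prod - oxid) * dt
    do2 = -2.0 * oxid * dt                     # CH4 + 2 O2 -> CO2 + 2 H2O
    return c_ch4 + dch4, max(c_o2 + do2, 0.0)

c_ch4, c_o2 = 0.0, 0.3                         # mol m-3, illustrative
for _ in range(24 * 60):                       # one day in 60 s steps
    c_ch4, c_o2 = methane_step(c_ch4, c_o2, T=281.0, dt=60.0)
print(f"CH4 after one day: {c_ch4:.2e} mol m-3")
```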
Modeling filtration and fouling with a microstructured membrane filter
NASA Astrophysics Data System (ADS)
Cummings, Linda; Sanaei, Pejman
2017-11-01
Membrane filters find widespread use in diverse applications such as A/C systems and water purification. While the details of the filtration process may vary significantly, the broad challenge of efficient filtration is the same: to achieve finely-controlled separation at low power consumption. The obvious resolution to the challenge would appear simple: use the largest pore size consistent with the separation requirement. However, the membrane characteristics (and hence the filter performance) are far from constant over its lifetime: the particles removed from the feed are deposited within and on the membrane filter, fouling it and degrading the performance over time. The processes by which this occurs are complex, and depend on several factors, including: the internal structure of the membrane and the type of particles in the feed. We present a model for fouling of a simple microstructured membrane, and investigate how the details of the microstructure affect the filtration efficiency. Our idealized membrane consists of bifurcating pores, arranged in a layered structure, so that the number (and size) of pores changes in the depth of the membrane. In particular, we address how the details of the membrane microstructure affect the filter lifetime, and the total throughput. NSF DMS 1615719.
NASA Technical Reports Server (NTRS)
Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.
2012-01-01
This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from this approach, through reduced development and design cycle time, include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.
The (Mathematical) Modeling Process in Biosciences.
Torres, Nestor V; Santos, Guido
2015-01-01
In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model, and optimization and system management derived from the analysis of the mathematical model. Throughout this work the main features and shortcomings of the process are analyzed, and a set of rules that could help in the task of modeling any biological system are presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology.
A new MRI land surface model HAL
NASA Astrophysics Data System (ADS)
Hosaka, M.
2011-12-01
A land surface model, HAL, has been newly developed for MRI-ESM1 and is used for the CMIP simulations. HAL consists of three submodels in the current version: SiByl (vegetation), SNOWA (snow) and SOILA (soil). It also contains a land coupler, LCUP, which connects the submodels with an atmospheric model. The vegetation submodel SiByl has surface vegetation processes similar to JMA/SiB (Sato et al. 1987, Hirai et al. 2007). SiByl has 2 vegetation layers (canopy and grass) and calculates heat, moisture, and momentum fluxes between the land surface and the atmosphere. The snow submodel SNOWA can have any number of snow layers; the maximum is set to 8 for the CMIP5 experiments. Temperature, SWE, density, grain size and the aerosol deposition contents of each layer are predicted. The snow properties including the grain size evolve through snow metamorphism processes (Niwano et al., 2011), and the snow albedo is diagnosed from the aerosol mixing ratio, the snow properties and the temperature (Aoki et al., 2011). The soil submodel SOILA can also have any number of soil layers, and is composed of 14 soil layers in the CMIP5 experiments. The temperature of each layer is predicted by solving heat conduction equations. The soil moisture is predicted by solving the Darcy equation, in which the hydraulic conductivity depends on the soil moisture. The land coupler LCUP is designed to enable complicated configurations of the submodels: HAL can include competing submodels (precise, detailed ones and simpler ones), and they can run in the same simulation. LCUP thus enables a two-step model validation, in which we first compare the results of the detailed submodels directly with in-situ observations, and then compare the results of the simpler submodels against those of the detailed ones. When the performance of the detailed submodels is good, we can improve the simpler ones by using the detailed ones as reference models.
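As a sketch of the layered soil thermodynamics described, here is an explicit finite-difference step of the heat-conduction equation over 14 layers; the diffusivity, layer thickness, and zero-flux boundaries are illustrative simplifications, not SOILA's actual discretisation.

```python
import numpy as np

def step_soil_temperature(T, dz, kappa, dt):
    """One explicit step of dT/dt = kappa * d2T/dz2 (zero-flux boundaries)."""
    flux = np.zeros(len(T) + 1)
    flux[1:-1] = -kappa * np.diff(T) / dz      # conductive flux between layers
    return T - dt * np.diff(flux) / dz

T = np.linspace(290.0, 283.0, 14)              # K: warm surface, cool at depth
for _ in range(24 * 3600 // 60):               # one day in 60 s steps
    T = step_soil_temperature(T, dz=0.2, kappa=5e-7, dt=60.0)
print(T.round(2))
```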
A SYSTEMS BIOLOGY APPROACH TO DEVELOPMENTAL TOXICOLOGY
Abstract
Recent advances in developmental biology have yielded detailed models of gene regulatory networks (GRNs) involved in cell specification and other processes in embryonic differentiation. Such networks form the bedrock on which a systems biology approach to developme...
Entrepreneurial Spirit in Strategic Planning.
ERIC Educational Resources Information Center
Riggs, Donald E.
1987-01-01
Presents a model which merges the concepts of entrepreneurship with those of strategic planning to create a library management system. Each step of the process, including needs assessment and policy formation, strategy choice and implementation, and evaluation, is described in detail. (CLB)
The need for conducting forensic analysis of decommissioned bridges.
DOT National Transportation Integrated Search
2014-01-01
A limiting factor in current bridge management programs is a lack of detailed knowledge of bridge deterioration mechanisms and processes. The current state of the art is to predict future condition using statistical forecasting models based upon ...
Somekh, Judith; Choder, Mordechai; Dori, Dov
2012-01-01
We propose a Conceptual Model-based Systems Biology framework for qualitative modeling, executing, and eliciting knowledge gaps in molecular biology systems. The framework is an adaptation of Object-Process Methodology (OPM), a graphical and textual executable modeling language. OPM enables concurrent representation of the system's structure—the objects that comprise the system, and behavior—how processes transform objects over time. Applying a top-down approach of recursively zooming into processes, we model a case in point—the mRNA transcription cycle. Starting with this high level cell function, we model increasingly detailed processes along with participating objects. Our modeling approach is capable of modeling molecular processes such as complex formation, localization and trafficking, molecular binding, enzymatic stimulation, and environmental intervention. At the lowest level, similar to the Gene Ontology, all biological processes boil down to three basic molecular functions: catalysis, binding/dissociation, and transporting. During modeling and execution of the mRNA transcription model, we discovered knowledge gaps, which we present and classify into various types. We also show how model execution enhances a coherent model construction. Identification and pinpointing knowledge gaps is an important feature of the framework, as it suggests where research should focus and whether conjectures about uncertain mechanisms fit into the already verified model. PMID:23308089
Modelling interstellar physics and chemistry: implications for surface and solid-state processes.
Williams, David; Viti, Serena
2013-07-13
We discuss several types of regions in the interstellar medium of the Milky Way and other galaxies in which the chemistry appears to be influenced or dominated by surface and solid-state processes occurring on or in interstellar dust grains. For some of these processes, for example, the formation of H₂ molecules, detailed experimental and theoretical approaches have provided excellent fundamental data for incorporation into astrochemical models. In other cases, there is an astrochemical requirement for much more laboratory and computational study, and we highlight these needs in our description. Nevertheless, in spite of the limitations of the data, it is possible to infer from astrochemical modelling that surface and solid-state processes play a crucial role in astronomical chemistry from early epochs of the Universe up to the present day.
Mathematical models for the early detection and treatment of colorectal cancer.
Harper, P R; Jones, S K
2005-05-01
Colorectal cancer is a major cause of death for men and women in the Western world. When the cancer is detected through an awareness of the symptoms by a patient, typically it is at an advanced stage. It is possible to detect cancer at an early stage through screening and the marked differences in survival for early and late stages provide the incentive for the primary prevention or early detection of colorectal cancer. This paper considers mathematical models for colorectal cancer screening together with models for the treatment of patients. Illustrative results demonstrate that detailed attention to the processes involved in diseases, interventions and treatment enable us to combine data and expert knowledge from various sources. Thus a detailed operational model is a very useful tool in helping to make decisions about screening at national and local levels.
Prediction of nearfield jet entrainment by an interactive mixing/afterburning model
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.; Wilmoth, R. G.
1978-01-01
The development of a computational model (BOAT) for calculating nearfield jet entrainment, and its application to the prediction of nozzle boattail pressures, is discussed. BOAT accounts for the detailed turbulence and thermochemical processes occurring in the nearfield shear layers of jet engine (and rocket) exhaust plumes while interfacing with the inviscid exhaust and external flowfield regions in an overlaid, interactive manner. The ability of the model to analyze simple free shear flows is assessed by detailed comparisons with fundamental laboratory data. The overlaid methodology and the entrainment correction employed to yield the effective plume boundary conditions are assessed via application of BOAT in conjunction with the codes comprising the NASA/LRC patched viscous/inviscid model for determining nozzle boattail drag for subsonic/transonic external flows. Comparisons between the predictions and data on underexpanded laboratory cold air jets are presented.
Global models for synthetic fuels planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamontagne, J.
1983-10-01
This study was performed to identify the set of existing global models with the best potential for use in the US Synthetic Fuels Corporation's strategic planning process, and to recommend the most appropriate model. The study was limited to global models with representations that encompass time horizons beyond the year 2000, multiple fuel forms, and significant regional detail. Potential accessibility to the Synthetic Fuels Corporation and adequate documentation were also required. Four existing models (LORENDAS, WIM, IIASA, and IEA/ORAU) were judged to be the best candidates for the SFC's use at this time; none of the models appears to be ideal for the SFC's purposes. On the basis of currently available information, the most promising short-term option open to the SFC is the use of LORENDAS, with careful attention to the definition of alternative energy demand scenarios. Longer-term options which deserve further study are coupling LORENDAS with an explicit model of energy demand, and modification of the IEA/ORAU model to include finer time-period definition and additional technological detail.
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
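The paper's baseline case can be illustrated by comparing a one-dimensional biased random walk with its drift-diffusion (PDE) limit; the Gaussian step kernel used here is smooth, the regime where such approximations behave well. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Biased random walk vs. its drift-diffusion approximation in 1D.
n, steps, dt = 100_000, 200, 1.0
bias, sigma = 0.05, 1.0
x = np.zeros(n)
for _ in range(steps):
    x += rng.normal(loc=bias * dt, scale=sigma * np.sqrt(dt), size=n)

# PDE prediction: mean = bias * t, variance = sigma^2 * t
t = steps * dt
print(f"walk mean {x.mean():.2f} vs PDE {bias * t:.2f}; "
      f"walk var {x.var():.1f} vs PDE {sigma**2 * t:.1f}")
```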
McDonald, Paige L; Harwood, Kenneth J; Butler, Joan T; Schlumpf, Karen S; Eschmann, Carson W; Drago, Daniela
2018-12-01
Intensive courses (ICs), or accelerated courses, are gaining popularity in medical and health professions education, particularly as programs adopt e-learning models to negotiate challenges of flexibility, space, cost, and time. In 2014, the Department of Clinical Research and Leadership (CRL) at the George Washington University School of Medicine and Health Sciences began the process of transitioning two online 15-week graduate programs to an IC model. Within a year, a third program also transitioned to this model. A literature review yielded little guidance on the process of transitioning from 15-week, traditional models of delivery to IC models, particularly in online learning environments. Correspondingly, this paper describes the process by which CRL transitioned three online graduate programs to an IC model and details best practices for course design and facilitation resulting from our iterative redesign process. Finally, we present lessons learned for the benefit of other medical and health professions' programs contemplating similar transitions. Abbreviations: CRL: Department of Clinical Research and Leadership; HSCI: Health Sciences; IC: Intensive course; PD: Program director; QM: Quality Matters.
McDonald, Paige L.; Harwood, Kenneth J.; Butler, Joan T.; Schlumpf, Karen S.; Eschmann, Carson W.; Drago, Daniela
2018-01-01
Intensive courses (ICs), or accelerated courses, are gaining popularity in medical and health professions education, particularly as programs adopt e-learning models to negotiate challenges of flexibility, space, cost, and time. In 2014, the Department of Clinical Research and Leadership (CRL) at the George Washington University School of Medicine and Health Sciences began the process of transitioning two online 15-week graduate programs to an IC model. Within a year, a third program also transitioned to this model. A literature review yielded little guidance on the process of transitioning from 15-week, traditional models of delivery to IC models, particularly in online learning environments. Correspondingly, this paper describes the process by which CRL transitioned three online graduate programs to an IC model and details best practices for course design and facilitation resulting from our iterative redesign process. Finally, we present lessons-learned for the benefit of other medical and health professions' programs contemplating similar transitions. Abbreviations: CRL: Department of Clinical Research and Leadership; HSCI: Health Sciences; IC: Intensive course; PD: Program director; QM: Quality Matters. PMID:29277143
ERIC Educational Resources Information Center
Riggins, Tracy; Miller, Neely C.; Bauer, Patricia J.; Georgieff, Michael K.; Nelson, Charles A.
2009-01-01
The ability to recall contextual details associated with an event begins to develop in the first year of life, yet adult levels of recall are not reached until early adolescence. Dual-process models of memory suggest that the distinct retrieval process that supports the recall of such contextual information is recollection. In the present…
NASA Technical Reports Server (NTRS)
Dunbar, D. N.; Tunnah, B. G.
1979-01-01
Program predicts production volumes of petroleum refinery products, with particular emphasis on aircraft-turbine fuel blends and their key properties. It calculates capital and operating costs for refinery and its margin of profitability. Program also includes provisions for processing of synthetic crude oils from oil shale and coal liquefaction processes and contains highly-detailed blending computations for alternative jet-fuel blends of varying endpoint specifications.
A functional-dynamic reflection on participatory processes in modeling projects.
Seidl, Roman
2015-12-01
The participation of nonscientists in modeling projects/studies is increasingly employed to fulfill different functions. However, it is not well investigated if and how explicitly these functions and the dynamics of a participatory process are reflected by modeling projects in particular. In this review study, I explore participatory modeling projects from a functional-dynamic process perspective. The main differences among projects relate to the functions of participation-most often, more than one per project can be identified, along with the degree of explicit reflection (i.e., awareness and anticipation) on the dynamic process perspective. Moreover, two main approaches are revealed: participatory modeling covering diverse approaches and companion modeling. It becomes apparent that the degree of reflection on the participatory process itself is not always explicit and perfectly visible in the descriptions of the modeling projects. Thus, the use of common protocols or templates is discussed to facilitate project planning, as well as the publication of project results. A generic template may help, not in providing details of a project or model development, but in explicitly reflecting on the participatory process. It can serve to systematize the particular project's approach to stakeholder collaboration, and thus quality management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Sarah
2015-12-01
The dual objectives of this project were to improve our basic understanding of the processes that control cirrus microphysical properties and to improve the representation of these processes in parameterizations. A major effort in the proposed research was to integrate, calibrate, and better understand the uncertainties in all of these measurements.
Advanced information processing system: Inter-computer communication services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Masotto, Tom; Sims, J. Terry; Whittredge, Roy; Alger, Linda S.
1991-01-01
The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.
Is the work flow model a suitable candidate for an observatory supervisory control infrastructure?
NASA Astrophysics Data System (ADS)
Daly, Philip N.; Schumacher, Germán.
2016-08-01
This paper reports on the early investigation of using the workflow model for observatory infrastructure software. We researched several workflow engines and identified 3 for further detailed study: Bonita BPM, Activiti and Taverna. We discuss the business process model and how it relates to observatory operations and identify a path finder exercise to further evaluate the applicability of these paradigms.
NASA Astrophysics Data System (ADS)
Turner, Andrew; Bhat, Gs; Evans, Jonathan; Marsham, John; Martin, Gill; Parker, Douglas; Taylor, Chris; Bhattacharya, Bimal; Madan, Ranju; Mitra, Ashis; Mrudula, Gm; Muddu, Sekhar; Pattnaik, Sandeep; Rajagopal, En; Tripathi, Sachida
2015-04-01
The monsoon supplies the majority of water in South Asia, making understanding and predicting its rainfall vital for the growing population and economy. However, modelling and forecasting the monsoon from days to the season ahead is limited by large model errors that develop quickly, with significant inter-model differences pointing to errors in physical parametrizations such as convection, the boundary layer and land surface. These errors persist into climate projections and many of these errors persist even when increasing resolution. At the same time, a lack of detailed observations is preventing a more thorough understanding of monsoon circulation and its interaction with the land surface: a process governed by the boundary layer and convective cloud dynamics. The INCOMPASS project will support and develop modelling capability in Indo-UK monsoon research, including test development of a new Met Office Unified Model 100m-resolution domain over India. The first UK detachment of the FAAM research aircraft to India, in combination with an intensive ground-based observation campaign, will gather new observations of the surface, boundary layer structure and atmospheric profiles to go with detailed information on the timing of monsoon rainfall. Observations will be focused on transects in the northern plains of India (covering a range of surface types from irrigated to rain-fed agriculture, and wet to dry climatic zones) and across the Western Ghats and rain shadow in southern India (including transitions from land to ocean and across orography). A pilot observational campaign is planned for summer 2015, with the main field campaign to take place during spring/summer 2016. This project will advance our ability to forecast the monsoon, through a programme of measurements and modelling that aims to capture the key surface-atmosphere feedback processes in models. The observational analysis will allow a unique and unprecedented characterization of monsoon processes that will feed directly into model development at the UK Met Office and Indian NCMRWF, through model evaluation at a range of scales and leading to model improvement by working directly with parametrization developers. The project will institute a new long-term series of measurements of land surface fluxes, a particularly unconstrained observation for India, through eddy covariance flux towers. Combined with detailed land surface modelling using the Joint UK Land Environment Simulator (JULES) model, this will allow testing of land surface initialization in monsoon forecasts and improved land-atmosphere coupling.
Brandt, Adam R; Sun, Yuchi; Bharadwaj, Sharad; Livingston, David; Tan, Eugene; Gordon, Deborah
2015-01-01
Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services.
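As a small worked illustration of the two ratios defined above (a sketch with invented field numbers, not data from the 40-field study):

```python
def ner(crude_out_mj, total_fuels_mj):
    """Net energy return: MJ of crude produced per MJ of all fuels consumed."""
    return crude_out_mj / total_fuels_mj

def eer(crude_out_mj, external_energy_mj):
    """External energy return: counts only energy purchased from outside the
    field, so largely self-fuelled operations can exceed 1000:1."""
    return crude_out_mj / external_energy_mj

# hypothetical field: 1 PJ of crude, 25 TJ of total fuels, 0.8 TJ external
print(ner(1e9, 2.5e7), eer(1e9, 8e5))   # -> 40.0, 1250.0
```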
NASA Astrophysics Data System (ADS)
Jaiswal, D.; Long, S.; Parton, W. J.; Hartman, M.
2012-12-01
A coupled modeling system combining a crop growth model (BioCro) and a biogeochemical model (DayCent) has been developed to assess the two-way interactions between plant growth and biogeochemistry. Crop growth in BioCro is simulated using a detailed mechanistic biochemical and biophysical multi-layer canopy model, with partitioning of dry biomass into different plant organs according to phenological stage. Using hourly weather records, the model partitions light between dynamically changing sunlit and shaded portions of the canopy and computes carbon and water exchange with the atmosphere and through the canopy for each hour of the day, each day of the year. The model has been parameterized for the bioenergy crops sugarcane, Miscanthus and switchgrass, and validation has shown it to predict growth cycles and partitioning of biomass to a high degree of accuracy. As such it provides an ideal input for a soil biogeochemical model. DayCent is an established model for predicting long-term changes in soil C and N and soil-atmosphere exchanges of greenhouse gases. At present, DayCent uses a relatively simple productivity model; in this project BioCro has replaced it to provide DayCent with a productivity and growth model equal in detail to its biogeochemistry. Dynamic coupling of these two models to produce CroCent allows for differential C:N ratios of litter fall (based on the rates of senescence of different plant organs) and calibration of the model for realistic plant productivity in a mechanistic way. A process-based approach to modeling plant growth is needed for bioenergy crops because research on these crops (especially second-generation feedstocks) has started only recently, and detailed agronomic information on growth, yield and management is too limited for effective empirical models. The coupled model provides a means to test and improve the model against high-resolution data, such as that obtained by eddy covariance, and to explore the yield implications of different crop and soil management.
Retzlaff, Nancy; Stadler, Peter F
2018-06-21
Evolutionary processes have been described not only in biology but also for a wide range of human cultural activities, including languages and law. In contrast to the evolution of DNA or protein sequences, the detailed mechanisms giving rise to the observed evolution-like processes are not or only partially known. The absence of a mechanistic model of evolution implies that it remains unknown how the distances between different taxa should be quantified. Considering distortions of metric distances, we first show that poor choices of the distance measure can lead to incorrect phylogenetic trees. Based on the well-known fact that phylogenetic inference requires additive metrics, we then show that the correct phylogeny can be computed from a distance matrix D if there is a monotonic, subadditive function ζ such that ζ∘D is additive. The required metric-preserving transformation ζ can be computed as the solution of an optimization problem. This result shows that the problem of phylogeny reconstruction is well defined even if a detailed mechanistic model of the evolutionary process remains elusive.
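As a minimal sketch of what "additive" means operationally, the code below tests the four-point condition that characterizes additive metrics, and a one-parameter family of monotonic, subadditive power transforms d ↦ d**a (0 < a ≤ 1) stands in for the paper's general optimization over metric-preserving transformations. The function names and the restriction to power transforms are illustrative assumptions.

```python
import itertools
import numpy as np

def is_additive(D, tol=1e-9):
    """Four-point condition: for every quadruple, the two largest of the
    three pairwise distance sums must coincide (up to tol)."""
    n = D.shape[0]
    for i, j, k, l in itertools.combinations(range(n), 4):
        s = sorted((D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]))
        if s[2] - s[1] > tol:
            return False
    return True

def four_point_violation(D):
    """Largest gap between the two biggest sums over all quadruples."""
    n, v = D.shape[0], 0.0
    for i, j, k, l in itertools.combinations(range(n), 4):
        s = sorted((D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]))
        v = max(v, s[2] - s[1])
    return v

def best_power_transform(D, exponents=np.linspace(0.05, 1.0, 96)):
    """Pick the exponent a whose transform D**a is closest to additive;
    d**a is monotonic and subadditive on [0, inf) for 0 < a <= 1."""
    return min(exponents, key=lambda a: four_point_violation(D ** a))
```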
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use RACE/A to model data from two variants of a picture-word interference task in a psychological refractory period design. These models will demonstrate how RACE/A enables interactions between sequential sampling and long-term declarative learning, and between sequential sampling and task control. In a traditional sequential sampling model, the onset of the process within the task is unclear, as is the number of sampling processes. RACE/A provides a theoretical basis for estimating the onset of sequential sampling processes during task execution and allows for easy modeling of multiple sequential sampling processes within a task. Copyright © 2011 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.; Callan, Geary
1990-01-01
The focus of this part of the investigation is to find one or more general modeling techniques that will help reduce the time taken by numerical forecast models to initiate or spin-up precipitation processes and enhance storm intensity. If the conventional data base could explain the atmospheric mesoscale flow in detail, then much of our problem would be eliminated. But the data base is primarily synoptic scale, requiring that a solution must be sought either in nonconventional data, in methods to initialize mesoscale circulations, or in ways of retaining between forecasts the model generated mesoscale dynamics and precipitation fields. All three methods are investigated. The initialization and assimilation of explicit cloud and rainwater quantities computed from conservation equations in a mesoscale regional model are examined. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. The question of how to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed. The explicit cloud calculations were purposely kept simple so that different initialization techniques can be easily and economically tested. Precipitation spin-up processes associated with three different types of weather phenomena are examined. Our findings show that diabatic initialization, or diabatic initialization in combination with a new diabatic forcing procedure, work effectively to enhance the spin-up of precipitation in a mesoscale numerical weather prediction forecast. Also, the retention of cloud and rain water during the analysis phase of the 4-D data assimilation procedure is shown to be valuable. Without detailed observations, the vertical placement of the diabatic heating remains a critical problem.
Understanding a basic biological process: Expert and novice models of meiosis
NASA Astrophysics Data System (ADS)
Kindfield, Ann C. H.
Central to secondary and college-level biology instruction is the development of student understanding of a number of subcellular processes. Yet some of the most crucial are consistently cited as the most difficult components of biology to learn. Among these is meiosis. In this article I report on the meiosis models utilized by five individuals at each of three levels of expertise in genetics as each reasoned about this process in an individual interview setting. Detailed characterization of individual meiosis models and comparison among models revealed a set of biologically correct features common to all individuals' models as well as a variety of model flaws (i.e., meiosis misunderstandings) which are categorized according to type and level of expertise. These results are suggestive of both sources of various misunderstandings and factors that might contribute to the construction of a sound understanding of meiosis. Each of these is addressed in relation to their respective implications for instruction.
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
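A compact sketch of the particle-marginal Metropolis-Hastings (PMMH) idea on the paper's Lotka–Volterra example, with the latent dynamics approximated by the chemical Langevin (SDE) form mentioned in the abstract. The particle count, rates, observation noise, and flat log-prior are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def propagate(x, theta, dt=0.1, nsub=10):
    """Euler-Maruyama steps of the chemical Langevin approximation of a
    Lotka-Volterra network with rate constants theta = (c1, c2, c3)."""
    c1, c2, c3 = theta
    h = dt / nsub
    for _ in range(nsub):
        prey, pred = x[:, 0], x[:, 1]
        a1, a2, a3 = c1 * prey, c2 * prey * pred, c3 * pred
        drift = np.stack([a1 - a2, a2 - a3], axis=1)
        diff = np.stack([np.sqrt(a1 + a2), np.sqrt(a2 + a3)], axis=1)
        x = np.maximum(x + h * drift + np.sqrt(h) * diff * rng.standard_normal(x.shape), 1e-6)
    return x

def log_lik(theta, obs, n_part=200, obs_sd=10.0):
    """Bootstrap particle filter estimate of the marginal log-likelihood,
    assuming Gaussian observation error on both species."""
    x = np.full((n_part, 2), 100.0)
    ll = 0.0
    for y in obs:
        x = propagate(x, theta)
        logw = -0.5 * np.sum(((y - x) / obs_sd) ** 2, axis=1)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())
        x = x[rng.choice(n_part, size=n_part, p=w / w.sum())]  # multinomial resampling
    return ll

def pmmh(obs, theta0, n_iter=1000, step=0.05):
    """Random-walk PMMH on log(theta); a flat prior on log(theta) is
    assumed here, so proposal Jacobians cancel in the acceptance ratio."""
    theta, ll = np.asarray(theta0, float), log_lik(theta0, obs)
    chain = []
    for _ in range(n_iter):
        prop = theta * np.exp(step * rng.standard_normal(3))
        ll_prop = log_lik(prop, obs)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```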
Development of a software tool to support chemical and biological terrorism intelligence analysis
NASA Astrophysics Data System (ADS)
Hunt, Allen R.; Foreman, William
1997-01-01
AKELA has developed a software tool which uses a systems analytic approach to model the critical processes which support the acquisition of biological and chemical weapons by terrorist organizations. This tool has four major components. The first is a procedural expert system which describes the weapon acquisition process. It shows the relationship between the stages a group goes through to acquire and use a weapon, and the activities in each stage required to be successful. It applies to both state-sponsored and small-group acquisition. An important part of this expert system is an analysis of the acquisition process which is embodied in a list of observables of weapon acquisition activity. These observables are cues for intelligence collection. The second component is a detailed glossary of technical terms which helps analysts with a non-technical background understand the potential relevance of collected information. The third component is a linking capability which shows where technical terms apply to the parts of the acquisition process. The final component is a simple, intuitive user interface which shows a picture of the entire process at a glance and lets the user move quickly to get more detailed information. This paper explains each of these model components.
On storm movement and its applications
NASA Astrophysics Data System (ADS)
Niemczynowicz, Janusz
Rainfall-runoff models applicable to the design and analysis of sewage systems in urban areas are being further developed to better represent the different physical processes occurring on an urban catchment. However, one important part of the modelling procedure, the generation of the rainfall input, is still a weak point. The main problem is the lack of adequate rainfall data representing the temporal and spatial variations of the natural rainfall process. Storm movement is a natural phenomenon which influences urban runoff, yet rainfall movement and its influence on the runoff generation process are not represented in presently available urban runoff simulation models. A physical description of rainfall movement and its parameters is given, based on detailed measurements from twelve gauges in Lund, Sweden. The paper discusses the significance of rainfall movement for the runoff generation process and suggests how rainfall movement parameters may be used in runoff modelling.
Designing Class Methods from Dataflow Diagrams
NASA Astrophysics Data System (ADS)
Shoval, Peretz; Kabeli-Shani, Judith
A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail using pseudocode. Then each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
The single-zone numerical model of homogeneous charge compression ignition engine performance
NASA Astrophysics Data System (ADS)
Fedyanov, E. A.; Itkis, E. M.; Kuzmin, V. N.; Shumskiy, S. N.
2017-02-01
A single-zone model of methane-air mixture combustion in a Homogeneous Charge Compression Ignition (HCCI) engine was developed. Initial modeling efforts resulted in the selection of the detailed kinetic reaction mechanism most appropriate for the conditions of the HCCI process. The model was then extended to simulate the performance of a four-stroke engine and supplemented with physically reasonable adjustment functions. Validation of the calculations against experimental data showed acceptable agreement.
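A minimal sketch of the single-zone idea: a crank-angle-resolved energy balance for an adiabatic cylinder, with the selected detailed methane mechanism replaced by a one-step Arrhenius placeholder. All geometry, gas properties, and rate constants below are illustrative assumptions, not the authors' inputs.

```python
import numpy as np
from scipy.integrate import solve_ivp

R_u = 8.314                     # universal gas constant, J/(mol K)
gamma = 1.32                    # ratio of specific heats, assumed constant
cv = 287.0 / (gamma - 1.0)      # J/(kg K), air-like charge
q_fuel = 50.0e6                 # fuel lower heating value, J/kg
A_pre, E_a = 2.0e9, 1.2e5       # one-step Arrhenius placeholder (1/s, J/mol)

def volume(theta, v_c=5e-5, v_d=5e-4, r=3.4):
    """Slider-crank cylinder volume; theta in radians, TDC at theta = 0."""
    return v_c + 0.5 * v_d * (r + 1 - np.cos(theta) - np.sqrt(r**2 - np.sin(theta)**2))

def rhs(theta, y, omega, y_fuel):
    T, x = y                    # temperature and burned fuel fraction
    v = volume(theta)
    dv = (volume(theta + 1e-6) - volume(theta - 1e-6)) / 2e-6   # dV/dtheta
    dx_dt = A_pre * (1.0 - x) * np.exp(-E_a / (R_u * T))        # burn rate, 1/s
    # adiabatic single zone: compression/expansion work + heat release
    dT_dt = -(gamma - 1.0) * T * (dv * omega) / v + y_fuel * q_fuel * dx_dt / cv
    return [dT_dt / omega, dx_dt / omega]                       # per crank radian

omega = 2 * np.pi * 1500 / 60   # 1500 rpm in rad/s
sol = solve_ivp(rhs, [-np.pi, np.pi], [420.0, 0.0], args=(omega, 0.03), max_step=0.01)
print(f"peak temperature: {sol.y[0].max():.0f} K")
```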
Design, Modeling, Fabrication & Characterization of Industrial Si Solar Cells
NASA Astrophysics Data System (ADS)
Chowdhury, Ahrar Ahmed
Photovoltaics is a viable solution towards meeting energy demand in an environmentally friendly way. To ensure mass access to photovoltaic electricity, cost-effective approaches need to be adopted. This thesis aims at a substrate-independent fabrication process in order to achieve high-efficiency, cost-effective industrial silicon (Si) solar cells. The most cost-effective structures, such as Al-BSF (Aluminum Back Surface Field), FSF (Front Surface Field) and bifacial cells, are investigated in detail to exploit their efficiency potential. First, we introduce a two-dimensional simulation model for the design and modeling of the Si solar cells most commonly used in today's PV arena. The best modelled results for high-efficiency Al-BSF, FSF and bifacial cells are 20.50%, 22% and 21.68%, respectively. Special attention is given to the metallization design of all structures in order to reduce Ag cost. Furthermore, detailed design and modeling were performed on FSF and bifacial cells. The FSF cell has the potential to gain 0.42% absolute efficiency by combining emitter design and front surface passivation. The prospects of bifacial cells can be realized by optimizing gridline widths and gridline numbers. Since bifacial cells have metallization on both sides, a twofold cost saving is possible via innovative metallization design. Following the modeling, an effort was undertaken to reach the modelled results in the fabrication process. We propose a substrate-independent fabrication process aimed at establishing simultaneous processing sequences for both monofacial and bifacial cells. Cost-effective screen-printing technology is utilized for contact formation throughout this thesis. The best Al-BSF cell attained an efficiency of ~19.40%; detailed characterization was carried out to find a roadmap to an Al-BSF cell of >20.50% efficiency. Since n-type cells are free from light-induced degradation (LID), there has recently been growing interest in FSF cells. Our best fabricated FSF cell achieved ~18.40% efficiency. Characterization of these cells showed that performance can be further improved by utilizing a high-lifetime base wafer, and we show a step-by-step improvement of the device parameters to achieve a ~22% efficiency FSF cell. Finally, bifacial cells were fabricated with 13.32% front and 9.65% rear efficiency; the efficiency limitation is due to the quality of the base wafer. A detailed resistance breakdown was conducted on these cells to analyze parasitic resistance losses. It was found that base and gridline resistances dominated the fill factor (FF) loss. However, very low contact resistances of 20 mΩ·cm² at the front side and 2 mΩ·cm² at the rear side were observed when utilizing the same Ag paste for front and rear contact formation. This might provide a pathway in the search for an optimized Ag paste to attain high-efficiency screen-printed bifacial cells; detailed investigations need to be carried out to unveil the properties of this Ag paste. In future work, more focus will be given to the metallization design to incorporate further reductions in Ag cost, and an Al2O3 passivation layer will be incorporated as a means to attain a ~23% screen-printed bifacial cell.
NASA Astrophysics Data System (ADS)
Pawłowicz, Joanna A.
2017-10-01
The TLS (Terrestrial Laser Scanning) method may replace traditional building survey methods, e.g. those requiring the use of measuring tapes or range finders. This technology allows for collecting digital data in the form of a point cloud, which can be used to create a 3D model of a building, and for collecting data with remarkable precision, which translates into the possibility of reproducing all architectural features of a building. Such data is applied in reverse engineering to create a 3D model of an object existing in physical space. This study presents the results of research carried out using a point cloud to recreate the architectural features of a historical building with the application of reverse engineering. The research was conducted on a two-storey residential building with a basement and an attic. A veranda with a complicated wooden structure protrudes from the building's façade. The measurements were taken at medium and at the highest resolution using a Leica ScanStation C10 laser scanner. The data obtained was processed using specialist software, which allowed for the application of reverse engineering, especially for reproducing the sculpted details of the veranda. Following digitization, all redundant data was removed from the point cloud and the cloud was subjected to modelling. For testing purposes, a selected part of the veranda was modelled by means of two methods: surface matching and Triangulated Irregular Network (TIN). Both modelling methods were applied to the data collected at medium and at the highest resolution. Creating a model based on data obtained at medium resolution, whether by surface matching or by the TIN method, does not allow for a precise recreation of architectural details. The study presents certain sculpted elements recreated from the highest-resolution data with a superimposed TIN, juxtaposed against a digital image; the resulting model is very precise. Creating good models requires highly accurate field data. It is important to properly choose the distance between the measuring station and the measured object in order to ensure that the laser beam strikes the surface as close to perpendicular as possible, both horizontally and vertically. The model created from medium-resolution data offers very poor detail quality: only the bigger, basic elements of each detail are clearly visible, while the smaller ones are blurred. This is why, in order to obtain data sufficient to reproduce architectural details, laser scanning should be performed at the highest resolution. Moreover, modelling by means of the surface matching method should be avoided in favour of the TIN method, which, in addition to providing a realistic-looking visualization, has one more important advantage: it is 4 times faster than the surface matching method.
High Accuracy 3D Processing of Satellite Imagery
NASA Technical Reports Server (NTRS)
Gruen, A.; Zhang, L.; Kocaman, S.
2007-01-01
Automatic DSM/DTM generation reproduces not only general but also detailed features of the terrain relief, with height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed the stereo-measurement of points on roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, more details are visible with Quickbird.
Approaches to Validation of Models for Low Gravity Fluid Behavior
NASA Technical Reports Server (NTRS)
Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad
2005-01-01
This paper details the authors' experiences with the validation of computer models to predict low-gravity fluid behavior. It reviews the literature on low-gravity fluid behavior as a starting point for developing a baseline set of test cases, and examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues are that: most of the data is described by empirical correlation rather than fundamental relation; detailed measurements of the flow field have not been made; free-surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, while the zero-gravity time available has been only seconds.
Making ecological models adequate
Getz, Wayne M.; Marshall, Charles R.; Carlson, Colin J.; Giuggioli, Luca; Ryan, Sadie J.; Romañach, Stephanie; Boettiger, Carl; Chamberlain, Samuel D.; Larsen, Laurel; D'Odorico, Paolo; O'Sullivan, David
2018-01-01
Critical evaluation of the adequacy of ecological models is urgently needed to enhance their utility in developing theory and enabling environmental managers and policymakers to make informed decisions. Poorly supported management can have detrimental, costly or irreversible impacts on the environment and society. Here, we examine common issues in ecological modelling and suggest criteria for improving modelling frameworks. An appropriate level of process description is crucial to constructing the best possible model, given the available data and understanding of ecological structures. Model details unsupported by data typically lead to over-parameterisation and poor model performance. Conversely, a lack of mechanistic details may limit a model's ability to predict ecological systems' responses to management. Ecological studies that employ models should follow a set of model adequacy assessment protocols that include asking a series of critical questions regarding state and control variable selection, the determinacy of data, and the sensitivity and validity of analyses. We also need to improve model elaboration, refinement and coarse-graining procedures to better understand the relevancy and adequacy of our models and the role they play in advancing theory, improving hindcasting and forecasting, and enabling problem solving and management.
Characterizing the multiexciton fission intermediate in pentacene through 2D spectral modeling
NASA Astrophysics Data System (ADS)
Tempelaar, Roel; Reichman, David
Singlet fission, the molecular process in which a singlet excitation splits into two triplet excitons, holds promise to enhance the photoconversion efficiency of solar cells. Despite advances in both experiments and theory, a detailed understanding of this process remains lacking. In particular, the nature of the correlated triplet pair state (TT), which acts as a fission intermediate, remains obscure. Recently, 2D spectroscopy was shown to allow for the direct detection of the extremely weak optical transition between TT and the ground state through coherently prepared vibrational wavepackets in the associated electronic potentials. Here, we present a microscopic model of singlet fission which includes an exact quantum treatment of such vibrational modes. Our model reproduces the reported 2D spectra of pentacene, while providing a detailed insight into the anatomy of TT. As such, our results form a stepping stone towards understanding singlet fission at a molecular level, while bridging the gap between the wealth of recent theoretical works on one side and experimental measurements on the other. R.T. acknowledges The Netherlands Organisation for Scientific Research NWO for support through a Rubicon Grant.
International Space Station Alpha (ISSA) Integrated Traffic Model
NASA Technical Reports Server (NTRS)
Gates, R. E.
1995-01-01
The paper discusses the development process of the International Space Station Alpha (ISSA) Integrated Traffic Model, a subsystem analysis tool utilized in the ISSA design analysis cycles. Fast-track prototyping of the detailed relationships between daily crew and station consumables, propellant needs, maintenance requirements and crew rotation via spreadsheets provides adequate benchmarks to assess cargo vehicle design and performance characteristics.
Simulation of Electric Propulsion Thrusters
2011-01-01
and operational lifetime. The second area of modelling activity concerns the plumes produced by electric thrusters. Detailed information on the plumes ...to reproduce the in-orbit space environment using ground-based laboratory facilities. Device modelling also plays an important role in plume ...of the numerical analysis of other aspects of thruster design, such as thermal and structural processes, is omitted here. There are two fundamental
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial task in medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance, large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with these problems for medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits the image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue to the modeling, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth of field obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
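The paper's nonuniform model and metadata-derived depth are not spelled out in the abstract, so the sketch below inverts the standard uniform scattering model I = J·t + A·(1 − t) with t = exp(−β·depth), the baseline such methods build on; the depth array, β, and airlight value are placeholder assumptions supplied by the caller.

```python
import numpy as np

def dehaze(image, depth, airlight=0.95, beta=0.8, t_min=0.1):
    """Recover the haze-free image J from I = J*t + A*(1 - t).

    image: HxWx3 float array in [0, 1]; depth: HxW scene depth (e.g. km);
    beta: scattering coefficient; t_min guards against division blow-up."""
    t = np.exp(-beta * depth)                 # transmission from depth
    t = np.clip(t, t_min, 1.0)[..., None]     # broadcast over color channels
    return np.clip((image - airlight * (1.0 - t)) / t, 0.0, 1.0)
```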
The Dynamics of Planet Formation
NASA Astrophysics Data System (ADS)
Chambers, J. E.
2005-05-01
The transformation of a protoplanetary disk of gas and dust into a system of planets is a mysterious business that is frustratingly difficult to observe in detail. For this reason, studies of planet formation are largely based on theoretical models with only a few anchor points where precious observations are available. In this talk I will give an overview of some of these theoretical models, indicating areas of uncertainty and places where the models are on firmer ground. For convenience, theorists usually divide planet formation into a series of stages: formation of solid bodies from dust, aggregation of solid bodies into protoplanets, late-stage growth and the formation of giant planets, and planetary migration. Here I will concentrate mostly on the second and third of these stages (understanding of the first and last stages is rather limited, and the author's understanding is especially so). The intermediate stages involve interplay between several physical processes: physical collisions, gravitational scattering, dynamical friction, gas drag, and the capture and collapse of atmospheres. I will describe these processes in some detail, and show using analytical models how these effects can lead to a variety of planetary outcomes. This work was supported by NASA's Planetary Geology and Geophysics and TPF Foundation Science Mission programmes.
NASA Astrophysics Data System (ADS)
Evans, M. E.; Merow, C.; Record, S.; Menlove, J.; Gray, A.; Cundiff, J.; McMahon, S.; Enquist, B. J.
2013-12-01
Current attempts to forecast how species' distributions will change in response to climate change suffer under a fundamental trade-off: between modeling many species superficially vs. few species in detail (between correlative vs. mechanistic models). The goals of this talk are two-fold: first, we present a Bayesian multilevel modeling framework, dynamic range modeling (DRM), for building process-based forecasts of many species' distributions at a time, designed to address the trade-off between detail and number of distribution forecasts. In contrast to 'species distribution modeling' or 'niche modeling', which uses only species' occurrence data and environmental data, DRMs draw upon demographic data, abundance data, trait data, occurrence data, and GIS layers of climate in a single framework to account for two processes known to influence range dynamics - demography and dispersal. The vision is to use extensive databases on plant demography, distributions, and traits - in the Botanical Information and Ecology Network, the Forest Inventory and Analysis database (FIA), and the International Tree Ring Data Bank - to develop DRMs for North American trees. Second, we present preliminary results from building the core submodel of a DRM - an integral projection model (IPM) - for a sample of dominant tree species in western North America. IPMs are used to infer demographic niches - i.e., the set of environmental conditions under which population growth rate is positive - and project population dynamics through time. Based on >550,000 data points derived from FIA for nine tree species in western North America, we show IPM-based models of their current and future distributions, and discuss how IPMs can be used to forecast future forest productivity, mortality patterns, and inform efforts at assisted migration.
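The IPM at the core of a DRM can be sketched in a few lines: a kernel combining survival, growth, and fecundity is discretized on a size mesh (midpoint rule) and iterated as a matrix, and the dominant eigenvalue λ tells whether the demographic-niche condition (positive population growth) holds. All vital-rate coefficients below are illustrative placeholders, not values fitted to the FIA data.

```python
import numpy as np

def make_kernel(mesh, h, climate=0.0):
    """Discretized IPM kernel K(y, x) = survival(x) * growth(y | x) + fecundity(x, y)."""
    x = mesh[None, :]                          # current size
    y = mesh[:, None]                          # size next year
    surv = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x + 0.3 * climate)))   # logistic survival
    mu_g = 1.2 + 0.95 * x                      # mean growth
    grow = np.exp(-0.5 * ((y - mu_g) / 0.4) ** 2) / (0.4 * np.sqrt(2 * np.pi))
    fec = np.exp(-2.0 + 0.5 * x) * np.exp(-0.5 * ((y - 0.8) / 0.3) ** 2)
    return h * (surv * grow + fec)             # midpoint-rule weight h

mesh = np.linspace(0.1, 10.0, 200)
h = mesh[1] - mesh[0]
K = make_kernel(mesh, h, climate=0.5)
lam = np.max(np.real(np.linalg.eigvals(K)))    # asymptotic population growth rate
print("inside demographic niche (lambda > 1):", lam > 1)
```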
Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck
2010-01-01
Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented as text annotations that are difficult to handle computationally. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to compute on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of simulation systems. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138
NASA Astrophysics Data System (ADS)
Lay, Vera; Bodenburg, Sascha; Buske, Stefan; Townend, John; Kellett, Richard; Savage, Martha; Schmitt, Douglas; Constantinou, Alexis; Eccles, Jennifer; Lawton, Donald; Hall, Kevin; Bertram, Malcolm; Gorman, Andrew
2017-04-01
The plate-bounding Alpine Fault in New Zealand is an 850 km long transpressive continental fault zone that is late in its earthquake cycle. The Deep Fault Drilling Project (DFDP) aims to deliver insight into the geological structure of this fault zone and its evolution by drilling and sampling the Alpine Fault at depth. Previously analysed 2D reflection seismic data image the main Alpine Fault reflector at a depth of 1.5-2.2 km with a dip of approximately 48° to the southeast below the DFDP-2 borehole. Additionally, there are indications of a more complex 3D fault structure with several fault branches which have not yet been clearly imaged in detail. For that reason we acquired a 3D-VSP seismic data set at the DFDP-2 drill site in January 2016. A zero-offset VSP and a walk-away VSP survey were conducted using a Vibroseis source. Within the borehole, a permanently installed "Distributed Acoustic Fibre Optic Cable" (down to 893 m) and a 3C Sercel slimwave tool (down to 400 m) were used to record the seismic wavefield. In addition, an array of 160 three-component receivers with a spacing of 10 m perpendicular and 20 m parallel to the main strike of the Alpine Fault was set up and moved successively along the valley to record reflections from the main Alpine Fault zone over a broad depth range and to derive a detailed 3D tomographic velocity model in the hanging wall. We will show a detailed 3D velocity model derived from first-arrival traveltime tomography. Subsets of the whole data set were analysed separately to estimate the corresponding ray coverage and the reliability of the observed features in the obtained velocity model. By testing various inversion parameters and starting models, we derived a detailed near-surface velocity model that reveals the significance of the old glacial valley structures. Hence, this new 3D model improves the velocity model derived previously from a 2D seismic profile line in that area. Furthermore, processing of the dense 3C data shows clear reflections on both inline and crossline profiles. Correlating single reflection events enables us to identify the origin of reflections recorded in the data and reveal their 3D character. This array data gives strong evidence for reflections coming from the side, possibly from the steeply dipping valley flanks. Finally, the data will be processed using advanced seismic imaging methods to derive a detailed structural image of the valley and the fault zone at depth. Thus, the results will provide a detailed basis for a seismic site characterization at the DFDP-2 drill site, that will be of crucial importance for further structural and geological investigations of the architecture of the Alpine Fault in this area.
S stars in the Gaia era: stellar parameters and nucleosynthesis
NASA Astrophysics Data System (ADS)
van Eck, Sophie; Karinkuzhi, Drisya; Shetye, Shreeya; Jorissen, Alain; Goriely, Stéphane; Siess, Lionel; Merle, Thibault; Plez, Bertrand
2018-04-01
S stars are s-process and C-enriched (0.5 < C/O < 1) red giants.
Comprehensive model of a hermetic reciprocating compressor
NASA Astrophysics Data System (ADS)
Yang, B.; Ziviani, D.; Groll, E. A.
2017-08-01
A comprehensive simulation model is presented to predict the performance of a hermetic reciprocating compressor and to reveal the underlying mechanisms during compressor operation. The model is composed of sub-models simulating the in-cylinder compression process, the piston-ring and journal-bearing frictional power losses, the single-phase induction motor, and the overall energy balance among the different compressor components. Valve, piston-ring leakage and in-cylinder heat transfer sub-models are also incorporated into the in-cylinder compression process model. A numerical algorithm for solving the model is introduced. The predicted compressor mass flow rate and input power consumption are compared to published compressor map values. Future work will focus on detailed experimental validation of the model and on parametric studies investigating the effects of structural parameters, including the stroke-to-bore ratio, on compressor performance.
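A sketch of the in-cylinder compression sub-model described above: an open-system first law for an ideal gas, with isentropic nozzle flow through idealized suction and discharge valves. Geometry, valve areas, pressures, and the adiabatic-wall simplification are illustrative assumptions; the heat-transfer, leakage, friction, and motor sub-models are omitted.

```python
import numpy as np

R, gamma = 287.0, 1.4          # gas constant (J/(kg K)) and heat-capacity ratio

def cyl_volume(theta, v_c=3e-6, v_d=3e-5):
    """Cylinder volume over crank angle; TDC at theta = 0."""
    return v_c + 0.5 * v_d * (1.0 - np.cos(theta))

def valve_mdot(p_up, t_up, p_down, area, cd=0.7):
    """Isentropic orifice mass flow from upstream to downstream (kg/s)."""
    pr = max(p_down / p_up, (2 / (gamma + 1)) ** (gamma / (gamma - 1)))  # choke limit
    term = pr ** (2 / gamma) - pr ** ((gamma + 1) / gamma)
    return cd * area * p_up * np.sqrt(2 * gamma / (R * t_up * (gamma - 1)) * term)

def step(theta, dtheta, m, t_cyl, omega, p_suc=1e5, t_suc=300.0, p_dis=8e5):
    """One explicit crank-angle step of the open-system energy balance."""
    v0, v1 = cyl_volume(theta), cyl_volume(theta + dtheta)
    p = m * R * t_cyl / v0
    dt = dtheta / omega
    dmi = valve_mdot(p_suc, t_suc, p, 2e-5) * dt if p < p_suc else 0.0   # suction in
    dmo = valve_mdot(p, t_cyl, p_dis, 2e-5) * dt if p > p_dis else 0.0   # discharge out
    cv = R / (gamma - 1)
    # dU = -p dV + h_in dm_in - h_out dm_out, with h = gamma * cv * T (adiabatic walls)
    du = -p * (v1 - v0) + dmi * gamma * cv * t_suc - dmo * gamma * cv * t_cyl
    m_new = m + dmi - dmo
    return m_new, (m * cv * t_cyl + du) / (m_new * cv)

# half a revolution of re-expansion and suction at 3000 rpm (illustrative)
omega = 2 * np.pi * 3000 / 60
m, t_cyl = 1e5 * 3e-6 / (R * 300.0), 300.0    # clearance gas at TDC
theta, dtheta = 0.0, 1e-3
while theta < np.pi:
    m, t_cyl = step(theta, dtheta, m, t_cyl, omega)
    theta += dtheta
```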
NASA Technical Reports Server (NTRS)
Perry, Bruce; Anderson, Molly
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.
NASA Astrophysics Data System (ADS)
Yan, Fuhan; Li, Zhaofeng; Jiang, Yichuan
2016-05-01
The issues of modeling and analyzing diffusion in social networks have been extensively studied in the last few decades, and many recent studies focus on uncertain diffusion processes. The uncertainty of a diffusion process means that the diffusion probability is unpredictable because of complex factors. For instance, the variety of individuals' opinions is an important factor that can cause uncertainty in the diffusion probability: the difference between opinions influences the diffusion probability, and the evolution of opinions then makes it uncertain. Controlling the diffusion process is important in the context of viral marketing and political propaganda, but previous methods are hardly feasible for controlling the uncertain diffusion of individual opinions. In this paper, we present a strategy to control this diffusion process based on an approximate estimation of the uncertain factors. We formulate a model in which the diffusion probability is influenced by the distance between opinions, and briefly discuss the properties of the diffusion model. We then present an optimization problem, set in the context of voting, to show how to control this uncertain diffusion process. In detail, it is assumed that each individual can choose one of two candidates, or abstention, based on his or her opinion. We then present a strategy for choosing suitable initiators and their opinions so that the advantage of one candidate is maximized at the end of the diffusion. The results show that traditional influence maximization algorithms are not applicable to this problem, while our algorithm achieves the expected performance.
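A minimal simulation sketch of the mechanism described above: an independent-cascade-style spread in which an edge succeeds with probability decaying in the opinion distance between its endpoints. The network, the exponential decay form, and the seed set are illustrative assumptions, not the paper's exact model or control algorithm.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(500, 6, 0.1, seed=0)      # small-world toy network
opinion = rng.uniform(-1.0, 1.0, G.number_of_nodes()) # one opinion per node

def spread(G, opinion, seeds, lam=2.0, steps=20):
    """Cascade where edge (u, v) succeeds with prob exp(-lam * |op_u - op_v|)."""
    active, frontier = set(seeds), set(seeds)
    for _ in range(steps):
        new = set()
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active:
                    p = np.exp(-lam * abs(opinion[u] - opinion[v]))
                    if rng.random() < p:
                        new.add(v)
        if not new:
            break
        active |= new
        frontier = new
    return active

reached = spread(G, opinion, seeds=[0, 1, 2])
print(f"{len(reached)} of {G.number_of_nodes()} nodes reached")
```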
Better models are more effectively connected models
NASA Astrophysics Data System (ADS)
Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John
2016-04-01
The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity can be represented in models: either by allowing it to emerge from model behaviour or by parameterizing it inside model structures; and on the appropriate scale at which processes should be represented explicitly or implicitly. It will also explore how modellers themselves approach connectivity through the results of a community survey. Finally, it will present the outline of an international modelling exercise aimed at assessing how different modelling concepts can capture connectivity in real catchments.
CHIMERA II - A real-time multiprocessing environment for sensor-based robot control
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1989-01-01
A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, the user interface, the extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.
Identifying Model-Based Reconfiguration Goals through Functional Deficiencies
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Trave-Massuyes, Louise
2004-01-01
Model-based diagnosis is now advanced to the point that autonomous systems face some uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after faults occur, a process known as model-based reconfiguration. Given a prediction of the nominal behavior of the system and the result of the diagnosis operation after faults occur, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.
On a Quantum Model of Brain Activities
NASA Astrophysics Data System (ADS)
Fichtner, K.-H.; Fichtner, L.; Freudenberg, W.; Ohya, M.
2010-01-01
One of the main activities of the brain is the recognition of signals. A first attempt to explain the process of recognition in terms of quantum statistics was given in [6]. Subsequently, details of the mathematical model were presented in a (still incomplete) series of papers (cf. [7, 2, 5, 10]). In the present note we want to give a general view of the principal ideas of this approach. We will introduce the basic spaces and justify the choice of spaces and operations. Further, we bring the model face to face with basic postulates any statistical model of the recognition process should fulfill. These postulates are in accordance with the opinion widely accepted in psychology and neurology.
Villa-Uriol, M. C.; Berti, G.; Hose, D. R.; Marzo, A.; Chiarini, A.; Penrose, J.; Pozo, J.; Schmidt, J. G.; Singh, P.; Lycett, R.; Larrabide, I.; Frangi, A. F.
2011-01-01
Cerebral aneurysms are a multi-factorial disease with severe consequences. A core part of the European project @neurIST was the physical characterization of aneurysms to find candidate risk factors associated with aneurysm rupture. The project investigated measures based on morphological, haemodynamic and aneurysm wall structure analyses for more than 300 cases of ruptured and unruptured aneurysms, extracting descriptors suitable for statistical studies. This paper deals with the unique challenges associated with this task, and the implemented solutions. The consistency of results required by the subsequent statistical analyses, given the heterogeneous image data sources and multiple human operators, was met by a highly automated toolchain combined with training. A testimonial of the successful automation is the positive evaluation of the toolchain by over 260 clinicians during various hands-on workshops. The specification of the analyses required thorough investigations of modelling and processing choices, discussed in a detailed analysis protocol. Finally, an abstract data model governing the management of the simulation-related data provides a framework for data provenance and supports future use of data and toolchain. This is achieved by enabling the easy modification of the modelling approaches and solution details through abstract problem descriptions, removing the need of repetition of manual processing work. PMID:22670202
Dispersal and fallout simulations for urban consequences management (u)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grinstein, Fernando F; Wachtor, Adam J; Nelson, Matt
2010-01-01
Hazardous chemical, biological, or radioactive releases from leaks, spills, fires, or blasts may occur (intentionally or accidentally) in urban environments during warfare or as part of terrorist attacks on military bases or other facilities. The associated contaminant dispersion is complex and semi-chaotic. Urban predictive simulation capabilities can have direct impact in many threat-reduction areas of interest, including urban sensor placement and threat analysis, contaminant transport (CT) effects on the surrounding civilian population (dosages, evacuation, shelter-in-place), and the education and training of rescue teams and services. Detailed simulations for the various processes involved are in principle possible, but generally not fast. Predicting urban airflow accompanied by CT presents extremely challenging requirements. Crucial technical issues include simulating turbulent fluid and particulate transport, initial and boundary condition modeling incorporating a consistent stratified urban boundary layer with realistic wind fluctuations, and post-processing of the simulation results for practical consequences management. Relevant fluid dynamic processes to be simulated include detailed energetic and contaminant sources, complex building vortex shedding and flows in recirculation zones, and modeling of particle distributions, including particulate fallout as well as deposition, re-suspension and evaporation. Other issues include modeling building damage effects due to eventual blasts and addressing appropriate regional and atmospheric data reduction.
A process proof test for model concepts: Modelling the meso-scale
NASA Astrophysics Data System (ADS)
Hellebrand, Hugo; Müller, Christoph; Matgen, Patrick; Fenicia, Fabrizio; Savenije, Huub
In hydrological modelling the use of detailed soil data is sometimes troublesome, since these data are often hard to obtain and, if available at all, difficult to interpret and process in a way that makes them meaningful for the model at hand. Intuitively, the understanding and mapping of dominant runoff processes in the soil show high potential for improving hydrological models. In this study a labour-intensive methodology for assessing dominant runoff processes is simplified in such a way that detailed soil maps are no longer needed. Nonetheless, there is an ongoing debate on how to integrate this type of information into hydrological models. In this study, dominant runoff processes (DRPs) are mapped for meso-scale basins in a GIS using the permeability of the substratum, land use information and the slope. The processes were validated during a field campaign, and for each DRP assumptions were made concerning its water storage capacity by combining soil data obtained during the field campaign with soil data from the literature. Second, several parsimoniously parameterized conceptual hydrological models are used that incorporate certain aspects of the DRPs. The results of these models are compared with a benchmark model, in which the soil is represented by only one lumped parameter, to test the contribution of the DRPs to hydrological models. The proposed methodology is tested for 15 meso-scale river basins located in Luxembourg. The main goal of this study is to investigate whether integrating dominant runoff processes, which have high information content concerning soil characteristics, into hydrological models allows the improvement of simulation results with a view to regionalization and predictions in ungauged basins. The regionalization procedure gave no clear results; the calibration procedure and the well-mixed discharge signal of the calibration basins are considered major causes for this, and they made the deconvolution of discharge signals of meso-scale basins problematic. The results also suggest that DRPs could very well display some sort of uniqueness of place, which was not foreseen in the methods from which they were derived. Furthermore, a strong seasonal influence on model performance was observed, implying a seasonal dependence of the DRPs. When comparing the performance of the DRP models and the benchmark model, no real distinction was found. To improve the performance of the DRP models used in this study, and of conceptual models in general, there is a need for improved identification of the mechanisms that cause the different dominant runoff processes at the meso-scale. To achieve this, more orthogonal data could be of use for a better conceptualization of the DRPs, and model concepts should be adapted accordingly.
NASA Astrophysics Data System (ADS)
Niederheiser, R.; Rutzinger, M.; Bremer, M.; Wichmann, V.
2018-04-01
The investigation of changes in spatial patterns of vegetation and identification of potential micro-refugia requires detailed topographic and terrain information. However, mapping alpine topography at very detailed scales is challenging due to limited accessibility of sites. Close-range sensing by photogrammetric dense matching approaches based on terrestrial images captured with hand-held cameras offers a light-weight and low-cost solution to retrieve high-resolution measurements even in steep terrain and at locations, which are difficult to access. We propose a novel approach for rapid capturing of terrestrial images and a highly automated processing chain for retrieving detailed dense point clouds for topographic modelling. For this study, we modelled 249 plot locations. For the analysis of vegetation distribution and location properties, topographic parameters, such as slope, aspect, and potential solar irradiation were derived by applying a multi-scale approach utilizing voxel grids and spherical neighbourhoods. The result is a micro-topography archive of 249 alpine locations that includes topographic parameters at multiple scales ready for biogeomorphological analysis. Compared with regional elevation models at larger scales and traditional 2D gridding approaches to create elevation models, we employ analyses in a fully 3D environment that yield much more detailed insights into interrelations between topographic parameters, such as potential solar irradiation, surface area, aspect and roughness.
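One plausible reading of the spherical-neighbourhood step is sketched below: fit a plane by PCA to all points within a given radius of a location and read slope and aspect off the plane normal, repeating over several radii for the multi-scale parameters. The radius, the PCA fit, and the axis conventions (x east, y north, z up) are assumptions, not necessarily the authors' implementation.

```python
import numpy as np

def slope_aspect(points, center, radius=0.5):
    """Slope and aspect (degrees) of the plane fitted to all points of an
    Nx3 cloud lying within `radius` of `center` (a spherical neighbourhood)."""
    d = np.linalg.norm(points - center, axis=1)
    nbrs = points[d <= radius]
    if len(nbrs) < 3:
        return None                              # neighbourhood too sparse
    # plane normal = eigenvector of the smallest covariance eigenvalue
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)                   # eigenvalues ascending
    n = v[:, 0]
    n = n if n[2] >= 0 else -n                   # orient the normal upward
    slope = np.degrees(np.arccos(n[2]))          # tilt from horizontal
    aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # clockwise from north
    return slope, aspect
```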
Madison Katherine Akers; Michael Kane; Dehai Zhao; Richard F. Daniels; Robert O. Teskey
2015-01-01
Examining the role of foliage in stand development across a range of stand structures provides a more detailed understanding of the processes driving productivity and allows further development of process-based models for prediction. Productivity changes observed at the stand scale will be the integration of changes at the individual tree scale, but few studies have...
D. M. Jimenez; B. W. Butler; J. Reardon
2003-01-01
Current methods for predicting fire-induced plant mortality in shrubs and trees are largely empirical. These methods are not readily linked to duff burning, soil heating, and surface fire behavior models. In response to the need for a physics-based model of this process, a detailed model for predicting the temperature distribution through a tree stem as a function of...
Information Processing and Collective Behavior in a Model Neuronal System
2014-03-28
for an AFOSR project headed by Steve Reppert on Monarch Butterfly navigation. We visited the Reppert lab at the UMASS Medical School and have had many...developed a detailed mathematical model of the mammalian circadian clock. Our model can accurately predict diverse experimental data including the...i.e. P1 affects P2 which affects P3 …). The output of the system is calculated (measurements), and the interactions are forgotten. Based on
Development and Exploration of the Core-Corona Model of Imploding Plasma Loads.
1980-07-01
cal relaxation processes can maintain an isothermal system. The final constraint in the original core-corona model equations was that of quasi-static...on the energy balance. The detailed physics of these upgrades and their improvement of the quantitative modeling of the system are discussed in the...participate in lengthening the radiation pulse. If such motion is favored in these systems, the impact on the radiation pulse length could be
Absorbable energy monitoring scheme: new design protocol to test vehicle structural crashworthiness.
Ofochebe, Sunday M; Enibe, Samuel O; Ozoegwu, Chigbogu G
2016-05-01
In vehicle crashworthiness design optimization, detailed system evaluations capable of producing reliable results are typically achieved through high-order numerical computational (HNC) models such as the dynamic finite element model or mesh-free models. However, the application of these models, especially during optimization studies, is challenged by their inherently high demand on computational resources, the conditional stability of the solution process, and the lack of knowledge of a viable parameter range for detailed optimization studies. The absorbable energy monitoring scheme (AEMS) presented in this paper suggests a new design protocol that attempts to overcome such problems in evaluating vehicle structures for crashworthiness. The implementation of the AEMS involves studying the crash performance of vehicle components at various absorbable energy ratios based on a 2DOF lumped-mass-spring (LMS) vehicle impact model. This allows for prompt prediction of useful parameter values in a given design problem. The application of the classical one-dimensional LMS model in vehicle crash analysis is further improved in the present work by developing a critical load matching criterion, which allows for quantitative interpretation of the results of the abstract model in a typical vehicle crash design. The adequacy of the proposed AEMS for preliminary vehicle crashworthiness design is demonstrated in this paper; however, its extension to full-scale design-optimization problems involving full vehicle models with greater structural detail requires further theoretical development.
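A minimal sketch of a 2DOF lumped-mass-spring frontal impact model in the spirit of the AEMS description is given below; the masses, stiffnesses and the energy bookkeeping are illustrative assumptions, not the authors' parameter values or their critical load matching criterion:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2DOF model: m1 = front structure, m2 = occupant compartment,
# joined by springs k1 (barrier-to-m1) and k2 (m1-to-m2).
m1, m2 = 300.0, 1200.0          # kg
k1, k2 = 8.0e5, 4.0e5           # N/m
v0 = 15.6                       # m/s (~56 km/h impact)

def rhs(t, y):
    x1, v1, x2, v2 = y
    f1 = -k1 * max(x1, 0.0)     # barrier spring acts only in compression
    f2 = k2 * (x2 - x1)
    return [v1, (f1 + f2) / m1, v2, -f2 / m2]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, v0, 0.0, v0], max_step=1e-4)
x1, x2 = sol.y[0], sol.y[2]
# absorbable energy split between the two springs (the ratio AEMS monitors)
e1 = 0.5 * k1 * np.maximum(x1, 0.0) ** 2
e2 = 0.5 * k2 * (x2 - x1) ** 2
print(f"peak energy split e1:e2 = {e1.max():.0f}:{e2.max():.0f} J")
```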
The methodology of database design in organization management systems
NASA Astrophysics Data System (ADS)
Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.
2017-01-01
The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the main principles of developing the conceptual information model and relational databases are provided, and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are processed and gives the rationale for the use of classifiers.
Introduction: Special issue on advances in topobathymetric mapping, models, and applications
Gesch, Dean B.; Brock, John C.; Parrish, Christopher E.; Rogers, Jeffrey N.; Wright, C. Wayne
2016-01-01
Detailed knowledge of near-shore topography and bathymetry is required for many geospatial data applications in the coastal environment. New data sources and processing methods are facilitating development of seamless, regional-scale topobathymetric digital elevation models. These elevation models integrate disparate multi-sensor, multi-temporal topographic and bathymetric datasets to provide a coherent base layer for coastal science applications such as wetlands mapping and monitoring, sea-level rise assessment, benthic habitat mapping, erosion monitoring, and storm impact assessment. The focus of this special issue is on recent advances in the source data, data processing and integration methods, and applications of topobathymetric datasets.
Tunney, Richard J.; Mullett, Timothy L.; Moross, Claudia J.; Gardner, Anna
2012-01-01
The butcher-on-the-bus is a rhetorical device or hypothetical phenomenon that is often used to illustrate how recognition decisions can be based on different memory processes (Mandler, 1980). The phenomenon describes a scenario in which a person is recognized but the recognition is accompanied by a sense of familiarity or knowing characterized by an absence of contextual details such as the person’s identity. We report two recognition memory experiments that use signal detection analyses to determine whether this phenomenon is evidence for a recollection plus familiarity model of recognition or is better explained by a univariate signal detection model. We conclude that there is an interaction between confidence estimates and remember-know judgments which is not explained fully by either single-process signal detection or traditional dual-process models. PMID:22745631
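For context, the univariate signal detection account the experiments test reduces recognition performance to two numbers: under the equal-variance model, sensitivity d' and criterion c follow directly from hit and false-alarm rates. A minimal sketch with invented rates (not data from the experiments):

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2."""
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -(zh + zf) / 2.0

# illustrative rates for "remember" vs "know" responses
for label, (h, f) in {"remember": (0.80, 0.10), "know": (0.60, 0.25)}.items():
    d, c = sdt_measures(h, f)
    print(f"{label}: d' = {d:.2f}, criterion c = {c:.2f}")
```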
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Chenn Zhou
2008-10-15
Pulverized coal injection (PCI) into the blast furnace (BF) has been recognized as an effective way to decrease the coke and total energy consumption along with minimization of environmental impacts. However, increasing the amount of coal injected into the BF is currently limited by the lack of knowledge of some issues related to the process. It is therefore important to understand the complex physical and chemical phenomena in the PCI process. Due to the difficulty in attaining true BF measurements, computational fluid dynamics (CFD) modeling has been identified as a useful technology to provide such knowledge. CFD simulation is powerful for providing detailed information on flow properties and performing parametric studies for process design and optimization. In this project, comprehensive 3-D CFD models have been developed to simulate the PCI process under actual furnace conditions. These models provide raceway size and flow property distributions. The results have provided guidance for optimizing the PCI process.
Aymerich, I; Rieger, L; Sobhani, R; Rosso, D; Corominas, Ll
2015-09-15
The objective of this paper is to demonstrate the importance of incorporating more realistic energy cost models (based on current energy tariff structures) into existing water resource recovery facilities (WRRFs) process models when evaluating technologies and cost-saving control strategies. In this paper, we first introduce a systematic framework to model energy usage at WRRFs and a generalized structure to describe energy tariffs including the most common billing terms. Secondly, this paper introduces a detailed energy cost model based on a Spanish energy tariff structure coupled with a WRRF process model to evaluate several control strategies and provide insights into the selection of the contracted power structure. The results for a 1-year evaluation on a 115,000 population-equivalent WRRF showed monthly cost differences ranging from 7 to 30% when comparing the detailed energy cost model to an average energy price. The evaluation of different aeration control strategies also showed that using average energy prices and neglecting energy tariff structures may lead to biased conclusions when selecting operating strategies or comparing technologies or equipment. The proposed framework demonstrated that for cost minimization, control strategies should be paired with a specific optimal contracted power. Hence, the design of operational and control strategies must take into account the local energy tariff.
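To illustrate the paper's central point, here is a minimal sketch comparing a flat average energy price with a time-of-use tariff; the load profile, tariff periods and prices are invented for illustration and are not the Spanish tariff used in the study:

```python
import numpy as np

hours = np.arange(24)
load_kw = 80 + 40 * np.sin((hours - 6) / 24 * 2 * np.pi) ** 2    # toy aeration load

# illustrative three-period time-of-use tariff (EUR/kWh)
price = np.where((hours >= 10) & (hours < 22), 0.16,             # peak
         np.where((hours >= 6) & (hours < 10), 0.11, 0.07))      # shoulder / valley

tou_cost = float(np.sum(load_kw * price))
flat_cost = float(np.sum(load_kw) * price.mean())
print(f"time-of-use: {tou_cost:.2f} EUR/day, flat average: {flat_cost:.2f} EUR/day")
```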
NASA Astrophysics Data System (ADS)
Jha, Pradeep Kumar
Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predicted results of the present FPI scheme and the Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed. Other non-reacting flows have also been considered to further validate other aspects of the numerical scheme. The present scheme predicts results which are in good agreement with published experimental results and significantly reduces the computational cost involved in modelling turbulent diffusion flames, both in terms of storage and processing time.
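The presumed beta-PDF step of the PCM approach admits a compact illustration: given a tabulated laminar property φ(Z), its turbulent mean is the beta-weighted integral over mixture fraction Z, with the beta parameters fixed by the mean and variance of Z. A sketch (the φ table is a synthetic stand-in for an FPI table entry):

```python
import numpy as np
from scipy.stats import beta

def presumed_pcm_mean(phi_table, z_grid, z_mean, z_var):
    """Mean of a tabulated property phi(Z) under a presumed beta PDF of
    mixture fraction Z with given mean and variance."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0    # shape factor from moments
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    pdf = beta.pdf(z_grid, a, b)
    return np.trapz(phi_table * pdf, z_grid) / np.trapz(pdf, z_grid)

z = np.linspace(1e-6, 1 - 1e-6, 401)
phi = np.exp(-((z - 0.35) / 0.1) ** 2)               # stand-in for a table entry
print(presumed_pcm_mean(phi, z, z_mean=0.35, z_var=0.01))
```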
Theoretical study of optical pump process in solid gain medium based on four-energy-level model
NASA Astrophysics Data System (ADS)
Ma, Yongjun; Fan, Zhongwei; Zhang, Bin; Yu, Jin; Zhang, Hongbo
2018-04-01
A semiclassical algorithm is applied to a four-energy-level model, aiming to identify the factors that affect the dynamic behavior during the pump process. The impacts of the pump intensity Ωp, the non-radiative transition rate γ43 and the decay rate of the electric dipole δ14 are discussed in detail. The calculation results show that large γ43, small δ14, and strong pumping Ωp are beneficial to the establishment of population inversion. Under strong pumping conditions, the entire pump process can be divided into four different phases, tentatively named the far-from-equilibrium process, the Rabi oscillation process, the quasi-dynamic-equilibrium process and the 'equilibrium' process. The Rabi oscillation can slow the pumping process and cause some instability. Moreover, the duration of the entire process is negatively related to Ωp and γ43 and positively related to δ14.
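For orientation, a deliberately simplified rate-equation sketch of a four-level pump cycle is given below; it omits the coherent Rabi dynamics that the semiclassical treatment above captures, and all rate values are illustrative stand-ins rather than the paper's parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate-equation version of a four-level pump cycle:
# 1 --pump--> 4 --g43--> 3 (upper laser level) --g32--> 2 --g21--> 1
Rp, g43, g32, g21 = 5.0, 50.0, 1.0, 20.0   # arbitrary units

def rates(t, n):
    n1, n2, n3, n4 = n
    return [-Rp * n1 + g21 * n2,           # ground level
            g32 * n3 - g21 * n2,           # lower laser level
            g43 * n4 - g32 * n3,           # upper laser level
            Rp * n1 - g43 * n4]            # pumped level

sol = solve_ivp(rates, (0, 10), [1.0, 0.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 10, 6)
n = sol.sol(t)
print("population inversion n3 - n2:", np.round(n[2] - n[1], 3))
```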
Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy
Soong, Ming Foong; Ramli, Rahizar; Saifizul, Ahmad
2017-01-01
The quarter vehicle model is the simplest representation of a vehicle among lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its common adoption, it is also commonly accepted, without quantification, that this model is not as accurate as many higher-degree-of-freedom models due to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as their inclusion changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details. PMID:28617819
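The base model in question is easy to state in code: a minimal sketch of the 2DOF quarter-car equations (sprung and unsprung masses over a linear suspension spring-damper and tire spring), with generic textbook parameter values rather than the authors':

```python
import numpy as np
from scipy.integrate import solve_ivp

ms, mu = 300.0, 40.0        # sprung / unsprung mass (kg)
ks, cs = 20_000.0, 1_500.0  # suspension stiffness (N/m) and damping (N s/m)
kt = 180_000.0              # tire stiffness (N/m)

def road(t):                # illustrative 10 mm step bump at t = 1 s
    return 0.01 if t >= 1.0 else 0.0

def rhs(t, y):
    zs, vs, zu, vu = y
    f_susp = ks * (zu - zs) + cs * (vu - vs)   # suspension force on sprung mass
    f_tire = kt * (road(t) - zu)               # tire force from road input
    return [vs, f_susp / ms, vu, (f_tire - f_susp) / mu]

sol = solve_ivp(rhs, (0.0, 3.0), [0, 0, 0, 0], max_step=1e-3)
print(f"peak sprung-mass displacement: {sol.y[0].max() * 1000:.1f} mm")
```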
NASA Astrophysics Data System (ADS)
Santagati, C.; Inzerillo, L.; Di Paola, F.
2013-07-01
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different multi-view stereo (MVS) algorithms and different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on his or her computer, whereas desktop systems require long processing times and heavier workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage Documentation. Our approach to this challenging problem is to compare 3D models by Autodesk 123D Catch with 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.
Plasma characterization studies for materials processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfender, E.; Heberlein, J.
New applications for plasma processing of materials require a more detailed understanding of the fundamental processes occurring in the processing reactors. We have developed reactors offering specific advantages for materials processing, and we are using modeling and diagnostic techniques for the characterization of these reactors. The emphasis is in part set by the interest shown by industry pursuing specific plasma processing applications. In this paper we report on the modeling of radio frequency plasma reactors for use in materials synthesis, and on the characterization of the high rate diamond deposition process using liquid precursors. In the radio frequency plasma torch model, the influence of specific design changes, such as the location of the excitation coil, on the enthalpy flow distribution is investigated for oxygen and air as plasma gases. The diamond deposition study with liquid precursors has identified the efficient mass transport in the form of liquid droplets into the boundary layer as responsible for the high growth rates, and the chemical properties of the liquid as responsible for the film morphology.
Mathur, Rohit; Xing, Jia; Gilliam, Robert; Sarwar, Golam; Hogrefe, Christian; Pleim, Jonathan; Pouliot, George; Roselle, Shawn; Spero, Tanya L.; Wong, David C.; Young, Jeffrey
2018-01-01
The Community Multiscale Air Quality (CMAQ) modeling system is extended to simulate ozone, particulate matter, and related precursor distributions throughout the Northern Hemisphere. Modelled processes were examined and enhanced to suitably represent the extended space and time scales for such applications. Hemispheric scale simulations with CMAQ and the Weather Research and Forecasting (WRF) model are performed for multiple years. Model capabilities for a range of applications including episodic long-range pollutant transport, long-term trends in air pollution across the Northern Hemisphere, and air pollution-climate interactions are evaluated through detailed comparison with available surface, aloft, and remotely sensed observations. The expansion of CMAQ to simulate the hemispheric scales provides a framework to examine interactions between atmospheric processes occurring at various spatial and temporal scales with physical, chemical, and dynamical consistency. PMID:29681922
Overhead longwave infrared hyperspectral material identification using radiometric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelinski, M. E.
Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and the Bayesian Information Criterion for model selection. The resulting product includes an optimal atmospheric profile and a full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
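The unmixing step pairs a constrained linear regression with information-criterion model selection; a minimal sketch using nonnegative least squares over candidate endmember subsets and BIC is given below (the spectra are synthetic, and the exact constraint and criterion details of the paper may differ):

```python
import itertools
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_bands, library = 64, rng.random((64, 6))            # 6 candidate endmembers
truth = library[:, [1, 4]] @ np.array([0.7, 0.3])
pixel = truth + 0.01 * rng.standard_normal(n_bands)   # noisy mixed pixel

best = None
for k in (1, 2, 3):
    for subset in itertools.combinations(range(library.shape[1]), k):
        coeffs, rnorm = nnls(library[:, list(subset)], pixel)  # nonneg abundances
        bic = n_bands * np.log(rnorm**2 / n_bands) + k * np.log(n_bands)
        if best is None or bic < best[0]:
            best = (bic, subset, coeffs)
print("selected endmembers:", best[1], "abundances:", np.round(best[2], 2))
```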
Aircraft Structural Mass Property Prediction Using Conceptual-Level Structural Analysis
NASA Technical Reports Server (NTRS)
Sexstone, Matthew G.
1998-01-01
This paper describes a methodology that extends the use of the Equivalent LAminated Plate Solution (ELAPS) structural analysis code from conceptual-level aircraft structural analysis to conceptual-level aircraft mass property analysis. Mass property analysis in aircraft structures has historically depended upon parametric weight equations at the conceptual design level and Finite Element Analysis (FEA) at the detailed design level. ELAPS allows for the modeling of detailed geometry, metallic and composite materials, and non-structural mass coupled with analytical structural sizing to produce high-fidelity mass property analyses representing fully configured vehicles early in the design process. This capability is especially valuable for unusual configuration and advanced concept development where existing parametric weight equations are inapplicable and FEA is too time consuming for conceptual design. This paper contrasts the use of ELAPS relative to empirical weight equations and FEA. ELAPS modeling techniques are described and the ELAPS-based mass property analysis process is detailed. Examples of mass property stochastic calculations produced during a recent systems study are provided. This study involved the analysis of three remotely piloted aircraft required to carry scientific payloads to very high altitudes at subsonic speeds. Due to the extreme nature of this high-altitude flight regime, few existing vehicle designs are available for use in performance and weight prediction. ELAPS was employed within a concurrent engineering analysis process that simultaneously produces aerodynamic, structural, and static aeroelastic results for input to aircraft performance analyses. The ELAPS models produced for each concept were also used to provide stochastic analyses of wing structural mass properties. The results of this effort indicate that ELAPS is an efficient means to conduct multidisciplinary trade studies at the conceptual design level.
VARTM Model Development and Verification
NASA Technical Reports Server (NTRS)
Cano, Roberto J. (Technical Monitor); Dowling, Norman E.
2004-01-01
In this investigation, a comprehensive Vacuum Assisted Resin Transfer Molding (VARTM) process simulation model was developed and verified. The model incorporates resin flow through the preform, compaction and relaxation of the preform, and viscosity and cure kinetics of the resin. The computer model can be used to analyze the resin flow details, track the thickness change of the preform, predict the total infiltration time and final fiber volume fraction of the parts, and determine whether the resin could completely infiltrate and uniformly wet out the preform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Guochang; Chen, George
Charge transport properties in nanodielectrics present different tendencies for different loading concentrations. The exact mechanisms responsible for charge transport in nanodielectrics have not been established in detail, especially for high loading concentrations. A charge transport model in nanodielectrics has been proposed based on a quantum tunneling mechanism and dual-level traps. In the model, the thermally assisted hopping (TAH) process for the shallow traps and the tunnelling process for the deep traps are considered. For different loading concentrations, the dominant charge transport mechanisms are different. The quantum tunneling mechanism plays a major role in determining the charge conduction in nanodielectrics with high loading concentrations, while for low loading concentrations the thermal hopping mechanism dominates the charge conduction process. The model can explain the observed conductivity property in nanodielectrics with different loading concentrations.
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using the proposed ROM methodology for process simulation and optimization.
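A minimal sketch of the PCA-based ROM idea follows: compress snapshot outputs with PCA, then fit a cheap map from process inputs to the retained PCA coefficients. The snapshot data here are random stand-ins for CFD runs, not results from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
inputs = rng.uniform(0, 1, (60, 3))                  # 60 sampled cases, 3 inputs
modes = rng.standard_normal((500, 3))                # hidden spatial structures
snapshots = inputs @ modes.T + 0.01 * rng.standard_normal((60, 500))

# PCA via SVD of mean-centered snapshots
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99)) + 1
coeffs = (snapshots - mean) @ Vt[:r].T               # retained PCA coefficients

# cheap surrogate: linear least-squares map from inputs to coefficients
A = np.hstack([inputs, np.ones((60, 1))])
W, *_ = np.linalg.lstsq(A, coeffs, rcond=None)

def rom_predict(x):
    return np.append(x, 1.0) @ W @ Vt[:r] + mean     # evaluates in microseconds

err = np.linalg.norm(rom_predict(inputs[0]) - snapshots[0]) / np.linalg.norm(snapshots[0])
print(f"kept {r} modes; relative reconstruction error {err:.2%}")
```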
A Comprehensive Two-moment Warm Microphysical Bulk Scheme
NASA Astrophysics Data System (ADS)
Caro, D.; Wobrock, W.; Flossmann, A.; Chaumerliac, N.
The microphysical properties of gas, aerosol particles, and hydrometeors have implications at the local scale (precipitation, pollution peaks, ...), at the regional scale (flooding, acid rain, ...), and also at the global scale (radiative forcing, ...). A multi-scale study is therefore necessary to properly understand and forecast meteorological phenomena involving clouds. However, such a study cannot be carried out with a detailed microphysical model, on account of computational limitations. Microphysical bulk schemes therefore have to estimate the 'large scale' properties of clouds that result from smaller-scale processes and characteristics, and the development of such bulk schemes is important for furthering knowledge of the Earth's climate and the forecasting of intense meteorological phenomena. Here, a quasi-spectral warm microphysical scheme has been developed to predict the concentrations and mixing ratios of aerosols, cloud droplets and raindrops. It treats, explicitly and analytically, the nucleation of droplets (Abdul-Razzak et al., 2000), condensation/evaporation (Chaumerliac et al., 1987), the breakup and collision-coalescence processes with the Long (1974) kernels and the Berry and Reinhardt (1974) autoconversion parameterization, as well as aerosol and gas scavenging. First, the parameterization was evaluated in the simplest dynamic framework of an air parcel model, against the results of the detailed scavenging model DESCAM (Flossmann et al., 1985). It was then tested in the dynamic framework of a kinematic model (Szumowski et al., 1998) dedicated to the HaRP campaign (Hawaiian Rainband Project, 1990), against the observations and the results of the two-dimensional detailed microphysical scheme DESCAM 2-D (Flossmann et al., 1988) implemented in the CLARK model (Clark and Farley, 1984).
A new numerical model of the middle atmosphere. 2: Ozone and related species
NASA Technical Reports Server (NTRS)
Garcia, Rolando R.; Solomon, Susan
1994-01-01
A new two-dimensional model with detailed photochemistry is presented. The model includes descriptions of planetary wave and gravity wave propagation and dissipation to characterize the wave forcing and associated mixing in the stratosphere and mesosphere. Such a representation allows for explicit calculation of the regions of strong mixing in the middle atmosphere required for accurate simulation of trace gas transport. The new model also includes a detailed description of photochemical processes in the stratosphere and mesosphere. The downward transport of H2, H2O, and NO(y) from the mesosphere to the stratosphere is examined, and it is shown that mesospheric processes can influence the distributions of these chemical species in polar regions. For HNO3 we also find that small concentrations of liquid aerosols above 30 km could play a major role in determining the abundance in polar winter at high latitudes. The model is also used to examine the chemical budget of ozone in the midlatitude stratosphere and to set constraints on the effectiveness of bromine relative to chlorine for ozone loss and the role of the HO2 + BrO reaction. Recent laboratory data used in this modeling study suggest that this process greatly enhances the effectiveness of bromine for ozone destruction, making bromine-catalyzed chemistry second only to HO(x)-catalyzed ozone destruction in the contemporary stratosphere at midlatitudes below about 18 km. The calculated vertical distribution of ozone in the lower stratosphere agrees well with observations, as does the total column ozone during most seasons and latitudes, with the important exception of southern hemisphere winter and spring.
NASA Astrophysics Data System (ADS)
Schmith, Johanne; Höskuldsson, Ármann; Holm, Paul Martin; Larsen, Guðrún
2018-04-01
Katla volcano in Iceland produces hazardous large explosive basaltic eruptions on a regular basis, but very little quantitative data for future hazard assessments exist. Here, details of the fragmentation mechanism and eruption dynamics are derived from a study of deposit stratigraphy with detailed granulometry and grain morphology analysis, granulometric modeling, componentry and the new quantitative regularity index model of fragmentation mechanism. We show that magma/water interaction is important in the ash generation process, but to a variable extent. By investigating the large explosive basaltic eruptions from 1755 and 1625, we document that eruptions of similar size and magma geochemistry can have very different fragmentation dynamics. Our models show that fragmentation in the 1755 eruption was a combination of magmatic degassing and magma/water interaction, with magma/water interaction strongest at the beginning of the eruption. The fragmentation of the 1625 eruption was initially also a combination of both magmatic and phreatomagmatic processes, but magma/water interaction diminished progressively during the later stages of the eruption. However, intense magma/water interaction was reintroduced during the final stages of the eruption, dominating the fine fragmentation at the end. This detailed study of fragmentation changes documents that subglacial eruptions have highly variable interaction with melt water, showing that the amount of and access to melt water change significantly during eruptions. While it is often difficult to reconstruct the progression of eruptions that have no quantitative observational record, this study shows that integrating field observations and granulometry with the new regularity index can form a coherent model of eruption evolution.
An ontology model for nursing narratives with natural language generation technology.
Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung
2013-01-01
The purpose of this study was to develop an ontology model to generate nursing narratives as natural as human language from the entity-attribute-value triplets of a detailed clinical model using natural language generation technology. The model was based on the types of information and the documentation time of the information along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.
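A minimal template-based sketch of the generation step — turning entity-attribute-value triplets into narrative sentences along the nursing process — is given below; the triplets and templates are invented for illustration and do not reproduce the study's ontology:

```python
# Hypothetical triplets from a detailed clinical model: (entity, attribute, value)
triplets = [
    ("patient", "pain score", "7/10"),
    ("nurse", "inference", "acute pain related to surgical incision"),
    ("nurse", "action", "administered prescribed analgesic"),
]

# One template per information type along the nursing process
templates = {
    "patient": "Assessment: the patient's {attr} was {val}.",
    "nurse": {"inference": "Diagnosis: {val}.",
              "action": "Intervention: the nurse {val}."},
}

for entity, attr, val in triplets:
    t = templates[entity]
    t = t if isinstance(t, str) else t[attr]   # pick sub-template by attribute
    print(t.format(attr=attr, val=val))
```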
NASA Technical Reports Server (NTRS)
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions are detailed, and conclusions drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
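The contrast the study draws, constants per arc versus correlated process-noise states, comes down to the process-noise covariance in a sequential filter. A one-parameter toy sketch (a slowly drifting parameter observed in noise; setting q = 0 recovers the constant-parameter filter):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
true = np.cumsum(0.02 * rng.standard_normal(n)) + 1.0   # slowly varying parameter
obs = true + 0.1 * rng.standard_normal(n)               # noisy observations

def kalman_1d(obs, q, r=0.01):
    """Scalar Kalman filter; q is the process-noise variance."""
    x, p, out = 0.0, 1.0, []
    for z in obs:
        p += q                       # process noise lets the estimate drift
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)
        p *= (1 - k)
        out.append(x)
    return np.array(out)

for q in (0.0, 4e-4):                # q = 0: parameter treated as a constant
    rms = np.sqrt(np.mean((kalman_1d(obs, q) - true) ** 2))
    print(f"q = {q}: RMS error {rms:.3f}")
```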
Grain boundary oxidation and its effects on high temperature fatigue life
NASA Technical Reports Server (NTRS)
Liu, H. W.; Oshida, Yoshiki
1986-01-01
Fatigue lives at elevated temperatures are often shortened by creep and/or oxidation. Creep causes grain boundary void nucleation and grain boundary cavitation. Grain boundary voids and cavities will accelerate fatigue crack nucleation and propagation, and thereby shorten fatigue life. The functional relationships between the damage rate of fatigue crack nucleation and propagation and the kinetic process of oxygen diffusion depend on the detailed physical processes. The kinetics of grain boundary oxidation penetration was investigated. The statistical distribution of grain boundary penetration depth was analyzed, and its effects on high temperature fatigue life are discussed. A model of intermittent micro-ruptures of grain boundary oxide was proposed for high temperature fatigue crack growth. The details of these studies are reported.
NASA Technical Reports Server (NTRS)
Huning, J. R.; Logan, T. L.; Smith, J. H.
1982-01-01
The potential of using digital satellite data to establish a cloud cover data base for the United States, one that would provide detailed information on the temporal and spatial variability of cloud development, is studied. Key elements include: (1) interfacing GOES data from the University of Wisconsin Meteorological Data Facility with the Jet Propulsion Laboratory's VICAR image processing system and IBIS geographic information system; (2) creation of a registered multitemporal GOES data base; (3) development of a simple normalization model to compensate for sun angle; (4) creation of a variable size georeference grid that provides detailed cloud information in selected areas and summarized information in other areas; and (5) development of a cloud/shadow model which details the percentage of each grid cell that is cloud and shadow covered, and the percentage of cloud or shadow opacity. In addition, comparison of model calculations of insolation with measured values at selected test sites was accomplished, as well as development of preliminary requirements for a large scale data base of cloud cover statistics.
Process Model of A Fusion Fuel Recovery System for a Direct Drive IFE Power Reactor
NASA Astrophysics Data System (ADS)
Natta, Saswathi; Aristova, Maria; Gentile, Charles
2008-11-01
A task has been initiated to develop a detailed representative model for the fuel recovery system (FRS) in the prospective direct drive inertial fusion energy (IFE) reactor. As part of the conceptual design phase of the project, a chemical process model is developed in order to observe the interaction of system components. This process model is developed using FEMLAB Multiphysics software with the corresponding chemical engineering module (CEM). Initially, the reactants, system structure, and processes are defined using known chemical species of the target chamber exhaust. Each step within the fuel recovery system is modeled compartmentally and then merged to form the closed loop fuel recovery system. The output, which includes physical properties and chemical content of the products, is analyzed after each step of the system to determine the most efficient and productive system parameters. This will serve to attenuate possible bottlenecks in the system. This modeling evaluation is instrumental in optimizing and closing the fusion fuel cycle in a direct drive IFE power reactor. The results of the modeling are presented in this paper.
The (Mathematical) Modeling Process in Biosciences
Torres, Nestor V.; Santos, Guido
2015-01-01
In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model and optimization and system management derived from the analysis of the mathematical model. All along this work the main features and shortcomings of the process are analyzed and a set of rules that could help in the task of modeling any biological system are presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology. PMID:26734063
Shukla, Nagesh; Keast, John E; Ceglarek, Darek
2014-10-01
The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most of the workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and, (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted.
Multiple electron processes of He and Ne by proton impact
NASA Astrophysics Data System (ADS)
Terekhin, Pavel Nikolaevich; Montenegro, Pablo; Quinto, Michele; Monti, Juan; Fojon, Omar; Rivarola, Roberto
2016-05-01
A detailed investigation of multiple electron processes (single and multiple ionization, single capture, transfer-ionization) of He and Ne is presented for proton impact at intermediate and high collision energies. Exclusive absolute cross sections for these processes have been obtained by calculation of transition probabilities in the independent electron and independent event models as a function of impact parameter in the framework of the continuum distorted wave-eikonal initial state theory. A binomial analysis is employed to calculate exclusive probabilities. The comparison with available theoretical and experimental results shows that exclusive probabilities are needed for a reliable description of the experimental data. The developed approach can be used for obtaining the input database for modeling multiple electron processes of charged particles passing through matter.
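The binomial analysis for exclusive probabilities is elementary in the independent-electron picture: if p(b) is the single-electron transition probability at impact parameter b, the probability that exactly q of N equivalent electrons undergo the process is the binomial term. A sketch with an illustrative p and N:

```python
from math import comb

def exclusive_prob(p, N, q):
    """Exactly q of N independent, equivalent electrons undergo the process."""
    return comb(N, q) * p**q * (1 - p)**(N - q)

p = 0.12                       # illustrative single-electron probability at some b
for q in range(4):             # e.g. no, single, double, triple ionization (N = 8)
    print(q, f"{exclusive_prob(p, 8, q):.4f}")
```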
VizieR Online Data Catalog: Investigation of mass loss mechanism of LPVs (Winters+, 2000)
NASA Astrophysics Data System (ADS)
Winters, J. M.; Le Bertre, T.; Jeong, K. S.; Helling, C.; Sedlmayr, E.
2000-09-01
Parameters and resultant quantities of a grid of hydrodynamical models for the circumstellar dust shells around pulsating red giants which treat the time-dependent hydrodynamics and include a detailed treatment of the dust formation process. (1 data file).
Configuration Management, Capacity Planning Decision Support, Modeling and Simulation
1988-12-01
flow includes both top-down and bottom-up requirements. The flow also includes hardware, software and transfer acquisition, installation, operation, management and upgrade as required. Satisfaction of a user's needs and requirements is a difficult and detailed process. The key assumptions at this
Post Occupancy Evaluation of Educational Buildings and Equipment.
ERIC Educational Resources Information Center
Watson, Chris
1997-01-01
Details the post occupancy evaluation (POE) process for public buildings. POEs are used to improve design and optimize educational building and equipment use. The evaluation participants, the method used, the results and recommendations, model schools, and classroom alterations using POE are described. (9 references.) (RE)
NASA Technical Reports Server (NTRS)
Braden, J. A.; Hancock, J. P.; Hackett, J. E.; Lyman, V.
1979-01-01
The experimental data encompassing surface pressure measurements, and wake surveys at static and wind-on conditions are analyzed. Cruise performance trends reflecting nacelle geometric variations, and nozzle operating conditions are presented. Details of the modeling process are included.
NASA Astrophysics Data System (ADS)
Shi, Zhemin; Taguchi, Dai; Manaka, Takaaki; Iwamoto, Mitsumasa
2016-04-01
The details of the turnover process of spontaneous polarization and the associated carrier motions in an indium-tin oxide/poly(vinylidene-trifluoroethylene)/pentacene/Au capacitor were analyzed by coupling displacement current measurement (DCM) and electric-field-induced optical second-harmonic generation (EFISHG) measurement. A model was set up from the DCM results to depict the relationship between the electric field in the semiconductor layer and the applied external voltage, proving that the photo-illumination effect on the spontaneous polarization process lay in the variation of the semiconductor conductivity. The EFISHG measurement directly and selectively probed the electric field distribution in the semiconductor layer, refining the model and revealing detailed carrier behaviors involving the photo-illumination effect, dipole reversal, and interfacial charging in the device. A further decrease of the DCM current in the low-voltage region under illumination was found as a result of the illumination effect, and this result was explained by the change of the total capacitance of the double-layer capacitors.
To Improve Homicide Firearm Information Reporting - Rhode Island State Crime Laboratory.
Jiang, Yongwen; Lyons, Dennis; Northup, Jane B; Hilliard, Dennis; Foss, Karen; Young, Shannon; Viner-Brown, Samara
2018-05-01
Information on homicide firearms can be used to help state and local communities understand the problems of violence and decrease injuries and deaths. However, it is difficult to collect these data. To our knowledge, in the public health arena, the National Violent Death Reporting System (NVDRS) is the only system that collects detailed firearm information. The Rhode Island State Crime Laboratory (RISCL) can provide detailed information about the firearms and cartridge cases/bullets involved in firearm deaths. With help from the RISCL, the firearm information related to homicides in Rhode Island has improved dramatically. In 2015, information on caliber/gauge increased by 80%, the firearm type by 50%, the make by 50%, and the model by 20%. By documenting the process of using information from the RISCL, it is hoped that this process can be used as a model by other states when reporting on violent deaths. [Full article available at http://rimed.org/rimedicaljournal-2018-05.asp].
Under-Track CFD-Based Shape Optimization for a Low-Boom Demonstrator Concept
NASA Technical Reports Server (NTRS)
Wintzer, Mathias; Ordaz, Irian; Fenbert, James W.
2015-01-01
The detailed outer mold line shaping of a Mach 1.6, demonstrator-sized low-boom concept is presented. Cruise trim is incorporated a priori as part of the shaping objective, using an equivalent-area-based approach. Design work is performed using a gradient-driven optimization framework that incorporates a three-dimensional, nonlinear flow solver, a parametric geometry modeler, and sensitivities derived using the adjoint method. The shaping effort is focused on reducing the under-track sonic boom level using an inverse design approach, while simultaneously satisfying the trim requirement. Conceptual-level geometric constraints are incorporated in the optimization process, including the internal layout of fuel tanks, landing gear, engine, and crew station. Details of the model parameterization and design process are documented for both flow-through and powered states, and the performance of these optimized vehicles presented in terms of inviscid L/D, trim state, pressures in the near-field and at the ground, and predicted sonic boom loudness.
High-speed AFM for scanning the architecture of living cells
NASA Astrophysics Data System (ADS)
Li, Jing; Deng, Zhifeng; Chen, Daixie; Ao, Zhuo; Sun, Quanmei; Feng, Jiantao; Yin, Bohua; Han, Li; Han, Dong
2013-08-01
We address the modelling of tip-cell membrane interactions under high speed atomic force microscopy. Using a home-made device with a scanning area of 100 × 100 μm2, in situ imaging of living cells is successfully performed under loading rates from 1 to 50 Hz, intending to enable detailed descriptions of physiological processes in living samples. Electronic supplementary information (ESI) available: Movie of the real-time change of inner surface within fresh blood vessel. The movie was captured at a speed of 30 Hz in the range of 80 μm × 80 μm. See DOI: 10.1039/c3nr01464a
Statistical mechanics of neocortical interactions: Constraints on 40-Hz models of short-term memory
NASA Astrophysics Data System (ADS)
Ingber, Lester
1995-10-01
Calculations presented in L. Ingber and P.L. Nunez, Phys. Rev. E 51, 5074 (1995) detailed the evolution of short-term memory in the neocortex, supporting the empirical 7±2 rule of constraints on the capacity of neocortical processing. These results are given further support when other recent models of 40-Hz subcycles of low-frequency oscillations are considered.
International Space Station Alpha (ISSA) Integrated Traffic Model
NASA Technical Reports Server (NTRS)
Gates, Robert E.
1994-01-01
The paper discusses the development process of the International Space Station Alpha (ISSA) Integrated Traffic Model, which is a subsystem analysis tool utilized in the ISSA design analysis cycles. Fast-track prototyping of the detailed relationships between daily crew and station consumables, propellant needs, maintenance requirements, and crew rotation via spreadsheets provides adequate benchmarks to assess cargo vehicle design and performance characteristics.
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
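The role of the nudging terms can be illustrated on a toy scalar model: a relaxation term toward observations keeps long-window misfit gradients usable, after which the process parameter is fitted by minimizing the misfit. The sketch below uses a grid search in place of the adjoint, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 2000

def run(alpha, obs=None, tau=0.0):
    """Toy model dx/dt = -alpha*x + sin(t), optionally nudged toward obs."""
    x, out = 1.0, []
    for i in range(n):
        nudge = tau * (obs[i] - x) if obs is not None else 0.0
        x += dt * (-alpha * x + np.sin(i * dt) + nudge)
        out.append(x)
    return np.array(out)

obs = run(alpha=0.8) + 0.02 * rng.standard_normal(n)   # synthetic truth + noise

def cost(alpha):                                       # nudged-model misfit
    return np.mean((run(alpha, obs, tau=2.0) - obs) ** 2)

alphas = np.linspace(0.2, 1.4, 61)
best = alphas[int(np.argmin([cost(a) for a in alphas]))]
print(f"best-fit alpha = {best:.2f} (true value 0.8)")
```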
Insights into DNA-mediated interparticle interactions from a coarse-grained model
NASA Astrophysics Data System (ADS)
Ding, Yajun; Mittal, Jeetain
2014-11-01
DNA-functionalized particles have great potential for the design of complex self-assembled materials. The major hurdle in realizing crystal structures from DNA-functionalized particles is expected to be kinetic barriers that trap the system in metastable amorphous states. Therefore, it is vital to explore the molecular details of particle assembly processes in order to understand the underlying mechanisms. Molecular simulations based on coarse-grained models can provide a convenient route to explore these details. Most of the currently available coarse-grained models of DNA-functionalized particles ignore key chemical and structural details of DNA behavior. These models therefore are limited in scope for studying experimental phenomena. In this paper, we present a new coarse-grained model of DNA-functionalized particles which incorporates some of the desired features of DNA behavior. The coarse-grained DNA model used here provides explicit DNA representation (at the nucleotide level) and complementary interactions between Watson-Crick base pairs, which lead to the formation of single-stranded hairpin and double-stranded DNA. Aggregation between multiple complementary strands is also prevented in our model. We study interactions between two DNA-functionalized particles as a function of DNA grafting density, lengths of the hybridizing and non-hybridizing parts of DNA, and temperature. The calculated free energies as a function of pair distance between particles qualitatively resemble experimental measurements of DNA-mediated pair interactions.
NASA Astrophysics Data System (ADS)
Shu, Qian; Koo, Bonyoung; Yarwood, Greg; Henderson, Barron H.
2017-12-01
Differences between two air quality modeling systems reveal important uncertainties in model representations of secondary organic aerosol (SOA) fate. Two commonly applied models (CMAQ: Community Multiscale Air Quality; CAMx: Comprehensive Air Quality Model with extensions) predict very different OA concentrations over the eastern U.S., even when using the same source data for emissions and meteorology and the same SOA modeling approach. Both models include an option to output a detailed accounting of how each model process (e.g., chemistry, deposition, etc.) alters the mass of each modeled species, referred to as process analysis. We therefore perform a detailed diagnostic evaluation to quantify simulated tendencies (Gg/hr) of each modeled process affecting both the total model burden (Gg) of semi-volatile organic compounds (SVOC) in the gas (g) and aerosol (a) phases and the vertical structures to identify causes of concentration differences between the two models. Large differences in deposition (CMAQ: 69.2 Gg/d; CAMx: 46.5 Gg/d) contribute to significant OA bias in CMAQ relative to daily averaged ambient concentration measurements. CMAQ's larger deposition results from faster daily average deposition velocities (VD) for both SVOC (g) (VD,cmaq = 2.15 × VD,camx) and aerosols (VD,cmaq = 4.43 × VD,camx). Higher aerosol deposition velocity would be expected to cause similar biases for inert compounds like elemental carbon (EC), but this was not seen. Daytime low-biases in EC were also simulated in CMAQ as expected but were offset by nighttime high-biases. Nighttime high-biases were a result of overly shallow mixing in CMAQ leading to a higher fraction of EC total atmospheric mass in the first layer (CAMx: 5.1-6.4%; CMAQ: 5.6-6.9%). Because of the opposing daytime and nighttime biases, the apparent daily average bias for EC is reduced. For OA, there are two effects of reduced vertical mixing: SOA and SVOC are concentrated near the surface, but SOA yields are reduced near the surface by nighttime enhancement of NOx. These results help to characterize model processes in the context of SOA and provide guidance for model improvement.
Power Laws in Stochastic Processes for Social Phenomena: An Introductory Review
NASA Astrophysics Data System (ADS)
Kumamoto, Shin-Ichiro; Kamihigashi, Takashi
2018-03-01
Many phenomena with power laws have been observed in various fields of the natural and social sciences, and these power laws are often interpreted as the macro behaviors of systems that consist of micro units. In this paper, we review some basic mathematical mechanisms that are known to generate power laws. In particular, we focus on stochastic processes including the Yule process and the Simon process as well as some recent models. The main purpose of this paper is to explain the mathematical details of their mechanisms in a self-contained manner.
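As an illustration of the mechanisms reviewed, the following is a minimal simulation of Simon's urn process, one of the stochastic models discussed; the parameter values are arbitrary. The classical result is that the group-size distribution develops a power-law tail with exponent 1 + 1/(1 - alpha).

```python
import random
from collections import Counter

def simon_process(n_steps, alpha=0.1, seed=0):
    """Simon's urn scheme: with probability alpha start a new group;
    otherwise duplicate a uniformly chosen past occurrence, so a group
    grows in proportion to its current size (preferential attachment)."""
    rng = random.Random(seed)
    occurrences = [0]      # history of group labels; start with group 0
    next_label = 1
    for _ in range(n_steps):
        if rng.random() < alpha:
            occurrences.append(next_label)
            next_label += 1
        else:
            occurrences.append(rng.choice(occurrences))
    return Counter(occurrences)          # group label -> group size

sizes = simon_process(100_000, alpha=0.1)
for s in (1, 10, 100, 1000):
    print(f"groups of size >= {s:>4}: {sum(c >= s for c in sizes.values())}")
```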
Sustainable Design Approach: A case study of BIM use
NASA Astrophysics Data System (ADS)
Abdelhameed, Wael
2017-11-01
Achieving sustainable design in areas such as energy-efficient design depends largely on the accuracy of the analysis performed after the design is completed with all its components and material details. Different analysis approaches and methods predict relevant values and metrics such as U-value, energy use and energy savings. Although certain differences in the accuracy of these approaches and methods have been recorded, this research paper does not focus on that matter, since determining the reason for discrepancies between approaches and methods is difficult when all error sources act simultaneously. The paper instead introduces an approach through which BIM (building information modelling) can be utilised during the initial phases of the design process, by analysing the values and metrics of sustainable design before going into the design details of a building. Managing all of the project drawings in a single file, BIM is well known as a digital platform that offers a multidisciplinary detailed design, the AEC model (Barison and Santos, 2010; Welle et al., 2011). The paper first presents BIM use in the early phases of the design process in general, in order to achieve certain required areas of sustainable design. The paper then introduces BIM use in specific areas such as site selection, wind velocity and building orientation, with the aim of reaching the most sustainable solution possible. In the initial phases of design, material details and building components are not yet fully specified or selected; the designer usually focuses on zoning, topology, circulation, and other design requirements. The proposed approach employs the strategies and analysis of BIM use during those initial design phases in order to obtain the analysis and results for each solution or alternative design. Stakeholders and designers thereby gain a more effective decision-making process, with full clarity about each alternative's consequences, and the architect can proceed with the alternative design that has the best sustainability analysis. In later design stages, using sustainable types of materials such as insulation and cladding, and applying sustainable building components such as doors and windows, would add further improvements toward better values and metrics. The paper describes the methodology of this design approach through the BIM strategies adopted in design creation. Case studies of architectural designs are used to highlight the details and benefits of the proposed approach.
Polar Processes in a 50-year Simulation of Stratospheric Chemistry and Transport
NASA Technical Reports Server (NTRS)
Kawa, S.R.; Douglass, A. R.; Patrick, L. C.; Allen, D. R.; Randall, C. E.
2004-01-01
The unique chemical, dynamical, and microphysical processes that occur in the winter polar lower stratosphere are expected to interact strongly with changing climate and trace gas abundances. Significant changes in ozone have been observed, and prediction of future ozone and climate interactions depends on modeling these processes successfully. We have conducted an off-line model simulation of the stratosphere for trace gas conditions representative of 1975-2025 using meteorology from the NASA finite-volume general circulation model. The objective of this simulation is to examine the sensitivity of stratospheric ozone and chemical change to varying meteorology and trace gas inputs. This presentation will examine the dependence of ozone and related processes in polar regions on the climatological and trace gas changes in the model. The model's past performance is baselined against available observations, and a future ozone recovery scenario is forecast. Overall the model ozone simulation is quite realistic, but initial analysis of the detailed evolution of some observable processes suggests systematic shortcomings in our description of the polar chemical rates and/or mechanisms. Model sensitivities, strengths, and weaknesses will be discussed with implications for uncertainty and confidence in coupled climate-chemistry predictions.
Process improvement as an investment: Measuring its worth
NASA Technical Reports Server (NTRS)
Mcgarry, Frank; Jeletic, Kellyann
1993-01-01
This paper discusses return on investment (ROI) generated from software process improvement programs. It details the steps needed to compute ROI and compares these steps from the perspective of two process improvement approaches: the widely known Software Engineering Institute's capability maturity model and the approach employed by NASA's Software Engineering Laboratory (SEL). The paper then describes the specific investments made in the SEL over the past 18 years and discusses the improvements gained from this investment by the production organization in the SEL.
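For reference, the generic ROI arithmetic underlying such analyses reduces to a one-line computation; the figures below are hypothetical and are not SEL data.

```python
# Generic ROI arithmetic; all figures are hypothetical, not SEL data.
investment = 1_500_000.0   # cost of the improvement program (USD)
benefits   = 2_400_000.0   # quantified savings attributed to it (USD)

roi = (benefits - investment) / investment
print(f"ROI = {roi:.0%}")  # prints: ROI = 60%
```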
Effects of input uncertainty on cross-scale crop modeling
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is generally low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy because this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming errors in measurements of air temperature, radiation and precipitation of ±0.2°C, ±2% and ±3% respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7% in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields at the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields at the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study helps to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas of Burkina Faso/West Africa. We test the models' response to different levels of input data, from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from the input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity of maize cropping systems.
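As a sketch of the statistics a Taylor diagram summarizes, the following computes the correlation, standard deviations, and centered RMS difference for a simulated-versus-observed series; the yield numbers are invented for illustration.

```python
import math

def taylor_stats(obs, sim):
    """Statistics summarized by a Taylor diagram (Taylor, 2001):
    correlation R, standard deviations, and centered RMS difference E',
    which satisfy E'^2 = s_obs^2 + s_sim^2 - 2*s_obs*s_sim*R."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    e2 = so**2 + ss**2 - 2 * so * ss * r   # centered RMS difference squared
    return r, so, ss, math.sqrt(e2)

# Hypothetical district-level maize yields (t/ha): observed vs simulated.
obs = [0.8, 1.1, 0.9, 1.4, 1.0, 1.2]
sim = [0.9, 1.3, 0.8, 1.6, 1.1, 1.0]
r, s_obs, s_sim, crmsd = taylor_stats(obs, sim)
print(f"R={r:.2f}  sd(obs)={s_obs:.2f}  sd(sim)={s_sim:.2f}  cRMSD={crmsd:.2f}")
```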
Silicon web process development
NASA Technical Reports Server (NTRS)
Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.
1977-01-01
Thirty-five (35) furnace runs were carried out during this quarter, of which 25 produced a total of 120 web crystals. The two main thermal models for the dendritic growth process were completed and are being used to assist the design of the thermal geometry of the web growth apparatus. The first model, a finite element representation of the susceptor and crucible, was refined to give greater precision and resolution in the critical central region of the melt. The second thermal model, which describes the dissipation of the latent heat to generate thickness-velocity data, was completed. Dendritic web samples were fabricated into solar cells using a standard configuration and a standard process for a N(+) -P-P(+) configuration. The detailed engineering design was completed for a new dendritic web growth facility of greater width capability than previous facilities.
Modeling of ultrasonic processes utilizing a generic software framework
NASA Astrophysics Data System (ADS)
Bruns, P.; Twiefel, J.; Wallaschek, J.
2017-06-01
Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be considered, so that it is rather difficult to build a single detailed overall model. Developing partial models is a common approach to overcoming this difficulty. In this paper a generic but simple software framework is presented which allows arbitrary partial models to be coupled via slave modules with well-defined interfaces and a master module for coordination. Two examples are given to present the developed framework. The first is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position is determined for an oscillator used in ultrasonically assisted machining of stone. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece's material characteristics are presented. For both applications, input and output variables are defined to meet the requirements of the framework's interface.
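A minimal sketch of the master/slave coupling idea is given below; the module names and toy physics are invented for illustration and are not the framework's actual interfaces.

```python
# Hedged sketch of a master/slave partial-model coupling, assuming a
# shared variable bus and weak (sequential) coupling per time step.
from abc import ABC, abstractmethod

class SlaveModule(ABC):
    """A partial model: advances its own state one step from shared inputs."""
    @abstractmethod
    def step(self, t, dt, bus): ...

class Oscillator(SlaveModule):
    def step(self, t, dt, bus):
        # toy piezo oscillator: amplitude responds to the process load
        bus["amplitude"] = 1.0 / (1.0 + bus.get("load_force", 0.0))

class ProcessLoad(SlaveModule):
    def step(self, t, dt, bus):
        # toy load model: force grows with oscillation amplitude
        bus["load_force"] = 0.5 * bus.get("amplitude", 0.0)

class Master:
    """Coordinates slaves through a shared variable bus at fixed steps."""
    def __init__(self, slaves):
        self.slaves = slaves
    def run(self, t_end, dt):
        bus, t = {}, 0.0
        while t < t_end:
            for s in self.slaves:      # weak coupling: slaves run in sequence
                s.step(t, dt, bus)
            t += dt
        return bus

print(Master([Oscillator(), ProcessLoad()]).run(t_end=1.0, dt=0.1))
```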
Mesoscale energy deposition footprint model for kiloelectronvolt cluster bombardment of solids.
Russo, Michael F; Garrison, Barbara J
2006-10-15
Molecular dynamics simulations have been performed to model 5-keV C60 and Au3 projectile bombardment of an amorphous water substrate. The goal is to obtain detailed insights into the dynamics of motion in order to develop a straightforward and less computationally demanding model of the process of ejection. The molecular dynamics results provide the basis for the mesoscale energy deposition footprint model. This model provides a method for predicting relative yields based on information from less than 1 ps of simulation time.
'Geo'chemical research: a key building block for nuclear waste disposal safety cases.
Altmann, Scott
2008-12-12
Disposal of high-level radioactive waste in deep underground repositories has been chosen as the solution by several countries. Because of the special status this type of waste has in the public mind, national implementation programs typically mobilize massive R&D efforts, last decades, and are subject to extremely detailed and critical socio-political scrutiny. The culminating argument of each program is a 'Safety Case' for a specific disposal concept containing, among other elements, the results of performance assessment simulations whose object is to model the release of radionuclides to the biosphere. Public and political confidence in performance assessment results (which generally show that radionuclide release will always be at acceptable levels) is based on confidence in the quality of the scientific understanding of the processes included in the performance assessment model, in particular those governing radionuclide speciation and mass transport in the geological host formation. Geochemistry constitutes a core area of research in this regard. Clay-mineral-rich formations are the subject of advanced radwaste programs in several countries (France, Belgium, Switzerland...), principally because of their very low permeabilities and demonstrated capacity to retard most radionuclides by sorption. Among the key processes which must be represented in performance assessment models are (i) radioelement speciation (redox state and the reactions determining radionuclide solid-solution partitioning) and (ii) diffusion-driven transport. The safety case must therefore demonstrate a detailed understanding of the physical-chemical phenomena governing these two aspects, for each radionuclide, within the geological barrier system. A wide range of coordinated (and internationally collaborative) research has been, and is being, carried out to gain the detailed scientific understanding needed to construct those parts of the Safety Case supporting how radionuclide transfer is represented in the performance assessment model. The objective here is to illustrate how geochemical research contributes to this process and, above all, to identify a certain number of subjects which should be treated as priorities.
The neuronal dynamics underlying cognitive flexibility in set shifting tasks.
Stemme, Anja; Deco, Gustavo; Busch, Astrid
2007-12-01
The ability to switch attention from one aspect of an object to another, in other words to switch the "attentional set" as investigated in tasks like the Wisconsin Card Sorting Test, is commonly referred to as cognitive flexibility. In this work we present a biophysically detailed neurodynamical model which illustrates the neuronal basis of the processes underlying this cognitive flexibility. For this purpose we conducted behavioral experiments which allow the combined evaluation of different aspects of set-shifting tasks: uninstructed set shifts as investigated in Wisconsin-like tasks, effects of stimulus congruency as investigated in Stroop-like tasks, and the contribution of working memory as investigated in delayed-match-to-sample tasks. The work describes how general experimental findings can be used to design the architecture of a biophysically detailed though minimalistic model closely oriented to neurobiological findings and how, in turn, the simulations support experimental investigations. The resulting model is able to account for experimental and individual response times and error rates and enables the switch of attention as a system-inherent model feature: the switching process suggested by the model is based on the memorization of the visual stimuli and does not require any synaptic learning. The operation of the model thus demonstrates, with high probability, the neuronal dynamics underlying a key component of human behavior: the ability to adapt behavior according to context requirements, that is, cognitive flexibility.
Anomalous transport and stochastic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.
1996-03-01
The relation between kinetic transport theory and theory of stochastic processes is reviewed. The Langevin equation formalism provides important, but rather limited information about diffusive processes. A quite promising new approach to modeling complex situations, such as transport in incompletely destroyed magnetic surfaces, is provided by the theory of Continuous Time Random Walks (CTRW), which is presented in some detail. An academic test problem is discussed in great detail: transport of particles in a fluctuating magnetic field, in the limit of infinite perpendicular correlation length. The well-known subdiffusive behavior of the Mean Square Displacement (MSD), proportional to t^{1/2}, is recovered by a CTRW, but the complete density profile is not. However, the quasilinear approximation of the kinetic equation has the form of a non-Markovian diffusion equation and can thus be generated by a CTRW. 16 refs., 3 figs.
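The subdiffusive CTRW scaling quoted above can be reproduced with a short Monte Carlo sketch, assuming unit jumps and Pareto-distributed waiting times with tail exponent alpha = 1/2; all parameters are illustrative.

```python
import math, random

def ctrw_msd(n_walkers=2000, t_max=1000.0, alpha=0.5, seed=1):
    """Monte Carlo CTRW: unit jumps separated by heavy-tailed waiting
    times psi(t) ~ t^{-(1+alpha)}, which yields MSD ~ t^alpha (here
    alpha = 1/2, the subdiffusive scaling discussed above)."""
    rng = random.Random(seed)
    sample_times = (10.0, 100.0, 1000.0)
    msd = {ts: 0.0 for ts in sample_times}
    for _ in range(n_walkers):
        t, x = 0.0, 0
        path = [(0.0, 0)]
        while t < t_max:
            # Pareto waiting time: P(T > t) = t^{-alpha} for t >= 1
            t += (1.0 - rng.random()) ** (-1.0 / alpha)
            x += rng.choice((-1, 1))
            path.append((t, x))
        for ts in sample_times:
            pos = 0
            for tj, xj in path:          # position = last jump before ts
                if tj > ts:
                    break
                pos = xj
            msd[ts] += pos * pos / n_walkers
    return msd

for ts, m in ctrw_msd().items():
    print(f"t={ts:7.1f}  MSD={m:8.2f}  MSD/t^0.5={m / math.sqrt(ts):.2f}")
```

The ratio in the last column should be roughly constant across decades of t, which is the MSD ∝ t^{1/2} signature.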
DigitalHuman (DH): An Integrative Mathematical Model of Human Physiology
NASA Technical Reports Server (NTRS)
Hester, Robert L.; Summers, Richard L.; Iliescu, Radu; Esters, Joyce; Coleman, Thomas G.
2010-01-01
Mathematical models and simulation are important tools in discovering the key causal relationships governing physiological processes and in improving medical intervention when physiological complexity is a central issue. We have developed a model of integrative human physiology called DigitalHuman (DH), consisting of ~5000 variables describing cardiovascular, renal, respiratory, endocrine, neural and metabolic physiology. Users can view time-dependent solutions and interactively introduce perturbations by altering numerical parameters to investigate new hypotheses. All aspects of the model, including the variables, parameters, quantitative relationships and the mathematical equations describing the physiological processes, are written in open-source, text-readable XML files. Model structure is based upon empirical data of physiological responses documented within the peer-reviewed literature. The model can be used to understand proposed physiological mechanisms and physiological interactions that may not otherwise be intuitively evident. Some of the current uses of this model include analyses of the renal control of blood pressure, the central role of the liver in creating and maintaining insulin resistance, and the mechanisms causing orthostatic hypotension in astronauts. Additionally, the open-source aspect of the modeling environment allows any investigator to add detailed descriptions of human physiology to test new concepts. The model accurately predicts both qualitative and, more importantly, quantitative changes in clinically and experimentally observed responses. DigitalHuman provides scientists a modeling environment to understand the complex interactions of integrative physiology. This research was supported by NIH HL 51971, NSF EPSCoR, and NASA.
OPERATIONS RESEARCH IN THE DESIGN OF MANAGEMENT INFORMATION SYSTEMS
...management information systems is concerned with the identification and detailed specification of the information and data processing... ...of advanced data processing techniques in management information systems today, the close coordination of operations research and data systems activities has become a practical necessity for the modern business firm... ...information systems in which mathematical models are employed as the basis for analysis and systems design. Operations research provides a...
ERIC Educational Resources Information Center
Goodman, Kenneth S.; Goodman, Yetta M.
Research conducted to refine and perfect a theory and model of the reading process is presented in this report. Specifically, studies of the reading miscues of 96 students who were either speakers of English as a second language or of stable, rural dialects are detailed. Chapters deal with the following topics: methodology, the reading process,…
Combining Mechanistic Approaches for Studying Eco-Hydro-Geomorphic Coupling
NASA Astrophysics Data System (ADS)
Francipane, A.; Ivanov, V.; Akutina, Y.; Noto, V.; Istanbullouglu, E.
2008-12-01
Vegetation interacts with the hydrology and the geomorphic form and processes of a river basin in profound ways. Despite recent advances in hydrological modeling, the dynamic coupling between these processes is yet to be adequately captured at the basin scale to elucidate key features of process interaction and their role in the organization of vegetation and landscape morphology. In this study, we present a blueprint for integrating a geomorphic component into the physically-based, spatially distributed ecohydrological model tRIBS-VEGGIE, which reproduces essential water and energy processes over the complex topography of a river basin and links them to the basic plant life regulatory processes. We present a preliminary design of the integrated modeling framework in which hillslope and channel erosion processes at the catchment scale will be coupled with vegetation-hydrology dynamics. We evaluate the developed framework by applying the integrated model to Lucky Hills basin, a sub-catchment of the Walnut Gulch Experimental Watershed (Arizona). The evaluation is carried out by comparing sediment yields at the basin outlet, following a detailed verification of the simulated land-surface energy partition, biomass dynamics, and soil moisture states.
NASA Astrophysics Data System (ADS)
Tourigny, E.; Nobre, C.; Cardoso, M. F.
2012-12-01
Deforestation of tropical forests for logging and agriculture, associated with slash-and-burn practices, is a major source of CO2 emissions, both immediate, due to biomass burning, and future, due to the elimination of a potential CO2 sink. Feedbacks between climate change and LUCC (Land-Use and Land-Cover Change) can potentially increase the loss of tropical forests and the rate of CO2 emissions, through mechanisms such as land and soil degradation and increases in wildfire occurrence and severity. However, the processes of fires in tropical forests (including ignition, spread and consequences) and their climatic feedbacks are poorly understood and need further research. As the processes of LUCC and associated fires occur at local scales, linking them to large-scale atmospheric processes requires a means of up-scaling higher-resolution processes to lower resolutions. Our approach is to couple models which operate at various spatial and temporal scales: a Global Climate Model (GCM), a Dynamic Global Vegetation Model (DGVM), and a local-scale LUCC and fire spread model. The climate model resolves large-scale atmospheric processes and forcings, which are imposed on the surface DGVM and fed back to the climate. Higher-resolution processes such as deforestation, land use management and associated (as well as natural) fires are resolved at the local level. A dynamic tiling scheme allows local-scale heterogeneity to be represented while maintaining the computational efficiency of the land surface model, compared to traditional landscape models. Fire behavior is modeled at the regional scale (~500 m) to represent the detailed landscape, using a semi-empirical fire spread model. The relatively coarse scale (compared to other fire spread models) is necessary due to the paucity of detailed land-cover information and fire history, particularly in the tropics and developing countries. This work presents initial results of a spatially-explicit fire spread model coupled to the IBIS DGVM. Our area of study comprises selected regions in and near the Brazilian "arc of deforestation". For model training and evaluation, several areas have been mapped using high-resolution imagery from the Landsat TM/ETM+ sensors (Figure 1: area of study along the arc of deforestation and cerrado, showing the Landsat scenes used and the 2010 burned area from the MCD45 product). This high-resolution reference data is used for local-scale simulations and also to evaluate the accuracy of the global MCD45 burned area product, which will be used in future studies covering the entire "arc of deforestation".
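For illustration, a generic stochastic cellular-automaton fire spread sketch is given below; it is not the semi-empirical spread model used in this work, and the grid, spread probability, and ignition point are invented.

```python
import random

def spread_fire(grid, p_spread=0.35, steps=50, seed=0):
    """Toy stochastic cellular automaton for fire spread on a land-cover
    grid: 0 = non-flammable, 1 = flammable, 2 = burning, 3 = burned."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    for _ in range(steps):
        ignite = []
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] != 2:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and grid[rr][cc] == 1
                            and rng.random() < p_spread):
                        ignite.append((rr, cc))
                grid[r][c] = 3                 # burning cell burns out
        for rr, cc in ignite:
            grid[rr][cc] = 2
    return grid

land = [[1] * 20 for _ in range(20)]
land[10][10] = 2                               # ignition point
burned = sum(row.count(3) for row in spread_fire(land))
print(f"burned cells: {burned} / 400")
```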
NASA Astrophysics Data System (ADS)
Alshakova, E. L.
2017-01-01
A program in the AutoLISP language makes it possible to generate parametric drawings automatically while working in AutoCAD. Students study the development of AutoLISP programs using a methodical complex containing instructions in which real examples of the creation of images and drawings are worked through. The instructions contain the reference information necessary for performing the offered tasks. Training in AutoLISP programming is based on the method of step-by-step program development: the program draws the elements of a detail drawing by means of purpose-built functions whose argument values are supplied in the same sequence in which AutoCAD issues its prompts when the corresponding command is executed in the editor. The process of program design is thus reduced to the step-by-step formation of functions and of the sequence of their calls. The author considers the development of AutoLISP programs for creating parametric drawings of details of a defined design, in which the user enters the dimensions of the detail's elements. These programs generate variants of the tasks for the graphic works performed in the educational process of the "Engineering graphics" and "Engineering and computer graphics" disciplines. Individual tasks allow students to develop skills of independent work in reading and creating drawings, as well as in 3D modeling.
Wachs, Juan P; Frenkel, Boaz; Dori, Dov
2014-11-01
Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operating room (OR), is a major root cause of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. The facts used to construct the model were gathered from observations of various types of miscommunication in the operating room and their outcomes. The model takes advantage of the compact ontology of OPM, which comprises stateful objects (things that exist physically or informatically) and processes (things that transform objects by creating them, consuming them, or changing their state). The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using OPM's refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at various levels of detail; each level is depicted in a separate diagram, and all the diagrams are "aware" of each other as parts of the whole model. Providing an ontology of verbal and non-verbal modalities of communication in the OR, the resulting conceptual model is a solid basis for analyzing and understanding the sources of the large variety of errors occurring in the course of an operation, providing an opportunity to decrease the quantity and severity of mistakes related to the use and misuse of surgical instruments. Since the model is event-driven rather than person-driven, the focus is on the factors causing the errors rather than on the specific person. This approach advocates searching for technological solutions to alleviate tool-related errors rather than finger-pointing. Concretely, the model was validated through a structured questionnaire, and it was found that surgeons agreed that the conceptual model was flexible (3.8 of 5, std=0.69), accurate, and generalizable (3.7 of 5, std=0.37 and 3.7 of 5, std=0.85, respectively). The detailed conceptual model of the tool-handling subsystem of an operation performed in an OR focuses on the details of the communication and interactions taking place between the surgeon and the surgical technician during an operation, with the objective of pinpointing the exact circumstances in which errors can happen.
Exact and concise specification of the communication events in general, and the surgical instrument requests in particular, is a prerequisite for a methodical analysis of the various modes of errors and the circumstances under which they occur. This has significant potential value both in reducing tool-handling-related errors during an operation and in providing a solid formal basis for designing a cybernetic agent which can replace the surgical technician in routine tool handling activities during an operation, freeing the technician to focus on quality assurance, monitoring and control of the cybernetic agent's activities. This is a critical step in designing the next generation of cybernetic OR assistants. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Resnick, S. V.; Prosuntsov, P. V.; Sapronov, D. V.
2015-01-01
Promising directions in the development of new-generation gas turbine engines include the use of ceramic blades with high strength and thermal and chemical stability. One of the serious problems in developing such engines is insufficient knowledge of the contact phenomena occurring at the connection nodes between ceramic and metal details. This work presents the results of numerical modeling of thermal processes at the rough boundaries between ceramic and metal details. The results are used in designing experimental research under conditions that reproduce operation.
Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-01-01
This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.
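A minimal sketch of the module's dimensionality (division × building type × service × fuel) follows; the building-type labels, intensities, and fuel shares are placeholders, not the module's actual data or algorithm.

```python
# Dimensionality sketch only; intensities and shares are placeholders.
DIVISIONS = [f"census_division_{i}" for i in range(1, 10)]   # 9 divisions
BUILDINGS = [f"building_type_{i}" for i in range(1, 12)]     # 11 categories
SERVICES  = ["space_heating", "space_cooling", "water_heating",
             "ventilation", "cooking", "refrigeration", "lighting"]
FUELS     = ["electricity", "natural_gas", "distillate"]

def demand(division, building, service, fuel, floorspace=1.0):
    intensity = 0.01                # placeholder end-use intensity per area
    fuel_share = 1.0 / len(FUELS)   # placeholder equipment-choice share
    return floorspace * intensity * fuel_share

total = sum(demand(d, b, s, f)
            for d in DIVISIONS for b in BUILDINGS
            for s in SERVICES for f in FUELS)
print(f"toy aggregate demand over all cells: {total:.2f} (arbitrary units)")
```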
Bubble driven quasioscillatory translational motion of catalytic micromotors.
Manjare, Manoj; Yang, Bo; Zhao, Y-P
2012-09-21
A new quasioscillatory translational motion has been observed with a fast CCD camera for large Janus catalytic micromotors. Such motional behavior is found to coincide with both the bubble growth and burst processes resulting from the catalytic reaction, and the competition of the two processes generates a net forward motion. Detailed physical models have been proposed to describe the above processes. It is suggested that the bubble growth process imposes a growth force moving the micromotor forward, while the burst process induces an instantaneous local pressure depression pulling the micromotor backward. The theoretical predictions are consistent with the experimental data.
Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.
Sohn, Bong-Soo
2017-03-11
This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
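A minimal sketch of the described pipeline follows, assuming NumPy/SciPy; the weights, blur parameters, and threshold are illustrative stand-ins, not the values used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bas_relief(depth, image, relief_range=1.0, w_detail=0.2,
               focus_depth=0.5, blur_threshold=0.3):
    """Blend a range-compressed base map with an intensity detail map,
    then blur regions far from the focal plane (toy depth of field)."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-9)
    base = d * relief_range                      # compressed base map
    detail = (image - image.mean()) / (image.std() + 1e-9)
    blended = base + w_detail * detail           # base + scene detail
    out_of_focus = np.abs(d - focus_depth) > blur_threshold
    blurred = gaussian_filter(blended, sigma=2.0)
    height = np.where(out_of_focus, blurred, blended)
    return height    # triangulating this height field yields the mesh

rng = np.random.default_rng(0)
depth, image = rng.random((64, 64)), rng.random((64, 64))
print(bas_relief(depth, image).shape)            # (64, 64)
```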
Zeng, Jiaolong; Yuan, Jianmin
2007-08-01
Calculation details of the radiative opacity of lowly ionized gold plasmas, obtained using our fully relativistic detailed level-accounting approach, are presented to show the importance of accurate atomic data for a quantitative reproduction of the experimental observations. Although a huge number of transition lines is involved in the radiative absorption of high-Z plasmas, so that statistical models are often believed to give a reasonable description of their opacities, we first show in detail that an adequate treatment of physical effects, in particular the configuration interaction (including core-valence electron correlation), is essential to produce atomic data for the bound-bound and bound-free processes in gold plasmas that are accurate enough to correctly explain the relative intensity of two strong absorption peaks experimentally observed near photon energies of 70 and 80 eV. A detailed study is also carried out for gold plasmas with an average ionization degree of 10, for both spectrally resolved opacities and Rosseland and Planck means. For comparison, results obtained using an average-atom model are also given to show that, even at relatively higher matter density, correlation effects are important for predicting the correct positions of the absorption peaks of transition arrays.
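For reference, the two mean opacities referred to above have the standard definitions (B_ν is the Planck function at temperature T and κ_ν the spectrally resolved opacity):

```latex
\frac{1}{\kappa_R} =
  \frac{\int_0^\infty \kappa_\nu^{-1}\,\partial_T B_\nu \, d\nu}
       {\int_0^\infty \partial_T B_\nu \, d\nu},
\qquad
\kappa_P =
  \frac{\int_0^\infty \kappa_\nu\, B_\nu \, d\nu}
       {\int_0^\infty B_\nu \, d\nu}.
```

The Rosseland mean harmonically weights the most transparent frequencies, so it is dominated by opacity windows, while the Planck mean weights the strongest absorption features.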
NEVER forget: negative emotional valence enhances recapitulation.
Bowen, Holly J; Kark, Sarah M; Kensinger, Elizabeth A
2018-06-01
A hallmark feature of episodic memory is that of "mental time travel," whereby an individual feels they have returned to a prior moment in time. Cognitive and behavioral neuroscience methods have revealed a neurobiological counterpart: Successful retrieval often is associated with reactivation of a prior brain state. We review the emerging literature on memory reactivation and recapitulation, and we describe evidence for the effects of emotion on these processes. Based on this review, we propose a new model: Negative Emotional Valence Enhances Recapitulation (NEVER). This model diverges from existing models of emotional memory in three key ways. First, it underscores the effects of emotion during retrieval. Second, it stresses the importance of sensory processing to emotional memory. Third, it emphasizes how emotional valence - whether an event is negative or positive - affects the way that information is remembered. The model specifically proposes that, as compared to positive events, negative events both trigger increased encoding of sensory detail and elicit a closer resemblance between the sensory encoding signature and the sensory retrieval signature. The model also proposes that negative valence enhances the reactivation and storage of sensory details over offline periods, leading to a greater divergence between the sensory recapitulation of negative and positive memories over time. Importantly, the model proposes that these valence-based differences occur even when events are equated for arousal, thus rendering an exclusively arousal-based theory of emotional memory insufficient. We conclude by discussing implications of the model and suggesting directions for future research to test the tenets of the model.
The Vehicle Integrated Performance Analysis Experience: Reconnecting With Technical Integration
NASA Technical Reports Server (NTRS)
McGhee, D. S.
2006-01-01
Very early in the Space Launch Initiative program, a small team of engineers at MSFC proposed a process for performing system-level assessments of a launch vehicle. Aimed primarily at providing insight and making NASA a smart buyer, the Vehicle Integrated Performance Analysis (VIPA) team was created. The difference between the VIPA effort and previous integration attempts is that VIPA is a process using experienced people from various disciplines, which focuses them on a technically integrated assessment. The foundations of VIPA's process are described. The VIPA team also recognized the need to target early detailed analysis toward identifying significant systems issues. This process is driven by the T-model for technical integration. VIPA's approach to performing system-level technical integration is discussed in detail. The VIPA process significantly enhances the development and monitoring of realizable project requirements. VIPA's assessment validates the concept's stated performance, identifies significant issues either with the concept or the requirements, and then reintegrates these issues to determine impacts. This process is discussed along with a description of how it may be integrated into a program's insight and review process. The VIPA process has gained favor with both engineering and project organizations for being responsive and insightful.
Using SysML for MBSE analysis of the LSST system
NASA Astrophysics Data System (ADS)
Claver, Charles F.; Dubois-Felsmann, Gregory; Delgado, Francisco; Hascall, Pat; Marshall, Stuart; Nordby, Martin; Schalk, Terry; Schumacher, German; Sebag, Jacques
2010-07-01
The Large Synoptic Survey Telescope is a complex hardware-software system of systems, making up a highly automated observatory in the form of an 8.4m wide-field telescope, a 3.2 billion pixel camera, and a peta-scale data processing and archiving system. As a project, the LSST is using model based systems engineering (MBSE) methodology for developing the overall system architecture coded with the Systems Modeling Language (SysML). With SysML we use a recursive process to establish three-fold relationships between requirements, logical & physical structural component definitions, and overall behavior (activities and sequences) at successively deeper levels of abstraction and detail. Using this process we have analyzed and refined the LSST system design, ensuring the consistency and completeness of the full set of requirements and their match to associated system structure and behavior. As the recursion process proceeds to deeper levels we derive more detailed requirements and specifications, and ensure their traceability. We also expose, define, and specify critical system interfaces, physical and information flows, and clarify the logic and control flows governing system behavior. The resulting integrated model database is used to generate documentation and specifications and will evolve to support activities from construction through final integration, test, and commissioning, serving as a living representation of the LSST as designed and built. We discuss the methodology and present several examples of its application to specific systems engineering challenges in the LSST design.
A review of physically based models for soil erosion by water
NASA Astrophysics Data System (ADS)
Le, Minh-Hoang; Cerdan, Olivier; Sochala, Pierre; Cheviron, Bruno; Brivois, Olivier; Cordier, Stéphane
2010-05-01
Physically-based models rely on fundamental physical equations describing stream flow and the generation of sediment and associated nutrients in a catchment. This paper reviews several existing erosion and sediment transport approaches. The processes of erosion include soil detachment, transport, and deposition; we present the various forms of equations and empirical formulas used to model and quantify each of these processes. In particular, we detail models describing rainfall and infiltration effects and the system of equations describing overland flow and the evolution of the topography. We also present formulas for the flow transport capacity and the erodibility functions. Finally, we present some recent numerical schemes for approximating the shallow water equations and their coupling with infiltration and erosion source terms.
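A representative one-dimensional form of the coupled system discussed, written here in standard notation as an illustration rather than as any single reviewed model, is (h flow depth, u velocity, c sediment concentration, z bed elevation, R rainfall, I infiltration, E erosion, D deposition, S_0 bed slope, S_f friction slope, p bed porosity):

```latex
\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = R - I,
\qquad
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\Bigl(hu^2 + \tfrac{1}{2}\,g h^2\Bigr)
  = g h \,(S_0 - S_f),
```
```latex
\frac{\partial (hc)}{\partial t} + \frac{\partial (huc)}{\partial x} = E - D,
\qquad
(1 - p)\,\frac{\partial z}{\partial t} = D - E.
```

The last (Exner-type) equation closes the loop: net erosion lowers the bed, which in turn feeds back on the slope terms driving the flow.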
Comprehensive two-dimensional gas chromatography for the analysis of Fischer-Tropsch oil products.
van der Westhuizen, Rina; Crous, Renier; de Villiers, André; Sandra, Pat
2010-12-24
The Fischer-Tropsch (FT) process involves a series of catalysed reactions of carbon monoxide and hydrogen, originating from coal, natural gas or biomass, leading to a variety of synthetic chemicals and fuels. The benefits of comprehensive two-dimensional gas chromatography (GC×GC) compared to one-dimensional GC (1D-GC) for the detailed investigation of the oil products of low- and high-temperature FT processes are presented. GC×GC provides more accurate quantitative data with which to construct Anderson-Schulz-Flory (ASF) selectivity models that correlate the FT product distribution with reaction variables. On the other hand, the high peak capacity and sensitivity of GC×GC allow the detailed study of components present at trace level. Analyses of the aromatic and oxygenated fractions of a high-temperature FT (HT-FT) process are presented. GC×GC data have been used to optimise or tune the HT-FT process using a lab-scale micro-FT-reactor. Copyright © 2010 Elsevier B.V. All rights reserved.
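For reference, the ASF selectivity model mentioned above is the standard chain-growth distribution, with W_n the mass fraction at carbon number n and α the chain-growth probability:

```latex
\frac{W_n}{n} = (1-\alpha)^2\,\alpha^{\,n-1}
\quad\Longrightarrow\quad
\log\frac{W_n}{n} = n\log\alpha + \log\frac{(1-\alpha)^2}{\alpha},
```

so a plot of log(W_n/n) against n is linear with slope log α, which is why accurate per-carbon-number quantitation (as GC×GC provides) directly improves the fitted selectivity model.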
A method for identifying EMI critical circuits during development of a large C3
NASA Astrophysics Data System (ADS)
Barr, Douglas H.
The circuit analysis methods and process Boeing Aerospace used on a large, ground-based military command, control, and communications (C3) system are described. This analysis was designed to help identify electromagnetic interference (EMI) critical circuits. The methodology used the MIL-E-6051 equipment criticality categories as the basis for defining critical circuits, relational database technology to help sort through and account for all of the approximately 5000 system signal cables, and Macintosh Plus personal computers to predict critical circuits based on safety margin analysis. The EMI circuit analysis process systematically examined all system circuits to identify which ones were likely to be EMI critical. The process used two separate, sequential safety margin analyses to identify critical circuits: a conservative safety margin analysis followed by a detailed safety margin analysis. These analyses used field-to-wire and wire-to-wire coupling models with both worst-case and detailed circuit parameters (physical and electrical) to predict circuit safety margins. This process identified the predicted critical circuits, which could then be verified by test.
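A minimal sketch of such a two-pass safety-margin screen follows; the circuit names, thresholds, induced levels, and the 6 dB criterion placement are illustrative, not Boeing's actual analysis parameters.

```python
# Hedged sketch of a two-pass EMI safety-margin screen.
# Margin (dB) = susceptibility threshold - induced level; all values invented.

def margin_db(susceptibility_dbuv, induced_dbuv):
    return susceptibility_dbuv - induced_dbuv

circuits = [
    # (name, susceptibility dBuV, worst-case induced dBuV, detailed induced dBuV)
    ("ordnance_fire",   60.0, 58.0, 40.0),
    ("status_discrete", 80.0, 85.0, 70.0),
    ("video_line",      50.0, 49.0, 47.0),
]

REQUIRED_MARGIN_DB = 6.0   # illustrative pass/fail criterion

for name, sus, worst, detailed in circuits:
    if margin_db(sus, worst) >= REQUIRED_MARGIN_DB:
        continue           # passes the conservative screen; not critical
    # fails the conservative screen -> rerun with detailed parameters
    status = ("critical (verify by test)"
              if margin_db(sus, detailed) < REQUIRED_MARGIN_DB
              else "cleared by detailed analysis")
    print(f"{name}: {status}")
```

The two-pass structure mirrors the described process: a cheap worst-case screen eliminates most circuits, and only the failures receive the labor-intensive detailed coupling analysis.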
Type Ia Supernovae as Sites of the p-process: Two-dimensional Models Coupled to Nucleosynthesis
NASA Astrophysics Data System (ADS)
Travaglio, C.; Röpke, F. K.; Gallino, R.; Hillebrandt, W.
2011-10-01
Beyond Fe, there is a class of 35 proton-rich nuclides, between 74Se and 196Hg, called p-nuclei. They are bypassed by the s and r neutron capture processes and are typically 10-1000 times less abundant than the s- and/or r-isotopes in the solar system. The bulk of the p-isotopes is created in the "gamma processes" by sequences of photodisintegrations and beta decays under explosive conditions in both core collapse supernovae (SNe II) and Type Ia supernovae (SNe Ia). SNe II contribute to the production of p-nuclei through explosive neon and oxygen burning. However, the major problem in SN II ejecta is a general underproduction of the light p-nuclei for A < 120. We explore SNe Ia as p-process sites in the framework of a two-dimensional SN Ia delayed detonation model as well as pure deflagration models. The white dwarf precursor is assumed to have reached the Chandrasekhar mass in a binary system by mass accretion from a giant/main-sequence companion. We use enhanced s-seed distributions, with seeds directly obtained from a sequence of thermal pulse instabilities both in the asymptotic giant branch phase and in the accreted material. We apply the tracer-particle method to reconstruct the nucleosynthesis from the thermal histories of Lagrangian particles, passively advected in the hydrodynamic calculations. For each particle, we follow the explosive nucleosynthesis with a detailed nuclear reaction network for all isotopes up to 209Bi. We select tracers within the typical temperature range for p-process production, (1.5-3.7) × 10^9 K, and analyze their behavior in detail, exploring the influence of different s-process distributions on the p-process nucleosynthesis. In addition, we discuss the sensitivity of p-process production to parameters of the explosion mechanism, taking into account the consequences for Fe and alpha elements. We find that SNe Ia can produce a large amount of p-nuclei, both the light p-nuclei below A = 120 and the heavy p-nuclei, at quite flat average production factors, tightly related to the s-process seed distribution. For the first time, we find a stellar source able to produce both light and heavy p-nuclei almost at the same level as 56Fe, including the debated neutron-magic 92,94Mo and 96,98Ru. We also find that there is an important contribution from the p-process nucleosynthesis to the s-only nuclei 80Kr and 86Sr, to the neutron-magic 90Zr, and to the neutron-rich 96Zr. Finally, we investigate the metallicity effect on p-process production in our models. Starting with different s-process seed distributions for two metallicities, Z = 0.02 and Z = 0.001, and running two-dimensional SN Ia models with different initial compositions, we estimate that SNe Ia can contribute at least 50% of the solar p-process composition. A more detailed analysis of the role of SNe Ia in the Galactic chemical evolution of p-nuclei is in preparation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, Brian; Lu, Xiaochuan; Murayama, Hitoshi
Here, we present a practical three-step procedure of using the Standard Model effective field theory (SM EFT) to connect ultraviolet (UV) models of new physics with weak scale precision observables. With this procedure, one can interpret precision measurements as constraints on a given UV model. We give a detailed explanation for calculating the effective action up to one-loop order in a manifestly gauge covariant fashion. This covariant derivative expansion method dramatically simplifies the process of matching a UV model with the SM EFT, and also makes available a universal formalism that is easy to use for a variety of UV models. A few general aspects of RG running effects and choosing operator bases are discussed. Finally, we provide mapping results between the bosonic sector of the SM EFT and a complete set of precision electroweak and Higgs observables to which present and near future experiments are sensitive. Many results and tools which should prove useful to those wishing to use the SM EFT are detailed in several appendices.
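For orientation, the universal object evaluated by the covariant derivative expansion at one loop is, up to sign and statistics conventions, the functional trace for a heavy field of mass m with quadratic fluctuation operator D² + m² + U(x):

```latex
\Gamma_{\text{1-loop}} =
  \frac{i}{2}\,\mathrm{Tr}\,\ln\!\bigl[-\bigl(D^2 + m^2 + U(x)\bigr)\bigr],
```

which the covariant derivative expansion evaluates as a gauge-covariant series in D_μ and U, yielding local effective operators suppressed by powers of 1/m that can then be matched onto the SM EFT basis.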
NASA Technical Reports Server (NTRS)
Assanis, D. N.; Ekchian, J. E.; Frank, R. M.; Heywood, J. B.
1985-01-01
A computer simulation of the turbocharged turbocompounded direct-injection diesel engine system was developed in order to study the performance characteristics of the total system as major design parameters and materials are varied. Quasi-steady flow models of the compressor, turbines, manifolds, intercooler, and ducting are coupled with a multicylinder reciprocator diesel model, where each cylinder undergoes the same thermodynamic cycle. The master cylinder model describes the reciprocator intake, compression, combustion and exhaust processes in sufficient detail to define the mass and energy transfers in each subsystem of the total engine system. Appropriate thermal loading models relate the heat flow through critical system components to material properties and design details. From this information, the simulation predicts the performance gains, and assesses the system design trade-offs which would result from the introduction of selected heat transfer reduction materials in key system components, over a range of operating conditions.
Modeling Physiological Systems in the Human Body as Networks of Quasi-1D Fluid Flows
NASA Astrophysics Data System (ADS)
Staples, Anne
2008-11-01
Extensive research has been done on modeling human physiology. Most of this work has been aimed at developing detailed, three-dimensional models of specific components of physiological systems, such as a cell, a vein, a molecule, or a heart valve. While efforts such as these are invaluable to our understanding of human biology, if we were to construct a global model of human physiology with this level of detail, computing even a nanosecond in this computational being's life would certainly be prohibitively expensive. With this in mind, we derive the Pulsed Flow Equations, a set of coupled one-dimensional partial differential equations specifically designed to capture two-dimensional viscous, transport, and other effects, aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi-one-dimensional fluid flows. Our goal is to be able to perform faster-than-real-time simulations of global processes in the human body on desktop computers.
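A generic area-averaged quasi-1D starting point for such network models is shown below for orientation; this is not the paper's Pulsed Flow Equations, whose additional closure terms are derived there. For a compliant vessel with cross-section A(x,t), mean velocity u(x,t), and pressure p(x,t):

```latex
\frac{\partial A}{\partial t} + \frac{\partial (Au)}{\partial x} = 0,
\qquad
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = -\frac{1}{\rho}\,\frac{\partial p}{\partial x} - \frac{f\,u}{A},
```

with f a viscous friction coefficient (f = 8πν for an assumed Poiseuille profile) and a tube law p = p(A) closing the system; network solutions couple many such segments through junction conditions.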
Downscaling GLOF Hazards: An in-depth look at the Nepal Himalaya
NASA Astrophysics Data System (ADS)
Rounce, D.; McKinney, D. C.; Lala, J.
2016-12-01
The Nepal Himalaya house a large number of glacial lakes that pose a flood hazard to downstream communities and infrastructure. The modeling of the entire process chain of these glacial lake outburst floods (GLOFs) has been advancing rapidly in recent years. The most common cause of failure is mass movement entering the glacial lake, which triggers a tsunami-like wave that breaches the terminal moraine and causes the ensuing downstream flood. Unfortunately, modeling the avalanche, the breach of the moraine, and the downstream flood requires a large amount of site-specific information and can be very labor-intensive. Therefore, these detailed models need to be paired with large-scale hazard assessments that identify the glacial lakes that are the biggest threat and the triggering events that threaten these lakes. This study discusses the merger of a large-scale, remotely-based hazard assessment with more detailed GLOF models to show how GLOF hazard modeling can be downscaled in the Nepal Himalaya.
Robert R. Ziemer
1979-01-01
For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...
NASA Astrophysics Data System (ADS)
Zieleniewski, Simon; Thatte, Niranjan; Kendrew, Sarah; Houghton, Ryan; Tecza, Matthias; Clarke, Fraser; Fusco, Thierry; Swinbank, Mark
2014-07-01
With the next generation of extremely large telescopes commencing construction, there is an urgent need for detailed quantitative predictions of the scientific observations that these new telescopes will enable. Most of these new telescopes will have adaptive optics fully integrated with the telescope itself, allowing unprecedented spatial resolution combined with enormous sensitivity. However, the adaptive optics point spread function will be strongly wavelength dependent, requiring detailed simulations that accurately model these variations. We have developed a simulation pipeline for the HARMONI integral field spectrograph, a first light instrument for the European Extremely Large Telescope. The simulator takes high-resolution input data-cubes of astrophysical objects and processes them with accurate atmospheric, telescope and instrumental effects, to produce mock observed cubes for chosen observing parameters. The output cubes represent the result of a perfect data reduction process, enabling a detailed analysis and comparison between input and output, showcasing HARMONI's capabilities. The simulations utilise a detailed knowledge of the telescope's wavelength-dependent adaptive optics point spread function. We discuss the simulation pipeline and present an early example of the pipeline functionality for simulating observations of high-redshift galaxies.
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described which can be used to design control inputs which maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high order state variable models from the flight data, and consequently computing related stability and control design models.
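A minimal sketch of step (3), parameter identification, follows, assuming a discrete-time linear model and states already estimated by the filtering step; the actual rotorcraft procedure is considerably more elaborate than this illustration.

```python
# Least-squares identification of x_{k+1} = A x_k + B u_k from data.
import numpy as np

def identify_linear_model(x, u):
    """Estimate [A B] from state and input histories.
    x: (N, n) state sequence, u: (N-1, m) input sequence."""
    X_next = x[1:]                      # targets: x_1 .. x_{N-1}
    Z = np.hstack([x[:-1], u])          # regressors: [x_k, u_k]
    theta, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
    n = x.shape[1]
    A_hat, B_hat = theta[:n].T, theta[n:].T
    return A_hat, B_hat
```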
Aeroheating Thermal Model Correlation for Mars Global Surveyor (MGS) Solar Array
NASA Technical Reports Server (NTRS)
Amundsen, Ruth M.; Dec, John A.; George, Benjamin E.
2003-01-01
The Mars Global Surveyor (MGS) Spacecraft made use of aerobraking to gradually reduce its orbit period from a highly elliptical insertion orbit to its final science orbit. Aerobraking produces a high heat load on the solar arrays, which have a large surface area exposed to the airflow and relatively low mass. To accurately model the complex behavior during aerobraking, the thermal analysis needed to be tightly coupled to the spatially varying, time dependent aerodynamic heating. Also, the thermal model itself needed to accurately capture the behavior of the solar array and its response to changing heat load conditions. The correlation of the thermal model to flight data allowed a validation of the modeling process, as well as information on what processes dominate the thermal behavior. Correlation in this case primarily involved detailing the thermal sensor nodes, using as-built mass to modify material property estimates, refining solar cell assembly properties, and adding detail to radiation and heat flux boundary conditions. This paper describes the methods used to develop finite element thermal models of the MGS solar array and the correlation of the thermal model to flight data from the spacecraft drag passes. Correlation was made to data from four flight thermal sensors over three of the early drag passes. Good correlation of the model was achieved, with a maximum difference between the predicted model maximum and the observed flight maximum temperature of less than 5%. Lessons learned in the correlation of this model assisted in validating a similar model and method used for the Mars Odyssey solar array aeroheating analysis, which were used during on-orbit operations.
Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav
2018-03-01
Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters as biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Simulating aerial gravitropism and posture control in plants: what has been done, what is missing
NASA Astrophysics Data System (ADS)
Coutand, Catherine; Pot, Guillaume; Bastien, R.; Badel, Eric; Moulia, Bruno
The gravitropic response requires a process of perception of the signal and a motor process to actuate the movement. Different models have been developed, some focusing on the perception process and some on the motor process. The kinematics of the gravitropic response will first be detailed to set out the phenomenology of gravi- and auto-tropism. A model of perception (the AC model) will then be presented to demonstrate that sensing inclination is not sufficient to control the gravitropic movement, and that proprioception is also involved. Then, "motor models" will be reviewed. In herbaceous plants, differential growth is the main motor. Modelling tropic movements while simulating elongation raises some difficulties that will be explained. In woody structures the main motor process is the differentiation of reaction wood via cambial growth. We will first present the simplest biomechanical model developed to simulate gravitropism, and its limits will be pointed out. Then a more sophisticated model (TWIG) will be presented with a special focus on the importance of wood viscoelasticity and the wood maturation process and its regulation by mechanosensing. The presentation will end with a balance sheet of what has been done and what is missing for a complete modelling of gravitropism, and will present first results of a running project dedicated to obtaining the data required to include phototropism in the current models.
On the nature of bias and defects in the software specification process
NASA Technical Reports Server (NTRS)
Straub, Pablo A.; Zelkowitz, Marvin V.
1992-01-01
Implementation bias in a specification is an arbitrary constraint in the solution space. This paper describes the problem of bias. Additionally, this paper presents a model of the specification and design processes describing individual subprocesses in terms of precision/detail diagrams and a model of bias in multi-attribute software specifications. While studying how bias is introduced into a specification we realized that software defects and bias are dual problems of a single phenomenon. This was used to explain the large proportion of faults found during the coding phase at the Software Engineering Laboratory at NASA/GSFC.
Simulation of SEU Cross-sections using MRED under Conditions of Limited Device Information
NASA Technical Reports Server (NTRS)
Lauenstein, J. M.; Reed, R. A.; Weller, R. A.; Mendenhall, M. H.; Warren, K. M.; Pellish, J. A.; Schrimpf, R. D.; Sierawski, B. D.; Massengill, L. W.; Dodd, P. E.;
2007-01-01
This viewgraph presentation reviews the simulation of Single Event Upset (SEU) cross sections using the Monte Carlo Radiative Energy Deposition (MRED) tool, using "best guess" assumptions about the process and geometry together with direct-ionization, low-energy beam test results. This work will also simulate SEU cross-sections including angular and high energy responses and compare the simulated results with beam test data for the validation of the model. Using MRED, we produced a reasonably accurate upset response model of a low-critical-charge SRAM without detailed information about the circuit, device geometry, or fabrication process.
Anomalous single production of the fourth generation quarks at the CERN LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciftci, R.
Possible anomalous single productions of the fourth standard model generation up and down type quarks at the CERN Large Hadron Collider are studied. Namely, pp → u₄(d₄)X with the subsequent u₄ → bW⁺ process followed by the leptonic decay of the W boson, and the d₄ → bγ (and its H.c.) decay channel are considered. Signatures of these processes and corresponding standard model backgrounds are discussed in detail. Discovery limits for the quark mass and achievable values of the anomalous coupling strength are determined.
NASA Technical Reports Server (NTRS)
Drdla, K.; Turco, R. P.; Elliott, S.
1993-01-01
A detailed model of polar stratospheric clouds (PSCs), which includes nucleation, condensational growth, and sedimentation processes, has been applied to the study of heterogeneous chemical reactions. For the first time, the extent of chemical processing during a polar winter has been estimated for an idealized air parcel in the Antarctic vortex by calculating in detail the rates of heterogeneous reactions on PSC particles. The resulting active chlorine and NO(x) concentrations at first sunrise are analyzed with respect to their influence upon the Antarctic ozone hole using a photochemical model. It is found that the species present at sunrise are primarily influenced by the relative values of the heterogeneous reaction rate constants and the initial gas concentrations. However, the extent of chlorine activation is also influenced by whether N2O5 is removed by reaction with HCl or H2O. The reaction of N2O5 with HCl, which occurs rapidly on type 1 PSCs, activates the chlorine contained in the reservoir species HCl. Hence the presence and surface area of type 1 PSCs early in the winter are crucial in determining ozone depletion.
A surety engineering framework to reduce cognitive systems risks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caudell, Thomas P.; Peercy, David Eugene; Caldera, Eva O.
Cognitive science research investigates the advancement of human cognition and neuroscience capabilities. Addressing risks associated with these advancements can counter potential program failures, legal and ethical issues, constraints to scientific research, and product vulnerabilities. Survey results, focus group discussions, cognitive science experts, and surety researchers concur that technical risks exist that could impact cognitive science research in areas such as medicine, privacy, human enhancement, law and policy, military applications, and national security (SAND2006-6895). This SAND report documents a surety engineering framework and a process for identifying cognitive system technical, ethical, legal and societal risks and applying appropriate surety methods to reduce such risks. The framework consists of several models: Specification, Design, Evaluation, Risk, and Maturity. Two detailed case studies are included to illustrate the use of the process and framework. Several appendices provide detailed information on existing cognitive system architectures; ethical, legal, and societal risk research; surety methods and technologies; and educing information research with a case study vignette. The process and framework provide a model for how cognitive systems research and full-scale product development can apply surety engineering to reduce perceived and actual risks.
Gabdoulline, Razif R; Wade, Rebecca C
2009-07-08
The factors that determine the extent to which diffusion and thermal activation processes govern electron transfer (ET) between proteins are debated. The process of ET between plastocyanin (PC) and cytochrome f (CytF) from the cyanobacterium Phormidium laminosum was initially thought to be diffusion-controlled but later was found to be under activation control (Schlarb-Ridley, B. G.; et al. Biochemistry 2005, 44, 6232). Here we describe Brownian dynamics simulations of the diffusional association of PC and CytF, from which ET rates were computed using a detailed model of ET events that was applied to all of the generated protein configurations. The proteins were modeled as rigid bodies represented in atomic detail. In addition to electrostatic forces, which were modeled as in our previous simulations of protein-protein association, the proteins interacted by a nonpolar desolvation (hydrophobic) force whose derivation is described here. The simulations yielded close to realistic residence times of transient protein-protein encounter complexes of up to tens of microseconds. The activation barrier for individual ET events derived from the simulations was positive. Whereas the electrostatic interactions between P. laminosum PC and CytF are weak, simulations for a second cyanobacterial PC-CytF pair, that from Nostoc sp. PCC 7119, revealed ET rates influenced by stronger electrostatic interactions. In both cases, the simulations imply significant contributions to ET from both diffusion and thermal activation processes.
Freni, G; La Loggia, G; Notaro, V
2010-01-01
Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases in most cases is overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become a standard practice in hydraulic research and application. Flooding damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancements, pushed forward by increasing computer capacity. The details of the flooding propagation process on the surface and the details of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly connected to data availability; this remains the main bottleneck in expected flooding damage estimation. Such functions are usually affected by significant uncertainty intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the damage-depth function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly because a large part of the total uncertainty is dependent on depth-damage curves. Improving the estimation of these curves may provide better results in terms of uncertainty reduction than the adoption of detailed hydraulic models.
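The paper's central comparison can be illustrated with a small Monte Carlo sketch; the damage-curve form, error magnitudes, and depth value below are hypothetical, chosen only to show how the two uncertainty sources are separated.

```python
# Toy comparison: hydraulic-model error in flood depth vs. scatter in the
# depth-damage curve. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_depth = 0.8                                  # m, nominal flood depth

# Hydraulic model uncertainty: depth known to within ~10%.
depth = true_depth * (1 + rng.normal(0, 0.10, n))

# Depth-damage curve D(h) = a * h^b with uncertain fitted parameters.
a = rng.normal(50.0, 15.0, n)                     # large scatter from sparse data
b = rng.normal(1.2, 0.2, n)
damage = a * np.clip(depth, 0, None) ** b

print("damage CV, both sources :", damage.std() / damage.mean())
# Freezing depth at its true value isolates the damage-curve contribution:
damage_curve_only = a * true_depth ** b
print("damage CV, curve only   :", damage_curve_only.std() / damage_curve_only.mean())
```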
Grégoire, David; Verdon, Laura; Lefort, Vincent; Grassl, Peter; Saliba, Jacqueline; Regoin, Jean-Pierre; Loukili, Ahmed; Pijaudier-Cabot, Gilles
2015-10-25
The purpose of this paper is to analyse the development and the evolution of the fracture process zone during fracture and damage in quasi-brittle materials. A model taking into account the material details at the mesoscale is used to describe the failure process at the scale of the heterogeneities. This model is used to compute histograms of the relative distances between damaged points. These numerical results are compared with experimental data, where the damage evolution is monitored using acoustic emissions. Histograms of the relative distances between damage events in the numerical calculations and acoustic events in the experiments exhibit good agreement. It is shown that the mesoscale model provides relevant information from the point of view of both global responses and the local failure process. © 2015 The Authors. International Journal for Numerical and Analytical Methods in Geomechanics published by John Wiley & Sons Ltd.
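The histogram analysis described above reduces to computing relative distances between located events; a minimal sketch with synthetic event coordinates:

```python
# Histogram of pairwise distances between damage (or acoustic) events.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
events = rng.uniform(0, 100.0, size=(500, 2))    # synthetic (x, y) positions, mm

distances = pdist(events)                        # all pairwise distances
hist, edges = np.histogram(distances, bins=50, density=True)
# Comparing 'hist' for simulated damage points against located acoustic-emission
# events is the agreement test reported in the paper.
```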
Personalized Offline and Pseudo-Online BCI Models to Detect Pedaling Intent
Rodríguez-Ugarte, Marisol; Iáñez, Eduardo; Ortíz, Mario; Azorín, Jose M.
2017-01-01
The aim of this work was to design a personalized BCI model to detect pedaling intention through EEG signals. The approach sought to select the best among many possible BCI models for each subject. The choice was between different processing windows, feature extraction algorithms and electrode configurations. Moreover, data was analyzed offline and pseudo-online (in a way suitable for real-time applications), with a preference for the latter case. A process for selecting the best BCI model was described in detail. Results for the pseudo-online processing with the best BCI model of each subject were on average 76.7% of true positive rate, 4.94 false positives per minute and 55.1% of accuracy. The personalized BCI model approach was also found to be significantly advantageous when compared to the typical approach of using a fixed feature extraction algorithm and electrode configuration. The resulting approach could be used to more robustly interface with lower limb exoskeletons in the context of the rehabilitation of stroke patients. PMID:28744212
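The three reported metrics can be computed from per-window binary predictions roughly as follows; the window length is a hypothetical assumption, not taken from the paper.

```python
# Sketch of the evaluation metrics: TPR, false positives per minute, accuracy.
import numpy as np

def bci_metrics(y_true, y_pred, window_s=0.5):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tpr = tp / max(np.sum(y_true == 1), 1)         # true positive rate
    minutes = len(y_true) * window_s / 60.0
    fp_per_min = fp / minutes                      # false positives per minute
    accuracy = np.mean(y_true == y_pred)
    return tpr, fp_per_min, accuracy
```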
Model for amorphous aggregation processes
NASA Astrophysics Data System (ADS)
Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz
2009-11-01
The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes amorphous experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.
Intravesical dosimetry applied to laser positioning in photodynamic therapy
NASA Astrophysics Data System (ADS)
Beslon, Guillaume; Ambroise, Philippe; Heit, Bernard; Bremont, Jacques; Guillemin, Francois H.
1996-12-01
Superficial bladder tumor is a challenging indication for photodynamic therapy. Due to the lack of specificity of the sensitizers, the light has to be precisely monitored over the bladder surface, illuminated by an isotropic source, to restrict the cytotoxic effect to the tumor without affecting the normal epithelium. In order to assist the surgeon while performing the therapy, an urothelium illumination model is proposed. It is computed through a spline interpolation, on the basis of 12 intravesical sensors. This paper presents the overall system architecture and details the modeling and visualization processes. With this model, the surgeon is able to control the source displacement inside the bladder and to homogenize the tissue exposure.
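A minimal sketch of the interpolation step, assuming the bladder wall has been flattened to a 2D parameterisation; the sensor positions, readings, and use of scipy's griddata are illustrative assumptions, not the authors' implementation.

```python
# Interpolate an illumination map over the wall from 12 discrete sensors.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
sensor_uv = rng.uniform(0, 1, size=(12, 2))      # sensor positions on the map
fluence = rng.uniform(5, 20, size=12)            # sensor readings, mW/cm^2

u, v = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
illum_map = griddata(sensor_uv, fluence, (u, v), method='cubic')
# Points outside the convex hull of the sensors come back as NaN; a clinical
# display would render illum_map and flag regions below the target fluence.
```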
NASA Astrophysics Data System (ADS)
Carette, Yannick; Vanhove, Hans; Duflou, Joost
2018-05-01
Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
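The paper filters surface-harmonic contributions of an STL model; as a simpler stand-in for the same idea, the sketch below low-pass filters a height-map with a 2D FFT to obtain a smooth "base geometry". The height-map representation and cutoff rule are assumptions for illustration only.

```python
# "Base geometry" extraction by discarding high spatial frequencies.
import numpy as np

def base_geometry(height_map, keep_fraction=0.1):
    """Zero out all but the lowest spatial frequencies of a 2D height map."""
    H = np.fft.fftshift(np.fft.fft2(height_map))
    ny, nx = H.shape
    ky, kx = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    cutoff = keep_fraction * min(nx, ny) / 2
    H[kx ** 2 + ky ** 2 > cutoff ** 2] = 0        # discard high "harmonics"
    return np.real(np.fft.ifft2(np.fft.ifftshift(H)))
```

Raising keep_fraction in later passes plays the role of adding progressively finer geometric detail, analogous to retaining more harmonic contributions in the paper's decomposition.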
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kress, Joel David
The development and scale up of cost effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric (TGA) data.
Fatichi, Simone; Vivoni, Enrique R.; Odgen, Fred L; Ivanov, Valeriy Y; Mirus, Benjamin B.; Gochis, David; Downer, Charles W; Camporese, Matteo; Davison, Jason H; Ebel, Brian A.; Jones, Norm; Kim, Jongho; Mascaro, Giuseppe; Niswonger, Richard G.; Restrepo, Pedro; Rigon, Riccardo; Shen, Chaopeng; Sulis, Mauro; Tarboton, David
2016-01-01
Process-based hydrological models have a long history dating back to the 1960s. Criticized by some as over-parameterized, overly complex, and difficult to use, a more nuanced view is that these tools are necessary in many situations and, in a certain class of problems, they are the most appropriate type of hydrological model. This is especially the case in situations where knowledge of flow paths or distributed state variables and/or preservation of physical constraints is important. Examples of this include: spatiotemporal variability of soil moisture, groundwater flow and runoff generation, sediment and contaminant transport, or when feedbacks among various Earth’s system processes or understanding the impacts of climate non-stationarity are of primary concern. These are situations where process-based models excel and other models are unverifiable. This article presents this pragmatic view in the context of existing literature to justify the approach where applicable and necessary. We review how improvements in data availability, computational resources and algorithms have made detailed hydrological simulations a reality. Avenues for the future of process-based hydrological models are presented suggesting their use as virtual laboratories, for design purposes, and with a powerful treatment of uncertainty.
Modeling and simulation: A key to future defense technology
NASA Technical Reports Server (NTRS)
Muccio, Anthony B.
1993-01-01
The purpose of this paper is to express the rationale for continued technological and scientific development of the modeling and simulation process for the defense industry. The defense industry, along with a variety of other industries, is currently being forced into making sacrifices in response to the current economic hardships. These sacrifices, which must not compromise the safety of our nation nor jeopardize our current standing as the world peace officer, must be concentrated in areas which will withstand the needs of the changing world. Therefore, the need for cost effective alternatives for defense issues must be examined. This paper provides support that the modeling and simulation process is an economically feasible process which will ensure our nation's safety as well as keep pace with the future technological developments and demands required by the defense industry. The outline of this paper is as follows: introduction, which defines and describes the modeling and simulation process; discussion, which details the purpose and benefits of modeling and simulation and provides specific examples of how the process has been successful; and conclusion, which summarizes the specifics of modeling and simulation of defense issues and lends support for its continued use in the defense arena.
The Simulation of Real-time Scalable Coherent Interface
NASA Technical Reports Server (NTRS)
Li, Qiang; Grant, Terry; Grover, Radhika S.
1997-01-01
Scalable Coherent Interface (SCI, IEEE/ANSI Std 1596-1992) (SCI1, SCI2) is a high performance interconnect for shared memory multiprocessor systems. In this project we investigate an SCI Real-Time Protocol (RTSCI1) using Directed Flow Control Symbols. We studied the issues of efficient generation of control symbols, and created a simulation model of the protocol on a ring-based SCI system. This report presents the results of the study. The project has been implemented using SES/Workbench. The details that follow encompass aspects of both SCI and Flow Control Protocols, as well as the effect of realistic client/server processing delay. The report is organized as follows. Section 2 provides a description of the simulation model. Section 3 describes the protocol implementation details. The next three sections of the report elaborate on the workload, results and conclusions. Appended to the report is a description of the tool, SES/Workbench, used in our simulation, and internal details of our implementation of the protocol.
The distribution of density in supersonic turbulence
NASA Astrophysics Data System (ADS)
Squire, Jonathan; Hopkins, Philip F.
2017-11-01
We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ∼M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M>1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
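For context, the baseline lognormal description that such models refine (and that the abstract's "density variance-Mach number relation" refers to) is commonly written as:

```latex
% s = ln(rho / rho_0); b is a forcing-dependent parameter of order unity
% (solenoidal vs. compressive driving). The shock-based model above predicts
% the intermittent deviations from this lognormal form.
P(s)\,ds = \frac{1}{\sqrt{2\pi\sigma_s^{2}}}
           \exp\!\left[-\frac{(s - s_0)^{2}}{2\sigma_s^{2}}\right] ds,
\qquad
\sigma_s^{2} = \ln\!\left(1 + b^{2}\mathcal{M}^{2}\right),
\qquad
s_0 = -\tfrac{1}{2}\sigma_s^{2}.
```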
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
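The propagation kernel named above is a standard staggered-grid FDTD scheme; a bare-bones 1D acoustic version (uniform medium, single pressure source) shows the update structure that the 3D, heterogeneous, massively parallel codes elaborate.

```python
# Minimal 1D staggered-grid FDTD acoustic simulation.
import numpy as np

c, rho = 343.0, 1.2            # sound speed (m/s), air density (kg/m^3)
dx = 0.1                       # grid spacing (m)
dt = 0.5 * dx / c              # time step satisfying the CFL condition
n = 1000

p = np.zeros(n)                # pressure at integer grid points
v = np.zeros(n - 1)            # particle velocity at half-grid points

for step in range(2000):
    v -= dt / (rho * dx) * np.diff(p)               # momentum equation
    p[1:-1] -= dt * rho * c**2 / dx * np.diff(v)    # continuity equation
    p[n // 2] += np.exp(-((step * dt - 0.01) / 0.002) ** 2)  # Gaussian pulse
```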
A Goal-Oriented Model of Natural Language Interaction
1977-01-01
ABSTRACT: This report describes a research program in modeling human communication. The methodology involved selecting a single, naturally-occurring... knowledge is seldom used in the design process. Human communication skills have not been characterized at a level of detail appropriate for guiding design... necessarily combine to give a complete picture of human communication. Experience over several more dialogues may suggest that one or all be replaced
An Interactive Design Space Supporting Development of Vehicle Architecture Concept Models
2011-01-01
IMECE2011-64510, Denver, Colorado, USA; Gary Osborne. ...early in the development cycle. Optimization taking place later in the cycle usually occurs at the detail design level, and tends to result in... architecture changes may be imposed, but such modifications are equivalent to a huge optimization cycle covering almost the entire design process, and
Extremely late photometry of the nearby SN 2011fe
NASA Astrophysics Data System (ADS)
Kerzendorf, W. E.; McCully, C.; Taubenberger, S.; Jerkstrand, A.; Seitenzahl, I.; Ruiter, A. J.; Spyromilio, J.; Long, K. S.; Fransson, C.
2017-12-01
Type Ia supernovae are widely accepted to be the outcomes of thermonuclear explosions in white dwarf stars. However, many details of these explosions remain uncertain (e.g. the mass, ignition mechanism and flame speed). Theory predicts that at very late times (beyond 1000 d) it might be possible to distinguish between explosion models. Few very nearby supernovae can be observed that long after the explosion. The Type Ia supernova SN 2011fe, located in M101 and along a line of sight with negligible extinction, provides us with the once-in-a-lifetime chance to obtain measurements that may distinguish between theoretical models. In this work, we present the analysis of photometric data of SN 2011fe taken between 900 and 1600 d after explosion with Gemini and HST. At these extremely late epochs theory suggests that the light-curve shape might be used to measure isotopic abundances, which is a useful model discriminant. However, we show in this work that there are several currently not well constrained physical processes introducing large systematic uncertainties to the isotopic abundance measurement. We conclude that without further detailed knowledge of the physical processes at this late stage one cannot reliably exclude any models on the basis of this data set.
Vadose zone transport field study: Detailed test plan for simulated leak tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
AL Ward; GW Gee
2000-06-23
The US Department of Energy (DOE) Groundwater/Vadose Zone Integration Project Science and Technology initiative was created in FY 1999 to reduce the uncertainty associated with vadose zone transport processes beneath waste sites at DOE's Hanford Site near Richland, Washington. This information is needed not only to evaluate the risks from transport, but also to support the adoption of measures for minimizing impacts to the groundwater and surrounding environment. The principal uncertainties in vadose zone transport are the current distribution of source contaminants and the natural heterogeneity of the soil in which the contaminants reside. Oversimplified conceptual models resulting from these uncertainties and limited use of hydrologic characterization and monitoring technologies have hampered the understanding of contaminant migration through Hanford's vadose zone. Essential prerequisites for reducing vadose transport uncertainty include the development of accurate conceptual models and the development or adoption of monitoring techniques capable of delineating the current distributions of source contaminants and characterizing natural site heterogeneity. The Vadose Zone Transport Field Study (VZTFS) was conceived as part of the initiative to address the major uncertainties confronting vadose zone fate and transport predictions at the Hanford Site and to overcome the limitations of previous characterization attempts. Pacific Northwest National Laboratory (PNNL) is managing the VZTFS for DOE. The VZTFS will conduct field investigations that will improve the understanding of field-scale transport and lead to the development or identification of efficient and cost-effective characterization methods. Ideally, these methods will capture the extent of contaminant plumes using existing infrastructure (i.e., more than 1,300 steel-cased boreholes). The objectives of the VZTFS are to conduct controlled transport experiments at well-instrumented field sites at Hanford to: identify mechanisms controlling transport processes in soils typical of the hydrogeologic conditions of Hanford's waste disposal sites; reduce uncertainty in conceptual models; develop a detailed and accurate database of hydraulic and transport parameters for validation of three-dimensional numerical models; and identify and evaluate advanced, cost-effective characterization methods with the potential to assess changing conditions in the vadose zone, particularly as surrogates of currently undetectable high-risk contaminants. This plan provides details for conducting field tests during FY 2000 to accomplish these objectives. Details of additional testing during FY 2001 and FY 2002 will be developed as part of the work planning process implemented by the Integration Project.
NASA Technical Reports Server (NTRS)
Starr, David O. (Technical Monitor); Smith, Eric A.
2002-01-01
Comprehensive understanding of the microphysical nature of Mediterranean storms can be accomplished by a combination of in situ meteorological data analysis and radar-passive microwave data analysis, effectively integrated with numerical modeling studies at various scales, from the synoptic scale down through the mesoscale, the cloud macrophysical scale, and ultimately the cloud microphysical scale. The microphysical properties of severe storms, and their controls, are intrinsically related to the meteorological processes under which the storms have evolved, processes which eventually select and control the dominant microphysical properties themselves. This involves intense convective development, stratiform decay, orographic lifting, and sloped frontal lifting processes, as well as the associated vertical motions and thermodynamical instabilities governing physical processes that affect details of the size distributions and fall rates of the various types of hydrometeors found within the storm environment. For hazardous Mediterranean storms, highlighted in this study by three mountain storms that produced damaging floods in northern Italy between 1992 and 2000, developing a comprehensive microphysical interpretation requires an understanding of the multiple phases of storm evolution and the heterogeneous nature of precipitation fields within a storm domain. This involves convective development, stratiform transition and decay, orographic lifting, and sloped frontal lifting processes. This also involves vertical motions and thermodynamical instabilities governing physical processes that determine details of the liquid/ice water contents, size distributions, and fall rates of the various modes of hydrometeors found within hazardous storm environments.
Reproducible model development in the cardiac electrophysiology Web Lab.
Daly, Aidan C; Clerx, Michael; Beattie, Kylie A; Cooper, Jonathan; Gavaghan, David J; Mirams, Gary R
2018-05-26
The modelling of the electrophysiology of cardiac cells is one of the most mature areas of systems biology. This extended concentration of research effort brings with it new challenges, foremost among which is that of choosing which of these models is most suitable for addressing a particular scientific question. In a previous paper, we presented our initial work in developing an online resource for the characterisation and comparison of electrophysiological cell models in a wide range of experimental scenarios. In that work, we described how we had developed a novel protocol language that allowed us to separate the details of the mathematical model (the majority of cardiac cell models take the form of ordinary differential equations) from the experimental protocol being simulated. We developed a fully-open online repository (which we termed the Cardiac Electrophysiology Web Lab) which allows users to store and compare the results of applying the same experimental protocol to competing models. In the current paper we describe the most recent and planned extensions of this work, focused on supporting the process of model building from experimental data. We outline the necessary work to develop a machine-readable language to describe the process of inferring parameters from wet lab datasets, and illustrate our approach through a detailed example of fitting a model of the hERG channel using experimental data. We conclude by discussing the future challenges in making further progress in this domain towards our goal of facilitating a fully reproducible approach to the development of cardiac cell models. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
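The parameter-inference step discussed above amounts to fitting model parameters to a recorded trace; in the sketch below a single-exponential activation current stands in for the real hERG formulation, and all data and values are synthetic.

```python
# Least-squares fit of a toy current model to a synthetic "wet lab" trace.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 1.0, 200)                      # s
g_true, tau_true = 0.5, 0.15
i_obs = g_true * (1 - np.exp(-t / tau_true))      # synthetic recording
i_obs += np.random.default_rng(4).normal(0, 0.01, t.size)

def residuals(theta):
    g, tau = theta
    return g * (1 - np.exp(-t / tau)) - i_obs

fit = least_squares(residuals, x0=[1.0, 0.1], bounds=([0, 1e-3], [10, 10]))
print("estimated (g, tau):", fit.x)
```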
Recent Updates of A Multi-Phase Transport (AMPT) Model
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
2008-10-01
We will present recent updates to the AMPT model, a Monte Carlo transport model for high energy heavy ion collisions, since its first public release in 2004 and the corresponding detailed descriptions in Phys. Rev. C 72, 064901 (2005). The updates often result from user requests. Some of these updates expand the physics processes or descriptions in the model, while some updates improve the usability of the model such as providing the initial parton distributions or help avoid crashes on some operating systems. We will also explain how the AMPT model is being maintained and updated.
VisTrails SAHM: visualization and workflow management for species habitat modeling
Morisette, Jeffrey T.; Jarnevich, Catherine S.; Holcombe, Tracy R.; Talbert, Colin B.; Ignizio, Drew A.; Talbert, Marian; Silva, Claudio; Koop, David; Swanson, Alan; Young, Nicholas E.
2013-01-01
The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model through the established workflow management and visualization VisTrails software. This paper provides an overview of the VisTrails:SAHM software including a link to the open source code, a table detailing the current SAHM modules, and a simple example modeling an invasive weed species in Rocky Mountain National Park, USA.
RUIZ-RAMOS, MARGARITA; MÍNGUEZ, M. INÉS
2006-01-01
• Background Plant structural (i.e. architectural) models explicitly describe plant morphology by providing detailed descriptions of the display of leaf and stem surfaces within heterogeneous canopies and thus provide the opportunity for modelling the functioning of plant organs in their microenvironments. The outcome is a class of structural–functional crop models that combines advantages of current structural and process approaches to crop modelling. ALAMEDA is such a model. • Methods The formalism of Lindenmayer systems (L-systems) was chosen for the development of a structural model of the faba bean canopy, providing both numerical and dynamic graphical outputs. It was parameterized according to the results obtained through detailed morphological and phenological descriptions that capture the detailed geometry and topology of the crop. The analysis distinguishes between relationships of general application for all sowing dates and stem ranks and others valid only for all stems of a single crop cycle. • Results and Conclusions The results reveal that in faba bean, structural parameterization valid for the entire plant may be drawn from a single stem. ALAMEDA was formed by linking the structural model to the growth model ‘Simulation d'Allongement des Feuilles’ (SAF) with the ability to simulate approx. 3500 crop organs and components of a group of nine plants. Model performance was verified for organ length, plant height and leaf area. The L-system formalism was able to capture the complex architecture of canopy leaf area of this indeterminate crop and, with the growth relationships, generate a 3D dynamic crop simulation. Future development and improvement of the model are discussed. PMID:16390842
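The L-system formalism underlying ALAMEDA is parallel string rewriting; a minimal sketch follows, with illustrative rules rather than the published faba bean parameterization.

```python
# Minimal L-system rewriting: apply production rules to every symbol in parallel.
def lsystem(axiom, rules, n):
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A: apex, I: internode, L: leaf; brackets delimit a branch. Each step the
# apex produces an internode, a lateral leaf, and a new apex.
rules = {"A": "I[L]A"}
print(lsystem("A", rules, 3))   # -> I[L]I[L]I[L]A
```

In a full implementation each symbol carries parameters (lengths, angles, ages) and a turtle-graphics interpreter turns the string into the 3D canopy geometry.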
Single Plant Root System Modeling under Soil Moisture Variation
NASA Astrophysics Data System (ADS)
Yabusaki, S.; Fang, Y.; Chen, X.; Scheibe, T. D.
2016-12-01
A prognostic Virtual Plant-Atmosphere-Soil System (vPASS) model is being developed that integrates comprehensively detailed mechanistic single plant modeling with microbial, atmospheric, and soil system processes in its immediate environment. Three broad areas of process module development are targeted: (1) incorporating models for root growth and function, rhizosphere interactions with bacteria and other organisms, and litter decomposition and soil respiration into established porous media flow and reactive transport models; (2) incorporating root/shoot transport, growth, photosynthesis and carbon allocation process models into an integrated plant physiology model; and (3) incorporating transpiration, volatile organic compound (VOC) emission, particulate deposition and local atmospheric processes into a coupled plant/atmosphere model. The integrated plant ecosystem simulation capability is being developed as open source process modules and associated interfaces under a modeling framework. The initial focus addresses the coupling of root growth, the vascular transport system, and soil under drought scenarios. Two types of root water uptake modeling approaches are tested: continuous root distribution and constitutive root system architecture. The continuous root distribution models are based on spatially averaged root development process parameters, which are relatively straightforward to accommodate in the continuum soil flow and reactive transport module. Conversely, the constitutive root system architecture models use root growth rates, root growth direction, and root branching to evolve explicit root geometries. The branching topologies require more complex data structures and additional input parameters. Preliminary results are presented for root model development and the vascular response to temporal and spatial variations in soil conditions.
Vertex Models of Epithelial Morphogenesis
Fletcher, Alexander G.; Osterfield, Miriam; Baker, Ruth E.; Shvartsman, Stanislav Y.
2014-01-01
The dynamic behavior of epithelial cell sheets plays a central role during numerous developmental processes. Genetic and imaging studies of epithelial morphogenesis in a wide range of organisms have led to increasingly detailed mechanisms of cell sheet dynamics. Computational models offer a useful means by which to investigate and test these mechanisms, and have played a key role in the study of cell-cell interactions. A variety of modeling approaches can be used to simulate the balance of forces within an epithelial sheet. Vertex models are a class of such models that consider cells as individual objects, approximated by two-dimensional polygons representing cellular interfaces, in which each vertex moves in response to forces due to growth, interfacial tension, and pressure within each cell. Vertex models are used to study cellular processes within epithelia, including cell motility, adhesion, mitosis, and delamination. This review summarizes how vertex models have been used to provide insight into developmental processes and highlights current challenges in this area, including progressing these models from two to three dimensions and developing new tools for model validation. PMID:24896108
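A commonly used form of the vertex-model energy described above (one of several in the literature; the specific models reviewed differ in detail) is:

```latex
% A_alpha: area of cell alpha (area elasticity / pressure term),
% l_ij: length of the edge shared by vertices i,j (interfacial tension),
% L_alpha: perimeter of cell alpha (cortical contractility).
E = \sum_{\alpha} \frac{K_\alpha}{2}\left(A_\alpha - A^{(0)}_\alpha\right)^{2}
  + \sum_{\langle i,j \rangle} \Lambda_{ij}\,\ell_{ij}
  + \sum_{\alpha} \frac{\Gamma_\alpha}{2} L_\alpha^{2},
\qquad
\mathbf{F}_i = -\frac{\partial E}{\partial \mathbf{x}_i},
```

so each vertex moves down the energy gradient, typically via overdamped dynamics, with topological rearrangements (T1 swaps, cell removal) handled as discrete events.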
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.
In this paper, we investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only (α,γ) reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles; inconsistent thermodynamic evolution, including misestimation of expansion timescales; and uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 M⊙ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.
Planning for the next influenza pandemic: using the science and art of logistics.
Cupp, O Shawn; Predmore, Brad G
2011-01-01
The complexities and challenges for healthcare providers and their efforts to provide fundamental basic items to meet the logistical demands of an influenza pandemic are discussed in this article. The supply chain, planning, and alternatives for inevitable shortages are some of the considerations associated with this emergency mass critical care situation. The planning process and support for such events are discussed in detail with several recommendations obtained from the literature and the experience from recent mass casualty incidents (MCIs). The first step in this planning process is the development of specific triage requirements during an influenza pandemic. The second step is identification of logistical resources required during such a pandemic, which are then analyzed within the proposed logistics science and art model for planning purposes. Resources highlighted within the model include allocation and use of work force, bed space, intensive care unit assets, ventilators, personal protective equipment, and oxygen. The third step is using the model to discuss in detail possible workarounds, suitable substitutes, and resource allocation. An examination is also made of the ethics surrounding palliative care within the construction of an MCI and the factors that will inevitably determine rationing and prioritizing of these critical assets to palliative care patients.
Using floating car data to analyse the effects of its measures and eco-driving.
Garcia-Castro, Alvaro; Monzon, Andres
2014-11-11
The road transportation sector is responsible for around 25% of total man-made CO2 emissions worldwide. Considerable efforts are therefore underway to reduce these emissions using several approaches, including improved vehicle technologies, traffic management and changing driving behaviour. Detailed traffic and emissions models are used extensively to assess the potential effects of these measures. However, if the input and calibration data are not sufficiently detailed there is an inherent risk that the results may be inaccurate. This article presents the use of Floating Car Data to derive useful speed and acceleration values in the process of traffic model calibration as a means of ensuring more accurate results when simulating the effects of particular measures. The data acquired includes instantaneous GPS coordinates to track and select the itineraries, and speed and engine performance extracted directly from the on-board diagnostics system. Once the data is processed, the variations in several calibration parameters can be analyzed by comparing the base case model with the measure application scenarios. Depending on the measure, the results show changes of up to 6.4% in maximum speed values, and reductions of nearly 15% in acceleration and braking levels, especially when eco-driving is applied.
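The calibration quantities discussed above (maximum speed, acceleration and braking levels) can be derived from a trace roughly as follows; the input format and percentile choice are assumptions for illustration, not the authors' processing chain.

```python
# Speed/acceleration summary statistics from a timestamped speed trace.
import numpy as np

def speed_accel_stats(t, speed_kmh):
    """t: timestamps (s), speed_kmh: GPS/OBD speeds (km/h)."""
    v = np.asarray(speed_kmh) / 3.6              # m/s
    a = np.diff(v) / np.diff(np.asarray(t, dtype=float))   # m/s^2
    return {
        "v_max": v.max(),
        "a_accel_p95": np.percentile(a[a > 0], 95) if np.any(a > 0) else 0.0,
        "a_brake_p95": np.percentile(-a[a < 0], 95) if np.any(a < 0) else 0.0,
    }
```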
NASA Astrophysics Data System (ADS)
Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.; Lee, C. T.; Lentz, Eric J.; Messer, O. E. Bronson
2017-07-01
We investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only (α,γ) reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles; inconsistent thermodynamic evolution, including misestimation of expansion timescales; and uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 M⊙ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.
Response properties in the adsorption-desorption model on a triangular lattice
NASA Astrophysics Data System (ADS)
Šćepanović, J. R.; Stojiljković, D.; Jakšić, Z. M.; Budinski-Petković, Lj.; Vrhovac, S. B.
2016-06-01
The out-of-equilibrium dynamical processes during the reversible random sequential adsorption (RSA) of objects of various shapes on a two-dimensional triangular lattice are studied numerically by means of Monte Carlo simulations. We focused on the influence of the order of symmetry axis of the shape on the response of the reversible RSA model to sudden perturbations of the desorption probability Pd. We provide a detailed discussion of the significance of collective events for governing the time coverage behavior of shapes with different rotational symmetries. We calculate the two-time density-density correlation function C(t ,tw) for various waiting times tw and show that longer memory of the initial state persists for the more symmetrical shapes. Our model displays nonequilibrium dynamical effects such as aging. We find that the correlation function C(t ,tw) for all objects scales as a function of single variable ln(tw) / ln(t) . We also study the short-term memory effects in two-component mixtures of extended objects and give a detailed analysis of the contribution to the densification kinetics coming from each mixture component. We observe the weakening of correlation features for the deposition processes in multicomponent systems.
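A bare-bones 1D reversible RSA of dimers illustrates the adsorption-desorption dynamics studied above; the paper's model (2D triangular lattice, extended shapes, per-attempt desorption probability Pd) is considerably richer than this sketch.

```python
# Reversible random sequential adsorption of dimers on a 1D lattice.
import numpy as np

def reversible_rsa(L=1000, steps=200_000, Pd=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    occupied = np.zeros(L, dtype=bool)
    dimers = []                                   # left sites of adsorbed dimers
    coverage = []
    for step in range(steps):
        if dimers and rng.random() < Pd:          # desorb a randomly chosen dimer
            i = dimers.pop(rng.integers(len(dimers)))
            occupied[i:i + 2] = False
        else:                                     # adsorption attempt
            i = rng.integers(L - 1)
            if not occupied[i] and not occupied[i + 1]:
                occupied[i:i + 2] = True
                dimers.append(i)
        if step % 1000 == 0:
            coverage.append(occupied.mean())      # densification kinetics
    return np.array(coverage)
```

Recording occupancy snapshots at a waiting time tw and at later times t would give the two-time correlation C(t, tw) analysed in the paper.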
Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek A.; ...
2017-06-26
In this paper, we investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the neutrino hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species α-network capable of tracking only (α, γ) reactions from 4He to 60Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks into post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles; inconsistent thermodynamic evolution, including misestimation of expansion timescales; and uncertain determination of the multidimensional mass cut at the end of the simulation impose uncertainties inherent to this approach. Finally, we present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from solar-metallicity, nonrotating progenitors of 12, 15, 20, and 25 M⊙ and evolved with the smaller α-network to more than 1 s after the launch of an explosion.
Toward Theory-Based Instruction in Scientific Problem Solving.
ERIC Educational Resources Information Center
Heller, Joan I.; And Others
Several empirical and theoretical analyses related to scientific problem solving are reviewed, including detailed studies of individuals at different levels of expertise and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…
Representing Energy. II. Energy Tracking Representations
ERIC Educational Resources Information Center
Scherr, Rachel E.; Close, Hunter G.; Close, Eleanor W.; Vokos, Stamatis
2012-01-01
The Energy Project at Seattle Pacific University has developed representations that embody the substance metaphor and support learners in conserving and tracking energy as it flows from object to object and changes form. Such representations enable detailed modeling of energy dynamics in complex physical processes. We assess student learning by…
An Administrative Model for Virtual Website Hosting.
ERIC Educational Resources Information Center
Kandies, Jerry
The process of creating and maintaining a World Wide Web homepage for a national organization--the Association of Collegiate Business Schools and Programs (ACBSP)--is detailed in this paper. The logical design defines the conceptual relationships among the components of the Web pages and their hyperlinks, whereas the physical design concerns…
Analysis and modeling of leakage current sensor under pulsating direct current
NASA Astrophysics Data System (ADS)
Li, Kui; Dai, Yihua; Wang, Yao; Niu, Feng; Chen, Zhao; Huang, Shaopo
2017-05-01
In this paper, the transformation characteristics of a current sensor under pulsating DC leakage current are investigated. A mathematical model of the current sensor is proposed to accurately describe the secondary-side current and the excitation current. The transformation process of the current sensor is illustrated in detail, and the transformation error is analyzed from multiple aspects. A simulation model is built and a sensor prototype is designed for comparative evaluation, and both simulation and experimental results are presented to verify the correctness of the theoretical analysis.
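For orientation, a generic current-transformer equivalent circuit captures the quantities the abstract names; this is a textbook sketch, not necessarily the paper's model. With turns ratio n, magnetizing inductance L_m, and total secondary (burden plus winding) resistance R_b:

```latex
% Simplified CT model (leakage impedances neglected): the secondary
% current is the referred primary current minus the excitation current,
% whose growth is driven by the burden voltage; epsilon is the ratio error.
i_2(t) = \frac{i_1(t)}{n} - i_\mu(t), \qquad
L_m \,\frac{\mathrm{d} i_\mu}{\mathrm{d} t} = R_b\, i_2(t), \qquad
\varepsilon(t) = \frac{i_\mu(t)}{i_1(t)/n}
```

Under a pulsating DC primary current the unipolar excitation drives the core flux in one direction only, so the excitation current i_μ, and with it the ratio error ε, grows far faster than under symmetric AC excitation.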
Numerical simulation of water injection into vapor-dominated reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.
1995-01-01
Water injection into vapor-dominated reservoirs is a means of condensate disposal, as well as a reservoir management tool for enhancing energy recovery and reservoir life. We review different approaches to modeling the complex fluid and heat flow processes during injection into vapor-dominated systems. Vapor pressure lowering, grid orientation effects, and physical dispersion of injection plumes from reservoir heterogeneity are important considerations for a realistic modeling of injection effects. An example of detailed three-dimensional modeling of injection experiments at The Geysers is given.
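Of the effects listed, vapor pressure lowering has a compact standard formulation: in geothermal simulators it is commonly modeled with Kelvin's equation, sketched below (a common choice, not necessarily the exact form used in this study).

```latex
% Kelvin's equation for vapor-pressure lowering (hedged sketch):
% P_c(S_l) <= 0 is the capillary (suction) pressure at liquid
% saturation S_l, M_w the molar mass of water, rho_l the liquid
% density, and R the universal gas constant.
P_v(T, S_l) = P_{\mathrm{sat}}(T)\,
  \exp\!\left(\frac{M_w\, P_c(S_l)}{\rho_l\, R\, T}\right)
```

Strong suction in tight, heated rock can thus depress the vapor pressure well below saturated values, which changes where and how fast an injection plume boils.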
Empirical modeling of Single-Event Upset (SEU) in NMOS depletion-mode-load static RAM (SRAM) chips
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Smith, S. L.; Atwood, G. E.
1986-01-01
A detailed experimental investigation of single-event upset (SEU) in static RAM (SRAM) chips fabricated using a family of high-performance NMOS (HMOS) depletion-mode-load process technologies has been performed. Empirical SEU models have been developed with the aid of heavy-ion data obtained with a three-stage tandem Van de Graaff accelerator. The results of this work demonstrate a method by which SEU may be empirically modeled in NMOS integrated circuits.
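Empirical SEU models of this kind are often expressed as a device cross-section versus ion LET curve; a cumulative Weibull form is a common parameterization in the heavy-ion literature. The Python sketch below fits such a curve to invented data — the functional form is a conventional choice, not a claim about the authors' specific model.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_xs(let, sigma_sat, let0, w, s):
    """Cumulative-Weibull SEU cross-section vs. LET: zero below the
    threshold let0, saturating at sigma_sat; w and s set width/shape."""
    x = np.clip(np.asarray(let, dtype=float) - let0, 0.0, None)
    return sigma_sat * (1.0 - np.exp(-((x / w) ** s)))

# Hypothetical heavy-ion data: LET (MeV·cm^2/mg) vs. cross-section per bit (cm^2)
let_data = np.array([3.0, 6.0, 10.0, 20.0, 40.0, 60.0])
xs_data = np.array([0.0, 1e-9, 8e-9, 3e-8, 5e-8, 5.5e-8])

popt, _ = curve_fit(weibull_xs, let_data, xs_data,
                    p0=[6e-8, 2.0, 15.0, 2.0], maxfev=10000)
sigma_sat, let0, w, s = popt
```

The saturated cross-section, threshold LET, and Weibull width/shape then summarize a part's heavy-ion response for upset-rate predictions.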
Edge detection - Image-plane versus digital processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.; Park, Stephen K.; Triplett, Judith A.
1987-01-01
To optimize edge detection with the familiar Laplacian-of-Gaussian operator, it has become common to implement this operator with a large digital convolution mask followed by some interpolation of the processed data to determine the zero crossings that locate edges. It is generally recognized that this large mask causes substantial blurring of fine detail. It is shown that the spatial detail can be improved by a factor of about four with either the Wiener-Laplacian-of-Gaussian filter or an image-plane processor. The Wiener-Laplacian-of-Gaussian filter minimizes the image-gathering degradations if the scene statistics are at least approximately known and also serves as an interpolator to determine the desired zero crossings directly. The image-plane processor forms the Laplacian-of-Gaussian response by properly combining the optical design of the image-gathering system with a minimal three-by-three lateral-inhibitory processing mask. This approach, which is suggested by Marr's model of early processing in human vision, also reduces data processing by about two orders of magnitude and data transmission by up to an order of magnitude.
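For reference, the conventional digital route the abstract contrasts against — LoG filtering followed by zero-crossing detection — fits in a few lines of Python. This is a generic sketch of that standard pipeline, not the Wiener-Laplacian-of-Gaussian filter or the image-plane processor described above.

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0):
    """Laplacian-of-Gaussian edge detection: filter the image, then mark
    zero crossings where the response changes sign between horizontally
    or vertically adjacent pixels."""
    resp = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    edges = np.zeros(resp.shape, dtype=bool)
    edges[:, :-1] |= (resp[:, :-1] * resp[:, 1:]) < 0   # horizontal sign change
    edges[:-1, :] |= (resp[:-1, :] * resp[1:, :]) < 0   # vertical sign change
    return edges

# Toy example: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
e = log_edges(img, sigma=2.0)
```

The image-plane alternative moves most of this computation into the optics of the image-gathering system, leaving only the small 3-by-3 lateral-inhibitory mask for the digital stage.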
2014-01-01
Background: There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond those obtained via conventional review methods. Methods: This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. Results: The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short-term outcomes, moderating and mediating factors, and long-term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs, or attitudes, and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long-term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. Conclusions: The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. Trial registration: PROSPERO CRD42013004037. PMID:24885751
Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth
2014-05-10
There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond those obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short-term outcomes, moderating and mediating factors, and long-term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs, or attitudes, and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long-term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.
Harnessing QbD, Programming Languages, and Automation for Reproducible Biology.
Sadowski, Michael I; Grant, Chris; Fell, Tim S
2016-03-01
Building robust manufacturing processes from biological components is a task that is highly complex and requires sophisticated tools to describe processes, inputs, and measurements and to administer the management of knowledge, data, and materials. We argue that for bioengineering to fully access biological potential, it will require the application of statistically designed experiments to derive detailed empirical models of underlying systems. This requires execution of large-scale structured experimentation, for which laboratory automation is necessary. This in turn requires the development of expressive, high-level languages that allow reusability of protocols, characterization of their reliability, and a change in focus from implementation details to functional properties. We review recent developments in these areas and identify what we believe is an exciting trend that promises to revolutionize biotechnology. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
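A statistically designed experiment of the kind advocated here can be tiny and still illustrate the workflow: the Python sketch below builds a two-level full-factorial design for three coded process factors and fits a main-effects-plus-interactions model by least squares. The factors, responses, and scale are hypothetical.

```python
import itertools
import numpy as np

# Hypothetical two-level full-factorial design for three process factors
# (coded -1/+1): e.g. temperature, inducer concentration, feed rate.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))

# Hypothetical measured responses (e.g. product titre) for the 8 runs
y = np.array([1.2, 1.9, 1.1, 2.4, 1.3, 2.1, 1.4, 3.0])

# Model matrix: intercept, main effects, and two-factor interactions
x1, x2, x3 = design.T
X = np.column_stack([np.ones(len(y)), x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, main effects, interactions:", np.round(coef, 3))
```

With laboratory automation executing the runs, the same model-matrix machinery scales to fractional-factorial and response-surface designs over many more factors.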
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rachid B. Slimane; Francis S. Lau; Javad Abbasian
2000-10-01
The objective of this program is to develop an economical process for hydrogen production, with no additional carbon dioxide emission, through the thermal decomposition of hydrogen sulfide (H2S) in H2S-rich waste streams to high-purity hydrogen and elemental sulfur. The novel feature of the process being developed is the superadiabatic combustion (SAC) of part of the H2S in the waste stream to provide the thermal energy required for the decomposition reaction, such that no additional energy is required. The program is divided into two phases. In Phase 1, detailed thermochemical and kinetic modeling of the SAC reactor with H2S-rich fuel gas and air/enriched-air feeds is undertaken to evaluate the effects of operating conditions on exit gas products and conversion efficiency, and to identify key process parameters. Preliminary modeling results are used as a basis to conduct a thorough evaluation of SAC process design options, including reactor configuration, operating conditions, and product separation schemes, with respect to potential product yields, thermal efficiency, capital and operating costs, and reliability, ultimately leading to the preparation of a design package and cost estimate for a bench-scale reactor testing system to be assembled and tested in Phase 2 of the program. A detailed parametric testing plan was also developed for process design optimization and model verification in Phase 2. During Phase 2 of this program, IGT, UIC, and industry advisors UOP and BP Amoco will validate the SAC concept through construction of the bench-scale unit and parametric testing. The computer model developed in Phase 1 will be updated with the experimental data and used in future scale-up efforts. The process design will be refined and the cost estimate updated. Market survey and assessment will continue so that a commercial demonstration project can be identified.
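The energy bookkeeping behind the SAC idea can be sketched with two global reactions (standard-state enthalpies at 298 K, rounded; shown for orientation, not as the program's detailed kinetics):

```latex
% Endothermic decomposition of H2S versus exothermic partial oxidation
% (hedged sketch; enthalpies are approximate 298 K values).
\mathrm{H_2S \;\longrightarrow\; H_2 + \tfrac{1}{2}\,S_2},
  \qquad \Delta H^{\circ} \approx +85\ \mathrm{kJ\,mol^{-1}}
\\[4pt]
\mathrm{H_2S + \tfrac{3}{2}\,O_2 \;\longrightarrow\; SO_2 + H_2O},
  \qquad \Delta H^{\circ} \approx -518\ \mathrm{kJ\,mol^{-1}}
```

Because the oxidation releases roughly six times the enthalpy that the decomposition absorbs per mole, burning a modest fraction of the H2S feed within the superadiabatic bed can sustain the endothermic decomposition without external heat input, which is the essence of the SAC concept described above.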
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oßwald, Patrick; Köhler, Markus
A new high-temperature flow reactor experiment utilizing the powerful molecular beam mass spectrometry (MBMS) technique for detailed observation of gas-phase kinetics in reacting flows is presented. The reactor design provides a systematic extension of the experimental portfolio of validation experiments for combustion reaction kinetics. Temperatures up to 1800 K are applicable by three individually controlled temperature zones with this atmospheric-pressure flow reactor. Detailed speciation data are obtained using the sensitive MBMS technique, providing in situ access to almost all chemical species involved in the combustion process, including highly reactive species such as radicals. Strategies for quantifying the experimental data are presented alongside a careful analysis of the characterization of the experimental boundary conditions to enable precise numeric reproduction of the experimental results. The general capabilities of this new analytical tool for the investigation of reacting flows are demonstrated for a selected range of conditions, fuels, and applications. A detailed dataset for the well-known gaseous fuels, methane and ethylene, is provided and used to verify the experimental approach. Furthermore, application to liquid fuels and fuel components important for technical combustors like gas turbines and engines is demonstrated. Besides the detailed investigation of novel fuels and fuel components, the wide range of operating conditions gives access to extended combustion topics, such as super-rich conditions at high temperature important for gasification processes, or the peroxy chemistry governing the low-temperature oxidation regime. These demonstrations are accompanied by a first kinetic modeling approach, examining the opportunities for model validation purposes.
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2014-05-01
Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g., the 3×3 local neighbourhood needed to implement an averaging image filter can be accessed from within a simple loop traversing all image pixels). This facility hides the details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development, and volunteers are sought to create an ANSI-C implementation. Parallel processing is currently implemented using OpenMP; however, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
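MIST itself is not reproduced here, but the whole-array style it targets is easy to illustrate in NumPy: the 3×3 averaging filter mentioned above becomes a sum of shifted whole-array views rather than a per-pixel loop. This is an illustrative analogue, not MIST syntax.

```python
import numpy as np

def mean3x3(a):
    """3x3 averaging filter written as whole-array operations:
    accumulate the nine shifted views of the interior and divide by
    nine. Border pixels are left at zero for brevity."""
    m, n = a.shape
    acc = np.zeros((m - 2, n - 2))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += a[1 + di : m - 1 + di, 1 + dj : n - 1 + dj]
    out = np.zeros_like(a, dtype=float)
    out[1:-1, 1:-1] = acc / 9.0
    return out

# Example: smooth a random field without any per-pixel Python loop
field = np.random.default_rng(0).random((256, 256))
smoothed = mean3x3(field)
```

Each slice operation acts on the whole interior at once, which is exactly the kind of data-parallel primitive a language like MIST can distribute across nodes without the programmer writing any communication code.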
An Agile Systems Engineering Process: The Missing Link?
2011-05-01
has a number of standards available such as ISO 12207, ISO 9001 and the Capability Maturity Model Integration (CMMI®) [24,25,26]. The CMMI was a...addressing activities throughout the product's lifecycle [24]. ISO 12207 “contains processes, activities and tasks that are to be applied during...the acquisition of a system that contains software” [26]. A limitation identified within ISO 12207 is that it does not specify details on how to