LISP based simulation generators for modeling complex space processes
NASA Technical Reports Server (NTRS)
Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing
1987-01-01
The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.
A method of computer aided design with self-generative models in NX Siemens environment
NASA Astrophysics Data System (ADS)
Grabowik, C.; Kalinowski, K.; Kempa, W.; Paprocka, I.
2015-11-01
Currently, in CAD/CAE/CAM systems it is possible to create 3D virtual design models that capture a certain amount of knowledge. Such models are especially useful in the automation of routine design tasks. They are known as self-generative or auto-generative models, and they can behave in an intelligent way. The main difference between auto-generative and fully parametric models lies in the ability of auto-generative models to self-organize. In this context, self-organization means that, apart from making automatic changes to the quantitative features of a model, these models possess knowledge of how such changes should be made; moreover, they are able to change qualitative features according to specific knowledge. In spite of the undoubted advantages of self-generative models, they are not often used in the constructional design process, mainly because of their usually great complexity. This complexity makes the preparation of self-generative models time- and labour-consuming, and it requires quite large investment outlays. The creation of a self-generative model consists of three stages: knowledge and information acquisition, model type selection, and model implementation. In this paper, methods of computer-aided design with self-generative models in NX Siemens CAD/CAE/CAM software are presented. Five methods of self-generative model preparation in NX are considered: the parametric relations model, part families, the GRIP language, knowledge fusion, and the OPEN API mechanism. Examples of each type of self-generative model are presented. These methods make the constructional design process much faster, and preparing such self-generative models is recommended whenever design variants need to be created. The conducted research on assessing the usefulness of the elaborated models showed that they are highly recommended for the automation of routine tasks, but it is still difficult to say which method of self-generative model preparation is preferable; this always depends on the complexity of the problem. The easiest way to prepare such a model is with the parametric relations model, whilst the hardest is with the OPEN API mechanism. From the knowledge-processing point of view, the best choice is the application of knowledge fusion.
Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
2015-08-21
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables fields to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining a better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
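As a rough illustration of how a road-and-field process-based generator of this kind can work, the Python sketch below converts a forested grid into fields that grow near a given number of roads until a target habitat cover is reached. This is a simplified sketch written for this summary, not the published G-RaFFe algorithm; the grid size, field-placement rule, and parameter names are assumptions made for the example.

```python
import numpy as np

def generate_landscape(size=100, n_roads=6, max_field=20,
                       max_disconnect=5, target_forest_cover=0.7, seed=0):
    """Toy road-and-field landscape generator: 0 = forest, 1 = road, 2 = field."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)            # start fully forested

    # Roads (accessibility): straight horizontal or vertical lines.
    for _ in range(n_roads):
        if rng.random() < 0.5:
            grid[rng.integers(size), :] = 1
        else:
            grid[:, rng.integers(size)] = 1

    road_cells = np.argwhere(grid == 1)
    # Fields grow near roads until forest cover falls to the target level.
    while (grid == 0).mean() > target_forest_cover:
        r, c = road_cells[rng.integers(len(road_cells))]
        # Field disconnection: fields may sit a bounded distance away from the road.
        r = int(np.clip(r + rng.integers(-max_disconnect, max_disconnect + 1), 0, size - 1))
        c = int(np.clip(c + rng.integers(-max_disconnect, max_disconnect + 1), 0, size - 1))
        # Field size (land ownership): a random rectangle up to max_field per side.
        h, w = rng.integers(1, max_field + 1, size=2)
        patch = grid[r:r + h, c:c + w]
        patch[patch == 0] = 2                            # forest -> field
    return grid

landscape = generate_landscape()
print("forest cover:", round((landscape == 0).mean(), 3))
```

Varying the number of roads, the maximum field size, and the maximum disconnection reproduces, in a toy way, the three controlling factors named in the abstract.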
Exploring the Processes of Generating LOD (0-2) Citygml Models in Greater Municipality of Istanbul
NASA Astrophysics Data System (ADS)
Buyuksalih, I.; Isikdag, U.; Zlatanova, S.
2013-08-01
3D models of cities, visualised and explored in 3D virtual environments, have been available for several years. Currently, a large number of impressive, realistic 3D models are regularly presented at scientific, professional and commercial events. One of the most promising developments is the OGC standard CityGML. CityGML is an object-oriented model that supports 3D geometry and thematic semantics, attributes and relationships, and offers advanced options for realistic visualization. One of the very attractive characteristics of the model is its support of 5 levels of detail (LOD), starting from a less accurate 2.5D model (LOD0) and ending with a very detailed indoor model (LOD4). Different local government offices and municipalities have different needs when utilizing CityGML models, and the process of model generation depends on local and domain-specific needs. Although the processes (i.e. the tasks and activities) for generating the models differ depending on the utilization purpose, there are also some common tasks (i.e. common denominator processes) in the generation of CityGML models. This paper focuses on defining the common tasks in the generation of LOD (0-2) CityGML models and representing them in a formal way with process modeling diagrams.
Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support
NASA Astrophysics Data System (ADS)
Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar
This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using an MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.
Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?
NASA Astrophysics Data System (ADS)
Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2018-01-01
Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.
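For readers unfamiliar with the object being studied, the sketch below samples a binary Markov process from its two transition probabilities; the paper's question is what the smallest finite-state machine is that can generate exactly such a process. The parameter names and values here are chosen only for illustration.

```python
import numpy as np

def sample_binary_markov(p01, p10, length, seed=0):
    """Sample a binary Markov chain with P(1|0) = p01 and P(0|1) = p10."""
    rng = np.random.default_rng(seed)
    x = np.empty(length, dtype=int)
    x[0] = rng.integers(2)                    # arbitrary start state
    for t in range(1, length):
        if x[t - 1] == 0:
            x[t] = rng.random() < p01         # stay at 0 or flip to 1
        else:
            x[t] = 1 - (rng.random() < p10)   # stay at 1 or flip to 0
    return x

seq = sample_binary_markov(p01=0.3, p10=0.6, length=20)
print(seq)
```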
Industrial process surveillance system
Gross, Kenneth C.; Wegerich, Stephan W.; Singer, Ralph M.; Mott, Jack E.
1998-01-01
A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
Industrial process surveillance system
Gross, K.C.; Wegerich, S.W.; Singer, R.M.; Mott, J.E.
1998-06-09
A system and method are disclosed for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy. 96 figs.
Industrial Process Surveillance System
Gross, Kenneth C.; Wegerich, Stephan W; Singer, Ralph M.; Mott, Jack E.
2001-01-30
A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
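The patent abstracts above describe the monitoring loop only at a high level. The following Python sketch is an illustrative reconstruction, not the patented algorithm: the learned-state representation, the distance measure, and the alarm threshold are all assumptions made for the example. It shows the general pattern of comparing current readings with expected values derived from the closest learned normal state and raising an alarm on deviation.

```python
import numpy as np

def learn_states(training_data, n_states=20, seed=0):
    """Pick representative 'learned states' of normal operation.
    Here simply a random subset of normal observations (each row is one
    snapshot of all sensors); a real system would use something richer."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(training_data), size=n_states, replace=False)
    return training_data[idx]

def monitor(observation, learned_states, threshold):
    """Compare an observation with the expected values of the closest
    learned normal state; return (alarm, expected_values)."""
    dists = np.linalg.norm(learned_states - observation, axis=1)
    expected = learned_states[np.argmin(dists)]
    deviation = np.linalg.norm(observation - expected)
    return deviation > threshold, expected

# Synthetic example: three correlated sensor channels under normal operation.
rng = np.random.default_rng(1)
normal = rng.normal(loc=[10.0, 50.0, 0.5], scale=[0.2, 1.0, 0.01], size=(500, 3))
states = learn_states(normal)

ok_reading  = np.array([10.1, 50.5, 0.51])
bad_reading = np.array([12.5, 58.0, 0.70])   # drifted process
for reading in (ok_reading, bad_reading):
    alarm, expected = monitor(reading, states, threshold=2.0)
    print(reading, "-> alarm" if alarm else "-> normal")
```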
Learning as a Generative Process
ERIC Educational Resources Information Center
Wittrock, M. C.
2010-01-01
A cognitive model of human learning with understanding is introduced. Empirical research supporting the model, which is called the generative model, is summarized. The model is used to suggest a way to integrate some of the research in cognitive development, human learning, human abilities, information processing, and aptitude-treatment…
Electron beam generation in the turbulent plasma of Z-pinch discharges
NASA Astrophysics Data System (ADS)
Vikhrev, Victor V.; Baronova, Elena O.
1997-05-01
Numerical modeling of the process of electron beam generation in z-pinch discharges is presented. The proposed model represents electron beam generation under turbulent plasma conditions. Strong inhomogeneity of the current distribution in the plasma column has been taken into account for an adequate investigation of the generation process. The electron beam is generated near the maximum of compression due to the runaway mechanism and is not related to the current-break effect.
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
Alterations in choice behavior by manipulations of world model
Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.
2010-01-01
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507
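To make the two choice rules contrasted above concrete, here is a small Python sketch comparing a maximizing rule with probability matching on a two-option task. It is illustrative only; the reward probabilities and the simple beta-Bernoulli learner are assumptions for the example, not the authors' full generative-model manipulation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.7, 0.3])              # reward probability of each option

def choose(rule, p_hat):
    if rule == "max":                      # maximize: pick the best current estimate
        return int(np.argmax(p_hat))
    # probability matching: pick each option in proportion to its estimate
    return int(rng.random() < p_hat[1] / p_hat.sum())

for rule in ("max", "matching"):
    a, b = np.ones(2), np.ones(2)          # beta-Bernoulli posterior counts
    total = 0
    for _ in range(10_000):
        p_hat = a / (a + b)                # posterior mean of the reward probability
        k = choose(rule, p_hat)
        r = int(rng.random() < true_p[k])
        total += r
        a[k] += r
        b[k] += 1 - r
    print(rule, "mean reward:", total / 10_000)
```

The maximizing rule approaches the best achievable reward rate (about 0.7 here), while matching settles near 0.58, which is the kind of suboptimality the abstract refers to.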
NASA Astrophysics Data System (ADS)
Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III
2005-11-01
Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer-pattern-to-design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models (models that represent non-nominal process states, such as would occur with a dose or focus variation) to understand and manipulate the final pattern correction into a more process-robust configuration. The study first examines and validates the process of generating an off-target model, then examines the quality of the off-target model. Once the off-target model is proven, it is used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.
A model of human decision making in multiple process monitoring situations
NASA Technical Reports Server (NTRS)
Greenstein, J. S.; Rouse, W. B.
1982-01-01
Human decision making in multiple process monitoring situations is considered. It is proposed that human decision making in many multiple process monitoring situations can be modeled in terms of the human's detection of process-related events and his allocation of attention among processes once he feels events have occurred. A mathematical model of human event detection and attention allocation performance in multiple process monitoring situations is developed. An assumption made in developing the model is that, in attempting to detect events, the human generates estimates of the probabilities that events have occurred. An elementary pattern recognition technique, discriminant analysis, is used to model the human's generation of these probability estimates. The performance of the model is compared to that of four subjects in a multiple process monitoring situation requiring allocation of attention among processes.
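The abstract states that the model uses discriminant analysis to turn observed process values into probability estimates that an event has occurred. A minimal sketch of that idea using scikit-learn is shown below; the synthetic data and feature choices are assumptions for the illustration, not the original 1982 implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic training data: two process features observed with and without an event.
n = 500
no_event = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
event    = rng.normal(loc=[2.0, 1.5], scale=1.0, size=(n, 2))
X = np.vstack([no_event, event])
y = np.array([0] * n + [1] * n)            # 1 = event occurred

lda = LinearDiscriminantAnalysis().fit(X, y)

# For new observations, estimate the probability that an event has occurred;
# attention would then be allocated to the processes with the highest estimates.
new_obs = np.array([[0.2, -0.1], [1.8, 1.9], [1.0, 0.8]])
print(lda.predict_proba(new_obs)[:, 1])
```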
A model of oil-generation in a waterlogged and closed system
NASA Astrophysics Data System (ADS)
Zhigao, He
This paper presents a new model of the synthetic effects on oil generation in a waterlogged and closed system. The model is suggested based on information about oil in high-pressure layers (including gas dissolved in oil), marsh gas and its fermentative solution, fermentation processes and mechanisms, gaseous hydrocarbons released from carbonate rocks by acid treatment, oil-field water, recent and ancient sediments, and simulation experiments on artificial marsh gas and biological action. The model differs completely from the theory of oil generation by thermal degradation of kerogen; it stresses the synthetic effects of oil generation in special waterlogged and closed geological systems, the importance of pressure in oil-forming processes, and direct oil generation by micro-organisms, which is a particular biochemical reaction. Another feature of this model is that the generation, migration and accumulation of petroleum are considered as a whole.
NASA Astrophysics Data System (ADS)
Jang, Sa-Han
Galton-Watson branching processes of relevance to human population dynamics are the subject of this thesis. We begin with an historical survey of the invention of this model in the middle of the 19th century, for the purpose of modelling the extinction of unusual surnames in France and Britain. We then review the principal developments and refinements of this model, and their applications to a wide variety of problems in biology and physics. Next, we discuss in detail the case where the probability generating function for a Galton-Watson branching process is a geometric series, which can be summed in closed form to yield a fractional linear generating function that can be iterated indefinitely in closed form. We then describe the matrix method of Keyfitz and Tyree, and use it to determine how large a matrix must be chosen to model accurately a Galton-Watson branching process for a very large number of generations, of the order of hundreds or even thousands. Finally, we show that any attempt to explain the recent evidence for the existence thousands of generations ago of a 'mitochondrial Eve' and a 'Y-chromosomal Adam' in terms of the standard Galton-Watson branching process, or indeed any statistical model that assumes equality of probabilities of passing one's genes to one's descendants in later generations, is unlikely to be successful. We explain that such models take no account of the advantages that the descendants of the most successful individuals in earlier generations enjoy over their contemporaries, which must play a key role in human evolution.
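A minimal simulation of the central object of the thesis, a Galton-Watson branching process with a geometric offspring distribution (whose probability generating function is fractional linear), is sketched below. The parameter values are arbitrary and chosen only for illustration.

```python
import numpy as np

def extinction_fraction(mean_offspring=1.2, n_generations=30, n_trials=1000, seed=0):
    """Simulate Galton-Watson lineages with geometric offspring and report
    the fraction of lineages extinct after n_generations."""
    rng = np.random.default_rng(seed)
    # A geometric distribution on {0, 1, 2, ...} with success prob p has mean (1-p)/p.
    p = 1.0 / (1.0 + mean_offspring)
    extinct = 0
    for _ in range(n_trials):
        population = 1
        for _ in range(n_generations):
            if population == 0:
                break
            # Total offspring of the current generation (numpy's geometric starts at 1).
            population = int(rng.geometric(p, size=population).sum() - population)
        extinct += population == 0
    return extinct / n_trials

for m in (0.8, 1.0, 1.2):
    print(f"mean offspring {m}: extinct fraction = {extinction_fraction(m):.3f}")
```

For mean offspring at or below one the lineages die out almost surely, while for the supercritical case a fraction survives, matching the classical closed-form result for the fractional linear generating function.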
Evaluation of grid generation technologies from an applied perspective
NASA Technical Reports Server (NTRS)
Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.
1995-01-01
An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation and use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. The conclusion was made that a single grid generation methodology is not universally suited for all CFD applications due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies including structured, unstructured, and hybrid procedures. The full integration of the geometric modeling and grid generation allows implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.
van Strien, Maarten J; Slager, Cornelis T J; de Vries, Bauke; Grêt-Regamey, Adrienne
2016-06-01
Many studies have assessed the effect of landscape patterns on spatial ecological processes by simulating these processes in computer-generated landscapes with varying composition and configuration. To generate such landscapes, various neutral landscape models have been developed. However, the limited set of landscape-level pattern variables included in these models is often inadequate to generate landscapes that reflect real landscapes. In order to achieve more flexibility and variability in the generated landscape patterns, a more complete set of class- and patch-level pattern variables should be implemented in these models. These enhancements have been implemented in Landscape Generator (LG), which is a software tool that uses optimization algorithms to generate landscapes that match user-defined target values. LG was originally developed for participatory spatial planning at small scales; here we enhanced its usability and demonstrated how it can be used for larger-scale ecological studies. First, we used LG to recreate landscape patterns from a real landscape (i.e., a mountainous region in Switzerland). Second, we generated landscape series with incrementally changing pattern variables, which could be used in ecological simulation studies. We found that LG was able to recreate landscape patterns that approximate those of real landscapes. Furthermore, we successfully generated landscape series that would not have been possible with traditional neutral landscape models. LG is a promising novel approach for generating neutral landscapes and enables testing of new hypotheses regarding the influence of landscape patterns on ecological processes. LG is freely available online.
Schiek, Richard [Albuquerque, NM
2006-06-20
A method of generating two-dimensional masks from a three-dimensional model comprises providing a three-dimensional model representing a micro-electro-mechanical structure for manufacture and a description of process mask requirements, reducing the three-dimensional model to a topological description of unique cross sections, and selecting candidate masks from the unique cross sections and the cross section topology. The method further can comprise reconciling the candidate masks based on the process mask requirements description to produce two-dimensional process masks.
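As a toy illustration of the reduction step described above (reducing a 3D model to a description of its unique cross sections), the fragment below works on a voxelized model. The voxel representation and the NumPy approach are assumptions made for this example, not the patented method.

```python
import numpy as np

def unique_cross_sections(voxels):
    """voxels: 3D boolean array (z, y, x). Return the unique 2D layers and,
    for each z, the index of the unique layer it corresponds to."""
    layers = voxels.reshape(voxels.shape[0], -1)
    unique_rows, inverse = np.unique(layers, axis=0, return_inverse=True)
    unique_layers = unique_rows.reshape(-1, *voxels.shape[1:])
    return unique_layers, inverse

# Toy MEMS-like model: a wide base with a narrower post on top.
model = np.zeros((6, 8, 8), dtype=bool)
model[:2, :, :] = True            # base: layers 0-1
model[2:, 3:5, 3:5] = True        # post: layers 2-5

masks, layer_map = unique_cross_sections(model)
print("unique cross sections:", len(masks))     # two candidate masks
print("layer -> mask index:", layer_map)
```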
The wandering self: Tracking distracting self-generated thought in a cognitively demanding context.
Huijser, Stefan; van Vugt, Marieke K; Taatgen, Niels A
2018-02-01
We investigated how self-referential processing (SRP) affected self-generated thought in a complex working memory (CWM) task to test the predictions of a computational cognitive model. This model described self-generated thought as resulting from competition between task-related and distracting processes, and predicted that self-generated thought interferes with rehearsal, reducing memory performance. SRP was hypothesized to influence this goal competition process by encouraging distracting self-generated thinking. We used a spatial CWM task to examine if SRP instigated such thoughts, and employed eye-tracking to examine rehearsal interference in eye movements and self-generated thinking in pupil size. The results showed that SRP was associated with lower performance and higher rates of self-generated thought. Self-generated thought was associated with less rehearsal, and we observed a smaller pupil size for mind wandering. We conclude that SRP can instigate self-generated thought and that goal competition provides a likely explanation for how self-generated thought arises in a demanding task. Copyright © 2017 Elsevier Inc. All rights reserved.
Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André
2016-01-01
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Reduced Order Models Via Continued Fractions Applied to Control Systems,
1980-09-01
a simple model of a nuclear reactor power generator [20, 21]. The heat generating process of a nuclear reactor is dependent upon the mechanism ... called fission (a fragmentation of matter). The power generated by this process is directly related to the population of neutrons, n(t), and can be ... (150) ... (151) where δk(t) = kc(t) - an(t) (152). The variable δk(t) is the input to the process and is given the name "reactivity".
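The equations in the extract above are fragmentary, but the model described, a neutron population n(t) driven by a reactivity input δk(t), is of the point-kinetics type. Below is a minimal sketch of a standard one-delayed-group point-kinetics model with illustrative parameter values; this is a textbook form, not necessarily the report's exact equations (150)-(152).

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, decay constant, generation time

def reactivity(t):
    return 0.001 if t >= 1.0 else 0.0    # small step in reactivity at t = 1 s

def point_kinetics(t, y):
    n, c = y                             # neutron population, precursor concentration
    dn = (reactivity(t) - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (lam * Lambda)]        # equilibrium initial condition
sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, max_step=0.01)
print("relative power at t = 10 s:", sol.y[0, -1])
```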
Influence of winding construction on starter-generator thermal processes
NASA Astrophysics Data System (ADS)
Grachev, P. Yu; Bazarov, A. A.; Tabachinskiy, A. S.
2018-01-01
Dynamic processes in starter-generators feature high winding overcurrents, which can lead to insulation overheating and faulty operation modes. For hybrid and electric vehicles, a new high-efficiency construction of induction machine windings is proposed. Stator thermal processes need to be considered in the most difficult operation modes. The article describes the construction features of the new compact stator windings, presents electromagnetic and thermal models of the processes in the stator windings, and explains the influence of the innovative construction on thermal processes. The models are based on the finite element method.
Frank, Steven A.
2010-01-01
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
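The claim about informational constraints can be written down compactly. The block below restates the two standard maximum-entropy results the abstract alludes to; these are textbook derivations, not text taken from the paper: constraining only the mean yields the exponential pattern, and constraining the mean and variance yields the Gaussian.

```latex
\text{Mean only } (x \ge 0):\quad
\max_{p}\Big\{-\!\int_0^{\infty} p(x)\ln p(x)\,dx\Big\}
\ \text{s.t.}\ \int_0^{\infty} p(x)\,dx = 1,\ \int_0^{\infty} x\,p(x)\,dx = \mu
\ \Longrightarrow\ p(x) = \tfrac{1}{\mu}\,e^{-x/\mu}.
\\[6pt]
\text{Mean and variance } (x \in \mathbb{R}):\quad
\text{adding } \int (x-\mu)^2 p(x)\,dx = \sigma^2
\ \Longrightarrow\ p(x) = \tfrac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(x-\mu)^2/(2\sigma^2)}.
```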
A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.
Richter, Mathis; Lins, Jonas; Schöner, Gregor
2017-01-01
Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases-all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition. Copyright © 2017 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
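For readers unfamiliar with dynamic neural fields, the sketch below integrates a generic one-dimensional Amari-style field; this is a standard formulation with arbitrary parameters, not the specific multi-field architecture of the paper. Localized input drives the field above threshold and, with these parameters, lateral interactions keep a self-sustained peak after the input is removed, the kind of representation "close to sensorimotor surfaces" referred to above.

```python
import numpy as np

# One-dimensional dynamic neural field: tau * du/dt = -u + h + w * f(u) + input
n, tau, h = 101, 10.0, -2.0                     # field size, time constant, resting level
x = np.arange(n)
dx = x[:, None] - x[None, :]
w = 4.0 * np.exp(-dx**2 / (2 * 3.0**2)) - 1.0   # local excitation, global inhibition

def f(u):                                       # sigmoidal output nonlinearity
    return 1.0 / (1.0 + np.exp(-4.0 * u))

u = np.full(n, h)
stimulus = 6.0 * np.exp(-(x - 50)**2 / (2 * 4.0**2))

for step in range(400):
    inp = stimulus if step < 200 else 0.0       # remove the input halfway through
    du = (-u + h + w @ f(u) + inp) / tau
    u = u + du                                  # Euler step, dt = 1
print("peak activation after input removed:", round(float(u.max()), 2))
```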
On storm movement and its applications
NASA Astrophysics Data System (ADS)
Niemczynowicz, Janusz
Rainfall-runoff models applicable to the design and analysis of sewage systems in urban areas are continually developed further in order to better represent the different physical processes occurring on an urban catchment. However, one important part of the modelling procedure, the generation of the rainfall input, is still a weak point. The main problem is the lack of adequate rainfall data representing the temporal and spatial variations of the natural rainfall process. Storm movement is a natural phenomenon which influences urban runoff. However, rainfall movement and its influence on the runoff generation process are not represented in presently available urban runoff simulation models. A physical description of rainfall movement and its parameters is given based on detailed measurements performed with twelve gauges in Lund, Sweden. The paper discusses the significance of rainfall movement for the runoff generation process and gives suggestions on how the rainfall movement parameters may be used in runoff modelling.
NASA Astrophysics Data System (ADS)
Nguyen, Duy
2012-07-01
Digital Elevation Models (DEMs) are used in many earth-science applications, such as topographic mapping, environmental modeling, rainfall-runoff studies, landslide hazard zonation, and seismic source modeling. During the last years, a multitude of scientific applications of Synthetic Aperture Radar Interferometry (InSAR) techniques has evolved. It has been shown that InSAR is an established technique for generating high-quality DEMs from spaceborne and airborne data, and that it has advantages over other methods for the generation of large-area DEMs. However, the processing of InSAR data is still a challenging task. This paper describes the InSAR operational steps and processing chain for DEM generation from Single Look Complex (SLC) SAR data and compares a satellite SAR estimate of surface elevation with a DEM derived from a topographic map. The operational steps are performed in three major stages: data search, data processing, and product validation. The data processing stage is further divided into five steps: data pre-processing, co-registration, interferogram generation, phase unwrapping, and geocoding. The data processing steps have been tested with ERS-1/2 data using the Delft Object-oriented Interferometric (DORIS) InSAR processing software. Results of applying the described processing steps to a real data set are presented.
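Of the processing steps listed, interferogram generation is the most compact to illustrate: given two co-registered SLC images, the interferometric phase is the phase of the complex product of one image with the conjugate of the other. The NumPy sketch below shows this step on synthetic data; it illustrates the principle only, not the DORIS implementation, and the phase ramp standing in for topography is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (256, 256)

# Synthetic co-registered SLC images: common backscatter, with the second image
# carrying an extra topographic/deformation phase ramp plus a little noise.
amplitude = rng.rayleigh(1.0, shape)
common_phase = rng.uniform(-np.pi, np.pi, shape)
rows = np.arange(shape[0])[:, None]
topo_phase = 0.05 * rows                       # phase ramp standing in for topography

slc1 = amplitude * np.exp(1j * common_phase)
slc2 = (amplitude * np.exp(1j * (common_phase + topo_phase))
        + 0.05 * (rng.normal(size=shape) + 1j * rng.normal(size=shape)))

interferogram = slc1 * np.conj(slc2)
phase = np.angle(interferogram)                # wrapped phase, to be unwrapped later
print("wrapped phase range:", phase.min(), phase.max())
```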
On the next generation of reliability analysis tools
NASA Technical Reports Server (NTRS)
Babcock, Philip S., IV; Leong, Frank; Gai, Eli
1987-01-01
The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
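To give a concrete sense of the kind of Markov reliability model such a tool would construct automatically, here is a small hand-built example: a generic two-component parallel system with assumed failure and repair rates, not output of the proposed tool.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1        # per-hour failure and repair rates of each component

# States: 0 = both components up, 1 = one up / one down, 2 = both down (system failed).
Q = np.array([
    [-2 * lam,        2 * lam,  0.0],
    [      mu, -(mu + lam),     lam],
    [     0.0,         0.0,     0.0],   # state 2 is absorbing: system failure
])

p0 = np.array([1.0, 0.0, 0.0])           # start with both components up
for t in (100.0, 1000.0, 10000.0):
    p_t = p0 @ expm(Q * t)               # state probabilities at time t
    print(f"t = {t:7.0f} h  reliability = {1 - p_t[2]:.6f}")
```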
TLS for generating multi-LOD of 3D building model
NASA Astrophysics Data System (ADS)
Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.
2014-02-01
The popularity of Terrestrial Laser Scanners (TLS) to capture three dimensional (3D) objects has been used widely for various applications. Development in 3D models has also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications. However, different applications require different kind of 3D models. Since a building is an important object, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process of the point cloud can be explored. TLS will be used to capture all the building details to generate multi-LOD. This task, in previous works, involves usually the integration of several sensors. However, in this research, point cloud from TLS will be processed to generate the LOD3 model. LOD2 and LOD1 will then be generalized from the resulting LOD3 model. Result from this research is a guiding process to generate the multi-LOD of 3D building starting from LOD3 using TLS. Lastly, the visualization for multi-LOD model will also be shown.
Modeling and Simulation of the Economics of Mining in the Bitcoin Market.
Cocco, Luisanna; Marchesi, Michele
2016-01-01
On January 3, 2009, Satoshi Nakamoto gave rise to the "Bitcoin Blockchain", creating the first block of the chain by hashing on his computer's central processing unit (CPU). Since then, the hash calculations needed to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some "stylized facts" found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit-root property, the fat-tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network.
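A small calculation of the kind the mining-economy part of such a model has to make for each miner is shown below: expected daily revenue scales with the miner's share of the network hash rate, and profit subtracts the electricity expenditure. This is a back-of-the-envelope sketch with made-up hardware and price figures, not the agent-based model itself.

```python
def daily_mining_profit(hashrate_ths, power_w, network_ths, btc_price_usd,
                        block_reward_btc=6.25, blocks_per_day=144,
                        electricity_usd_per_kwh=0.10):
    """Expected daily profit of one miner, ignoring fees, pool costs and variance."""
    share = hashrate_ths / network_ths                 # fraction of the network hash rate
    revenue_btc = share * blocks_per_day * block_reward_btc
    revenue_usd = revenue_btc * btc_price_usd
    energy_kwh = power_w / 1000.0 * 24.0
    cost_usd = energy_kwh * electricity_usd_per_kwh
    return revenue_usd - cost_usd

# Example ASIC-era figures (illustrative only).
print(daily_mining_profit(hashrate_ths=100.0, power_w=3000.0,
                          network_ths=4.0e8, btc_price_usd=30000.0))
```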
A Learning Framework for Winner-Take-All Networks with Stochastic Synapses.
Mostafa, Hesham; Cauwenberghs, Gert
2018-06-01
Many recent generative models make use of neural networks to transform the probability distribution of a simple low-dimensional noise process into the complex distribution of the data. This raises the question of whether biological networks operate along similar principles to implement a probabilistic model of the environment through transformations of intrinsic noise processes. The intrinsic neural and synaptic noise processes in biological networks, however, are quite different from the noise processes used in current abstract generative networks. This, together with the discrete nature of spikes and local circuit interactions among the neurons, raises several difficulties when using recent generative modeling frameworks to train biologically motivated models. In this letter, we show that a biologically motivated model based on multilayer winner-take-all circuits and stochastic synapses admits an approximate analytical description. This allows us to use the proposed networks in a variational learning setting where stochastic backpropagation is used to optimize a lower bound on the data log likelihood, thereby learning a generative model of the data. We illustrate the generality of the proposed networks and learning technique by using them in a structured output prediction task and a semisupervised learning task. Our results extend the domain of application of modern stochastic network architectures to networks where synaptic transmission failure is the principal noise mechanism.
Effective stochastic generator with site-dependent interactions
NASA Astrophysics Data System (ADS)
Khamehchi, Masoumeh; Jafarpour, Farhad H.
2017-11-01
It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions similar to the particles in an exclusion process, or there can be many arbitrary particles at a single site in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we will show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.
NASA Astrophysics Data System (ADS)
Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji
Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.
The heuristic-analytic theory of reasoning: extension and evaluation.
Evans, Jonathan St B T
2006-06-01
An extensively revised heuristic-analytic theory of reasoning is presented incorporating three principles of hypothetical thinking. The theory assumes that reasoning and judgment are facilitated by the formation of epistemic mental models that are generated one at a time (singularity principle) by preconscious heuristic processes that contextualize problems in such a way as to maximize relevance to current goals (relevance principle). Analytic processes evaluate these models but tend to accept them unless there is good reason to reject them (satisficing principle). At a minimum, analytic processing of models is required so as to generate inferences or judgments relevant to the task instructions, but more active intervention may result in modification or replacement of default models generated by the heuristic system. Evidence for this theory is provided by a review of a wide range of literature on thinking and reasoning.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Error Generation in CATS-Based Agents
NASA Technical Reports Server (NTRS)
Callantine, Todd
2003-01-01
This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.
ERIC Educational Resources Information Center
Bogiages, Christopher A.; Lotter, Christine
2011-01-01
In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…
An ontology model for nursing narratives with natural language generation technology.
Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung
2013-01-01
The purpose of this study was to develop an ontology model to generate nursing narratives that are as natural as human language from the entity-attribute-value triplets of a detailed clinical model using natural language generation technology. The model was based on the types of information and the documentation time of the information along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.
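A toy example of the generation step, turning entity-attribute-value triplets from a detailed clinical model into a narrative sentence, is sketched below. The triplets, templates, and vocabulary are invented for illustration and do not come from the paper's ontology.

```python
# Hypothetical EAV triplets documented during the assessment phase of the nursing process.
triplets = [
    ("pain", "location", "lower back"),
    ("pain", "intensity", "7/10"),
    ("skin", "condition", "intact"),
]

# Simple template-based realization keyed on the entity (illustrative only).
templates = {
    "pain": "The patient reports pain in the {location} with an intensity of {intensity}.",
    "skin": "The skin is {condition}.",
}

def generate_narrative(triplets, templates):
    values = {}
    for entity, attribute, value in triplets:
        values.setdefault(entity, {})[attribute] = value
    return " ".join(templates[entity].format(**attrs) for entity, attrs in values.items())

print(generate_narrative(triplets, templates))
```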
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements help to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) at multiple levels of detail (Mixed and ReverseLoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Castel Masegra in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Griffin, William A.; Li, Xun
2016-01-01
Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitate that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provide a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319
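To illustrate the generative side of such a model, the sketch below samples an affect-state sequence from a plain hidden semi-Markov model with explicit state durations. This is a simplified stand-in with invented states and parameters; the paper's model additionally places a hierarchical Dirichlet process prior over these quantities.

```python
import numpy as np

states = ["neutral", "positive", "negative"]
trans = np.array([[0.0, 0.6, 0.4],          # transition probabilities between distinct states
                  [0.7, 0.0, 0.3],
                  [0.7, 0.3, 0.0]])
duration_mean = np.array([8.0, 5.0, 4.0])   # mean dwell time (in seconds) per state

def sample_sequence(total_time=60, seed=0):
    rng = np.random.default_rng(seed)
    s, t, seq = 0, 0, []
    while t < total_time:
        d = 1 + rng.poisson(duration_mean[s] - 1)    # explicit duration, at least 1 s
        seq.append((states[s], t, min(t + d, total_time)))
        t += d
        s = rng.choice(3, p=trans[s])                # semi-Markov: no self-transitions
    return seq

for state, start, end in sample_sequence():
    print(f"{start:3d}-{end:3d} s  {state}")
```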
A New Model that Generates Lotka's Law.
ERIC Educational Resources Information Center
Huber, John C.
2002-01-01
Develops a new model for a process that generates Lotka's Law. Topics include measuring scientific productivity through the number of publications; rate of production; career duration; randomness; Poisson distribution; computer simulations; goodness-of-fit; theoretical support for the model; and future research. (Author/LRW)
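The ingredients listed in the abstract (a productivity rate, a career duration, and Poisson randomness) can be put together in a few lines to see whether a Lotka-like frequency distribution emerges. The particular rate and career-length distributions below are assumptions made for this sketch, not Huber's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_authors = 50_000

# Each author gets a random productivity rate (papers/year) and career duration (years).
rates = rng.exponential(scale=0.5, size=n_authors)
careers = rng.exponential(scale=8.0, size=n_authors)

# Publication counts are Poisson with mean rate * duration.
papers = rng.poisson(rates * careers)
papers = papers[papers > 0]              # Lotka's law concerns authors with >= 1 paper

counts = np.bincount(papers)
for k in range(1, 6):
    print(f"authors with {k} paper(s): {counts[k] / len(papers):.3f}")
# Lotka's law predicts the fraction with k papers to fall off roughly as 1/k^2.
```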
Generative electronic background music system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazurowski, Lukasz
In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is located between other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by the properties described further in the paper. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models, a host-model. The general mechanism is presented, including an example of synthesized output compositions.
Fite, Jennifer E.; Bates, John E.; Holtzworth-Munroe, Amy; Dodge, Kenneth A.; Nay, Sandra Y.; Pettit, Gregory S.
2012-01-01
This study explored the K. A. Dodge (1986) model of social information processing as a mediator of the association between interparental relationship conflict and subsequent offspring romantic relationship conflict in young adulthood. The authors tested 4 social information processing stages (encoding, hostile attributions, generation of aggressive responses, and positive evaluation of aggressive responses) in separate models to explore their independent effects as potential mediators. There was no evidence of mediation for encoding and attributions. However, there was evidence of significant mediation for both the response generation and response evaluation stages of the model. Results suggest that the ability of offspring to generate varied social responses and effectively evaluate the potential outcome of their responses at least partially mediates the intergenerational transmission of relationship conflict. PMID:18540765
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of an algorithm for physical replication of patient-specific human bone and construction of corresponding implant/insert RP models by using a Reverse Engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet-based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other side, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modelling techniques developed on 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. This point cloud data is used for construction of a 3D CAD model by fitting B-spline curves through these points and then fitting surfaces between these curve networks by using swept blend techniques. This can also be achieved by generating the triangular mesh directly from the 3D point cloud data, without developing any surface model, using commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.
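The final meshing step mentioned above, obtaining a triangulated surface suitable for an STL file from a 3D point cloud via Delaunay tetrahedralization, can be sketched with SciPy. The random point cloud and the use of the tetrahedralization's boundary (its convex hull) are simplifications for illustration, since a real bone surface is not convex and would need a more careful surface extraction.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.normal(size=(500, 3))          # stand-in for CT-derived bone surface points

tet = Delaunay(points)                      # Delaunay tetrahedralization of the cloud
surface = tet.convex_hull                   # boundary triangles (vertex indices)

def write_ascii_stl(path, points, triangles):
    with open(path, "w") as f:
        f.write("solid bone\n")
        for i, j, k in triangles:
            a, b, c = points[i], points[j], points[k]
            n = np.cross(b - a, c - a)
            n = n / (np.linalg.norm(n) + 1e-12)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid bone\n")

write_ascii_stl("cloud_surface.stl", points, surface)
print("triangles written:", len(surface))
```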
Rethinking the Default Construction of Multimodel Climate Ensembles
Rauser, Florian; Gleckler, Peter; Marotzke, Jochem
2015-07-21
Here, we discuss the current code of practice in the climate sciences to routinely create climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument to rethink this process. First, the differences between generations of ensembles corresponding to different CMIP phases in key climate quantities are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. Finally, with an emphasis on easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.
Experimental investigation and model verification for a GAX absorber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, S.C.; Christensen, R.N.
1996-12-31
In the ammonia-water generator-absorber heat exchange (GAX) absorption heat pump, the heat and mass transfer processes which occur between the generator and absorber are the most crucial in assuring that the heat pump will achieve COPs competitive with those of current technologies. In this study, a model is developed for the heat and mass transfer processes that occur in a counter-current vertical fluted tube absorber (VFTA) with inserts. Correlations for heat and mass transfer in annuli are used to model the processes in the VFTA. Experimental data are used to validate the model for three different insert geometries. Comparison of model results with experimental data provides insight into model corrections necessary to bring the model into agreement with the physical phenomena observed in the laboratory.
An experimental study of factors affecting the selective inhibition of sintering process
NASA Astrophysics Data System (ADS)
Asiabanpour, Bahram
Selective Inhibition of Sintering (SIS) is a new rapid prototyping method that builds parts on a layer-by-layer fabrication basis. SIS works by joining powder particles through sintering in the part's body, and by inhibiting sintering in selected powder areas. The objective of this research has been to improve the new SIS process, which was invented at USC. The process improvement is based on statistical design of experiments. To conduct the needed experiments, a working machine and related path-generator software were required. The machine and its control software were made available prior to this research; the path-generator algorithms and software had to be created. This program should obtain model geometry data from a CAD file and generate an appropriate path file for the printer nozzle. The program should also generate a simulation file for path-file inspection using virtual prototyping. The activities related to the path generator constitute the first part of this research, which has resulted in an efficient path generator. In addition, to reach an acceptable level of accuracy, strength, and surface quality in the fabricated parts, all effective factors in the SIS process should be identified and controlled. Simultaneous analytical and experimental studies were conducted to recognize effective factors and to control the SIS process. Also, it was known that polystyrene was the most appropriate polymer powder and saturated potassium iodide the most effective inhibitor among the available candidate materials. In addition, statistical tools were applied to improve the desirable properties of the parts fabricated by the SIS process. An investigation of part strength was conducted using Response Surface Methodology (RSM), and a region of acceptable operating conditions for part strength was found. Then, through analysis of the experimental results, the impact of the factors on final part surface quality and dimensional accuracy was modeled. After developing a desirability function model, process operating conditions for maximum desirability were identified. Finally, the desirability model was validated.
McBride, Dawn M; Anne Dosher, Barbara
2002-09-01
Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.
Using Apex To Construct CPM-GOMS Models
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2006-01-01
A process for automatically generating computational models of human/computer interactions, as well as graphical and textual representations of the models, has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to be used to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model with measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well. While these results are promising, there is a need for further development of the process. Moreover, it will also be necessary to test other, more complex models: the actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.
Hodge, N. E.; Ferencz, R. M.; Vignes, R. M.
2016-05-30
Selective laser melting (SLM) is an additive manufacturing process in which multiple, successive layers of metal powders are heated via laser in order to build a part. Modeling of SLM requires consideration of the complex interaction between heat transfer and solid mechanics. The present work describes the authors' initial efforts to validate their first-generation model. In particular, a comparison of model-generated solid mechanics results, including both deformation and stresses, is presented. Additionally, results of various perturbations of the process parameters and modeling strategies are discussed.
Future requirements in surface modeling and grid generation
NASA Technical Reports Server (NTRS)
Cosner, Raymond R.
1995-01-01
The past ten years have seen steady progress in surface modeling procedures, and wholesale changes in grid generation technology. Today, it seems fair to state that a satisfactory grid can be developed to model nearly any configuration of interest. The issues at present focus on operational concerns such as cost and quality. Continuing evolution of the engineering process is placing new demands on the technologies of surface modeling and grid generation. In the evolution toward a multidisciplinary, analysis-based design environment, methods developed for Computational Fluid Dynamics are finding acceptance in many additional applications. These two trends, the normal evolution of the process and a watershed shift toward concurrent and multidisciplinary analysis, will be considered in assessing current capabilities and needed technological improvements.
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.; Leung, L. R.
2013-11-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate changes. Integration of these processes into Earth system models (e.g., community land models (CLMs)), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) computational cost is prohibitively expensive to simulate biogeochemical processes in land models due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module, Next Generation BioGeoChemical Module (NGBGC), version 1.0, with a generic algorithm and reaction database so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter, and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into CLM. To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems. The method presented here could in theory be applied to simulate biogeochemical cycles in other Earth system models.
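To make the core idea concrete, the Python sketch below shows a generic reaction database from which the pool ODEs are assembled automatically rather than hand-coded, which is the essence of the framework described above. The pools, reaction list and first-order rate constants are hypothetical toy values, not NGBGC or CLM parameterizations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative "reaction database": each entry names a source pool, a sink pool
# and a first-order rate constant (per day). Pools and rates are placeholders.
REACTIONS = [
    {"from": "litter",       "to": "soil_organic", "k": 0.10},  # decomposition
    {"from": "soil_organic", "to": "co2",          "k": 0.02},  # mineralization
    {"from": "soil_organic", "to": "plant",        "k": 0.01},  # uptake (toy)
]
POOLS = ["plant", "litter", "soil_organic", "co2"]
IDX = {p: i for i, p in enumerate(POOLS)}

def rhs(t, y):
    """Assemble dy/dt automatically from the reaction list (no hand-written ODEs)."""
    dydt = np.zeros_like(y)
    for r in REACTIONS:
        flux = r["k"] * y[IDX[r["from"]]]
        dydt[IDX[r["from"]]] -= flux
        dydt[IDX[r["to"]]] += flux
    return dydt

y0 = [10.0, 5.0, 50.0, 0.0]                # initial pool sizes (arbitrary units)
sol = solve_ivp(rhs, (0.0, 100.0), y0)     # pool trajectories over 100 days
```

Adding a new process (for example a phosphorus pathway) then amounts to appending entries to the reaction list rather than rewriting the solver code, which mirrors the motivation stated in the abstract.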
Stochastic modeling of the hypothalamic pulse generator activity.
Camproux, A C; Thalabard, J C; Thomas, G
1994-11-01
Luteinizing hormone (LH) is released by the pituitary in discrete pulses. In the monkey, the appearance of LH pulses in the plasma is invariably associated with sharp increases (i.e., volleys) in the frequency of the hypothalamic pulse generator electrical activity, so that continuous monitoring of this activity by telemetry provides a unique means to study the temporal structure of the mechanism generating the pulses. To assess whether the times of occurrence and durations of previous volleys exert significant influence on the timing of the next volley, we used a class of periodic counting process models that specify the stochastic intensity of the process as the product of two factors: 1) a periodic baseline intensity and 2) a stochastic regression function with covariates representing the influence of the past. This approach allows the characterization of circadian modulation and memory range of the process underlying hypothalamic pulse generator activity, as illustrated by fitting the model to experimental data from two ovariectomized rhesus monkeys.
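A counting process of this general form can be written as lambda(t) = lambda0(t mod 24) * exp(beta' z(t)), with lambda0 the periodic baseline and z(t) covariates built from the event history. The Python sketch below simulates such a process by thinning; the circadian baseline, the single refractory covariate and the value of beta are illustrative choices, not the quantities fitted in the paper.

```python
import numpy as np

def simulate_volleys(beta, t_max=72.0, seed=0):
    """Simulate volley onset times (hours) from an intensity
    lambda(t) = baseline(t mod 24) * exp(beta * z(t)), via thinning.
    z(t) = 1 if a volley occurred within the previous 3 h, else 0."""
    rng = np.random.default_rng(seed)
    baseline = lambda h: 0.5 + 0.4 * np.cos(2.0 * np.pi * h / 24.0)  # volleys/hour
    lam_max = 0.9 * np.exp(max(beta, 0.0))   # valid thinning bound since 0 <= z <= 1
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate event from the bounding process
        if t >= t_max:
            break
        z = 1.0 if times and (t - times[-1]) < 3.0 else 0.0
        lam = baseline(t % 24.0) * np.exp(beta * z)
        if rng.uniform() < lam / lam_max:    # accept with probability lam / lam_max
            times.append(t)
    return times

print(simulate_volleys(beta=-1.0))  # negative beta encodes refractoriness after a volley
```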
Flagg, Jennifer L; Lane, Joseph P; Lockett, Michelle M
2013-02-15
Traditional government policies suggest that upstream investment in scientific research is necessary and sufficient to generate technological innovations. The expected downstream beneficial socio-economic impacts are presumed to occur through non-government market mechanisms. However, there is little quantitative evidence for such a direct and formulaic relationship between public investment at the input end and marketplace benefits at the impact end. Instead, the literature demonstrates that the technological innovation process involves a complex interaction between multiple sectors, methods, and stakeholders. The authors theorize that accomplishing the full process of technological innovation in a deliberate and systematic manner requires an operational-level model encompassing three underlying methods, each designed to generate knowledge outputs in different states: scientific research generates conceptual discoveries; engineering development generates prototype inventions; and industrial production generates commercial innovations. Given the critical roles of engineering and business, the entire innovation process should continuously consider the practical requirements and constraints of the commercial marketplace.The Need to Knowledge (NtK) Model encompasses the activities required to successfully generate innovations, along with associated strategies for effectively communicating knowledge outputs in all three states to the various stakeholders involved. It is intentionally grounded in evidence drawn from academic analysis to facilitate objective and quantitative scrutiny, and industry best practices to enable practical application. The Need to Knowledge (NtK) Model offers a practical, market-oriented approach that avoids the gaps, constraints and inefficiencies inherent in undirected activities and disconnected sectors. The NtK Model is a means to realizing increased returns on public investments in those science and technology programs expressly intended to generate beneficial socio-economic impacts.
Charged lepton flavor violation in a class of radiative neutrino mass generation models
NASA Astrophysics Data System (ADS)
Chowdhury, Talal Ahmed; Nasri, Salah
2018-04-01
We investigate the charged lepton flavor violating processes μ → eγ, μ → eeē, and μ–e conversion in nuclei for a class of three-loop radiative neutrino mass generation models with electroweak multiplets of increasing order. We find that, because of certain cancellations among various one-loop diagrams which give the dipole and nondipole contributions in an effective μeγ vertex and a Z-penguin contribution in an effective μeZ vertex, the flavor violating processes μ → eγ and μ–e conversion in nuclei become highly suppressed compared to the μ → eeē process. Therefore, the observation of such a pattern in LFV processes may reveal the radiative mechanism behind neutrino mass generation.
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
2000-09-01
Generalised architecture for languages, encyclopedia and nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language-independent model describing the semantics, allowing computer processing and multiple reuses as well as natural language understanding applications, to facilitate the sharing and maintenance of consistent medical knowledge. During the European Union 4th Framework Programme project GALEN-IN-USE, and later within two contracts with the national health authorities, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures, named CCAM, in a minority-language country, France. On one hand, we contributed to a language-independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we support the traditional, highly labour-intensive process for creating a new coding system in medicine with artificial intelligence tools that use a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW (for classification workbench) to process French professional medical language rubrics, produced by the national colleges of surgeons' domain experts, into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation, on the one hand, we generate controlled French natural language with the LNAT natural language generator to support the finalization of the linguistic labels (first generation) in relation to the meanings of the conceptual system structure. On the other hand, the CLAW classification manager proves very powerful for retrieving the initial domain experts' rubric list with different categories of concepts (second generation) within a semantically structured representation (third generation) that bridges to the detailed terminology of the electronic patient record.
Yatagai, Tomonori; Ohkawa, Yoshiko; Kubo, Daichi; Kawase, Yoshinori
2017-01-02
The hydroxyl radical generation in an electro-Fenton process with a gas-diffusion electrode, which is strongly linked with electrochemical generation of hydrogen peroxide and the iron redox cycle, was studied. The OH radical generation subsequent to electrochemical generation of H2O2 was examined at constant potential for Fe2+ dosages ranging from 0 to 1.0 mM. The amount of generated OH radical initially increased and gradually decreased after the maximum was reached. The initial rate of OH radical generation increased for Fe2+ dosages below 0.25 mM and remained constant at higher Fe2+ dosages. At higher Fe2+ dosages the precipitation of Fe might inhibit the enhancement of OH radical generation. Experiments on the decolorization and total organic carbon (TOC) removal of the azo-dye Orange II by the electro-Fenton process were conducted, and quick decolorization but slow TOC removal of Orange II was found. To quantify the linkages of OH radical generation with the dynamic behavior of electro-chemically generated H2O2 and the iron redox cycle, and to investigate the effects of OH radical generation on the decolorization and TOC removal of Orange II, novel reaction kinetic models were developed. The proposed models could satisfactorily clarify the linkages of OH radical generation with electro-chemically generated H2O2 and the iron redox cycle, and simulate the decolorization and TOC removal of Orange II by the electro-Fenton process.
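A minimal sketch of the kind of lumped kinetic model described here couples electro-generation of H2O2, the Fenton reaction, cathodic regeneration of Fe2+ and OH-radical attack on the dye. The rate constants, the constant H2O2 generation rate and the lumped first-order Fe3+ reduction step below are illustrative placeholders, not the values fitted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (units in comments); not fitted values from the study.
k_fenton = 63.0     # Fe2+ + H2O2 -> Fe3+ + OH.          (L mol^-1 s^-1)
k_red    = 1e-3     # cathodic Fe3+ -> Fe2+ reduction     (s^-1, lumped)
k_scav   = 1e9      # OH. + dye -> oxidation products     (L mol^-1 s^-1)
r_h2o2   = 1e-6     # electro-generation rate of H2O2     (mol L^-1 s^-1)

def rhs(t, y):
    h2o2, fe2, fe3, oh, dye = y
    r1 = k_fenton * fe2 * h2o2          # Fenton reaction
    r2 = k_red * fe3                    # iron redox cycling back to Fe2+
    r3 = k_scav * oh * dye              # dye decolorization by OH radical
    return [r_h2o2 - r1,   # H2O2
            -r1 + r2,      # Fe2+
            r1 - r2,       # Fe3+
            r1 - r3,       # OH radical
            -r3]           # dye (Orange II surrogate)

y0 = [0.0, 2.5e-4, 0.0, 0.0, 1e-4]      # mol/L; 0.25 mM Fe2+ dosage as an example
sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="LSODA")   # stiff-friendly solver
```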
MATTS- A Step Towards Model Based Testing
NASA Astrophysics Data System (ADS)
Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.
2016-08-01
In this paper we describe a model-based approach to testing of on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve the problems addressed above, the software engineering process has to be improved in at least two aspects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can also be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines), generate abstract test cases, and then convert them to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to an on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
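As a toy illustration of generating abstract test cases from a state machine (here, transition coverage via breadth-first search for a prefix reaching each transition), consider the Python sketch below. The states, events and transition table are hypothetical and unrelated to the actual on-board software used in the study.

```python
from collections import deque

# Hypothetical on-board-software mode machine: (state, event) -> next state.
TRANSITIONS = {
    ("SAFE", "activate"): "NOMINAL",
    ("NOMINAL", "fault"): "SAFE",
    ("NOMINAL", "start_payload"): "PAYLOAD",
    ("PAYLOAD", "stop_payload"): "NOMINAL",
    ("PAYLOAD", "fault"): "SAFE",
}

def abstract_test_cases(start="SAFE"):
    """Return one event sequence (abstract test case) covering every transition."""
    cases = []
    for (src, event), dst in TRANSITIONS.items():
        # Breadth-first search for a shortest event path from the start state to 'src'.
        frontier, seen, prefix = deque([(start, [])]), {start}, None
        while frontier:
            state, path = frontier.popleft()
            if state == src:
                prefix = path
                break
            for (s, e), nxt in TRANSITIONS.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [e]))
        if prefix is not None:
            # Abstract test = event sequence; expected final state is 'dst'.
            cases.append((prefix + [event], dst))
    return cases

for events, expected_state in abstract_test_cases():
    print(events, "->", expected_state)
```

Turning these abstract cases into concrete executable tests would then map each event to real telecommands and each expected state to observable telemetry, which is the conversion step the abstract describes.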
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
1999-01-01
GALEN has developed a new generation of terminology tools based on a language-independent concept reference model using a compositional formalism allowing computer processing and multiple reuses. During the 4th Framework Programme project Galen-In-Use, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On one hand, we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand, we support the traditional, highly labour-intensive process for creating a new coding system in medicine with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW to process French professional medical language rubrics produced by the national colleges of surgeons into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation, on one hand, we generate controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure. On the other hand, the third-generation classification manager proves very powerful for retrieving the initial professional rubrics with different categories of concepts within a semantic network.
Reliability Analysis and Standardization of Spacecraft Command Generation Processes
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Grenander, Sven; Evensen, Ken
2011-01-01
In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes. The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions. Applicable human reliability metrics for performing these atomic-level tasks are available. The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated. The PRA models are executed using data from human reliability data banks. The Periodic Table is related to the PRA models via Fault Links.
Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas
2013-01-01
The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from the lack of enough data to train and test the constructed models. We argue that the process of sample generation could be successfully automated by employing some sophisticated machine learning techniques. An automated sample generation framework could successfully complement the actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related. This independence eliminates the bias of having the produced approach cover only certain characteristics of the domain and lead to samples skewed towards one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates a Hierarchical Markov Model (HIMM), and the third model employs a genetic algorithm. Each model learns as many characteristics of the domain being analysed as possible and tries to incorporate the learned characteristics in generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness and applicability of the proposed multi-model framework.
NASA Astrophysics Data System (ADS)
Zangori, Laura; Forbes, Cory T.; Schwarz, Christina V.
2015-10-01
Opportunities to generate model-based explanations are crucial for elementary students, yet are rarely foregrounded in elementary science learning environments despite evidence that early learners can reason from models when provided with scaffolding. We used a quasi-experimental research design to investigate the comparative impact of a scaffold test condition consisting of embedded physical scaffolds within a curricular modeling task on third-grade (age 8-9) students' formulation of model-based explanations for the water cycle. This condition was contrasted to the control condition where third-grade students used a curricular modeling task with no embedded physical scaffolds. Students from each condition (n = 60 scaffolded; n = 56 unscaffolded) generated models of the water cycle before and after completion of a 10-week water unit. Results from quantitative analyses suggest that students in the scaffolded condition represented and linked more subsurface water process sequences with surface water process sequences than did students in the unscaffolded condition. However, results of qualitative analyses indicate that students in the scaffolded condition were less likely to build upon these process sequences to generate model-based explanations and experienced difficulties understanding their models as abstracted representations rather than recreations of real-world phenomena. We conclude that embedded curricular scaffolds may support students to consider non-observable components of the water cycle but, alone, may be insufficient for generation of model-based explanations about subsurface water movement.
Auto Code Generation for Simulink-Based Attitude Determination Control System
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This paper details the work done to auto-generate C code from a Simulink-Based Attitude Determination Control System (ADCS) to be used in target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. This generated code can be used for carrying out hardware-in-the-loop testing of components for a satellite in a convenient manner with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can bring new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken in order to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter lies within acceptable bounds. Thus, it can be said that the process is a success, since all the output requirements are met. Based on these results, it can be argued that this generated C code can be effectively used by any desired platform as long as it follows the specific memory requirements established in the Simulink model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Huan; Cheng, Liang; Chuah, Mooi Choo
In the generation, transmission, and distribution sectors of the smart grid, intelligence of field devices is realized by programmable logic controllers (PLCs). Many smart-grid subsystems are essentially cyber-physical energy systems (CPES): For instance, the power system process (i.e., the physical part) within a substation is monitored and controlled by a SCADA network with hosts running miscellaneous applications (i.e., the cyber part). To study the interactions between the cyber and physical components of a CPES, several co-simulation platforms have been proposed. However, the network simulators/emulators of these platforms do not include a detailed traffic model that takes into account the impacts of the execution model of PLCs on traffic characteristics. As a result, network traces generated by co-simulation only reveal the impacts of the physical process on the contents of the traffic generated by SCADA hosts, whereas the distinction between PLCs and computing nodes (e.g., a hardened computer running a process visualization application) has been overlooked. To generate realistic network traces using co-simulation for the design and evaluation of applications relying on accurate traffic profiles, it is necessary to establish a traffic model for PLCs. In this work, we propose a parameterized model for PLCs that can be incorporated into existing co-simulation platforms. We focus on the DNP3 subsystem of slave PLCs, which automates the processing of packets from the DNP3 master. To validate our approach, we extract model parameters from both the configuration and network traces of real PLCs. Simulated network traces are generated and compared against those from PLCs. Our evaluation shows that our proposed model captures the essential traffic characteristics of DNP3 slave PLCs, which can be used to extend existing co-simulation platforms and gain further insights into the behaviors of CPES.
This document summarizes the process followed to utilize GT-POWER engine model data and laboratory engine dyno test data to generate a full engine fuel consumption map which can be used by EPA's ALPHA vehicle simulations.
Supervised Gamma Process Poisson Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dylan Zachary
This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework, and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.
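For orientation, the sketch below implements the unsupervised gamma-Poisson factorization core that such models build on: observed counts are allocated to latent topics, and the gamma-distributed factors are resampled from their conjugate conditionals. It omits the supervision (the max-margin classifier on labels) and the gamma-process prior, using a fixed number of topics K and illustrative hyperparameters.

```python
import numpy as np

def gibbs_gamma_poisson(Y, K=5, a=0.1, b=0.1, n_iter=200, seed=0):
    """Gibbs sampler for Y[d,v] ~ Poisson(sum_k theta[d,k] * beta[k,v])
    with Gamma(a, rate=b) priors on theta and beta (toy hyperparameters)."""
    rng = np.random.default_rng(seed)
    D, V = Y.shape
    theta = rng.gamma(1.0, 1.0, size=(D, K))    # document-topic weights
    beta = rng.gamma(1.0, 1.0, size=(K, V))     # topic-word rates
    for _ in range(n_iter):
        # Allocate each count to latent topics (auxiliary multinomial step)
        Zdk, Zkv = np.zeros((D, K)), np.zeros((K, V))
        for d, v in zip(*Y.nonzero()):
            p = theta[d] * beta[:, v]
            z = rng.multinomial(int(Y[d, v]), p / p.sum())
            Zdk[d] += z
            Zkv[:, v] += z
        # Conjugate gamma updates (numpy's gamma uses shape and scale = 1/rate)
        theta = rng.gamma(a + Zdk, 1.0 / (b + beta.sum(axis=1)))
        beta = rng.gamma(a + Zkv, 1.0 / (b + theta.sum(axis=0))[:, None])
    return theta, beta

# Demo on a synthetic count matrix standing in for document-term counts
Y = np.random.default_rng(1).poisson(1.0, size=(20, 30))
theta, beta = gibbs_gamma_poisson(Y, K=3, n_iter=100)
```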
Toward a Generative Model of the Teaching-Learning Process.
ERIC Educational Resources Information Center
McMullen, David W.
Until the rise of cognitive psychology, models of the teaching-learning process (TLP) stressed external rather than internal variables. Models remained general descriptions until control theory introduced explicit system analyses. Cybernetic models emphasize feedback and adaptivity but give little attention to creativity. Research on artificial…
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis
2013-01-01
Changes to the design and development of our educational assessments are resulting in the unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…
Validating EHR documents: automatic schematron generation using archetypes.
Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph
2014-01-01
The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations.
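To make the generation step concrete, here is a small Python sketch that renders a list of constraints, of the kind that might be extracted from an archetype-derived object model, into Schematron rules with assert elements. The constraint contexts, tests and messages are hypothetical examples, not output of the actual generator or of the openEHR reference API.

```python
from xml.sax.saxutils import escape

# Hypothetical constraints harvested from an archetype/object model.
constraints = [
    {"context": "//observation/value",
     "test": "number(@value) >= 0 and number(@value) <= 300",
     "message": "Systolic blood pressure must lie between 0 and 300 mmHg."},
    {"context": "//observation",
     "test": "count(code) = 1",
     "message": "Every observation must carry exactly one code element."},
]

def to_schematron(constraints):
    """Render constraints as a minimal Schematron schema (one rule per context)."""
    quote = {'"': "&quot;"}
    lines = ['<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">',
             "  <sch:pattern>"]
    for c in constraints:
        ctx = escape(c["context"], quote)
        test = escape(c["test"], quote)
        msg = escape(c["message"])
        lines.append(f'    <sch:rule context="{ctx}">')
        lines.append(f'      <sch:assert test="{test}">{msg}</sch:assert>')
        lines.append("    </sch:rule>")
    lines += ["  </sch:pattern>", "</sch:schema>"]
    return "\n".join(lines)

print(to_schematron(constraints))
```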
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
Studies and comparison of currently utilized models for ablation in Electrothermal-chemical guns
NASA Astrophysics Data System (ADS)
Jia, Shenli; Li, Rui; Li, Xingwen
2009-10-01
Wall ablation is a key process taking place in the capillary plasma generator in electrothermal-chemical (ETC) guns, whose characteristics directly determine the generator's performance. In the present article, this ablation process is studied theoretically. Currently widely used mathematical models designed to describe such a process are analyzed and compared, including a recently developed kinetic model which takes into account the unsteady state in the plasma-wall transition region by dividing it into two sub-layers, a Knudsen layer and a collision-dominated non-equilibrium hydrodynamic layer; a model based on the Langmuir law; and a simplified model widely used for arc-wall interaction processes in circuit breakers, which assumes an empirically obtained proportionality factor and ablation enthalpy. The bulk plasma state and parameters are assumed to be the same while analyzing and comparing each model, in order to take into consideration only the differences caused by the models themselves. Finally, the ablation rate is calculated with each method and the differences are discussed.
Multi-Topic Tracking Model for dynamic social network
NASA Astrophysics Data System (ADS)
Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun
2016-07-01
The topic tracking problem has attracted much attention in recent decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on dynamic Bayesian networks, namely the Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes the influence phenomenon, the selection phenomenon, the document generative process and the evolution of textual topics into account. Specifically, in our MTTD model, a Gibbs Random Field is defined to model the influence of the historical status of users in the network and the interdependency between them, in order to account for the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interest in topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered to ensure the continuity of the topic itself in the topic evolution model. An Expectation Maximization (EM) algorithm is utilized to estimate the parameters of the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model performs better than Popular Event Tracking (PET) and the Dynamic Topic Model (DTM) in generalization performance, topic interpretability, topic content evolution and topic popularity evolution performance.
Models of charge pair generation in organic solar cells.
Few, Sheridan; Frost, Jarvist M; Nelson, Jenny
2015-01-28
Efficient charge pair generation is observed in many organic photovoltaic (OPV) heterojunctions, despite nominal electron-hole binding energies which greatly exceed the average thermal energy. Empirically, the efficiency of this process appears to be related to the choice of donor and acceptor materials, the resulting sequence of excited state energy levels and the structure of the interface. In order to establish a suitable physical model for the process, a range of different theoretical studies have addressed the nature and energies of the interfacial states, the energetic profile close to the heterojunction and the dynamics of excited state transitions. In this paper, we review recent developments underpinning the theory of charge pair generation and phenomena, focussing on electronic structure calculations, electrostatic models and approaches to excited state dynamics. We discuss the remaining challenges in achieving a predictive approach to charge generation efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S.; Alam, Maksudul
A novel parallel algorithm is presented for generating random scale-free networks using the preferential-attachment model. The algorithm, named cuPPA, is custom-designed for the single instruction multiple data (SIMD) style of parallel processing supported by modern processors such as graphical processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and also the fastest implementation available today, to generate scale-free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. In one of the best cases, when executed on an NVidia GeForce 1080 GPU, cuPPA generates a scale-free network of a billion edges in less than 2 seconds.
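For reference, a serial Python sketch of the underlying preferential-attachment generation is shown below: each new vertex attaches m edges to endpoints drawn uniformly from the existing edge list, which is equivalent to degree-proportional selection. cuPPA parallelizes this style of construction for SIMD hardware; the parameter values here are arbitrary.

```python
import random

def preferential_attachment(n, m, seed=0):
    """Serial Barabasi-Albert-style generator returning an edge list."""
    rng = random.Random(seed)
    # Start from a small complete seed graph on m+1 vertices.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    for v in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            # Picking a uniform endpoint of a uniform edge is degree-proportional.
            u = rng.choice(edges)[rng.randrange(2)]
            targets.add(u)
        edges.extend((v, u) for u in targets)
    return edges

g = preferential_attachment(n=10000, m=4)
print(len(g), "edges")
```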
Aspects of Mathematical Modelling of Pressure Retarded Osmosis
Anissimov, Yuri G.
2016-01-01
In power generating terms, a pressure retarded osmosis (PRO) energy generating plant, on a river entering a sea or ocean, is equivalent to a hydroelectric dam with a height of about 60 meters. Therefore, PRO can add significantly to existing renewable power generation capacity if the economic constraints of the method are resolved. PRO energy generation relies on a semipermeable membrane that is permeable to water and impermeable to salt. Mathematical modelling plays an important part in understanding flows of water and salt near and across semipermeable membranes and helps to optimize PRO energy generation. Therefore, the modelling can help realize the potential of PRO energy generation. In this work, a few aspects of mathematical modelling of the PRO process are reviewed and discussed. PMID:26848696
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
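The parametric (synthetic) likelihood idea described above can be sketched in a few lines: repeated stochastic simulations at a candidate parameter vector yield summary statistics whose mean and covariance define a Gaussian likelihood approximation, which is then used inside a random-walk Metropolis sampler. The toy "model", its summary statistics, the flat prior and all tuning constants below are illustrative stand-ins, not the FORMIND forest model or the settings used in the paper.

```python
import numpy as np

def synthetic_loglik(theta, simulate, observed_stats, n_rep=50):
    """Gaussian likelihood approximation built from repeated stochastic simulations."""
    sims = np.array([simulate(theta) for _ in range(n_rep)])
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False) + 1e-6 * np.eye(sims.shape[1])  # regularized
    diff = observed_stats - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

def mcmc(log_post, theta0, n_iter=2000, step=0.1, seed=0):
    """Random-walk Metropolis sampler using the approximate (log) posterior."""
    rng = np.random.default_rng(seed)
    theta, lp = np.array(theta0, float), log_post(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, size=theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

def simulate(theta):
    """Toy stochastic 'forest' stand-in: summary stats of a random growth path."""
    growth, mortality = theta
    rng = np.random.default_rng()
    x = 100.0 + np.cumsum(rng.normal(growth - mortality, 1.0, size=50))
    return np.array([x.mean(), x.std()])

obs = np.array([110.0, 6.0])   # pretend field summary statistics
chain = mcmc(lambda th: synthetic_loglik(th, simulate, obs), theta0=[0.5, 0.2])
```

With a flat prior the log-posterior equals the synthetic log-likelihood; a real application would add a log-prior term and use the model's own inventory-based summary statistics.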
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J.; Moon, T.J.; Howell, J.R.
This paper presents an analysis of the heat transfer occurring during an in-situ curing process for which infrared energy is provided on the surface of the polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites have an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made leading to a two-dimensional thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables such as infrared energy input and winding velocity to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using the regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.
The Use of Uas for Rapid 3d Mapping in Geomatics Education
NASA Astrophysics Data System (ADS)
Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan
2016-06-01
With the development of technology, UAS has become an advanced technology to support rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing with freely available or trial software for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station. Besides, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Then, ground point selection and digital terrain model generation can be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow the students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.
Past-tense generation from form versus meaning: Behavioural data and simulation evidence
Woollams, Anna M.; Joanisse, Marc; Patterson, Karalyn
2009-01-01
The standard task used to study inflectional processing of verbs involves presentation of the stem form from which the participant is asked to generate the past tense. This task reveals a processing disadvantage for irregular relative to regular English verbs, more pronounced for lower-frequency items. Dual- and single-mechanism theories of inflectional morphology are both able to account for this pattern; but the models diverge in their predictions concerning the magnitude of the regularity effect expected when the task involves past-tense generation from meaning. In this study, we asked normal speakers to generate the past tense from either form (verb stem) or meaning (action picture). The robust regularity effect observed in the standard form condition was no longer reliable when participants were required to generate the past tense from meaning. This outcome would appear problematic for dual-mechanism theories to the extent that they assume the process of inflection requires stem retrieval. By contrast, it supports single-mechanism models that consider stem retrieval to be task-dependent. We present a single-mechanism model of verb inflection incorporating distributed phonological and semantic representations that reproduces this task-dependent pattern. PMID:20161125
ERIC Educational Resources Information Center
Higgins, Derrick; Futagi, Yoko; Deane, Paul
2005-01-01
This paper reports on the process of modifying the ModelCreator item generation system to produce output in multiple languages. In particular, Japanese and Spanish are now supported in addition to English. The addition of multilingual functionality was considerably facilitated by the general formulation of our natural language generation system,…
ERIC Educational Resources Information Center
Bakirlioglu, Yekta; Ogur, Dilruba; Dogan, Cagla; Turhan, Senem
2016-01-01
Understanding people's experiences and the context of use of a product at the earliest stages of the design process has in the last decade become an important aspect of both the design profession and design education. Generative design research helps designers understand user experiences, while also throwing light on their current needs,…
Long-Term Interactions of Streamflow Generation and River Basin Morphology
NASA Astrophysics Data System (ADS)
Huang, X.; Niemann, J.
2005-12-01
It is well known that the spatial patterns and dynamics of streamflow generation processes depend on river basin topography, but the impact of streamflow generation processes on the long-term evolution of river basins has not drawn as much attention. Fluvial erosion processes are driven by streamflow, which can be produced by Horton runoff, Dunne runoff, and groundwater discharge. In this analysis, we hypothesize that the dominant streamflow generation process in a basin affects the spatial patterns of fluvial erosion and that the nature of these patterns changes for storm events with differing return periods. Furthermore, we hypothesize that differences in the erosion patterns modify the topography over the long term in a way that promotes and/or inhibits the other streamflow generation mechanisms. In order to test these hypotheses, a detailed hydrologic model is imbedded into an existing landscape evolution model. Precipitation events are simulated with a Poisson process and have random intensities and durations. The precipitation is partitioned between Horton runoff and infiltration to groundwater using a specified infiltration capacity. Groundwater flow is described by a two-dimensional Dupuit equation for a homogeneous, isotropic, unconfined aquifer with an irregular underlying impervious layer. Dunne runoff occurs when precipitation falls on locations where the water table reaches the land surface. The combined hydrologic/geomorphic model is applied to the WE-38 basin, an experimental watershed in Pennsylvania that has substantial available hydrologic data. First, the hydrologic model is calibrated to reproduce the observed streamflow for 1990 using the observed rainfall as the input. Then, the relative roles of Horton runoff, Dunne runoff, and groundwater discharge are controlled by varying the infiltration capacity of the soil. For each infiltration capacity, the hydrologic and geomorphic behavior of the current topography is analyzed and the long-term evolution of the basin is simulated. The results indicate that the topography can be divided into three types of locations (unsaturated, saturated, and intermittently saturated) which control the patterns of streamflow generation for events with different return periods. The results also indicate that the streamflow generation processes can produce different geomorphic effective events at upstream and downstream locations. The model also suggests that a topography dominated by groundwater discharge evolves over a long period of time to a shape that tends to inhibit the development of saturated areas and Dunne runoff.
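As a toy illustration of how the three streamflow generation mechanisms partition a storm event, the bucket-style sketch below splits rainfall into Horton (infiltration-excess) runoff, Dunne (saturation-excess) runoff and groundwater recharge. The lumped formulation and the example numbers are illustrative only; the study itself uses a spatially distributed Dupuit groundwater model over the WE-38 topography.

```python
def partition_rainfall(intensity, duration, infil_capacity, storage_deficit):
    """Split a storm (intensity in mm/h over 'duration' h) into Horton runoff,
    Dunne runoff and recharge using a single lumped soil bucket:
    rain above the infiltration capacity runs off (Horton); infiltrated water
    exceeding the remaining unsaturated storage runs off as saturation excess
    (Dunne); the rest recharges groundwater."""
    rain = intensity * duration                               # total rainfall, mm
    horton = max(intensity - infil_capacity, 0.0) * duration  # infiltration excess
    infiltrated = rain - horton
    dunne = max(infiltrated - storage_deficit, 0.0)           # saturation excess
    recharge = infiltrated - dunne
    return horton, dunne, recharge

# Example: 20 mm/h for 3 h, 8 mm/h infiltration capacity, 15 mm storage deficit
print(partition_rainfall(20.0, 3.0, 8.0, 15.0))   # -> (36.0, 9.0, 15.0)
```

Lowering the infiltration capacity shifts the partition toward Horton runoff, while shrinking the storage deficit (a shallow water table) shifts it toward Dunne runoff, which is the dependence on topography and soils that the long-term simulations explore.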
Lange, Nicholas D.; Thomas, Rick P.; Davelaar, Eddy J.
2012-01-01
The pre-decisional process of hypothesis generation is a ubiquitous cognitive faculty that we continually employ in an effort to understand our environment and thereby support appropriate judgments and decisions. Although we are beginning to understand the fundamental processes underlying hypothesis generation, little is known about how various temporal dynamics, inherent in real world generation tasks, influence the retrieval of hypotheses from long-term memory. This paper presents two experiments investigating three data acquisition dynamics in a simulated medical diagnosis task. The results indicate that the mere serial order of data, data consistency (with previously generated hypotheses), and mode of responding influence the hypothesis generation process. An extension of the HyGene computational model endowed with dynamic data acquisition processes is forwarded and explored to provide an account of the present data. PMID:22754547
Relational and item-specific influences on generate-recognize processes in recall.
Guynn, Melissa J; McDaniel, Mark A; Strosser, Garrett L; Ramirez, Juan M; Castleberry, Erica H; Arnett, Kristen H
2014-02-01
The generate-recognize model and the relational-item-specific distinction are two approaches to explaining recall. In this study, we consider the two approaches in concert. Following Jacoby and Hollingshead (Journal of Memory and Language 29:433-454, 1990), we implemented a production task and a recognition task following production (1) to evaluate whether generation and recognition components were evident in cued recall and (2) to gauge the effects of relational and item-specific processing on these components. An encoding task designed to augment item-specific processing (anagram-transposition) produced a benefit on the recognition component (Experiments 1-3) but no significant benefit on the generation component (Experiments 1-3), in the context of a significant benefit to cued recall. By contrast, an encoding task designed to augment relational processing (category-sorting) did produce a benefit on the generation component (Experiment 3). These results converge on the idea that in recall, item-specific processing impacts a recognition component, whereas relational processing impacts a generation component.
Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek
2012-07-30
The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
NASA Astrophysics Data System (ADS)
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
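A toy sketch of the underlying idea of projecting a global collaborative process onto one enterprise's interface process, with each interaction kept only for the roles involved and rendered as send/receive activities. The role names, messages, and the simple list representation are illustrative assumptions, not the paper's MDA transformation.

```python
# Hypothetical message exchanges of a collaborative process (global view).
collaboration = [
    ("Buyer", "Supplier", "PurchaseOrder"),
    ("Supplier", "Buyer", "OrderConfirmation"),
    ("Supplier", "Carrier", "ShippingRequest"),
    ("Carrier", "Buyer", "DeliveryNotice"),
]

def interface_process(role, global_process):
    """Keep only the interactions this role takes part in, rendered as
    send/receive activities (a simplified stand-in for the model transformation)."""
    activities = []
    for sender, receiver, message in global_process:
        if sender == role:
            activities.append(f"send {message} to {receiver}")
        elif receiver == role:
            activities.append(f"receive {message} from {sender}")
    return activities

for role in ("Buyer", "Supplier", "Carrier"):
    print(role, "->", interface_process(role, collaboration))
```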
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model relates the dependent variable (i.e., the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R² value of 0.694, which means that it explains approximately 69% of the variance in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable. Copyright © 2015 Elsevier Ltd. All rights reserved.
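A hedged sketch of the kind of multiple-regression fit described above, using synthetic data in place of the eighteen-building dataset; the predictor names and coefficients are purely illustrative and are not the study's variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 18  # mirrors the sample size of the case study

# Hypothetical design/production predictors (not the paper's actual variables)
floor_area = rng.uniform(2000, 12000, n)   # m2
design_changes = rng.poisson(6, n)         # count of late design changes
masonry_ratio = rng.uniform(0.2, 0.8, n)   # share of conventional masonry

# Synthetic waste generation (m3) with noise, purely for illustration
waste = 0.01 * floor_area + 8 * design_changes + 40 * masonry_ratio \
        + rng.normal(0, 15, n)

X = sm.add_constant(np.column_stack([floor_area, design_changes, masonry_ratio]))
model = sm.OLS(waste, X).fit()
print(model.summary())
print("adjusted R^2:", round(model.rsquared_adj, 3))  # cf. 0.694 in the study
```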
NCC Simulation Model: Simulating the operations of the network control center, phase 2
NASA Technical Reports Server (NTRS)
Benjamin, Norman M.; Paul, Arthur S.; Gill, Tepper L.
1992-01-01
The simulation of the network control center (NCC) is in the second phase of development. This phase seeks to further develop the work performed in phase one. Phase one concentrated on the computer systems and interconnecting network. The focus of phase two will be the implementation of the network message dialogues and the resources controlled by the NCC. These resources are requested, initiated, monitored and analyzed via network messages. In the NCC, network messages are presented in the form of packets that are routed across the network. These packets are generated, encoded, decoded and processed by the network host processors that generate and service the message traffic on the network that connects these hosts. As a result, the message traffic is used to characterize the work done by the NCC and the connected network. Phase one of the model development represented the NCC as a network of bi-directional single server queues and message generating sources. The generators represented the external segment processors. The server-based queues represented the host processors. The NCC model consists of the internal and external processors which generate message traffic on the network that links these hosts. To fully realize the objective of phase two it is necessary to identify and model the processes in each internal processor. These processes live in the operating system of the internal host computers and handle tasks such as high speed message exchanging, ISN and NFE interface, event monitoring, network monitoring, and message logging. Inter-process communication is achieved through the operating system facilities. The overall performance of the host is determined by its ability to service messages generated by both internal and external processors.
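A minimal sketch of one element of the phase-one representation: a message source with Poisson arrivals feeding a single-server FIFO queue that stands in for a host processor. The arrival and service rates are illustrative assumptions, not the calibrated NCC values.

```python
import random

random.seed(42)

def single_server_queue(arrival_rate, service_rate, n_messages=10000):
    """Single-server FIFO queue for one host processor: external segment
    processors generate messages (exponential inter-arrival times), and the
    host services them one at a time (exponential service times)."""
    t_arrive, server_free, waits = 0.0, 0.0, []
    for _ in range(n_messages):
        t_arrive += random.expovariate(arrival_rate)   # next message arrival
        start = max(t_arrive, server_free)             # wait if host is busy
        waits.append(start - t_arrive)
        server_free = start + random.expovariate(service_rate)
    return sum(waits) / len(waits)

# Illustrative rates (messages per second); not calibrated to the real NCC
print("mean queueing delay:", round(single_server_queue(8.0, 10.0), 3), "s")
```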
Comparative Analysis of InSAR Digital Surface Models for Test Area Bucharest
NASA Astrophysics Data System (ADS)
Dana, Iulia; Poncos, Valentin; Teleaga, Delia
2010-03-01
This paper presents the results of the interferometric processing of ERS Tandem, ENVISAT and TerraSAR-X for digital surface model (DSM) generation. The selected test site is Bucharest (Romania), a built-up area characterized by the usual urban complex pattern: mixture of buildings with different height levels, paved roads, vegetation, and water bodies. First, the DSMs were generated following the standard interferometric processing chain. Then, the accuracy of the DSMs was analyzed against the SPOT HRS model (30 m resolution at the equator). A DSM derived by optical stereoscopic processing of SPOT 5 HRG data and also the SRTM (3 arc seconds resolution at the equator) DSM have been included in the comparative analysis.
Quantum description of the high-order harmonic generation in multiphoton and tunneling regimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Hernandez, J. A.; Plaja, L.
2007-08-15
We employ a recently developed S-matrix approach [L. Plaja and J. A. Perez-Hernandez, Opt. Express 15, 3629 (2007)] to investigate the process of harmonic generation in tunnel and multiphoton ionization regimes. In contrast with most of the previous approaches, this model is developed without the stationary phase approximation and including the relevant continuum-continuum transitions. Therefore, it provides a full quantum description of the harmonic generation process in these two ionization regimes, with good quantitative agreement with the exact results of the time-dependent Schroedinger equation. We show how this model can be used to investigate the contribution of the electronic population ionized at different times, thus giving a time-resolved description that, up to now, was reserved only to semiclassical models. In addition, we will show some aspects of harmonic generation beyond the semiclassical predictions as, for instance, the emission of radiation while the electron is leaving the parent ion and the generation of harmonics in semiclassically forbidden situations.
Functional model of biological neural networks.
Lo, James Ting-Ho
2010-12-01
A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.
The application of a generativity model for older adults.
Ehlman, Katie; Ligon, Mary
2012-01-01
Generativity is a concept first introduced by Erik Erikson as a part of his psychosocial theory, which outlines eight stages of development in human life. Generativity versus stagnation is the main developmental concern of middle adulthood; however, generativity is also recognized as an important theme in the lives of older adults. Building on the work of Erikson, McAdams and de St. Aubin (1992) developed a model explaining the generative process. The aims of this article are: (a) to explore the relationship between generativity and older adults as it appears in the research literature; and (b) to examine McAdams's model and use it to explain the role of generativity in older adults who share life stories with gerontology students through an oral history project.
A Statistical Method to Distinguish Functional Brain Networks
Fujita, André; Vidal, Maciel C.; Takahashi, Daniel Y.
2017-01-01
One major problem in neuroscience is the comparison of functional brain networks of different populations, e.g., distinguishing the networks of controls and patients. Traditional algorithms are based on search for isomorphism between networks, assuming that they are deterministic. However, biological networks present randomness that cannot be well modeled by those algorithms. For instance, functional brain networks of distinct subjects of the same population can be different due to individual characteristics. Moreover, networks of subjects from different populations can be generated through the same stochastic process. Thus, a better hypothesis is that networks are generated by random processes. In this case, subjects from the same group are samples from the same random process, whereas subjects from different groups are generated by distinct processes. Using this idea, we developed a statistical test called ANOGVA to test whether two or more populations of graphs are generated by the same random graph model. Our simulations' results demonstrate that we can precisely control the rate of false positives and that the test is powerful to discriminate random graphs generated by different models and parameters. The method also showed to be robust for unbalanced data. As an example, we applied ANOGVA to an fMRI dataset composed of controls and patients diagnosed with autism or Asperger. ANOGVA identified the cerebellar functional sub-network as statistically different between controls and autism (p < 0.001). PMID:28261045
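A hedged sketch of the general idea of testing whether two populations of graphs come from the same random graph model. This is not the published ANOGVA implementation: it uses a simple spectral summary statistic and a permutation test, with Erdős–Rényi graphs and all parameters chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_graph(n, p):
    """Erdős–Rényi adjacency matrix (undirected, no self-loops)."""
    a = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return a + a.T

def spectral_feature(adj):
    """Summary statistic: sorted eigenvalues of the adjacency matrix."""
    return np.sort(np.linalg.eigvalsh(adj))

def group_distance(feats_a, feats_b):
    return np.linalg.norm(np.mean(feats_a, axis=0) - np.mean(feats_b, axis=0))

def permutation_test(group_a, group_b, n_perm=500):
    feats = [spectral_feature(g) for g in group_a + group_b]
    n_a = len(group_a)
    observed = group_distance(feats[:n_a], feats[n_a:])
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(feats))
        perm = [feats[i] for i in idx]
        if group_distance(perm[:n_a], perm[n_a:]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

controls = [random_graph(50, 0.10) for _ in range(20)]
patients = [random_graph(50, 0.14) for _ in range(20)]  # different edge density
print("p-value:", permutation_test(controls, patients))
```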
Shukla, Nagesh; Keast, John E; Ceglarek, Darek
2014-10-01
The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most of the workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and, (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.
2017-09-01
We develop a methodological framework and mathematical formulation which yields estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clays Reactions (CCR) in large-scale subsurface systems to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from mineral and pore water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered as key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and typically used geochemical software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in a real sedimentary formation. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, and (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.
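A heavily hedged sketch of the Monte Carlo style of uncertainty propagation described above. The forward model below is a placeholder function, not the paper's coupled compaction/chemistry formulation, and the distribution assumed for the equilibrium constant is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

def pco2_forward_model(log_k, temperature_c):
    """Placeholder for the coupled compaction/chemistry model: maps an
    equilibrium constant and a temperature to a CO2 partial pressure (bar).
    This relation is purely illustrative, not the paper's formulation."""
    return 10.0 ** log_k * np.exp(0.02 * (temperature_c - 100.0))

# Assumed uncertainty on log10 K at deep-burial conditions (illustrative)
log_k_samples = rng.normal(loc=-1.5, scale=0.3, size=5000)

p_samples = pco2_forward_model(log_k_samples, temperature_c=140.0)
print("pCO2 median (bar):", round(float(np.median(p_samples)), 4))
print("pCO2 5-95% range (bar):", np.round(np.percentile(p_samples, [5, 95]), 4))
```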
Predictive model for CO2 generation and decay in building envelopes
NASA Astrophysics Data System (ADS)
Aglan, Heshmat A.
2003-01-01
Understanding carbon dioxide generation and decay patterns in buildings with high occupancy levels is useful for identifying their indoor air quality, air change rates, percent fresh air makeup, and occupancy pattern, and for determining how a variable air volume system can be modulated to offset undesirable CO2 levels. A mathematical model governing the generation and decay of CO2 in building envelopes with forced ventilation due to high occupancy is developed. The model has been verified experimentally in a newly constructed energy efficient healthy house. It was shown that the model accurately predicts the CO2 concentration at any time during the generation and decay processes.
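A minimal sketch of a standard well-mixed single-zone CO2 mass balance, which captures the generation and decay behavior described above; it is a textbook form, not necessarily the paper's exact model, and the occupancy, room volume, and airflow values are illustrative assumptions.

```python
import numpy as np

def co2_concentration(t_hours, c0_ppm, c_out_ppm, gen_l_per_h,
                      airflow_m3_per_h, volume_m3):
    """Well-mixed zone mass balance:  V dC/dt = 1000*G + Q*(C_out - C)
    with C in ppm, G in litres of CO2 per hour, Q in m3/h, V in m3."""
    ach = airflow_m3_per_h / volume_m3                           # air changes/h
    c_ss = c_out_ppm + 1000.0 * gen_l_per_h / airflow_m3_per_h   # steady state
    return c_ss + (c0_ppm - c_ss) * np.exp(-ach * t_hours)

t = np.linspace(0, 8, 9)
# Illustrative occupancy: 20 people at ~18 L/h CO2 each, 500 m3 room, 600 m3/h
gen_phase = co2_concentration(t, 400, 400, 20 * 18.0, 600.0, 500.0)
decay_phase = co2_concentration(t, gen_phase[-1], 400, 0.0, 600.0, 500.0)
print("generation (ppm):", np.round(gen_phase).astype(int))
print("decay (ppm):     ", np.round(decay_phase).astype(int))
```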
Neural Sequence Generation Using Spatiotemporal Patterns of Inhibition.
Cannon, Jonathan; Kopell, Nancy; Gardner, Timothy; Markowitz, Jeffrey
2015-11-01
Stereotyped sequences of neural activity are thought to underlie reproducible behaviors and cognitive processes ranging from memory recall to arm movement. One of the most prominent theoretical models of neural sequence generation is the synfire chain, in which pulses of synchronized spiking activity propagate robustly along a chain of cells connected by highly redundant feedforward excitation. But recent experimental observations in the avian song production pathway during song generation have shown excitatory activity interacting strongly with the firing patterns of inhibitory neurons, suggesting a process of sequence generation more complex than feedforward excitation. Here we propose a model of sequence generation inspired by these observations in which a pulse travels along a spatially recurrent excitatory chain, passing repeatedly through zones of local feedback inhibition. In this model, synchrony and robust timing are maintained not through redundant excitatory connections, but rather through the interaction between the pulse and the spatiotemporal pattern of inhibition that it creates as it circulates the network. These results suggest that spatially and temporally structured inhibition may play a key role in sequence generation.
A model of human event detection in multiple process monitoring situations
NASA Technical Reports Server (NTRS)
Greenstein, J. S.; Rouse, W. B.
1978-01-01
It is proposed that human decision making in many multi-task situations might be modeled in terms of the manner in which the human detects events related to his tasks and the manner in which he allocates his attention among his tasks once he feels events have occurred. A model of human event detection performance in such a situation is presented. An assumption of the model is that, in attempting to detect events, the human generates the probability that events have occurred. Discriminant analysis is used to model the human's generation of these probabilities. An experimental study of human event detection performance in a multiple process monitoring situation is described and the application of the event detection model to this situation is addressed. The experimental study employed a situation in which subjects simultaneously monitored several dynamic processes for the occurrence of events and made yes/no decisions on the presence of events in each process. Providing the event detection model with the information displayed to the experimental subjects allows comparison of the model's performance with the performance of the subjects.
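A hedged sketch of the discriminant-analysis step: fit a linear discriminant to labeled display samples and use it to emit P(event) for a new sample. The features and class distributions below are synthetic stand-ins, not the original experimental data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)

# Synthetic display samples from one monitored process; features are
# illustrative stand-ins (e.g., recent mean and slope of the displayed signal).
n = 400
no_event = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 0.2, n)])
event = np.column_stack([rng.normal(1.2, 1.0, n), rng.normal(0.4, 0.2, n)])
X = np.vstack([no_event, event])
y = np.concatenate([np.zeros(n), np.ones(n)])

lda = LinearDiscriminantAnalysis().fit(X, y)

# For a new display sample, the model emits P(event) -- the quantity the
# human observer is assumed to generate before allocating attention.
new_sample = np.array([[0.9, 0.3]])
print("P(event):", round(lda.predict_proba(new_sample)[0, 1], 3))
```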
This document summarizes the process followed to utilize the fuel consumption map of a Ricardo modeled engine and vehicle fuel consumption data to generate a full engine fuel consumption map which can be used by EPA's ALPHA vehicle simulations.
Modeling DNP3 Traffic Characteristics of Field Devices in SCADA Systems of the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Huan; Cheng, Liang; Chuah, Mooi Choo
In the generation, transmission, and distribution sectors of the smart grid, intelligence of field devices is realized by programmable logic controllers (PLCs). Many smart-grid subsystems are essentially cyber-physical energy systems (CPES): For instance, the power system process (i.e., the physical part) within a substation is monitored and controlled by a SCADA network with hosts running miscellaneous applications (i.e., the cyber part). To study the interactions between the cyber and physical components of a CPES, several co-simulation platforms have been proposed. However, the network simulators/emulators of these platforms do not include a detailed traffic model that takes into account the impacts of the execution model of PLCs on traffic characteristics. As a result, network traces generated by co-simulation only reveal the impacts of the physical process on the contents of the traffic generated by SCADA hosts, whereas the distinction between PLCs and computing nodes (e.g., a hardened computer running a process visualization application) has been overlooked. To generate realistic network traces using co-simulation for the design and evaluation of applications relying on accurate traffic profiles, it is necessary to establish a traffic model for PLCs. In this work, we propose a parameterized model for PLCs that can be incorporated into existing co-simulation platforms. We focus on the DNP3 subsystem of slave PLCs, which automates the processing of packets from the DNP3 master. To validate our approach, we extract model parameters from both the configuration and network traces of real PLCs. Simulated network traces are generated and compared against those from PLCs. Our evaluation shows that our proposed model captures the essential traffic characteristics of DNP3 slave PLCs, which can be used to extend existing co-simulation platforms and gain further insights into the behaviors of CPES.
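A toy, heavily hedged sketch of what a parameterized slave-response traffic model could look like: the master polls periodically and the slave's reply is delayed by its cyclic scan plus processing jitter. The structure and every parameter here are assumptions for illustration, not the authors' model or real PLC values.

```python
import random

random.seed(11)

def slave_response_trace(n_polls=1000, poll_interval_s=1.0,
                         scan_cycle_s=0.010, proc_jitter_s=0.002):
    """Toy DNP3 slave traffic model: each poll waits for the PLC's cyclic
    scan to pick it up (uniform phase), plus some processing jitter."""
    trace = []
    for k in range(n_polls):
        poll_t = k * poll_interval_s
        wait_for_scan = random.uniform(0.0, scan_cycle_s)  # scan-cycle phase
        delay = wait_for_scan + random.uniform(0.0, proc_jitter_s)
        trace.append((poll_t, poll_t + delay))
    return trace

trace = slave_response_trace()
delays = [resp - poll for poll, resp in trace]
print("mean response delay: %.4f s" % (sum(delays) / len(delays)))
```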
Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality
NASA Astrophysics Data System (ADS)
Lee, I.-C.; Tsai, F.
2015-05-01
A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects with a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.
Hollow laser plasma self-confined microjet generation
NASA Astrophysics Data System (ADS)
Sizyuk, Valeryi; Hassanein, Ahmed; Center for Materials under Extreme Environment Team
2017-10-01
Hollow laser beam produced plasma (LPP) devices are being used for the generation of self-confined cumulative microjets. The most important aspect of this LPP device design is achieving an annular distribution of the laser beam intensity over the spot. An integrated model is being developed for detailed simulation of the plasma generation and evolution inside the laser beam channel. The model describes, in a two-temperature approximation, hydrodynamic processes in the plasma, laser absorption processes, heat conduction, and radiation energy transport. The total variation diminishing scheme in the Lax-Friedrichs formulation is used for the description of the plasma hydrodynamics. Laser absorption and radiation transport models based on the Monte Carlo method are being developed. The heat conduction part is realized with an implicit scheme using sparse matrices. The developed models are being integrated into the HEIGHTS-LPP computer simulation package. The integrated modeling of hollow-beam laser plasma generation showed self-confinement and acceleration of the plasma microjet inside the laser channel. A dependence of the microjet parameters, including radiation emission, on the ratio of the hole and beam radii was found. This work is supported by the National Science Foundation, PIRE project.
Sandia MEMS Visualization Tools v. 3.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarberry, Victor; Jorgensen, Craig R.; Young, Andrew I.
This is a revision to the Sandia MEMS Visualization Tools. It replaces all previous versions. New features in this version: support for AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) provides a 2D Process Visualizer that generates cross-section images of devices constructed using the SUMMiT V fabrication process; b) provides a 3D Visualizer that generates 3D images of devices constructed using the SUMMiT V fabrication process; c) provides a MEMS 3D Model generator that creates 3D solid models of devices constructed using the SUMMiT V fabrication process. While there exist some files on the CD that are used in conjunction with the software package AutoCAD, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.
Terai, Asuka; Nakagawa, Masanori
2007-08-01
The purpose of this paper is to construct a model that represents the human process of understanding metaphors, focusing specifically on similes of the form "an A like B". Generally speaking, human beings are able to generate and understand many sorts of metaphors. This study constructs the model based on a probabilistic knowledge structure for concepts which is computed from a statistical analysis of a large-scale corpus. Consequently, this model is able to cover the many kinds of metaphors that human beings can generate. Moreover, the model implements the dynamic process of metaphor understanding by using a neural network with dynamic interactions. Finally, the validity of the model is confirmed by comparing model simulations with the results from a psychological experiment.
NASA Astrophysics Data System (ADS)
Howitt, R. E.
2016-12-01
Hydro-economic models have been used to analyze optimal supply management and groundwater use for the past 25 years. They are characterized by an objective function that usually maximizes economic measures such as consumer and producer surplus subject to hydrologic equations of motion or water distribution systems. The hydrologic and economic components are sometimes fully integrated; alternatively, they may use an iterative interactive process. Environmental considerations have been included in hydro-economic models as inequality constraints. Representing environmental requirements as constraints is a rigid approximation of the range of management alternatives that could be used to implement environmental objectives. The next generation of hydro-economic models, currently being developed, requires that the environmental alternatives be represented by continuous or semi-continuous functions which relate water resource use allocated to the environment to the probabilities of achieving environmental objectives. These functions will be generated by process models of environmental and biological systems, which are now advanced to the state that they can realistically represent environmental systems and have the flexibility to interact with economic models. Examples are crop growth models, climate modeling, and biological models of forest, fish, and fauna systems. These process models can represent environmental outcomes in a form that is similar to economic production functions. When combined with economic models, the interacting process models can reproduce a range of trade-offs between economic and environmental objectives, and thus optimize the social value of many water and environmental resources. Some examples of this next generation of hydro-enviro-economic models are reviewed. In these models, implicit production functions for environmental goods are combined with hydrologic equations of motion and economic response functions. We discuss models that show interaction between environmental goods and agricultural production, and others that address alternative climate change policies or habitat provision.
Generation of topographic terrain models utilizing synthetic aperture radar and surface level data
NASA Technical Reports Server (NTRS)
Imhoff, Marc L. (Inventor)
1991-01-01
Topographical terrain models are generated by digitally delineating the boundary of the region under investigation from the data obtained from an airborne synthetic aperture radar image and surface elevation data concurrently acquired either from an airborne instrument or at ground level. A set of coregistered boundary maps thus generated are then digitally combined in three dimensional space with the acquired surface elevation data by means of image processing software stored in a digital computer. The method is particularly applicable for generating terrain models of flooded regions covered entirely or in part by foliage.
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.
Zhou, Yang; Wu, Dewei
2016-01-01
Generating visual place cells (VPCs) is an important topic in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruiting of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of the image, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulation validates that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
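A minimal sketch of the recruit-or-fire scheme described above: each place cell stores a landmark feature vector, similarity is a Gaussian of the Euclidean distance, and a new cell is recruited when no existing cell fires above threshold. The class structure, parameter values, and feature representation are our assumptions, not the paper's implementation.

```python
import numpy as np

class PlaceCellMap:
    """Recruit-or-fire visual place cell sketch (illustrative parameters)."""

    def __init__(self, sigma=1.0, firing_rate_threshold=0.4):
        self.sigma = sigma                      # adjustment factor of firing field (AFFF)
        self.threshold = firing_rate_threshold  # firing rate threshold (FRT)
        self.centers = []

    def firing_rates(self, feature):
        return np.array([np.exp(-np.sum((feature - c) ** 2) / (2 * self.sigma ** 2))
                         for c in self.centers])

    def perceive(self, feature):
        rates = self.firing_rates(feature) if self.centers else np.array([])
        if rates.size == 0 or rates.max() < self.threshold:
            self.centers.append(np.asarray(feature, dtype=float))  # recruit new VPC
        return rates

rng = np.random.default_rng(2)
vpc_map = PlaceCellMap()
for pos in rng.uniform(0, 10, size=(50, 2)):   # simulated landmark features
    vpc_map.perceive(pos)
print("number of recruited place cells:", len(vpc_map.centers))
```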
Generating High Resolution Climate Scenarios Through Regional Climate Modelling Over Southern Africa
NASA Astrophysics Data System (ADS)
Ndhlovu, G. Z.; Woyessa, Y. E.; Vijayaraghavan, S.
2017-12-01
Climate change has impacted the global environment, and the continent of Africa, especially Southern Africa, regarded as one of the most vulnerable regions in Africa, has not been spared from these impacts. Global Climate Models (GCMs) with coarse horizontal resolutions of 150-300 km do not provide sufficient detail at the local basin scale due to the mismatch between the size of river basins and the grid cell of the GCM. This makes it difficult to apply the outputs of GCMs directly to impact studies such as hydrological modelling, and necessitates the use of regional climate modelling at high resolutions to provide detailed information at regional and local scales for studying both climate change and its impacts. To this end, an experiment was set up and conducted with PRECIS, a regional climate model, to generate climate scenarios at a high resolution of 25 km for the local region in the Zambezi River basin of Southern Africa. The major input data used included lateral and surface boundary conditions based on the GCMs. The data are processed, analysed and compared with CORDEX climate change project data generated for Africa. This paper highlights the major differences between the climate scenarios generated by the PRECIS model and the CORDEX project for Africa and gives recommendations for further research on the generation of climate scenarios. Climatic variables such as precipitation and temperature have been analysed for floods and droughts in the region. The paper also describes the setting up and running of an experiment using a high-resolution PRECIS model, as well as running the model and generating the output variables on a sub-basin scale. Regional climate modelling, which provides information on climate change impacts, may lead to an enhanced understanding of adaptive water resources management. Understanding the regional climate modelling results on a sub-basin scale is the first step in analysing complex hydrological processes and a basis for designing adaptation and mitigation strategies in the region. Key words: climate change, regional climate modelling, hydrological processes, extremes, scenarios.
AgMIP: Next Generation Models and Assessments
NASA Astrophysics Data System (ADS)
Rosenzweig, C.
2014-12-01
Next steps in developing next-generation crop models fall into several categories: significant improvements in simulation of important crop processes and responses to stress; extension from simplified crop models to complex cropping systems models; and scaling up from site-based models to landscape, national, continental, and global scales. Crop processes that require major leaps in understanding and simulation in order to narrow uncertainties around how crops will respond to changing atmospheric conditions include genetics; carbon, temperature, water, and nitrogen; ozone; and nutrition. The field of crop modeling has been built on a single crop-by-crop approach. It is now time to create a new paradigm, moving from 'crop' to 'cropping system.' A first step is to set up the simulation technology so that modelers can rapidly incorporate multiple crops within fields, and multiple crops over time. Then the response of these more complex cropping systems can be tested under different sustainable intensification management strategies utilizing the updated simulation environments. Model improvements for diseases, pests, and weeds include developing process-based models for important diseases, frameworks for coupling air-borne diseases to crop models, gathering significantly more data on crop impacts, and enabling the evaluation of pest management strategies. Most smallholder farming in the world involves integrated crop-livestock systems that cannot be represented by crop modeling alone. Thus, next-generation cropping system models need to include key linkages to livestock. Livestock linkages to be incorporated include growth and productivity models for grasslands and rangelands as well as the usual annual crops. There are several approaches for scaling up, including use of gridded models and development of simpler quasi-empirical models for landscape-scale analysis. On the assessment side, AgMIP is leading a community process for coordinated contributions to IPCC AR6 that involves the key modeling groups from around the world including North America, Europe, South America, Sub-Saharan Africa, South Asia, East Asia, and Australia and Oceania. This community process will lead to mutually agreed protocols for coordinated global and regional assessments.
NASA Astrophysics Data System (ADS)
Hennig, Hanna; Rödiger, Tino; Laronne, Jonathan B.; Geyer, Stefan; Merz, Ralf
2016-04-01
Flash floods in (semi-)arid regions are fascinating in their suddenness and can be harmful for humans, infrastructure, industry and tourism. Because they are generated within minutes, an early warning system is essential, and a hydrological model is required to quantify flash floods. Current models to predict flash floods are often based on simplified concepts and/or on concepts which were developed for humid regions. To more closely relate such models to local conditions, processes within catchments where flash floods occur require consideration. In this study we present a monitoring approach to decipher different flash flood generating processes in the ephemeral Wadi Arugot on the western side of the Dead Sea. To understand rainfall input, a dense rain gauge network was installed; locations of rain gauges were chosen based on land use, slope and soil cover. The spatiotemporal variation of rain intensity will also be available from radar backscatter. Level pressure sensors located at the outlet of major tributaries have been deployed to analyze in which part of the catchment water is generated. To identify the importance of soil moisture preconditions, two cosmic ray sensors have been deployed. At the outlet of the Arugot, water is sampled and level is monitored. To more accurately determine water discharge, water velocity is measured using portable radar velocimetry. A first analysis of flash flood processes will be presented following the FLEX-Topo concept (Savenije, 2010), where each landscape type is represented using an individual hydrological model according to the processes within the three hydrological response units: plateau, desert and outlet. References: Savenije, H. H. G.: HESS Opinions "Topography driven conceptual modelling (FLEX-Topo)", Hydrol. Earth Syst. Sci., 14, 2681-2692, doi:10.5194/hess-14-2681-2010, 2010.
Using CASE to Exploit Process Modeling in Technology Transfer
NASA Technical Reports Server (NTRS)
Renz-Olar, Cheryl
2003-01-01
A successful business will be one that has processes in place to run that business. Creating processes, reengineering processes, and continually improving processes can be accomplished through extensive modeling. Casewise(R) Corporate Modeler(TM) CASE is a computer-aided software engineering tool that will enable the Technology Transfer Department (TT) at NASA Marshall Space Flight Center (MSFC) to capture these abilities. After successful implementation of CASE, it could then go on to be applied in other departments at MSFC and other centers at NASA. The success of a business process is dependent upon the players working as a team and continuously improving the process. A good process fosters customer satisfaction as well as internal satisfaction in the organizational infrastructure. CASE provides a method for business process success through functions consisting of systems and processes business models; specialized diagrams; matrix management; simulation; report generation and publishing; and linking, importing, and exporting documents and files. The software has an underlying repository or database to support these functions. The Casewise manual informs us that dynamics modeling is a technique used in business design and analysis. Feedback is used as a tool for the end users and generates different ways of dealing with the process. Feedback on this project resulted from collection of issues through a systems analyst interface approach of interviews with process coordinators and Technical Points of Contact (TPOCs).
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine scale soil moisture fields across large extents based on coarse scale observations. Application of this approach is likely in generating fine and intermediate resolution soil moisture fields conditioned on radiometer-based, coarse resolution products from remote sensing satellites.
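A minimal sketch of the nudging idea described above, applied to a simple 1D diffusion equation as a stand-in for the vadose-zone model: the fine-grid model is relaxed toward an interpolant of coarse-grid observations of a synthetic reference field. The grid, nudging strength, and diffusion setup are illustrative assumptions, not the HYDRUS configuration.

```python
import numpy as np

def step_diffusion(u, dx, dt, kappa):
    """One explicit step of 1D diffusion (stand-in for the physical model)."""
    u_new = u.copy()
    u_new[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u_new

def coarse_interpolant(u, coarse_idx, x):
    """Observe the field at coarse locations, then interpolate back to the
    fine grid -- the interpolant operator of continuous data assimilation."""
    return np.interp(x, x[coarse_idx], u[coarse_idx])

n, kappa, dt, mu = 101, 1.0, 4e-5, 50.0        # mu: nudging strength (assumed)
x = np.linspace(0.0, 1.0, n); dx = x[1] - x[0]
coarse_idx = np.arange(0, n, 10)               # every 10th node is "observed"

truth = np.sin(np.pi * x) + 0.3 * np.sin(5 * np.pi * x)   # synthetic reference
model = np.zeros(n)                                        # wrong initial state
free_run = np.zeros(n)                                     # no assimilation

for _ in range(5000):
    truth = step_diffusion(truth, dx, dt, kappa)
    nudge = mu * (coarse_interpolant(truth, coarse_idx, x)
                  - coarse_interpolant(model, coarse_idx, x))
    model = step_diffusion(model, dx, dt, kappa) + dt * nudge
    free_run = step_diffusion(free_run, dx, dt, kappa)

rms = lambda u: float(np.sqrt(np.mean((u - truth) ** 2)))
print("RMS error with nudging:", round(rms(model), 4),
      "| without:", round(rms(free_run), 4))
```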
NASA Astrophysics Data System (ADS)
Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.
2016-11-01
Modelling a curl of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known from direct measurement as a coordinate matrix, has as its goal determining the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the geometrical generating error, as a component of the total error. Modelling the generation allows highlighting potential errors of the generating tool, in order to correct its profile before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new approach, namely the method of "relative generating trajectories". The analytical foundation is presented, as well as some applications for known models of rack-gear type tools used on Maag gear cutting machines.
Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?
Reichert, David P.; Seriès, Peggy; Storkey, Amos J.
2013-01-01
Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain. PMID:23874177
The neural component-process architecture of endogenously generated emotion
Kanske, Philipp; Singer, Tania
2017-01-01
Despite the ubiquity of endogenous emotions and their role in both resilience and pathology, the processes supporting their generation are largely unknown. We propose a neural component process model of endogenous generation of emotion (EGE) and test it in two functional magnetic resonance imaging (fMRI) experiments (N = 32/293) where participants generated and regulated positive and negative emotions based on internal representations, using self-chosen generation methods. EGE activated nodes of the salience (SN), default mode (DMN) and frontoparietal control (FPCN) networks. Component processes implemented by these networks were established by investigating their functional associations, activation dynamics and integration. SN activation correlated with subjective affect, with midbrain nodes exclusively distinguishing between positive and negative affect intensity, showing dynamics consistent with the generation of core affect. Dorsomedial DMN, together with ventral anterior insula, formed a pathway supporting multiple generation methods, with activation dynamics suggesting it is involved in the generation of elaborated experiential representations. SN and DMN both coupled to left frontal FPCN, which in turn was associated with both subjective affect and representation formation, consistent with FPCN supporting the executive coordination of the generation process. These results provide a foundation for research into endogenous emotion in normal, pathological and optimal function. PMID:27522089
NASA Technical Reports Server (NTRS)
Cariapa, Vikram
1993-01-01
The trend in the modern global economy towards free market policies has motivated companies to use rapid prototyping technologies to not only reduce product development cycle time but also to maintain their competitive edge. A rapid prototyping technology is one which combines computer-aided design with computer-controlled tracking of a focused high-energy source (e.g., lasers, heat) on modern ceramic powders, metallic powders, plastics or photosensitive liquid resins in order to produce prototypes or models. At present, except for the process of shape melting, most rapid prototyping processes generate products that are only dimensionally similar to those of the desired end product. There is an urgent need, therefore, to enhance the understanding of the characteristics of these processes in order to realize their potential for production. Currently, the commercial market is dominated by four rapid prototyping processes, namely selective laser sintering, stereolithography, fused deposition modelling and laminated object manufacturing. This phase of the research has focussed on the selective laser sintering and stereolithography rapid prototyping processes. A theoretical model for these processes is under development. Different rapid prototyping sites supplied test specimens (based on ASTM 638-84, Type I) that have been measured and tested to provide a database on surface finish, dimensional variation and ultimate tensile strength. Further plans call for developing and verifying the theoretical models by carefully designed experiments. This will be a joint effort between NASA and other prototyping centers to generate a larger database, thus encouraging more widespread usage by product designers.
NASA Technical Reports Server (NTRS)
Tijidjian, Raffi P.
2010-01-01
The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.
Dreams Fulfilled, Dreams Shattered: Determinants of Segmented Assimilation in the Second Generation
ERIC Educational Resources Information Center
Haller, William; Portes, Alejandro; Lynch, Scott M.
2011-01-01
We summarize prior theories on the adaptation process of the contemporary immigrant second generation as a prelude to presenting additive and interactive models showing the impact of family variables, school contexts and academic outcomes on the process. For this purpose, we regress indicators of educational and occupational achievement in early…
ERIC Educational Resources Information Center
London, Manuel; Sessa, Valerie I.
2007-01-01
This article integrates the literature on group interaction process analysis and group learning, providing a framework for understanding how patterns of interaction develop. The model proposes how adaptive, generative, and transformative learning processes evolve and vary in their functionality. Environmental triggers for learning, the group's…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, S.; Lerche, I.
1988-01-01
Geological processes related to petroleum generation, migration, and accumulation are very complicated in terms of the time and variables involved, and are very difficult to simulate by laboratory experiments. For this reason, many mathematical/computer models have been developed to simulate these geological processes based on geological, geophysical, and geochemical principles. Unfortunately, none of these models can exactly simulate these processes because of the assumptions and simplifications made in the models and the errors in their input. The sensitivity analysis is a comprehensive examination of how geological, geophysical, and geochemical parameters affect the reconstructions of geohistory, thermal history, and hydrocarbon generation history. In this study, a one-dimensional fluid flow/compaction model has been used to run the sensitivity analysis. The authors show the effects of some commonly used parameters such as depth, age, lithology, porosity, permeability, unconformity (time and eroded thickness), temperature at the sediment surface, bottom hole temperature, present day heat flow, thermal gradient, thermal conductivity, and kerogen type and content on the evolution of formation thickness, porosity, permeability, and pressure with time and depth, heat flow with time, temperature with time and depth, vitrinite reflectance (R0) and TTI with time and depth, the oil window in terms of time and depth, and the amount of hydrocarbon generated with time and depth.
Optical droplet vaporization of nanoparticle-loaded stimuli-responsive microbubbles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Si, Ting; Department of Biomedical Engineering, The Ohio State University, Columbus, Ohio 43210; Li, Guangbin
2016-03-14
A capillary co-flow focusing process is developed to generate stimuli-responsive microbubbles (SRMs) that comprise a perfluorocarbon (PFC) suspension of silver nanoparticles (SNPs) in a lipid shell. Upon continuous laser irradiation at around their surface plasmon resonance band, the SNPs effectively absorb electromagnetic energy, induce heat accumulation in SRMs, trigger PFC vaporization, and eventually lead to thermal expansion and fragmentation of the SRMs. This optical droplet vaporization (ODV) process is further simulated by a theoretical model that combines heat generation of SNPs, phase change of PFC, and thermal expansion of SRMs. The model is validated by benchtop experiments, where the ODV process is monitored by microscopic imaging. The effects of primary process parameters on behaviors of ODV are predicted by the theoretical model, indicating the technical feasibility of process control and optimization in future drug delivery applications.
Collaborative Platform for DFM
2007-12-20
generation litho hotspot checkers have also been implemented in automated hotspot fixers that can automatically fix designs by making small changes... processing side (e.g., new CMP models, etch models, litho models) and on the circuit side (e.g., process-aware circuit analysis or yield optimization)... Since final gate CD is a function of not only litho, but Post Exposure Bake, ashing, and etch, the processing module can be augmented with more
Model Based Document and Report Generation for Systems Engineering
NASA Technical Reports Server (NTRS)
Delp, Christopher; Lam, Doris; Fosse, Elyse; Lee, Cin-Young
2013-01-01
As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.
Model based document and report generation for systems engineering
NASA Astrophysics Data System (ADS)
Delp, C.; Lam, D.; Fosse, E.; Lee, Cin-Young
As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.
Generating Models of Surgical Procedures using UMLS Concepts and Multiple Sequence Alignment
Meng, Frank; D’Avolio, Leonard W.; Chen, Andrew A.; Taira, Ricky K.; Kangarloo, Hooshang
2005-01-01
Surgical procedures can be viewed as a process composed of a sequence of steps performed on, by, or with the patient’s anatomy. This sequence is typically the pattern followed by surgeons when generating surgical report narratives for documenting surgical procedures. This paper describes a methodology for semi-automatically deriving a model of conducted surgeries, utilizing a sequence of derived Unified Medical Language System (UMLS) concepts for representing surgical procedures. A multiple sequence alignment was computed from a collection of such sequences and was used for generating the model. These models have the potential of being useful in a variety of informatics applications such as information retrieval and automatic document generation. PMID:16779094
2015-09-01
Figure 30, an order processing state diagram (after Fowler and Scott 1997), shows an OV-6b for order processing states; the order processing state transition starts at checking order and ends at order delivered.
NASA Technical Reports Server (NTRS)
Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.
1998-01-01
The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.
NASA Technical Reports Server (NTRS)
Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.
1998-01-01
The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.
Self-Exciting Point Process Modeling of Conversation Event Sequences
NASA Astrophysics Data System (ADS)
Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo
Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to data of conversation sequences recorded in company offices in Japan. In this way, we can estimate the relative magnitudes of the self-excitation, its temporal decay, and the base event rate independent of the self-excitation. These variables depend strongly on individuals. We also point out an important limitation of the Hawkes model: the correlation in the interevent times and the burstiness cannot be independently modulated.
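To make the Hawkes mechanism concrete, the following minimal Python sketch simulates a univariate self-exciting process with an exponential kernel via Ogata-style thinning and extracts the interevent times; the parameter values are illustrative and are not those fitted to the conversation data.

```python
import random, math

def simulate_hawkes(mu, alpha, beta, t_max, seed=1):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    random.seed(seed)
    events, t = [], 0.0
    while t < t_max:
        # intensity just after time t is an upper bound until the next event
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)          # candidate waiting time
        if t >= t_max:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:    # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

ev = simulate_hawkes(mu=0.2, alpha=0.8, beta=1.5, t_max=200.0)
isi = [b - a for a, b in zip(ev, ev[1:])]         # interevent times
```

With this parameterization, larger ratios of alpha to beta (closer to one) produce visibly burstier interevent sequences, which is the regime the study examines.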
2013-01-01
Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
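As a rough illustration of the multilevel idea (not the authors' algorithm), the sketch below performs one level of overlap-graph coarsening by merging reads along their heaviest unmatched overlap edges; the reads, overlaps, and weights are made up.

```python
def coarsen(nodes, edges):
    """One coarsening level: contract each node with its heaviest unmatched neighbour.
    edges maps frozenset({u, v}) -> overlap weight."""
    matched, mapping = set(), {}
    for pair, w in sorted(edges.items(), key=lambda kv: -kv[1]):
        u, v = tuple(pair)
        if u not in matched and v not in matched:
            matched |= {u, v}
            mapping[u] = mapping[v] = u + "+" + v      # merged super-read
    for n in nodes:
        mapping.setdefault(n, n)                       # unmatched nodes survive as-is
    coarse_edges = {}
    for pair, w in edges.items():                      # rebuild edges between super-nodes
        u, v = (mapping[x] for x in pair)
        if u != v:
            key = frozenset({u, v})
            coarse_edges[key] = coarse_edges.get(key, 0) + w
    return set(mapping.values()), coarse_edges

reads = {"r1", "r2", "r3", "r4"}
overlaps = {frozenset({"r1", "r2"}): 80, frozenset({"r2", "r3"}): 35,
            frozenset({"r3", "r4"}): 60}
print(coarsen(reads, overlaps))   # coarser graph; repeat for further levels
```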
Automation on the generation of genome-scale metabolic models.
Reyes, R; Gamermann, D; Montagud, A; Fuente, D; Triana, J; Urchueguía, J F; de Córdoba, P Fernández
2012-12-01
Nowadays, the reconstruction of genome-scale metabolic models is a non-automated and interactive process based on decision making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze, and validate the list of all metabolic reactions present in a specific organism. To write this list, one has to go manually through a huge amount of genomic, metabolomic, and physiological information. Currently, there is no optimal algorithm that allows one to automatically go through all this information and generate the models taking into account the probabilistic criteria of uniqueness and completeness that a biologist would consider. This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps for the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path length, have been compared and analyzed.
Neuroscientific Model of Motivational Process
Kim, Sung-il
2013-01-01
Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment. PMID:23459598
Neuroscientific model of motivational process.
Kim, Sung-Il
2013-01-01
Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.
Foreign Language Methods and an Information Processing Model of Memory.
ERIC Educational Resources Information Center
Willebrand, Julia
The major approaches to language teaching (audiolingual method, generative grammar, Community Language Learning and Silent Way) are investigated to discover whether or not they are compatible in structure with an information-processing model of memory (IPM). The model of memory used was described by Roberta Klatzky in "Human Memory:…
Chen, Yi- Ping Phoebe; Hanan, Jim
2002-01-01
Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database and to simplify query development for the recursively structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly.
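A minimal sketch of the kind of data such a system ingests: a toy L-system derivation whose successive strings are loaded into a relational table (SQLite here); the production rule and schema are hypothetical and far simpler than real plant-architecture models.

```python
import sqlite3

def derive(axiom, rules, steps):
    """Simple context-free L-system string rewriting."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# hypothetical branching rule: 'A' an apex, 'I' an internode, [] a branch
rules = {"A": "I[A]A"}
derivations = [derive("A", rules, n) for n in range(4)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE derivation (step INTEGER, symbols TEXT)")
con.executemany("INSERT INTO derivation VALUES (?, ?)", list(enumerate(derivations)))
for row in con.execute("SELECT step, length(symbols) FROM derivation"):
    print(row)   # growth of the plant string at each derivation step
```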
NASA Technical Reports Server (NTRS)
Cross, James H., II
1991-01-01
The main objective is the investigation, formulation, and generation of graphical representations of algorithms, structures, and processes for Ada (GRASP/Ada). The presented task, in which various graphical representations that can be extracted or generated from source code are described and categorized, is focused on reverse engineering. The following subject areas are covered: the system model; control structure diagram generator; object oriented design diagram generator; user interface; and the GRASP library.
Effect of Entropy Generation on Wear Mechanics and System Reliability
NASA Astrophysics Data System (ADS)
Gidwani, Akshay; James, Siddanth; Jagtap, Sagar; Karthikeyan, Ram; Vincent, S.
2018-04-01
Wear is an irreversible phenomenon. Processes such as mutual sliding and rolling between materials involve entropy generation. These processes are monotonic with respect to time. The concept of entropy generation is further quantified using the Degradation Entropy Generation theorem formulated by Michael D. Bryant. The sliding-wear model can be extrapolated to different instances to provide a potential analysis of machine prognostics as well as system and process reliability for various processes beyond purely mechanical ones. In other words, using the concepts of entropy generation and wear, one can quantify the reliability of a system with respect to time using a thermodynamic variable, which is the basis of this paper. Thus, in the present investigation, a unique attempt has been made to establish a correlation between entropy, wear, and reliability, which can be a useful technique in preventive maintenance.
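As commonly stated in the literature (a sketch of the standard form, not the specific correlation developed in this paper), the Degradation Entropy Generation theorem relates the rate of a chosen degradation measure w, here wear, to the entropy production rates of the underlying dissipative processes:

```latex
\dot{w} \;=\; \sum_{i} B_i \, \dot{S}'_i ,
\qquad
B_i \equiv \frac{\partial \dot{w}}{\partial \dot{S}'_i},
```

where the S'_i are the entropies generated by the individual dissipative processes (for dry sliding, frictional dissipation dominates) and the B_i are degradation coefficients, usually treated as constants of the material pair and operating regime.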
The tale of hearts and reason: the influence of mood on decision making.
Laborde, Sylvain; Raab, Markus
2013-08-01
In decision-making research, one important aspect of real-life decisions has so far been neglected: the mood of the decision maker when generating options. The authors tested the use of the take-the-first (TTF) heuristic and extended the TTF model to understand how mood influences the option-generation process of individuals in two studies, the first using a between-subjects design (30 nonexperts, 30 near-experts, and 30 experts) and the second conceptually replicating the first using a within-subject design (30 nonexperts). Participants took part in an experimental option-generation task, with 31 three-dimensional videos of choices in team handball. Three moods were elicited: positive, neutral, and negative. The findings (a) replicate previous results concerning TTF and (b) show that the option-generation process was associated with the physiological component of mood, supporting the neurovisceral integration model. The extension of TTF to processing emotional factors is an important step forward in explaining fast choices in real-life situations.
NASA Astrophysics Data System (ADS)
Kubalska, J. L.; Preuss, R.
2013-12-01
Digital Surface Models (DSM) are increasingly used in GIS databases as standalone products. They are also necessary to create other products such as 3D city models, true-ortho images and object-oriented classifications. This article presents results of DSM generation for classification of vegetation in urban areas. The source data allowed DSMs to be produced using an image matching method and ALS data. The creation of the DSM from digital images, obtained with an Ultra Cam-D digital Vexcel camera, was carried out in Match-T by INPHO. This program optimizes the configuration of the image matching process, which ensures high accuracy and minimizes gap areas. The accuracy of this process was analysed by comparing the DSM generated in Match-T with the DSM generated from ALS data. Because of the further purpose of the generated DSM, it was decided to create the model as a GRID with a cell size of 1 m. With this parameter, a differential model from both DSMs was also built, which allowed the relative accuracy of the compared models to be determined. The analysis indicates that DSM generation with the multi-image matching method is competitive with creating the same surface model from ALS data. Thus, when digital images with high overlap are available, the additional registration of ALS data seems to be unnecessary.
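The relative-accuracy comparison described above amounts to differencing two co-registered grids; a small numpy sketch, with synthetic surfaces standing in for the Match-T and ALS models, illustrates computing the differential model and its summary statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins for two 1 m GRID DSMs over the same extent [metres]
dsm_match = 300.0 + rng.normal(0.0, 0.3, size=(500, 500))   # image-matching DSM
dsm_als = 300.0 + rng.normal(0.0, 0.1, size=(500, 500))     # ALS reference DSM

diff = dsm_match - dsm_als                                   # differential model
print("mean offset %.2f m, RMSE %.2f m"
      % (diff.mean(), np.sqrt((diff ** 2).mean())))
```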
Global optimization framework for solar building design
NASA Astrophysics Data System (ADS)
Silva, N.; Alves, N.; Pascoal-Faria, P.
2017-07-01
The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized over its parameters; in this way, the best solution for a constrained problem is determined. Besides establishing the overall framework design, this work consists of identifying different building shapes and their main parameters, creating an algorithmic description for these main shapes, and formulating the objective function with respect to the building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The proposed approach will help in the construction of real buildings that consume less energy and contribute to a more sustainable world.
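A highly simplified sketch of the pipeline's core loop: a parametric building instance is generated from (width, depth, height, orientation) and an assumed energy objective is minimized with scipy. The geometry, coefficients, and bounds are placeholders, not the paper's energy calculation tool or scripting engine.

```python
import math
from scipy.optimize import minimize

def objective(p):
    width, depth, height, orientation = p            # orientation in radians from south
    envelope = 2 * height * (width + depth) + width * depth   # wall + roof area [m2]
    south_glazing = 0.3 * height * width * math.cos(orientation)
    heating = 0.9 * envelope                          # assumed transmission losses
    solar_gain = 0.6 * max(south_glazing, 0.0)        # assumed usable solar energy
    return heating - solar_gain                       # net energy demand to minimise

x0 = [10.0, 8.0, 6.0, 0.2]
res = minimize(objective, x0, method="L-BFGS-B",
               bounds=[(4, 20), (4, 20), (3, 12), (-1.5, 1.5)])
print(res.x)                                          # optimised building instance
```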
Membrane Packing Problems: A short Review on computational Membrane Modeling Methods and Tools
Sommer, Björn
2013-01-01
The use of model membranes is currently part of the daily workflow for many biochemical and biophysical disciplines. These membranes are used to analyze the behavior of small substances, to simulate transport processes, to study the structure of macromolecules or for illustrative purposes. But how can these membrane structures be generated? This mini review discusses a number of ways to obtain these structures. First, the problem will be formulated as the Membrane Packing Problem. It will be shown that the theoretical problems of placing proteins and of placing lipids onto a membrane area differ significantly. Thus, two sub-problems will be defined and discussed. Then, different – partly historical – membrane modeling methods will be introduced. Finally, membrane modeling tools will be evaluated which are able to semi-automatically generate these model membranes and thus drastically accelerate and simplify the membrane generation process. The mini review concludes with advice about which tool is appropriate for which application case. PMID:24688707
The Monash University Interactive Simple Climate Model
NASA Astrophysics Data System (ADS)
Dommenget, D.
2013-12-01
The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes off and on, you can deconstruct the climate and learn how the different processes interact to generate the observed climate and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities are for teaching students with it.
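GREB itself resolves many more processes, but the flavour of an energy-balance calculation can be shown with a zero-dimensional sketch: equilibrium temperature follows from balancing absorbed solar radiation plus a standard logarithmic CO2 forcing against outgoing long-wave radiation reduced by an assumed greenhouse factor. All numbers below are generic textbook values, not GREB parameters.

```python
import math

SIGMA, S0, ALBEDO, G = 5.67e-8, 1361.0, 0.30, 0.40   # Stefan-Boltzmann, solar const., albedo, greenhouse factor

def equilibrium_temperature(co2_ppm, co2_ref=280.0):
    forcing = 5.35 * math.log(co2_ppm / co2_ref)      # W m-2, standard approximation
    absorbed = S0 / 4.0 * (1.0 - ALBEDO) + forcing
    # outgoing long-wave: sigma * T^4 * (1 - G) at equilibrium
    return (absorbed / (SIGMA * (1.0 - G))) ** 0.25

print(equilibrium_temperature(280.0))   # pre-industrial, roughly 289 K
print(equilibrium_temperature(560.0))   # doubled CO2, about 1 K warmer (no feedbacks)
```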
Ecohydrologic process modeling of mountain block groundwater recharge.
Magruder, Ian A; Woessner, William W; Running, Steve W
2009-01-01
Regional mountain block recharge (MBR) is a key component of alluvial basin aquifer systems typical of the western United States. Yet neither water scientists nor resource managers have a commonly available and reasonably invoked quantitative method to constrain MBR rates. Recent advances in landscape-scale ecohydrologic process modeling offer the possibility that meteorological data and land surface physical and vegetative conditions can be used to generate estimates of MBR. A water balance was generated for a temperate 24,600-ha mountain watershed, elevation 1565 to 3207 m, using the ecosystem process model Biome-BGC (BioGeochemical Cycles) (Running and Hunt 1993). Input data included remotely sensed landscape information and climate data generated with the Mountain Climate Simulator (MT-CLIM) (Running et al. 1987). Estimated mean annual MBR flux into the crystalline bedrock terrain is 99,000 m³/d, or approximately 19% of annual precipitation for the 2003 water year. Controls on MBR predictions include evapotranspiration (radiation limited in wet years and moisture limited in dry years), soil properties, vegetative ecotones (significant at lower elevations), and snowmelt (dominant recharge process). The ecohydrologic model is also used to investigate how climatic and vegetative controls influence recharge dynamics within three elevation zones. The ecohydrologic model proves useful for investigating controls on recharge to mountain blocks as a function of climate and vegetation. Future efforts will need to investigate the uncertainty in the modeled water balance by incorporating an advanced understanding of mountain recharge processes, an ability to simulate those processes at varying scales, and independent approaches to calibrating MBR estimates. Copyright © 2009 The Author(s). Journal compilation © 2009 National Ground Water Association.
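At its simplest, the recharge estimate is a residual of the annual water balance; the sketch below reproduces that arithmetic with illustrative depths (the fluxes are not the study's Biome-BGC outputs, although they are chosen to land near the reported magnitude).

```python
# Annual water balance residual: MBR = P - ET - Q_surface - dS (all in metres of water)
area_m2 = 24_600 * 1e4                        # 24,600 ha watershed
precip_m, et_m, runoff_m, storage_change_m = 0.78, 0.50, 0.12, 0.01

mbr_m = precip_m - et_m - runoff_m - storage_change_m
mbr_m3_per_day = mbr_m * area_m2 / 365.0
print(f"MBR ~ {mbr_m3_per_day:,.0f} m3/d "
      f"({mbr_m / precip_m:.0%} of annual precipitation)")
```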
Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter
NASA Technical Reports Server (NTRS)
Belknap, Shannon; Zhang, Michael
2013-01-01
The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder creates both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage and the correct tile type, tile thickness, structure thickness, and SIP thickness at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.
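The configuration file is described only in general terms; a hypothetical example of such a file, and of reading it with Python's standard configparser, is sketched below. The key names and values are made up for illustration and are not the tool's actual format.

```python
from configparser import ConfigParser
from io import StringIO

cfg_text = """
[damage_site]
x_in = 412.5
y_in = -88.0
tile_type = LI-900
tile_thickness_in = 1.7
sip_thickness_in = 0.16
structure_thickness_in = 0.10
cavity_length_in = 3.0
cavity_depth_in = 0.8
"""

cfg = ConfigParser()
cfg.read_file(StringIO(cfg_text))
site = {k: cfg.get("damage_site", k) for k in cfg.options("damage_site")}
print(site)   # values a model builder could use to size the GMM/TMM cavity
```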
Geologic and climatic controls on streamflow generation processes in a complex eogenetic karst basin
NASA Astrophysics Data System (ADS)
Vibhava, F.; Graham, W. D.; Maxwell, R. M.
2012-12-01
Streamflow at any given location and time is representative of surface and subsurface contributions from various sources. The ability to fully identify the factors controlling these contributions is key to successfully understanding the transport of contaminants through the system. In this study we developed a fully integrated 3D surface water-groundwater-land surface model, PARFLOW, to evaluate geologic and climatic controls on streamflow generation processes in a complex eogenetic karst basin in North Central Florida. In addition to traditional model evaluation criteria, such as comparing field observations to model-simulated streamflow and groundwater elevations, we quantitatively evaluated the model's predictions of surface-groundwater interactions over space and time using a suite of binary end-member mixing models that were developed using observed specific conductivity differences among surface and groundwater sources throughout the domain. Analysis of model predictions showed that geologic heterogeneity exerts a strong control on both streamflow generation processes and land-atmosphere fluxes in this watershed. In the upper basin, where the karst aquifer is overlain by a thick confining layer, approximately 92% of streamflow is "young" event flow, produced by near-stream rainfall. Throughout the upper basin the confining layer produces a persistently high surficial water table, which results in high evapotranspiration, low groundwater recharge and thus negligible "inter-event" streamflow. In the lower basin, where the karst aquifer is unconfined, deeper water tables result in less evapotranspiration. Thus, over 80% of the streamflow is "old" subsurface flow produced by diffuse infiltration through the epikarst throughout the lower basin, and all surface contributions to streamflow originate in the upper confined basin. Climatic variability provides a secondary control on surface-subsurface and land-atmosphere fluxes, producing significant seasonal and interannual variability in these processes. Spatial and temporal patterns of evapotranspiration, groundwater recharge and streamflow generation processes reveal potential hot spots and hot moments for surface and groundwater contamination in this basin.
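The binary end-member mixing calculation mentioned above can be written in a few lines: given specific-conductance values for the stream and the two end members, the old-water fraction follows from a simple mass balance. The numbers below are illustrative, not the study's observations.

```python
def old_water_fraction(sc_stream, sc_event, sc_groundwater):
    """f_old solves sc_stream = f_old*sc_groundwater + (1 - f_old)*sc_event."""
    return (sc_stream - sc_event) / (sc_groundwater - sc_event)

# illustrative end members in uS/cm
f_old = old_water_fraction(sc_stream=240.0, sc_event=60.0, sc_groundwater=310.0)
print(f"old (subsurface) contribution ~ {f_old:.0%}")   # ~72% in this example
```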
Iwamoto, Derek Kenji; Negi, Nalini Junko; Partiali, Rachel Negar; Creswell, John W
2013-10-01
This phenomenological study elucidates the identity development processes of 12 second-generation adult Asian Indian Americans. The results identify salient sociocultural factors and multidimensional processes of racial and ethnic identity development. Discrimination, parental, and community factors seemed to play a salient role in influencing participants' racial and ethnic identity development. The emergent Asian Indian American racial and ethnic identity model provides a contextualized overview of key developmental periods and turning points within the process of identity development.
Iwamoto, Derek Kenji; Negi, Nalini Junko; Partiali, Rachel Negar; Creswell, John W.
2014-01-01
This phenomenological study elucidates the identity development processes of 12 second-generation adult Asian Indian Americans. The results identify salient sociocultural factors and multidimensional processes of racial and ethnic identity development. Discrimination, parental, and community factors seemed to play a salient role in influencing participants’ racial and ethnic identity development. The emergent Asian Indian American racial and ethnic identity model provides a contextualized overview of key developmental periods and turning points within the process of identity development. PMID:25298617
3D Model Generation From the Engineering Drawing
NASA Astrophysics Data System (ADS)
Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav
2010-01-01
The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system; it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation process from the paper form to the digital one is complex and difficult, particularly owing to the different types of drawings, the forms of the displayed objects, and the encountered errors and deviations from technical standards. The algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part is described in this contribution. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing the rotational part was used for verification.
NASA Technical Reports Server (NTRS)
Kopasakis, George
1997-01-01
Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology will be developed, utilizing the Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC control methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified in this paper in order for convergence to occur monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
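A schematic sketch of the performance-seeking idea (not the paper's FMRLC implementation): a finite-difference steepest-ascent search updates the operating set point toward maximum performance, and that set point is handed to a stand-in tracking loop where the FMRLC controller would sit. The performance map, gains, and plant are all illustrative.

```python
def performance(u):
    """Unknown-to-the-controller performance map; peak at u = 3.2 (illustrative)."""
    return -(u - 3.2) ** 2 + 10.0

def seek_and_track(u0=0.0, step=0.05, eps=1e-3, cycles=200):
    u_set, y_plant = u0, 0.0
    for _ in range(cycles):
        # gradient estimate by central finite differences (SDG step, ascent direction)
        grad = (performance(u_set + eps) - performance(u_set - eps)) / (2 * eps)
        u_set += step * grad
        # stand-in first-order tracking loop in place of the FMRLC-controlled process
        y_plant += 0.5 * (u_set - y_plant)
    return u_set, y_plant

print(seek_and_track())   # converges near the peak-performance operating point
```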
GANViz: A Visual Analytics Approach to Understand the Adversarial Game.
Wang, Junpeng; Gou, Liang; Yang, Hao; Shen, Han-Wei
2018-06-01
Generative models hold promise for learning data representations in an unsupervised fashion with deep learning. Generative Adversarial Nets (GAN) is one of the most popular frameworks in this arena. Despite the promising results from different types of GANs, in-depth understanding of the adversarial training process of the models remains a challenge for domain experts. The complexity and the potentially long training process of the models make them hard to evaluate, interpret, and optimize. In this work, guided by practical needs from domain experts, we design and develop a visual analytics system, GANViz, aiming to help experts understand the adversarial process of GANs in depth. Specifically, GANViz evaluates the model performance of the two subnetworks of GANs, provides evidence and interpretations of the models' performance, and empowers comparative analysis with the evidence. Through our case studies with two real-world datasets, we demonstrate that GANViz can provide useful insight into helping domain experts understand, interpret, evaluate, and potentially improve GAN models.
Elementary model of severe plastic deformation by KoBo process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusak, A.; Storozhuk, N.; Danielewski, M., E-mail: daniel@agh.edu.pl
2014-01-21
A self-consistent model of the generation, interaction, and annihilation of point defects in a gradient of oscillating stresses is presented. The model describes the recently suggested method of severe plastic deformation by a combination of pressure and oscillating rotations of the die along the billet axis (the KoBo process). The model predicts the existence of a distinct zone of reduced viscosity with a sharply increased concentration of point defects. This zone enables the high extrusion velocity. The presented model confirms that severe plastic deformation (SPD) in KoBo may be treated as a non-equilibrium phase transition with an abrupt drop of viscosity in a rather well defined spatial zone. In this very zone, an intensive lateral rotational movement proceeds together with the generation of point defects, which in a self-organized manner make rotation possible by decreasing the viscosity. The special properties of material under the KoBo version of SPD can be described without using the concepts of non-equilibrium grain boundaries, ballistic jumps and amorphization. The model can be extended to include different SPD processes.
NASA Astrophysics Data System (ADS)
Panu, U. S.; Ng, W.; Rasmussen, P. F.
2009-12-01
The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov Chain and Alternating Renewal Process (ARP)) of the precipitation occurrence process generally assume the existence of short-term temporal dependency between neighboring states while implying long-term independence (randomness) of states in precipitation records. Existing temporal-dependent models for the generation of precipitation occurrences are restricted either by a fixed-length memory (e.g., the order of a Markov chain model) or by the persistence of states within segments (e.g., persistence of homogeneous states within the dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states can be an arduous task, and a flexible modeling approach is required for the preservation of various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate biologically meaningful "words" (i.e., segments with specific patterns in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrence comprising wet or dry states. Such flexibility would provide a unique advantage over the traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely the alternating renewal process using the Geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences. Comparisons involved three guiding principles, namely (i) the ability of the models to preserve the short-term temporal dependency in the data through the concepts of autocorrelation, average mutual information, and the Hurst exponent, (ii) the ability of the models to preserve the persistence within homogeneous dry/wet weather states through analysis of dry/wet-spell lengths between the observed and generated data, and (iii) the ability to assess the goodness-of-fit of the models through likelihood estimates (i.e., AIC and BIC). The past 30 years of observed daily precipitation records from 10 Canadian meteorological stations were utilized for comparative analyses of the three models. In general, the Markov chain model performed well. The remaining models were found to be competitive with one another depending upon the scope and purpose of the comparison. Although the Markov chain model has a certain advantage in the generation of daily precipitation occurrences, the structural flexibility offered by the Dictionary approach in modeling the varied segment lengths of heterogeneous weather states provides a distinct and powerful advantage in the generation of precipitation sequences.
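For concreteness, a second-order Markov chain generator for daily wet/dry occurrence, one of the three compared models, can be sketched as follows. The transition probabilities are invented rather than fitted to the Canadian station records, and the tail of the sketch collects dry/wet-spell lengths of the kind used in the comparison.

```python
import random

# P(wet today | states of the previous two days), illustrative values
p_wet = {("D", "D"): 0.15, ("D", "W"): 0.45, ("W", "D"): 0.30, ("W", "W"): 0.65}

def generate_occurrence(n_days, seed=0):
    random.seed(seed)
    seq = ["D", "D"]                                  # arbitrary initial two-day state
    for _ in range(n_days):
        prev = (seq[-2], seq[-1])
        seq.append("W" if random.random() < p_wet[prev] else "D")
    return seq[2:]

days = generate_occurrence(365 * 30)
spells, run, state = [], 1, days[0]                   # dry/wet-spell lengths
for s in days[1:]:
    if s == state:
        run += 1
    else:
        spells.append((state, run))
        state, run = s, 1
```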
On the Limiting Markov Process of Energy Exchanges in a Rarely Interacting Ball-Piston Gas
NASA Astrophysics Data System (ADS)
Bálint, Péter; Gilbert, Thomas; Nándori, Péter; Szász, Domokos; Tóth, Imre Péter
2017-02-01
We analyse the process of energy exchanges generated by the elastic collisions between a point-particle, confined to a two-dimensional cell with convex boundaries, and a `piston', i.e. a line-segment, which moves back and forth along a one-dimensional interval partially intersecting the cell. This model can be considered as the elementary building block of a spatially extended high-dimensional billiard modeling heat transport in a class of hybrid materials exhibiting the kinetics of gases and spatial structure of solids. Using heuristic arguments and numerical analysis, we argue that, in a regime of rare interactions, the billiard process converges to a Markov jump process for the energy exchanges and obtain the expression of its generator.
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
NASA Technical Reports Server (NTRS)
Booth, E., Jr.; Yu, J. C.
1986-01-01
An experimental investigation of two-dimensional blade vortex interaction was conducted at NASA Langley Research Center. The first phase was a flow visualization study to document the approach process of a two-dimensional vortex as it encountered a loaded blade model. To accomplish the flow visualization study, a method for generating two-dimensional vortex filaments was required. The numerical study used to define a new vortex generation process and the use of this process in the flow visualization study are documented. Additionally, the photographic techniques and data analysis methods used in the flow visualization study are examined.
Process modelling of biomass conversion to biofuels with combined heat and power.
Sharma, Abhishek; Shinde, Yogesh; Pareek, Vishnu; Zhang, Dongke
2015-12-01
A process model has been developed to study the pyrolysis of biomass to produce biofuel with heat and power generation. The gaseous and solid products were used to generate heat and electrical power, whereas the bio-oil was stored and supplied for other applications. The overall efficiency of the base case model was estimated for conversion of biomass into useable forms of bio-energy. It was found that the proposed design is not only significantly efficient but also potentially suitable for distributed operation of pyrolysis plants with centralised post-processing facilities for the production of other biofuels and chemicals. It was further determined that the bio-oil quality improved when a multi-stage condensation system was used. However, recycling the flue gases coming from the combustor instead of the non-condensable gases in the pyrolyzer led to an increase in the overall efficiency of the process with some degradation of bio-oil quality. Copyright © 2015 Elsevier Ltd. All rights reserved.
Automatic Earth observation data service based on reusable geo-processing workflow
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min
2008-12-01
A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and the BPEL execution engine is adopted. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influences of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.
Worklist handling in workflow-enabled radiological application systems
NASA Astrophysics Data System (ADS)
Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim; von Berg, Jens
2000-05-01
For the next generation of integrated information systems for health care applications, more emphasis has to be put on systems which, by design, support the reduction of cost, the increase in efficiency and the improvement of the quality of services. A substantial contribution to this will be the modeling, optimization, automation and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management, the generation and handling of worklists, which provide workflow participants automatically with work items that reflect tasks to be performed. The display of worklists and the functions associated with work items are the visible part for the end users of an information system using a workflow management approach. Appropriate worklist design and implementation will influence the user friendliness of a system and will largely influence work efficiency. Technically, in current imaging department information system environments (modality-PACS-RIS installations), a data-driven approach has been taken: worklists -- if present at all -- are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide us with an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.
Supervised Learning Based Hypothesis Generation from Biomedical Literature.
Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei
2015-01-01
Nowadays, the amount of biomedical literature is growing at an explosive rate, and much useful knowledge remains undiscovered in it. Researchers can form biomedical hypotheses by mining this literature. In this paper, we propose a supervised learning based approach to generate hypotheses from biomedical literature. This approach splits the traditional processing of hypothesis generation with the classic ABC model into an AB model and a BC model, which are constructed with a supervised learning method. Compared with concept co-occurrence and grammar engineering-based approaches like SemRep, machine learning based models usually achieve better performance in information extraction (IE) from texts. Then, by combining the two models, the approach reconstructs the ABC model and generates biomedical hypotheses from the literature. The experimental results on the three classic Swanson hypotheses show that our approach outperforms the SemRep system.
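The classic ABC pattern that the AB/BC split builds on can be illustrated with a toy co-occurrence sketch using Swanson's well-known fish oil and Raynaud disease example; the paper's supervised AB and BC models are replaced here by hand-coded link sets.

```python
# A-terms link to B-terms in one literature, B-terms link to C-terms in another;
# an (A, C) pair never mentioned together becomes a candidate hypothesis.
ab_links = {"fish oil": {"blood viscosity", "platelet aggregation"}}
bc_links = {"blood viscosity": {"Raynaud disease"},
            "platelet aggregation": {"Raynaud disease"}}

def hypotheses(a_term):
    out = {}
    for b in ab_links.get(a_term, set()):
        for c in bc_links.get(b, set()):
            out.setdefault((a_term, c), set()).add(b)   # keep supporting B-terms
    return out

print(hypotheses("fish oil"))   # {('fish oil', 'Raynaud disease'): {...}}
```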
Modeling Renewable Penetration Using a Network Economic Model
NASA Astrophysics Data System (ADS)
Lamont, A.
2001-03-01
This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging prices and quantity information between the nodes of the system. This formulation is very flexible and takes very little time to build and modify models. This paper reports an experiment designing a system with photovoltaic and base and peak fossil generators. The level of PV penetration as a function of its price and the capacities of the fossil generators were determined using the network approach and using an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
Information-based models for finance and insurance
NASA Astrophysics Data System (ADS)
Hoyle, Edward
2010-10-01
In financial markets, the information that traders have about an asset is reflected in its price. The arrival of new information then leads to price changes. The `information-based framework' of Brody, Hughston and Macrina (BHM) isolates the emergence of information, and examines its role as a driver of price dynamics. This approach has led to the development of new models that capture a broad range of price behaviour. This thesis extends the work of BHM by introducing a wider class of processes for the generation of the market filtration. In the BHM framework, each asset is associated with a collection of random cash flows. The asset price is the sum of the discounted expectations of the cash flows. Expectations are taken with respect to (i) an appropriate measure and (ii) the filtration generated by a set of so-called information processes that carry noisy or imperfect market information about the cash flows. To model the flow of information, we introduce a class of processes termed Lévy random bridges (LRBs), generalising the Brownian and gamma information processes of BHM. Conditioned on its terminal value, an LRB is identical in law to a Lévy bridge. We consider in detail the case where the asset generates a single cash flow X_T at a fixed date T. The flow of information about X_T is modelled by an LRB with random terminal value X_T. An explicit expression for the price process is found by working out the discounted conditional expectation of X_T with respect to the natural filtration of the LRB. New models are constructed using information processes related to the Poisson process, the Cauchy process, the stable-1/2 subordinator, the variance-gamma process, and the normal inverse-Gaussian process. These are applied to the valuation of credit-risky bonds, vanilla and exotic options, and non-life insurance liabilities.
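In the single-cash-flow case described above, the price process takes the standard information-based form, sketched here in generic notation with P_{tT} the discount factor and F_t the filtration generated by the Lévy random bridge information process:

```latex
S_t \;=\; P_{tT}\,\mathbb{E}\!\left[\, X_T \,\middle|\, \mathcal{F}_t \,\right],
\qquad 0 \le t < T .
```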
Reconstruction of dynamical systems from resampled point processes produced by neuron models
NASA Astrophysics Data System (ADS)
Pavlova, Olga N.; Pavlov, Alexey N.
2018-04-01
Characterization of dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as the sequences of interspike intervals (ISIs). This theoretical background confirms the ability of attractor reconstruction from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short amount of data and show that this effect is observed for different types of mechanisms for spike generation.
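The reconstruction step amounts to delay embedding the ISI series itself; a short numpy sketch with a synthetic spike train shows the mechanics (the chaotically driven neuron models and the resampling scheme of the paper are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)
spike_times = np.cumsum(0.5 + 0.1 * rng.random(2000))   # stand-in spike train
isi = np.diff(spike_times)                               # interspike intervals

def delay_embed(x, dim=3, lag=1):
    """Build delay vectors (x[i], x[i+lag], ..., x[i+(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

attractor = delay_embed(isi, dim=3, lag=2)               # reconstructed points in R^3
print(attractor.shape)
```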
NASA Astrophysics Data System (ADS)
Sawicki, J.; Siedlaczek, P.; Staszczyk, A.
2018-03-01
A numerical three-dimensional model for computing the residual stresses generated in the cross section of steel 42CrMo4 after nitriding is presented. The diffusion process is analyzed by the finite-element method. The internal stresses are computed using the obtained profile of the nitrogen concentration distribution. The special features of the intricate geometry of the treated articles, including edges and angles, are considered. A comparative analysis of the simulation results and of the experimental measurements of residual stresses obtained by the Waisman-Philips method is performed.
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method for reconstructing the geometrical surface of given objects. Current systems allow fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models is lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series from different digital cameras. Two different web services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of the generation of the models is given, considering the interactive and processing time costs.
High resolution climate scenarios for snowmelt modelling in small alpine catchments
NASA Astrophysics Data System (ADS)
Schirmer, M.; Peleg, N.; Burlando, P.; Jonas, T.
2017-12-01
Snow in the Alps is affected by climate change with regard to duration, timing and amount. This has implications for important societal issues such as drinking water supply and hydropower generation. In Switzerland, the latter has received a lot of attention following the political decision to phase out nuclear electricity production. An increasing number of authorization requests for small hydropower plants located in small alpine catchments has been observed in recent years. This situation generates ecological conflicts, while the expected climate change poses a threat to water availability, thus putting at risk investments in such hydropower plants. Reliable high-resolution climate scenarios are thus required, which account for small-scale processes, to achieve realistic predictions of snowmelt runoff and its variability in small alpine catchments. We therefore used a novel model chain coupling a stochastic 2-dimensional weather generator (AWE-GEN-2d) with a state-of-the-art energy balance snow cover model (FSM). AWE-GEN-2d was applied to generate ensembles of climate variables at very fine temporal and spatial resolution, thus providing all climatic input variables required for the energy balance modelling. The land-surface model FSM was used to describe spatially variable snow cover accumulation and melt processes. The FSM was refined to allow applications at very high spatial resolution by specifically accounting for small-scale processes, such as a subgrid parametrization of snow-covered area and an improved representation of forest-snow processes. For the present study, the model chain was tested for current climate conditions using extensive observational datasets of different spatial and temporal coverage. Small-scale spatial processes such as elevation gradients or aspect differences in the snow distribution were evaluated using airborne LiDAR data. Forty years of monitoring data for snow water equivalent, snowmelt and snow-covered area for the whole of Switzerland were used to verify snow distribution patterns at coarser spatial and temporal scales. The ability of the model chain to reproduce current climate conditions in small alpine catchments makes this model combination an outstanding candidate for producing high-resolution climate scenarios of snowmelt in small alpine catchments.
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to (1) developing func...
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
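The equivalence check described above can be reproduced in outline with standard tools. The following sketch applies a Mann-Whitney U test to tallies from a sequential and a parallel run; the arrays are illustrative stand-ins, not actual GATE output.

```python
# Minimal sketch of the statistical equivalence check: comparing tally counts
# from sequential and parallel GATE runs with a Mann-Whitney test (stand-in data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
tallies_sequential = rng.poisson(lam=1000, size=50)   # e.g. counts per detector block
tallies_parallel = rng.poisson(lam=1000, size=50)

stat, p_value = mannwhitneyu(tallies_sequential, tallies_parallel, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No evidence of a difference: outputs statistically equivalent (but not equal).")
```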
Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling
NASA Astrophysics Data System (ADS)
Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo
2017-06-01
Current mainstream finite element software suffers from low efficiency and poor mesh quality when meshing gears. By establishing a universal three-dimensional gear model and exploring the rules of element and node arrangement, this paper proposes a finite element model generation method for general gears based on parametric modelling. A Visual Basic program is used to perform the finite element meshing, assign material properties, and set boundary/load conditions and other pre-processing work. The dynamic meshing analysis of the gears is carried out with the proposed method and compared with calculated values to verify its correctness. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new idea for FEM pre-processing.
Kollikkathara, Naushad; Feng, Huan; Yu, Danlin
2010-11-01
As planning for sustainable municipal solid waste management has to address several inter-connected issues such as landfill capacity, environmental impacts and financial expenditure, it becomes increasingly necessary to understand the dynamic nature of their interactions. A system dynamics approach designed here attempts to address some of these issues by fitting a model framework for the Newark urban region in the US and running a forecast simulation. The dynamic system developed in this study incorporates the complexity of the waste generation and management process to some extent, which is achieved through a combination of simpler sub-processes that are linked together to form a whole. The impact of decision options on the generation of waste in the city, on the remaining landfill capacity of the state, and on the economic cost or benefit actualized by different waste processing options are explored through this approach, providing valuable insights into the urban waste-management process. Copyright © 2010 Elsevier Ltd. All rights reserved.
Song, Yan; Wu, Weijie; Xie, Feng; Liu, Yilun; Wang, Tiejun
2017-01-01
Residual stress arising in the fabrication process of Double-Ceramic-Layer Thermal Barrier Coating Systems (DCL-TBCs) has a significant effect on their quality and reliability. In this work, based on the practical fabrication process of DCL-TBCs and on force and moment equilibrium, a theoretical model was first proposed to predict residual stress generation during fabrication, incorporating the temperature-dependent material properties of DCL-TBCs. A Finite Element Method (FEM) analysis was then carried out to verify the theoretical model. Some geometric parameters of DCL-TBCs, such as the thickness ratio of the stabilized Zirconia (YSZ, ZrO2-8%Y2O3) layer to the Lanthanum Zirconate (LZ, La2Zr2O7) layer, are adjustable over a wide range in the fabrication process and have a remarkable effect on performance; therefore, the effect of this thickness ratio on residual stress generation in the fabrication of DCL-TBCs has been systematically studied. In addition, the effect of thermal spray treatments, such as pre-heating, on residual stress generation has also been studied. It is found that the final residual stress mainly arises from the cooling-down stage of DCL-TBC fabrication. Increasing the pre-heating temperature clearly decreases the magnitude of residual stresses in the LZ layer, YSZ layer and substrate. With an increasing thickness ratio of the YSZ layer to the LZ layer, the magnitudes of residual stresses in the LZ and YSZ layers increase while the residual stress in the substrate decreases.
Next-generation concurrent engineering: developing models to complement point designs
NASA Technical Reports Server (NTRS)
Morse, Elizabeth; Leavens, Tracy; Cohanim, Barbak; Harmon, Corey; Mahr, Eric; Lewis, Brian
2006-01-01
Concurrent Engineering Design (CED) teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a next-generation CED team: in addition to a point design, the team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission.
Using a simulation assistant in modeling manufacturing systems
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, S. X.; Wolfsberger, John W.
1988-01-01
Numerous simulation languages exist for modeling discrete event processes, and many have now been ported to microcomputers. Graphics and animation capabilities were added to many of these languages to help users build models and evaluate simulation results. With all these languages and added features, the user is still burdened with learning the simulation language. Furthermore, the time to construct and then validate the simulation model is always greater than originally anticipated. One approach to minimize the time requirement is to use pre-defined macros that describe common processes or operations in a system. The development of a simulation assistant for modeling discrete event manufacturing processes is presented. A simulation assistant is defined as an interactive intelligent software tool that assists the modeler in writing a simulation program by translating the modeler's symbolic description of the problem and then automatically generating the corresponding simulation code. The simulation assistant is discussed with emphasis on an overview of the assistant, its elements, and the five manufacturing simulation generators. A typical manufacturing system is modeled using the simulation assistant, and its advantages and disadvantages are discussed.
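The macro idea can be illustrated with a modern discrete-event library. The sketch below is not the assistant described above; it merely shows, using the SimPy package and illustrative names and parameters, how a reusable "workstation" generator can stand in for one of the pre-defined manufacturing macros.

```python
# Minimal sketch of a reusable "workstation" macro in SimPy, illustrating how a
# simulation assistant could emit simulation code from a symbolic description.
import random
import simpy

def workstation(env, name, upstream, downstream, mean_service):
    """Pre-defined macro: take parts from an upstream buffer, process, pass on."""
    while True:
        part = yield upstream.get()
        yield env.timeout(random.expovariate(1.0 / mean_service))  # service time
        yield downstream.put(part)
        print(f"{env.now:7.2f}  {name} finished {part}")

env = simpy.Environment()
buffer_a = simpy.Store(env)
buffer_b = simpy.Store(env)
for i in range(5):
    buffer_a.put(f"part-{i}")              # initial work-in-process
env.process(workstation(env, "lathe", buffer_a, buffer_b, mean_service=3.0))
env.run(until=30)
```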
Lijun Liu; V. Missirian; Matthew S. Zinkgraf; Andrew Groover; V. Filkov
2014-01-01
Background: One of the great advantages of next generation sequencing is the ability to generate large genomic datasets for virtually all species, including non-model organisms. It should be possible, in turn, to apply advanced computational approaches to these datasets to develop models of biological processes. In a practical sense, working with non-model organisms...
OPC model generation procedure for different reticle vendors
NASA Astrophysics Data System (ADS)
Jost, Andrew M.; Belova, Nadya; Callan, Neal P.
2003-12-01
The challenge of delivering acceptable semiconductor products to customers in a timely fashion becomes more difficult as design complexity increases. The requirements of current-generation designs tax OPC engineers more than ever before, since late readiness of high-quality OPC models can delay new process qualifications or lead to respins, which add to the upward-spiraling costs of new reticle sets, extend time-to-market, and disappoint customers. In their efforts to extend the printability of new designs, OPC engineers generally focus on the data-to-wafer path, ignoring data-to-mask effects almost entirely. However, it is unknown whether reticle makers' disparate processes truly yield comparable reticles, even with identical tools. This approach raises the question of whether a single OPC model is applicable to all reticle vendors. LSI Logic has developed a methodology for quantifying vendor-to-vendor reticle manufacturing differences and adapting OPC models for use at several reticle vendors. This approach allows LSI Logic to easily adapt existing OPC models for use with several reticle vendors and obviates the generation of unnecessary models, allowing OPC engineers to focus their efforts on the most critical layers.
Boubakar, Leila; Falk, Julien; Ducuing, Hugo; Thoinet, Karine; Reynaud, Florie; Derrington, Edmund; Castellani, Valérie
2017-08-16
Transmission of polarity established early during cell lineage history is emerging as a key process guiding cell differentiation. Highly polarized neurons provide a fascinating model to study inheritance of polarity over cell generations and across morphological transitions. Neural crest cells (NCCs) migrate to the dorsal root ganglia to generate neurons directly or after cell divisions in situ. Using live imaging of vertebrate embryo slices, we found that bipolar NCC progenitors lose their polarity, retracting their processes to round for division, but generate neurons with bipolar morphology by emitting processes from the same locations as the progenitor. Monitoring the dynamics of Septins, which play key roles in yeast polarity, indicates that Septin 7 tags process sites for re-initiation of process growth following mitosis. Interfering with Septins blocks this mechanism. Thus, Septins store polarity features during mitotic rounding so that daughters can reconstitute the initial progenitor polarity. Copyright © 2017 Elsevier Inc. All rights reserved.
Hamker, Fred H; Wiltschut, Jan
2007-09-01
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
Generation capacity expansion planning in deregulated electricity markets
NASA Astrophysics Data System (ADS)
Sharma, Deepak
With increasing demand of electric power in the context of deregulated electricity markets, a good strategic planning for the growth of the power system is critical for our tomorrow. There is a need to build new resources in the form of generation plants and transmission lines while considering the effects of these new resources on power system operations, market economics and the long-term dynamics of the economy. In deregulation, the exercise of generation planning has undergone a paradigm shift. The first stage of generation planning is now undertaken by the individual investors. These investors see investments in generation capacity as an increasing business opportunity because of the increasing market prices. Therefore, the main objective of such a planning exercise, carried out by individual investors, is typically that of long-term profit maximization. This thesis presents some modeling frameworks for generation capacity expansion planning applicable to independent investor firms in the context of power industry deregulation. These modeling frameworks include various technical and financing issues within the process of power system planning. The proposed modeling frameworks consider the long-term decision making process of investor firms, the discrete nature of generation capacity addition and incorporates transmission network modeling. Studies have been carried out to examine the impact of the optimal investment plans on transmission network loadings in the long-run by integrating the generation capacity expansion planning framework within a modified IEEE 30-bus transmission system network. The work assesses the importance of arriving at an optimal IRR at which the firm's profit maximization objective attains an extremum value. The mathematical model is further improved to incorporate binary variables while considering discrete unit sizes, and subsequently to include the detailed transmission network representation. The proposed models are novel in the sense that the planning horizon is split into plan sub-periods so as to minimize the overall risks associated with long-term plan models, particularly in the context of deregulation.
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
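The comparison described above can be sketched as follows: synthetic incidence data are generated under sub-exponential growth (dC/dt = r C^p with p < 1) and then fitted with both an exponential (p = 1) and a generalized-growth model. The code below is illustrative, not the authors' simulation study.

```python
# Minimal sketch: fitting exponential vs. generalized-growth models to the early
# phase of a synthetic outbreak. C'(t) = r * C(t)**p reduces to exponential growth
# when p = 1; sub-exponential growth corresponds to p < 1. Synthetic data only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def cumulative_cases(t, r, p, c0=5.0):
    sol = solve_ivp(lambda _, c: r * c**p, (t[0], t[-1]), [c0], t_eval=t)
    return sol.y[0]

rng = np.random.default_rng(0)
t = np.arange(0, 21.0)                                  # roughly 3-5 generation intervals
true_cases = cumulative_cases(t, r=0.9, p=0.7)          # sub-exponential "truth"
data = true_cases * np.exp(rng.normal(0, 0.05, t.size)) # multiplicative noise

exp_fit, _ = curve_fit(lambda t, r: cumulative_cases(t, r, 1.0), t, data, p0=[0.5])
gg_fit, _ = curve_fit(lambda t, r, p: cumulative_cases(t, r, p), t, data, p0=[0.5, 0.9])
print("exponential fit r:", exp_fit, " generalized fit (r, p):", gg_fit)
```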
Backus-Gilbert inversion of travel time data
NASA Technical Reports Server (NTRS)
Johnson, L. E.
1972-01-01
Application of the Backus-Gilbert theory for geophysical inverse problems to the seismic body wave travel-time problem is described. In particular, it is shown how to generate earth models that fit travel-time data to within one standard error and having generated such models how to describe their degree of uniqueness. An example is given to illustrate the process.
ERIC Educational Resources Information Center
Cresswell-Yeager, Tiffany J.
2012-01-01
College choice is the three-stage process of aspiring, searching and choosing to attend college. There are many models pertaining to college choice; this study uses the Hossler and Gallagher model (aspiration, search and choice). This qualitative study explored first-generation college students' perceptions about the influences…
Muirhead, David; Aoun, Patricia; Powell, Michael; Juncker, Flemming; Mollerup, Jens
2010-08-01
The need for higher efficiency, maximum quality, and faster turnaround time is a continuous focus for anatomic pathology laboratories and drives changes in work scheduling, instrumentation, and management control systems. To determine the costs of generating routine, special, and immunohistochemical microscopic slides in a large, academic anatomic pathology laboratory using a top-down approach. The Pathology Economic Model Tool was used to analyze workflow processes at The Nebraska Medical Center's anatomic pathology laboratory. Data from the analysis were used to generate complete cost estimates, which included not only materials, consumables, and instrumentation but also specific labor and overhead components for each of the laboratory's subareas. The cost data generated by the Pathology Economic Model Tool were compared with the cost estimates generated using relative value units. Despite the use of automated systems for different processes, the workflow in the laboratory was found to be relatively labor intensive. The effect of labor and overhead on per-slide costs was significantly underestimated by traditional relative-value unit calculations when compared with the Pathology Economic Model Tool. Specific workflow defects with significant contributions to the cost per slide were identified. The cost of providing routine, special, and immunohistochemical slides may be significantly underestimated by traditional methods that rely on relative value units. Furthermore, a comprehensive analysis may identify specific workflow processes requiring improvement.
Category Cued Recall Evokes a Generate-Recognize Retrieval Process
Hunt, R. Reed; Smith, Rebekah E.; Toth, Jeffrey P.
2015-01-01
The experiments reported here were designed to replicate and extend McCabe, Roediger, and Karpicke’s (2011) finding that retrieval in category cued recall involves both controlled and automatic processes. The extension entailed identifying whether distinctive encoding affected one or both of these two processes. The first experiment successfully replicated McCabe et al., but the second, which added a critical baseline condition, produced data inconsistent with a two independent process model of recall. The third experiment provided evidence that retrieval in category cued recall reflects a generate-recognize strategy, with the effect of distinctive processing being localized to recognition. Overall, the data suggest that category cued recall evokes a generate-recognize retrieval strategy and that the sub-processes underlying this strategy can be dissociated as a function of distinctive versus relational encoding processes. PMID:26280355
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Auto-recognition of surfaces and auto-generation of material removal volume for finishing process
NASA Astrophysics Data System (ADS)
Kataraki, Pramod S.; Salman Abu Mansor, Mohd
2018-03-01
Auto-recognition of surfaces and auto-generation of material removal volumes for the recognised surfaces have become necessary for successful downstream manufacturing activities such as automated process planning and scheduling. A few researchers have contributed methods for generating the material removal volume of a product, but these result in discontinuities between two adjacent material removal volumes generated from two adjacent faces that form a convex geometry. To address this limitation, an algorithm was developed that automatically recognises the surfaces of a computer aided design (CAD) model and auto-generates the material removal volume for the finishing process of the recognised surfaces. The surfaces of the CAD model are successfully recognised by the developed algorithm and the required material removal volume is obtained. The material removal volume discontinuity that occurred in earlier studies is eliminated.
Echtermeyer, Alexander; Amar, Yehia; Zakrzewski, Jacek; Lapkin, Alexei
2017-01-01
A recently described C(sp3)-H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the least number of experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled a rapid generation of a process model.
3D Compressible Melt Transport with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo
2015-04-01
Melt generation and migration have been the subject of numerous investigations, but their typical time and length-scales are vastly different from mantle convection, which makes it difficult to study these processes in a unified framework. The equations that describe coupled Stokes-Darcy flow have been derived a long time ago and they have been successfully implemented and applied in numerical models (Keller et al., 2013). However, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. Applying adaptive mesh refinement to this type of problems is particularly advantageous, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. In addition, previous models neglect the compressibility of both the solid and the fluid phase. However, experiments have shown that the melt density change from the depth of melt generation to the surface leads to a volume increase of up to 20%. Considering these volume changes in both phases also ensures self-consistency of models that strive to link melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We describe our extension of the finite-element mantle convection code ASPECT (Kronbichler et al., 2012) that allows for solving additional equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects. We evaluate the functionality and potential of this method using a series of simple model setups and benchmarks, comparing results of the compressible and incompressible formulation and showing the potential of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful applied to modelling the generation of komatiites or other melts originating in greater depths. Keller, T., D. A. May, and B. J. P. Kaus (2013), Numerical modelling of magma dynamics coupled to tectonic deformation of lithosphere and crust, Geophysical Journal International, 195 (3), 1406-1442. Kronbichler, M., T. Heister, and W. Bangerth (2012), High accuracy mantle convection simulation through modern numerical methods, Geophysical Journal International, 191 (1), 12-29.
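For orientation, the mass-conservation and Darcy parts of the two-phase (McKenzie-type) system referred to above can be written as below; the notation (porosity φ, melting rate Γ, permeability k_φ) follows common usage, and details vary between formulations.

```latex
% Mass conservation for melt (f) and solid (s) phases, with melting rate \Gamma:
\begin{aligned}
\frac{\partial (\phi \rho_f)}{\partial t} + \nabla\cdot(\phi \rho_f \mathbf{u}_f) &= \Gamma,\\
\frac{\partial \big[(1-\phi)\rho_s\big]}{\partial t} + \nabla\cdot\big[(1-\phi)\rho_s \mathbf{u}_s\big] &= -\Gamma,\\
\phi\,(\mathbf{u}_f - \mathbf{u}_s) &= -\frac{k_\phi}{\eta_f}\left(\nabla p_f - \rho_f \mathbf{g}\right) \quad \text{(Darcy's law).}
\end{aligned}
```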
ERIC Educational Resources Information Center
Cassotti, Mathieu; Agogué, Marine; Camarda, Anaëlle; Houdé, Olivier; Borst, Grégoire
2016-01-01
Developmental cognitive neuroscience studies tend to show that the prefrontal brain regions (known to be involved in inhibitory control) are activated during the generation of creative ideas. In the present article, we discuss how a dual-process model of creativity--much like the ones proposed to account for decision making and reasoning--could…
ERIC Educational Resources Information Center
Coker, Cindy E.
2015-01-01
The purpose of this exploratory phenomenological narrative qualitative study was to investigate the influence of Facebook on first-generation college students' selection of a college framed within Hossler and Gallagher's (1987) college process model. The three questions which guided this research explored the influence of the social media website…
NASA Technical Reports Server (NTRS)
Wilson, Larry
1991-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.
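A minimal sketch of such a replicated simulation experiment is given below, assuming the Basic model takes the Goel-Okumoto form μ(t) = a(1 − e^(−bt)); failure data are generated by thinning a nonhomogeneous Poisson process and refitted by maximum likelihood. Parameter values are illustrative and this is not the original AIR-LAB code.

```python
# Minimal sketch: simulate replicated failure-time data from a nonhomogeneous
# Poisson process with Goel-Okumoto intensity lambda(t) = a*b*exp(-b*t), then
# recover (a, b) by maximum likelihood. Parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

def simulate_failures(a, b, t_max, rng):
    """Lewis-Shedler thinning; the intensity is bounded by a*b on [0, t_max]."""
    lam_max, t, times = a * b, 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(times)
        if rng.random() < np.exp(-b * t):          # lambda(t) / lam_max
            times.append(t)

def neg_log_likelihood(params, times, t_max):
    a, b = params
    mu_T = a * (1.0 - np.exp(-b * t_max))          # expected failures by t_max
    return mu_T - np.sum(np.log(a * b) - b * times)

rng = np.random.default_rng(1)
replicates = [simulate_failures(a=100, b=0.05, t_max=40, rng=rng) for _ in range(20)]
fits = [minimize(neg_log_likelihood, x0=[50, 0.1], args=(r, 40.0),
                 bounds=[(1e-3, None), (1e-4, None)]).x for r in replicates]
print("spread of fitted (a, b) across replicates:\n", np.std(fits, axis=0))
```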
Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.
Blutke, Andreas; Wanke, Rüdiger
2018-03-06
In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
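The volume-weighted sampling mentioned above rests on Cavalieri-type point counting. A minimal illustrative calculation (stand-in counts, not data from the guidelines) is shown below.

```python
# Minimal sketch of Cavalieri volume estimation from systematic sections:
# V ~ t * a_p * sum(P_i), where t is the section interval, a_p the area per
# test point, and P_i the points hitting the organ on section i. Stand-in values.
section_interval_cm = 0.5          # distance between systematic sections (t)
area_per_point_cm2 = 0.25          # grid area associated with each test point (a_p)
points_per_section = [12, 18, 22, 25, 21, 15, 9]   # counted hits P_i

volume_cm3 = section_interval_cm * area_per_point_cm2 * sum(points_per_section)
print(f"estimated organ volume: {volume_cm3:.1f} cm^3")
```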
Compressible magma/mantle dynamics: 3-D, adaptive simulations in ASPECT
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo
2016-12-01
Melt generation and migration are an important link between surface processes and the thermal and chemical evolution of the Earth's interior. However, their vastly different timescales make it difficult to study mantle convection and melt migration in a unified framework, especially for 3-D global models. And although experiments suggest an increase in melt volume of up to 20 per cent from the depth of melt generation to the surface, previous computations have neglected the individual compressibilities of the solid and the fluid phase. Here, we describe our extension of the finite element mantle convection code ASPECT that adds melt generation and migration. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. Applying adaptive mesh refinement to this type of problems is particularly advantageous, as the resolution can be increased in areas where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high-resolution, 3-D, compressible, global mantle convection simulations coupled with melt migration. We evaluate the functionality and potential of this method using a series of benchmarks and model setups, compare results of the compressible and incompressible formulation, and show the effectiveness of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful applied to modelling the generation of komatiites or other melts originating in greater depths. The implementation is available in the Open Source ASPECT repository.
Sukumaran, Jeet; Economo, Evan P; Lacey Knowles, L
2016-05-01
Current statistical biogeographical analysis methods are limited in the ways ecology can be related to the processes of diversification and geographical range evolution, requiring conflation of geography and ecology, and/or assuming ecologies that are uniform across all lineages and invariant in time. This precludes the possibility of studying a broad class of macroevolutionary biogeographical theories that relate geographical and species histories through lineage-specific ecological and evolutionary dynamics, such as taxon cycle theory. Here we present a new model that generates phylogenies under a complex of superpositioned geographical range evolution, trait evolution, and diversification processes that can communicate with each other. We present a likelihood-free method of inference under our model using discriminant analysis of principal components of summary statistics calculated on phylogenies, with the discriminant functions trained on data generated by simulations under our model. This approach of model selection by classification of empirical data with respect to data generated under training models is shown to be efficient, robust, and performs well over a broad range of parameter space defined by the relative rates of dispersal, trait evolution, and diversification processes. We apply our method to a case study of the taxon cycle, that is testing for habitat and trophic level constraints in the dispersal regimes of the Wallacean avifaunal radiation. ©The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
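A minimal sketch of the inference step, assuming the summary statistics have already been computed for simulated and empirical phylogenies, could combine PCA with linear discriminant analysis as below; the data and model labels are stand-ins, not the published training sets.

```python
# Minimal sketch of likelihood-free model selection by discriminant analysis of
# principal components (DAPC) of summary statistics, trained on simulated data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# rows = simulated phylogenies, columns = summary statistics; labels = generating model
X_train = np.vstack([rng.normal(0, 1, (500, 30)), rng.normal(0.5, 1, (500, 30))])
y_train = np.array([0] * 500 + [1] * 500)          # 0 = "unconstrained", 1 = "taxon cycle"

dapc = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
dapc.fit(X_train, y_train)

x_empirical = rng.normal(0.4, 1, (1, 30))          # summary statistics of observed data
print("class probabilities for the empirical data:", dapc.predict_proba(x_empirical))
```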
From SHG to mid-infrared SPDC generation in strained silicon waveguides
NASA Astrophysics Data System (ADS)
Castellan, Claudio; Trenti, Alessandro; Mancinelli, Mattia; Marchesini, Alessandro; Ghulinyan, Mher; Pucker, Georg; Pavesi, Lorenzo
2017-08-01
The centrosymmetric crystalline structure of silicon inhibits second-order nonlinear optical processes in this material. We report here that, by breaking the silicon symmetry with a stressing silicon nitride over-layer, Second Harmonic Generation (SHG) is obtained in suitably designed waveguides where multi-modal phase-matching is achieved. The modeling of the generated signal provides an effective strain-induced second-order nonlinear coefficient of χ(2) = (0.30 +/- 0.02) pm/V. Our work also opens interesting perspectives on the reverse process, Spontaneous Parametric Down Conversion (SPDC), through which it is possible to generate mid-infrared entangled photon pairs.
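For orientation, in the undepleted-pump limit the second-harmonic power generated over an interaction length L scales as shown below; the prefactors depend on the modal overlap and on conventions, so only the functional form is given.

```latex
% Undepleted-pump SHG scaling; d_eff is the effective second-order coefficient.
P_{2\omega} \;\propto\; d_{\mathrm{eff}}^{2}\, L^{2}\, P_{\omega}^{2}\,
\operatorname{sinc}^{2}\!\left(\frac{\Delta k\, L}{2}\right),
\qquad \Delta k = k_{2\omega} - 2k_{\omega}.
```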
NASA Astrophysics Data System (ADS)
Borne, Adrien; Katsura, Tomotaka; Félix, Corinne; Doppagne, Benjamin; Segonds, Patricia; Bencheikh, Kamel; Levenson, Juan Ariel; Boulanger, Benoit
2016-01-01
Several third-harmonic generation processes were performed in a single step-index germanium-doped silica optical fiber under intermodal phase-matching conditions. The nanosecond fundamental beam ranged from 1400 to 1600 nm. The transverse distributions of the energy were successfully modeled in the form of Ince-Gauss modes, pointing out some ellipticity of the fiber core. From these experiments and theoretical calculations, we discuss the implementation of frequency-degenerate triple-photon generation, which shares the same phase-matching condition as third-harmonic generation, its reverse process.
Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES
NASA Astrophysics Data System (ADS)
Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu
2016-11-01
Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large-scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.
A statistical shape model of the human second cervical vertebra.
Clogenson, Marine; Duff, John M; Luethi, Marcel; Levivier, Marc; Meuli, Reto; Baur, Charles; Henein, Simon
2015-07-01
Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. The input dataset is composed of manually segmented anonymized patient computerized tomography (CT) scans. The alignment of the different datasets is done with the procrustes alignment on surface models, and then, the registration is cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which includes the variability of the C2. The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open source software for image analysis and scientific visualization) with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
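A minimal sketch of the PCA step, assuming the segmented surfaces have already been brought into correspondence and aligned, is given below with stand-in data; it is not the Statismo/Gaussian-process pipeline itself.

```python
# Minimal sketch of building a PCA-based statistical shape model from aligned
# meshes with corresponding vertices (random stand-ins for the C2 surfaces).
import numpy as np

n_shapes, n_vertices = 92, 5000
rng = np.random.default_rng(0)
# each row: flattened (x, y, z) coordinates of one aligned, corresponded surface
shapes = rng.normal(size=(n_shapes, 3 * n_vertices))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# principal modes of variation via SVD of the centered data matrix
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigenvalues = s**2 / (n_shapes - 1)                 # variance explained by each mode

def sample_shape(coefficients):
    """New shape = mean + sum_i b_i * sqrt(lambda_i) * mode_i (b_i in std. deviations)."""
    k = len(coefficients)
    return mean_shape + (coefficients * np.sqrt(eigenvalues[:k])) @ Vt[:k]

new_instance = sample_shape(np.array([1.5, -0.5, 0.8])).reshape(n_vertices, 3)
print("compactness (variance share of first 3 modes):", eigenvalues[:3] / eigenvalues.sum())
```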
The importance of topographically corrected null models for analyzing ecological point processes.
McDowall, Philip; Lynch, Heather J
2017-07-01
Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
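A minimal sketch of the proposed correction, assuming a gridded elevation model, weights each cell by its local surface-area factor before drawing null points; the DEM below is synthetic and the cell size illustrative.

```python
# Minimal sketch of generating a "topographically corrected" null point pattern:
# points are placed uniformly per unit *surface* area by weighting each grid cell
# with its local area factor sqrt(1 + (dz/dx)^2 + (dz/dy)^2). Synthetic DEM.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, cell = 200, 200, 30.0                       # grid size and cell width (m)
x, y = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell)
z = 300 * np.sin(x / 2000) * np.cos(y / 1500)       # synthetic topography

dzdy, dzdx = np.gradient(z, cell)                   # slopes along the grid axes
area_factor = np.sqrt(1 + dzdx**2 + dzdy**2)        # true surface area per cell
weights = (area_factor / area_factor.sum()).ravel()

n_points = 1000
cells = rng.choice(weights.size, size=n_points, p=weights)
jitter = rng.uniform(0, cell, size=(n_points, 2))
points = np.column_stack([x.ravel()[cells], y.ravel()[cells]]) + jitter
print("null pattern generated with", len(points), "points on the corrected surface")
```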
Generating Focused Molecule Libraries for Drug Discovery with Recurrent Neural Networks
2017-01-01
In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery. PMID:29392184
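A minimal sketch of such a character-level generative model is given below (PyTorch); the vocabulary, token indices and omitted training loop are illustrative, not the published architecture or hyperparameters.

```python
# Minimal sketch of a character-level RNN generative model for SMILES strings,
# in the spirit of the approach above. Vocabulary and data are stand-ins.
import torch
import torch.nn as nn

class SmilesRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        emb = self.embed(tokens)
        out, state = self.lstm(emb, state)
        return self.head(out), state                # logits over next characters

def sample(model, start_token, end_token, max_len=100):
    """Draw one SMILES token sequence from the (trained) model."""
    tokens, state = [start_token], None
    for _ in range(max_len):
        inp = torch.tensor([[tokens[-1]]])
        logits, state = model(inp, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        if nxt == end_token:
            break
        tokens.append(nxt)
    return tokens[1:]

model = SmilesRNN(vocab_size=40)
# Training loop omitted: cross-entropy on next-character prediction, followed by
# fine-tuning on a small set of molecules active against the target of interest.
print(sample(model, start_token=0, end_token=1))
```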
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
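A minimal sketch of the two-step idea, with an illustrative analytic model and stand-in process data (not the patented apparatus or its training scheme), follows.

```python
# Minimal sketch of the neuro-analytic idea: a first-principles model captures the
# known process behaviour, a small neural network is trained on the residual, and
# the residual spread of the combined model qualifies later change detection.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
u = rng.uniform(0, 10, size=(500, 1))                        # process input
y_true = 2.0 * u[:, 0] + 0.5 * np.sin(3 * u[:, 0])           # true process output

def analytic_model(u):
    return 2.0 * u[:, 0]                                      # known linear physics only

residual = y_true - analytic_model(u)
net = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=5000).fit(u, residual)

y_model = analytic_model(u) + net.predict(u)                  # neuro-analytic prediction
sigma = np.std(y_true - y_model)                              # qualifies remaining uncertainty
print(f"residual std after hybrid modelling: {sigma:.3f}")
# In monitoring, new measurements falling far outside +/- k*sigma (e.g. via a
# sequential probability ratio test) would indicate process change.
```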
Simulated annealing in networks for computing possible arrangements for red and green cones
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1987-01-01
Attention is given to network models in which each of the cones of the retina is given a provisional color at random, and the cones are then allowed to determine the colors of their neighbors through an iterative process. A symmetric-structure spin-glass model has allowed arrays to be generated ranging from completely random arrangements of red and green to arrays with approximately as much disorder as the parafoveal cones. Simulated annealing has also been added to the process in an attempt to generate color arrangements with greater regularity, and hence more revealing moiré patterns, than the arrangements yielded by quenched spin-glass processes. Attention is given to the perceptual implications of these results.
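A minimal sketch of the annealed spin-glass idea on a square lattice (illustrative interaction and cooling schedule, not the original retinal model) is shown below.

```python
# Minimal sketch: cones on a grid start with random red/green assignments and are
# updated by Metropolis moves under a neighbour interaction that discourages
# same-colour neighbours; slow cooling increases the regularity of the mosaic.
import numpy as np

rng = np.random.default_rng(0)
n = 64
colors = rng.choice([-1, 1], size=(n, n))            # -1 = red, +1 = green

def local_energy(grid, i, j):
    s = grid[i, j]
    neigh = grid[(i+1) % n, j] + grid[(i-1) % n, j] + grid[i, (j+1) % n] + grid[i, (j-1) % n]
    return s * neigh                                  # positive when neighbours match

temperature = 2.5
for sweep in range(200):
    for _ in range(n * n):
        i, j = rng.integers(n), rng.integers(n)
        dE = -2 * local_energy(colors, i, j)          # energy change if cell (i, j) flips
        if dE < 0 or rng.random() < np.exp(-dE / temperature):
            colors[i, j] *= -1
    temperature *= 0.98                               # annealing schedule
print("fraction green after annealing:", (colors == 1).mean())
```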
Improving the process of process modelling by the use of domain process patterns
NASA Astrophysics Data System (ADS)
Koschmider, Agnes; Reijers, Hajo A.
2015-01-01
The use of business process models has become prevalent in a wide area of enterprise applications. But while their popularity is expanding, concerns are growing with respect to their proper creation and maintenance. An obvious way to boost the efficiency of creating high-quality business process models would be to reuse relevant parts of existing models. At this point, however, limited support exists to guide process modellers towards the usage of appropriate model content. In this paper, a set of content-oriented patterns is presented, which is extracted from a large set of process models from the order management and manufacturing production domains. The patterns are derived using a newly proposed set of algorithms, which are being discussed in this paper. The authors demonstrate how such Domain Process Patterns, in combination with information on their historic usage, can support process modellers in generating new models. To support the wider dissemination and development of Domain Process Patterns within and beyond the studied domains, an accompanying website has been set up.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, Andy; Butterworth, Jonathan
We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard-scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond-Standard-Model processes. We describe the principal features of the Ariadne, Herwig++, Pythia 8 and Sherpa generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists wanting a deeper insight into the tools available for signal and background simulation at the LHC.
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.; Zagidulin, Dmitri; Rauser, Richard W.
2000-01-01
Capabilities and expertise related to the development of links between nondestructive evaluation (NDE) and finite element analysis (FEA) at Glenn Research Center (GRC) are demonstrated. Current tools to analyze data produced by computed tomography (CT) scans are exercised to help assess the damage state in high-temperature structural composite materials. A utility translator was written to convert Velocity (an image-processing software package) STL data files to a suitable CAD/FEA file format. Finite element analyses are carried out with MARC, a commercial nonlinear finite element code, and the analytical results are discussed. Modeling was established by building a model with MSC/Patran (a finite element pre- and post-processing package) and comparing it to a model generated by Velocity in conjunction with MSC/Patran graphics. Modeling issues and results are discussed in this paper. The entire process that outlines the tie between the data extracted via NDE and the finite element modeling and analysis is fully described.
Obtaining Accurate Probabilities Using Classifier Calibration
ERIC Educational Resources Information Center
Pakdaman Naeini, Mahdi
2016-01-01
Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
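One common post-processing method of this kind is Platt scaling, sketched below on stand-in scores; this is an illustrative example, not material from the dissertation above.

```python
# Minimal sketch of post-processing classifier scores into calibrated probabilities
# via Platt scaling: a logistic regression fitted to held-out scores and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.normal(size=2000)                         # uncalibrated decision scores
labels = (rng.random(2000) < 1 / (1 + np.exp(-3 * scores))).astype(int)

platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
calibrated = platt.predict_proba(np.array([[-2.0], [0.0], [2.0]]))[:, 1]
print("calibrated P(y=1) at scores -2, 0, 2:", np.round(calibrated, 3))
```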
NASA Astrophysics Data System (ADS)
Xiao, Heng; Gou, Xiaolong; Yang, Suwen
2011-05-01
Thermoelectric (TE) power generation technology, due to its several advantages, is becoming a noteworthy research direction. Many researchers conduct their performance analysis and optimization of TE devices and related applications based on the generalized thermoelectric energy balance equations. These generalized TE equations involve the internal irreversibility of Joule heating inside the thermoelectric device and heat leakage through the thermoelectric couple leg. However, it is assumed that the thermoelectric generator (TEG) is thermally isolated from the surroundings except for the heat flows at the cold and hot junctions. Since the thermoelectric generator is a multi-element device in practice, being composed of many fundamental TE couple legs, the effect of heat transfer between the TE couple leg and the ambient environment is not negligible. In this paper, based on basic theories of thermoelectric power generation and thermal science, detailed modeling of a thermoelectric generator taking account of the phenomenon of energy loss from the TE couple leg is reported. The revised generalized thermoelectric energy balance equations considering the effect of heat transfer between the TE couple leg and the ambient environment have been derived. Furthermore, characteristics of a multi-element thermoelectric generator with irreversibility have been investigated on the basis of the new derived TE equations. In the present investigation, second-law-based thermodynamic analysis (exergy analysis) has been applied to the irreversible heat transfer process in particular. It is found that the existence of the irreversible heat convection process causes a large loss of heat exergy in the TEG system, and using thermoelectric generators for low-grade waste heat recovery has promising potential. The results of irreversibility analysis, especially irreversible effects on generator system performance, based on the system model established in detail have guiding significance for the development and application of thermoelectric generators, particularly for the design and optimization of TE modules.
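For orientation, the generalized single-couple relations underlying such models can be evaluated directly. The sketch below uses illustrative material parameters and ignores the additional leg-to-ambient heat transfer that the paper introduces.

```python
# Minimal sketch of the standard single-couple thermoelectric relations: current,
# electrical power and hot-side heat input as functions of load resistance.
import numpy as np

alpha = 400e-6        # Seebeck coefficient of the couple (V/K), illustrative
R = 0.003             # internal electrical resistance (ohm)
K = 0.03              # thermal conductance of the couple (W/K)
T_h, T_c = 500.0, 300.0
dT = T_h - T_c

R_load = np.linspace(0.0005, 0.01, 200)
I = alpha * dT / (R + R_load)                        # generated current
P = I**2 * R_load                                    # electrical output power
Q_h = alpha * T_h * I + K * dT - 0.5 * I**2 * R      # heat absorbed at the hot junction
eta = P / Q_h

best = np.argmax(eta)
print(f"max efficiency {eta[best]:.3f} at load ratio R_load/R = {R_load[best]/R:.2f}")
```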
NASA Astrophysics Data System (ADS)
Babb, Grace
2017-11-01
This work aims to produce a higher-fidelity model of the blades for NASA's X-57 all-electric, propeller-driven experimental aircraft. This model will, in turn, allow for more accurate calculations of the thrust each propeller can generate. This work uses computational fluid dynamics (CFD) to first analyze the propeller blades as a series of 11 differently shaped airfoils and calculate, among other things, the coefficients of lift and drag associated with each airfoil at different angles of attack. OpenFOAM, a C++ library that can be used to create applications for pre-processing, solving, and post-processing, is one of the primary tools used in these calculations. By comparing the data OpenFOAM generates for the NACA 23012 airfoil with existing experimental data for the NACA 23012 airfoil, the reliability of our model is measured and verified. A trustworthy model can then be used to generate more data, which is sent to NASA to aid in the design of the actual aircraft.
Quantifying the indirect impacts of climate on agriculture: an inter-method comparison
NASA Astrophysics Data System (ADS)
Calvin, Kate; Fisher-Vanden, Karen
2017-11-01
Climate change and increases in CO2 concentration affect the productivity of land, with implications for land use, land cover, and agricultural production. Much of the literature on the effect of climate on agriculture has focused on linking projections of changes in climate to process-based or statistical crop models. However, the changes in productivity have broader economic implications that cannot be quantified in crop models alone. How important are these socio-economic feedbacks to a comprehensive assessment of the impacts of climate change on agriculture? In this paper, we attempt to measure the importance of these interaction effects through an inter-method comparison between process models, statistical models, and integrated assessment model (IAMs). We find the impacts on crop yields vary widely between these three modeling approaches. Yield impacts generated by the IAMs are 20%-40% higher than the yield impacts generated by process-based or statistical crop models, with indirect climate effects adjusting yields by between -12% and +15% (e.g. input substitution and crop switching). The remaining effects are due to technological change.
Toward mechanistic models of action-oriented and detached cognition.
Pezzulo, Giovanni
2016-01-01
To be successful, the research agenda for a novel control view of cognition should foresee more detailed, computationally specified process models of cognitive operations including higher cognition. These models should cover all domains of cognition, including those cognitive abilities that can be characterized as online interactive loops and detached forms of cognition that depend on internally generated neuronal processing.
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to (1) developing functional or st...
ERIC Educational Resources Information Center
Kotsari, Constantina; Smyrnaiou, Zacharoula
2017-01-01
The central roles that modelling plays in the processes of scientific enquiry and that models play as the outcomes of that enquiry are well established (Gilbert & Boulter, 1998). Besides, there are considerable similarities between the processes and outcomes of science and technology (Cinar, 2016). In this study, we discuss how the use of…
NASA Astrophysics Data System (ADS)
Wang, Zi Shuai; Sha, Wei E. I.; Choy, Wallace C. H.
2016-12-01
Modeling the charge-generation process is highly important to understand device physics and optimize power conversion efficiency of bulk-heterojunction organic solar cells (OSCs). Free carriers are generated by both ultrafast exciton delocalization and slow exciton diffusion and dissociation at the heterojunction interface. In this work, we developed a systematic numerical simulation to describe the charge-generation process by a modified drift-diffusion model. The transport, recombination, and collection of free carriers are incorporated to fully capture the device response. The theoretical results match well with the state-of-the-art high-performance organic solar cells. It is demonstrated that the increase of exciton delocalization ratio reduces the energy loss in the exciton diffusion-dissociation process, and thus, significantly improves the device efficiency, especially for the short-circuit current. By changing the exciton delocalization ratio, OSC performances are comprehensively investigated under the conditions of short-circuit and open-circuit. Particularly, bulk recombination dependent fill factor saturation is unveiled and understood. As a fundamental electrical analysis of the delocalization mechanism, our work is important to understand and optimize the high-performance OSCs.
On the Development of a Hospital-Patient Web-Based Communication Tool: A Case Study From Norway.
Granja, Conceição; Dyb, Kari; Bolle, Stein Roald; Hartvigsen, Gunnar
2015-01-01
Surgery cancellations are undesirable in hospital settings as they increase costs, reduce productivity and efficiency, and directly affect the patient. The problem of elective surgery cancellations in a North Norwegian University Hospital is addressed. Based on a three-step methodology conducted at the hospital, the preoperative planning process was modeled taking into consideration the narratives from different health professions. From the analysis of the generated process models, it is concluded that in order to develop a useful patient centered web-based communication tool, it is necessary to fully understand how hospitals plan and organize surgeries today. Moreover, process reengineering is required to generate a standard process that can serve as a tool for health ICT designers to define the requirements for a robust and useful system.
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
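As a rough sketch of the evaluation loop described above, the following generates sample points from hypothetical class means plus independent noise, clusters them, and counts the points clustered incorrectly under the best matching of cluster labels to generating classes. The means, noise level, and use of scikit-learn's KMeans are illustrative stand-ins, not the toolbox itself.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical "expression pattern" means; each class is its mean plus independent noise.
means = np.array([[0.0, 2.0, 4.0], [3.0, 3.0, 0.0], [5.0, 0.0, 1.0]])
sigma, n_per_class = 1.0, 50

X = np.vstack([m + sigma * rng.standard_normal((n_per_class, means.shape[1])) for m in means])
true_labels = np.repeat(np.arange(len(means)), n_per_class)

pred = KMeans(n_clusters=len(means), n_init=10, random_state=0).fit_predict(X)

# Clustering error: points misassigned under the best matching of cluster labels
# to generating classes (Hungarian algorithm on the confusion matrix).
conf = np.zeros((len(means), len(means)), dtype=int)
for t, p in zip(true_labels, pred):
    conf[t, p] += 1
rows, cols = linear_sum_assignment(-conf)
errors = len(true_labels) - conf[rows, cols].sum()
print(f"clustering error: {errors} of {len(true_labels)} points")
```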
NASA Astrophysics Data System (ADS)
Luo, Keqin
1999-11-01
The electroplating industry of over 10,000 planting plants nationwide is one of the major waste generators in the industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously for waste treatment and disposal and hinders the further development of the industry. It becomes, therefore, an urgent need for the industry to identify technically most effective and economically most attractive methodologies and technologies to minimize the waste, while the production competitiveness can be still maintained. This dissertation aims at developing a novel WM methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, and an intelligent decision support tool. The WM methodology consists of two parts: the heuristic knowledge-based qualitative WM decision analysis and support methodology and fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis on bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation that are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) use systematically available WM strategies, (ii) know quantitatively and accurately what is going on in each tank, and (iii) identify all WM opportunities through process improvement. This work has formed a solid foundation for the further development of powerful WM technologies for comprehensive WM in the following decade.
Sun, Bo; Dong, Hongyu; He, Di; Rao, Dandan; Guan, Xiaohong
2016-02-02
Permanganate can be activated by bisulfite to generate soluble Mn(III) (noncomplexed with ligands other than H2O and OH(-)) which oxidizes organic contaminants at extraordinarily high rates. However, the generation of Mn(III) in the permanganate/bisulfite (PM/BS) process and the reactivity of Mn(III) toward emerging contaminants have never been quantified. In this work, Mn(III) generated in the PM/BS process was shown to absorb at 230-290 nm for the first time and disproportionated more easily at higher pH, and thus, the utilization rate of Mn(III) for decomposing organic contaminant was low under alkaline conditions. A Mn(III) generation and utilization model was developed to get the second-order reaction rate parameters of benzene oxidation by soluble Mn(III), and then, benzene was chosen as the reference probe to build a competition kinetics method, which was employed to obtain the second-order rate constants of organic contaminants oxidation by soluble Mn(III). The results revealed that the second-order rate constants of aniline and bisphenol A oxidation by soluble Mn(III) were in the range of 10(5)-10(6) M(-1) s(-1). With the presence of soluble Mn(III) at micromolar concentration, contaminants could be oxidized with the observed rates several orders of magnitude higher than those by common oxidation processes, implying the great potential application of the PM/BS process in water and wastewater treatment.
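For illustration, the competition-kinetics step reduces to a ratio of the logarithmic decays of the target and the reference probe. The sketch below uses invented concentrations and an assumed benzene rate constant, not values reported in the study.

```python
import math

# Assumed reference (benzene) rate constant and measured concentrations; illustrative only.
k_ref = 2.0e5                    # M^-1 s^-1
ref_0, ref_t = 10.0, 6.0         # reference probe before/after reaction (uM)
target_0, target_t = 10.0, 2.5   # target contaminant before/after reaction (uM)

# Both compounds compete for the same oxidant (soluble Mn(III)), so
# ln(C_T0/C_Tt) / ln(C_R0/C_Rt) = k_target / k_ref.
k_target = k_ref * math.log(target_0 / target_t) / math.log(ref_0 / ref_t)
print(f"estimated k_target = {k_target:.2e} M^-1 s^-1")
```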
Next-generation genome-scale models for metabolic engineering.
King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O
2015-12-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
Distributed Modelling of Stormflow Generation: Assessing the Effect of Ground Cover
NASA Astrophysics Data System (ADS)
Jarihani, B.; Sidle, R. C.; Roth, C. H.; Bartley, R.; Wilkinson, S. N.
2017-12-01
Understanding the effects of grazing management and land cover changes on surface hydrology is important for water resources and land management. A distributed hydrological modelling platform, wflow (developed as part of Deltares' OpenStreams project), is used to assess the effect of land management practices on runoff generation processes. The model was applied to Weany Creek, a small catchment (13.6 km2) of the Burdekin Basin, North Australia, which is being studied to understand sources of sediment and nutrients to the Great Barrier Reef. Satellite and drone-based ground cover data, high resolution topography from LiDAR, soil properties, and distributed rainfall data were used to parameterise the model. Wflow was used to predict total runoff, peak runoff, time of rise, and lag time for several events of varying magnitudes and antecedent moisture conditions. A nested approach was employed to calibrate the model by using recorded flow hydrographs at three scales: (1) a hillslope sub-catchment; (2) a gullied sub-catchment; and (3) the 13.6 km2 catchment outlet. Model performance was evaluated by comparing observed and predicted stormflow hydrograph attributes using the Nash-Sutcliffe efficiency metric. By using a nested approach, spatiotemporal patterns of overland flow occurrence across the catchment can also be evaluated. The results show that a process-based distributed model can be calibrated to simulate spatial and temporal patterns of runoff generation processes, to help identify dominant processes which may be addressed by land management to improve rainfall retention. The model will be used to assess the effects of ground cover changes due to management practices in grazed lands on storm runoff.
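For reference, the Nash-Sutcliffe efficiency used to score the simulated hydrographs can be computed as in this short sketch; the observed and simulated values are made-up examples, not data from the Weany Creek catchment.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [0.2, 0.8, 3.5, 2.1, 1.0, 0.5]   # example storm hydrograph (m3/s)
sim = [0.3, 0.9, 3.0, 2.4, 1.1, 0.4]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```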
Category cued recall evokes a generate-recognize retrieval process.
Hunt, R Reed; Smith, Rebekah E; Toth, Jeffrey P
2016-03-01
The experiments reported here were designed to replicate and extend McCabe, Roediger, and Karpicke's (2011) finding that retrieval in category cued recall involves both controlled and automatic processes. The extension entailed identifying whether distinctive encoding affected one or both of these two processes. The first experiment successfully replicated McCabe et al., but the second, which added a critical baseline condition, produced data inconsistent with a two-independent-process model of recall. The third experiment provided evidence that retrieval in category cued recall reflects a generate-recognize strategy, with the effect of distinctive processing being localized to recognition. Overall, the data suggest that category cued recall evokes a generate-recognize retrieval strategy and that the subprocesses underlying this strategy can be dissociated as a function of distinctive versus relational encoding processes. (c) 2016 APA, all rights reserved.
Modeling and simulation of offshore wind farm O&M processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joschko, Philip, E-mail: joschko@informatik.uni-hamburg.de; Widok, Andi H., E-mail: a.widok@htw-berlin.de; Appel, Susanne, E-mail: susanne.appel@hs-bremen.de
2015-04-15
This paper describes a holistic approach to operation and maintenance (O&M) processes in the domain of offshore wind farm power generation. The acquisition and process visualization is followed by a risk analysis of all relevant processes. Hereafter, a tool was designed, which is able to model the defined processes in a BPMN 2.0 notation, as well as connect and simulate them. Furthermore, the notation was enriched with new elements, representing other relevant factors that were, to date, only displayable with much higher effort. In that regard a variety of more complex situations were integrated, such as for example new process interactions depending on different weather influences, in which case a stochastic weather generator was combined with the business simulation or other wind farm aspects important to the smooth running of the offshore wind farms. In addition, the choices for different methodologies, such as the simulation framework or the business process notation will be presented and elaborated depending on the impact they had on the development of the approach and the software solution. - Highlights: • Analysis of operation and maintenance processes of offshore wind farms • Process modeling with BPMN 2.0 • Domain-specific simulation tool.
Original data preprocessor for Femap/Nastran
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra
2016-12-01
Automatic data processing and visualization in the finite element analysis of structural problems is a long-standing concern in mechanical engineering. The paper presents the "common database" concept, according to which the same information may be accessed from an analytical model as well as from a numerical one. In this way, input data expressed as comma-separated-value (CSV) files are loaded into the Femap/Nastran environment using original API codes, automatically generating the geometry of the model, the loads, and the constraints. The original API computer codes are general, making it possible to generate the input data of any model. In the next stages, the user may create the discretization of the model, set the boundary conditions, and perform a given analysis. If additional accuracy is needed, the analyst may delete the previous discretizations and, using the same automatically loaded information, carry out other discretizations and analyses. Moreover, if new, more accurate information regarding the loads or constraints is acquired, it may be modelled and then implemented in the data-generating program which creates the "common database", so that new, more accurate models may be easily generated. Another facility is the opportunity to control the CSV input files, allowing several loading scenarios to be generated in Femap/Nastran. In this way, using original intelligent API instruments, the analyst can focus on accurately modelling the phenomena and on creative aspects, while the repetitive and time-consuming activities are performed by the original computer-based instruments. This data processing technique follows Asimov's principle of "minimum change required / maximum desired response".
NASA Astrophysics Data System (ADS)
Molnar, I. L.; Krol, M.; Mumford, K. G.
2017-12-01
Developing numerical models for subsurface thermal remediation techniques such as Electrical Resistive Heating (ERH), which involve multiphase processes such as in-situ water boiling, gas production, and recovery, has remained a significant challenge. These subsurface gas generation and recovery processes are driven by physical phenomena such as discrete and unstable gas (bubble) flow as well as water-gas phase mass transfer rates during bubble flow. Traditional approaches to multiphase flow modeling in soil remain unable to accurately describe these phenomena. However, it has been demonstrated that Macroscopic Invasion Percolation (MIP) can successfully simulate discrete and unstable gas transport [1]. This has led to the development of a coupled Electro Thermal-MIP Model (ET-MIP) [2] capable of simulating multiple key processes in the thermal remediation and gas recovery process including: electrical heating of soil and groundwater, water flow, geological heterogeneity, heating-induced buoyant flow, water boiling, gas bubble generation and mobilization, contaminant mass transport and removal, and additional mechanisms such as bubble collapse in cooler regions. This study presents the first rigorous validation of a coupled ET-MIP model against two-dimensional water boiling and water/NAPL co-boiling experiments [3]. Once validated, the model was used to explore the impact of water and co-boiling events and subsequent gas generation and mobilization on ERH's ability to 1) generate, expand and mobilize gas at boiling and NAPL co-boiling temperatures, 2) efficiently strip contaminants from soil during both boiling and co-boiling. In addition, a quantification of the energy losses arising from steam generation during subsurface water boiling was examined with respect to its impact on the efficacy of thermal remediation. While this study specifically targets ERH, the study's focus on examining the fundamental mechanisms driving thermal remediation (e.g., water boiling) renders these results applicable to a wide range of thermal and gas-based remediation techniques. [1] Mumford, K. G., et al., Adv. Water Resour. 2010, 33 (4), 504-513. [2] Krol, M. M., et al., Adv. Water Resour. 2011, 34 (4), 537-549. [3] Hegele, P. R. and Mumford, K. G., Journal of Contaminant Hydrology 2014, 165, 24-36.
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
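A minimal sketch of such a generator is given below: each inter-spike interval of a PPD is the dead time plus an exponential waiting time, and the pooled train is the time-sorted merge of the individual trains. The rates, dead time, and number of trains are illustrative and not taken from the recordings above.

```python
import numpy as np

rng = np.random.default_rng(1)

def ppd_spike_train(rate_hz, dead_time_s, duration_s):
    """Poisson process with dead time: ISI = dead time + exponential waiting time,
    with the exponential mean chosen so that the overall mean ISI equals 1/rate_hz."""
    exp_mean = 1.0 / rate_hz - dead_time_s
    times, t = [], 0.0
    while True:
        t += dead_time_s + rng.exponential(exp_mean)
        if t > duration_s:
            return np.array(times)
        times.append(t)

def superimpose(trains):
    """Pool several spike trains into one time-sorted train."""
    return np.sort(np.concatenate(trains))

trains = [ppd_spike_train(rate_hz=10.0, dead_time_s=0.005, duration_s=10.0) for _ in range(50)]
pooled = superimpose(trains)
isis = np.diff(pooled)
print(f"pooled spikes: {pooled.size}, ISI coefficient of variation: {isis.std() / isis.mean():.2f}")
```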
Modelling Analysis of Students' Processes of Generating Scientific Explanatory Hypotheses
ERIC Educational Resources Information Center
Park, Jongwon
2006-01-01
It has recently been determined that generating an explanatory hypothesis to explain a discrepant event is important for students' conceptual change. The purpose of this study is to investigate how students generate new explanatory hypotheses. To achieve this goal, questions are used to identify students' prior ideas related to electromagnetic…
A study on predicting network corrections in PPP-RTK processing
NASA Astrophysics Data System (ADS)
Wang, Kan; Khodabandeh, Amir; Teunissen, Peter
2017-10-01
In PPP-RTK processing, the network corrections, including the satellite clocks, the satellite phase biases and the ionospheric delays, are provided to the users to enable fast single-receiver integer ambiguity resolution. To solve the rank deficiencies in the undifferenced observation equations, estimable parameters are formed to generate a full-rank design matrix. In this contribution, we first discuss the interpretation of the estimable parameters without and with a dynamic satellite clock model incorporated in a Kalman filter during the network processing. The functionality of the dynamic satellite clock model is tested in the PPP-RTK processing. Due to the latency generated by the network processing and data transfer, the network corrections are delayed for the real-time user processing. To bridge the latencies, we discuss and compare two prediction approaches making use of the network corrections without and with the dynamic satellite clock model, respectively. The first prediction approach is based on polynomial fitting of the estimated network parameters, while the second approach directly follows the dynamic model in the Kalman filter of the network processing and utilises the satellite clock drifts estimated in the network processing. Using 1 Hz data from two networks in Australia, the influences of the two prediction approaches on the user positioning results are analysed and compared for latencies ranging from 3 to 10 s. The accuracy of the positioning results decreases with increasing latency of the network products. For a latency of 3 s, the RMS of the horizontal and the vertical coordinates (with respect to the ground truth) does not show large differences between the two prediction approaches. For a latency of 10 s, the prediction approach making use of the satellite clock model generates slightly better positioning results, with the RMS differences at the mm level. Further advantages and disadvantages of both prediction approaches are also discussed in this contribution.
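As an illustration of the first (polynomial-fitting) prediction approach, the sketch below fits a low-order polynomial to recent epochs of an estimated correction and extrapolates it across the latency; the values and polynomial degree are assumptions, not the processing settings used in the study.

```python
import numpy as np

def predict_correction(epochs_s, values, latency_s, degree=2):
    """Fit a polynomial to (epoch, value) pairs and evaluate it latency_s seconds
    after the last epoch, bridging the gap until fresh network corrections arrive."""
    coeffs = np.polyfit(epochs_s, values, deg=degree)
    return np.polyval(coeffs, epochs_s[-1] + latency_s)

# e.g. the last 30 s of 1 Hz satellite clock correction estimates (metres), synthetic
t = np.arange(0.0, 30.0, 1.0)
clock = 5.0 + 0.02 * t + 0.001 * np.sin(0.2 * t)
print(predict_correction(t, clock, latency_s=10.0))
```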
NASA Astrophysics Data System (ADS)
Ala-aho, Pertti; Soulsby, Chris; Wang, Hailong; Tetzlaff, Doerthe
2017-04-01
Understanding the role of groundwater for runoff generation in headwater catchments is a challenge in hydrology, particularly so in data-scarce areas. Fully integrated surface-subsurface modelling has shown potential in increasing process understanding for runoff generation, but high data requirements and difficulties in model calibration are typically assumed to preclude its use in catchment-scale studies. We used a fully integrated surface-subsurface hydrological simulator to enhance groundwater-related process understanding in a headwater catchment with a rich background in empirical data. To set up the model we used minimal data that could be reasonably expected to exist for any experimental catchment. A novel aspect of our approach was in using simplified model parameterisation and including parameters from all model domains (surface, subsurface, evapotranspiration) in automated model calibration. Calibration aimed not only to improve model fit, but also to test the information content of the observations (streamflow, remotely sensed evapotranspiration, median groundwater level) used in the calibration objective functions. We identified sensitive parameters in all model domains (subsurface, surface, evapotranspiration), demonstrating that model calibration should be inclusive of parameters from these different model domains. Incorporating groundwater data in calibration objectives improved the model fit for groundwater levels, but simulations did not reproduce well the remotely sensed evapotranspiration time series even after calibration. Spatially explicit model output improved our understanding of how groundwater functions in maintaining streamflow generation primarily via saturation excess overland flow. Steady groundwater inputs created saturated conditions in the valley bottom riparian peatlands, leading to overland flow even during dry periods. Groundwater on the hillslopes was more dynamic in its response to rainfall, acting to expand the saturated area extent and thereby promoting saturation excess overland flow during rainstorms. Our work shows the potential of using integrated surface-subsurface modelling alongside rigorous model calibration to better understand and visualise the role of groundwater in runoff generation even with limited datasets.
Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.
2002-04-01
An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual model creation to fully automated reverse-engineering systems. The progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate 3D data source, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurements and for generating computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurements is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X realizes the complete 3D reconstruction technology for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurements and the methods providing the metric characteristics of the reconstructed 3D model. The results of applying the system to the 3D reconstruction of complex industrial objects are also presented.
Anomalous single production of the fourth generation quarks at the CERN LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciftci, R.
Possible anomalous single productions of the fourth standard-model-generation up- and down-type quarks at the CERN Large Hadron Collider are studied. Namely, pp → u4(d4)X with the subsequent u4 → bW+ process followed by the leptonic decay of the W boson, and the d4 → bγ (and its h.c.) decay channel, are considered. Signatures of these processes and corresponding standard model backgrounds are discussed in detail. Discovery limits for the quark mass and achievable values of the anomalous coupling strength are determined.
Simulating Local Area Network Protocols with the General Purpose Simulation System (GPSS)
1990-03-01
[Fragment of the report's table of contents and figure list: frame generation; frame delivery; model artifices; model variables; simulation results; external procedures used in simulation; Token Ring frame generation and delivery processes; Token Ring mean transfer delay vs. mean throughput. A body fragment notes that parameters assumed to be zero were replaced by the maximum values specified in the ANSI 802.3 standard.]
NASA Astrophysics Data System (ADS)
Han, C. Y.; Qian, L. X.; Leung, C. H.; Che, C. M.; Lai, P. T.
2013-07-01
By including the generation-recombination process of charge carriers in the conduction channel, a model for low-frequency noise in pentacene organic thin-film transistors (OTFTs) is proposed. In this model, the slope and magnitude of the power spectral density of the low-frequency noise are related to the traps in the gate dielectric and accumulation layer of the OTFT for the first time. The model can well fit the measured low-frequency noise data of pentacene OTFTs with HfO2 or HfLaO gate dielectric, which validates this model, thus providing an estimate of the densities of traps in the gate dielectric and accumulation layer. It is revealed that the traps in the accumulation layer are far more numerous than those in the gate dielectric, and so dominate the low-frequency noise of pentacene OTFTs.
von Hecker, Ulrich; McIntosh, Daniel N; Sedek, Grzegorz
2015-01-01
We challenge the idea that a cognitive perspective on therapeutic change concerns only memory processes. We argue that inclusion of impairments in more generative cognitive processes is necessary for complete understanding of cases such as depression. In such cases what is identified in the target article as an "integrative memory structure" is crucially supported by processes of mental model construction.
Stone, Vathsala I; Lane, Joseph P
2012-05-16
Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact-that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.
2012-01-01
Background Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact—that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. Methods This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. Results The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and “bench to bedside” expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. Conclusions High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits. PMID:22591638
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
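A rough sketch of the feature-subset ensemble idea follows; a decision tree stands in for the optimum-path forest classifier used in the paper, and the feature groups and data are synthetic placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 9))
y = (X[:, 0] + X[:, 4] + X[:, 8] > 0).astype(int)   # toy binary labels

# Hypothetical feature groups: shape (cols 0-2), colour (3-5), texture (6-8).
feature_groups = {"shape": [0, 1, 2], "colour": [3, 4, 5], "texture": [6, 7, 8]}

members = []
for cols in feature_groups.values():
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, cols], y)
    members.append((clf, cols))

def predict_majority(x_row):
    """Majority vote over ensemble members, each trained on its own feature subset."""
    votes = [clf.predict(x_row[cols].reshape(1, -1))[0] for clf, cols in members]
    return int(round(float(np.mean(votes))))

print(predict_majority(X[0]))
```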
Model-based adaptive 3D sonar reconstruction in reverberating environments.
Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le
2015-10-01
In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.
Band, Leah R.; Fozard, John A.; Godin, Christophe; Jensen, Oliver E.; Pridmore, Tony; Bennett, Malcolm J.; King, John R.
2012-01-01
Over recent decades, we have gained detailed knowledge of many processes involved in root growth and development. However, with this knowledge come increasing complexity and an increasing need for mechanistic modeling to understand how those individual processes interact. One major challenge is in relating genotypes to phenotypes, requiring us to move beyond the network and cellular scales, to use multiscale modeling to predict emergent dynamics at the tissue and organ levels. In this review, we highlight recent developments in multiscale modeling, illustrating how these are generating new mechanistic insights into the regulation of root growth and development. We consider how these models are motivating new biological data analysis and explore directions for future research. This modeling progress will be crucial as we move from a qualitative to an increasingly quantitative understanding of root biology, generating predictive tools that accelerate the development of improved crop varieties. PMID:23110897
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
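The sketch below shows the kind of reductions such a processor performs on a streamflow series (flow volumes, seasonal statistics, simple transformations), using pandas for brevity; the file name and column names are assumptions, and this is not TSPROC's own scripting language.

```python
import numpy as np
import pandas as pd

# Daily streamflow with a datetime index, e.g. a CSV of date,flow_cms pairs (assumed layout).
flow = pd.read_csv("streamflow.csv", parse_dates=["date"], index_col="date")["flow_cms"]

monthly_mean = flow.resample("MS").mean()           # seasonal (monthly) statistics
annual_volume = flow.resample("YS").sum() * 86400   # daily mean m3/s -> m3 per year
log_flow = np.log10(flow + 1e-6)                    # basic arithmetic transformation

print(monthly_mean.head())
print(annual_volume.head())
```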
NASA Astrophysics Data System (ADS)
Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres
2006-11-01
Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans, and are used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those that have been proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter, and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.
Rosenfeld, Alan L; Mandelaris, George A; Tardieu, Philippe B
2006-08-01
The purpose of this paper is to expand on part 1 of this series (published in the previous issue) regarding the emerging future of computer-guided implant dentistry. This article will introduce the concept of rapid-prototype medical modeling as well as describe the utilization and fabrication of computer-generated surgical drilling guides used during implant surgery. The placement of dental implants has traditionally been an intuitive process, whereby the surgeon relies on mental navigation to achieve optimal implant positioning. Through rapid-prototype medical modeling and the stereolithographic process, surgical drilling guides (eg, SurgiGuide) can be created. These guides are generated from a surgical implant plan created with a computer software system that incorporates all relevant prosthetic information from which the surgical plan is developed. The utilization of computer-generated planning and stereolithographically generated surgical drilling guides embraces the concept of collaborative accountability and supersedes traditional mental navigation on all levels of implant therapy.
Mechanism of the free charge carrier generation in the dielectric breakdown
NASA Astrophysics Data System (ADS)
Rahim, N. A. A.; Ranom, R.; Zainuddin, H.
2017-12-01
Many studies have been conducted to investigate the effect of environmental, mechanical and electrical stresses on insulators. However, studies on the physical processes of the discharge phenomenon leading to breakdown of the insulator surface are lacking and difficult to comprehend. Therefore, this paper analyses the charge carrier generation mechanisms that produce free charge carriers and lead toward surface discharge development. In addition, a model of surface discharge based on the charge generation mechanism on the outdoor insulator is developed. Nernst-Planck theory was used to model the behaviour of the charge carriers, while Poisson's equation was used to determine the distribution of the electric field on the insulator surface. In the modelling of surface discharge on the outdoor insulator, electric-field-dependent molecular ionization was used as the charge generation mechanism. The mathematical model of the surface discharge was solved using the method of lines (MOL) technique. The results from the mathematical model showed that the behaviour of the net space charge density was correlated with the electric field distribution.
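As a rough illustration of the method-of-lines idea (discretize in space first, then integrate the resulting system of ODEs in time), the sketch below advances a one-dimensional drift-diffusion carrier transport equation with a field-dependent generation term; the grid, coefficients, and generation law are invented placeholders, not the model of the paper.

```python
import numpy as np

nx, length = 100, 1e-3            # grid points, domain length (m), assumed
dx = length / (nx - 1)
D, mu = 1e-9, 1e-8                # diffusion (m2/s) and mobility (m2/(V s)), assumed
E = 1e6 * np.ones(nx)             # imposed electric field (V/m), held fixed here
n = np.zeros(nx)                  # carrier density

def generation(E_field):
    """Field-dependent (ionization-like) source term; purely illustrative."""
    return 1e14 * np.exp(-2e6 / np.abs(E_field))

def rhs(n):
    """Method of lines: discretize dn/dt = D d2n/dx2 - mu d(nE)/dx + G(E) in space."""
    dn = np.zeros_like(n)
    flux = mu * n * E                                   # drift flux
    dn[1:-1] = (D * (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2
                - (flux[2:] - flux[:-2]) / (2 * dx)
                + generation(E[1:-1]))
    return dn                                           # boundary nodes held at zero

dt = 1e-9
for _ in range(1000):
    n = n + dt * rhs(n)           # explicit Euler for the resulting ODE system

print(f"peak carrier density after 1 us: {n.max():.3e}")
```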
DECISION SUPPORT SYSTEM TO ENHANCE AND ENCOURAGE SUSTAINABLE CHEMICAL PROCESS DESIGN
There is an opportunity to minimize the potential environmental impacts (PEIs) of industrial chemical processes by providing process designers with timely data and models elucidating environmentally favorable design options. The second generation of the Waste Reduction (WAR) algo...
System and method for anomaly detection
Scherrer, Chad
2010-06-15
A system and method for detecting one or more anomalies in a plurality of observations is provided. In one illustrative embodiment, the observations are real-time network observations collected from a stream of network traffic. The method includes performing a discrete decomposition of the observations, and introducing derived variables to increase storage and query efficiencies. A mathematical model, such as a conditional independence model, is then generated from the formatted data. The formatted data is also used to construct frequency tables which maintain an accurate count of specific variable occurrence as indicated by the model generation process. The formatted data is then applied to the mathematical model to generate scored data. The scored data is then analyzed to detect anomalies.
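A toy sketch of the frequency-table scoring idea follows: counts of variable values are maintained and a new observation is scored by how improbable it is under a simple independence model. The field names, smoothing, and scoring rule are illustrative assumptions, not the patented method itself.

```python
import math
from collections import Counter

observations = [
    {"proto": "tcp", "port": "80"}, {"proto": "tcp", "port": "443"},
    {"proto": "tcp", "port": "80"}, {"proto": "udp", "port": "53"},
]

# Frequency tables: counts of each value, kept per variable.
tables = {field: Counter(obs[field] for obs in observations) for field in ("proto", "port")}
total = len(observations)

def anomaly_score(obs):
    """Higher score = rarer combination under an independence model of the variables."""
    logp = 0.0
    for field, counts in tables.items():
        p = (counts.get(obs[field], 0) + 1) / (total + len(counts))  # add-one smoothing
        logp += math.log(p)
    return -logp

print(anomaly_score({"proto": "udp", "port": "8080"}))  # unseen port -> higher score
print(anomaly_score({"proto": "tcp", "port": "80"}))
```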
Software Quality Assurance and Verification for the MPACT Library Generation Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuxuan; Williams, Mark L.; Wiarda, Dorothea
This report fulfills the requirements for the Consortium for the Advanced Simulation of Light-Water Reactors (CASL) milestone L2:RTM.P14.02, “SQA and Verification for MPACT Library Generation,” by documenting the current status of the software quality, verification, and acceptance testing of nuclear data libraries for MPACT. It provides a brief overview of the library generation process, from general-purpose evaluated nuclear data files (ENDF/B) to a problem-dependent cross section library for modeling of light-water reactors (LWRs). The software quality assurance (SQA) programs associated with each of the software packages used to generate the nuclear data libraries are discussed; specific tests within the SCALE/AMPX and VERA/XSTools repositories are described. The methods and associated tests to verify the quality of the library during the generation process are described in detail. The library generation process has been automated to a degree to (1) ensure that it can be run without user intervention and (2) ensure that the library can be reproduced. Finally, the acceptance testing process that will be performed by representatives from the Radiation Transport Methods (RTM) Focus Area prior to the production library’s release is described in detail.
Finding the Right Fit: A Comparison of Process Assumptions Underlying Popular Drift-Diffusion Models
ERIC Educational Resources Information Center
Ashby, Nathaniel J. S.; Jekel, Marc; Dickert, Stephan; Glöckner, Andreas
2016-01-01
Recent research makes increasing use of eye-tracking methodologies to generate and test process models. Overall, such research suggests that attention, generally indexed by fixations (gaze duration), plays a critical role in the construction of preference, although the methods used to support this supposition differ substantially. In two studies…
An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application
ERIC Educational Resources Information Center
Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth
2016-01-01
Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
Modelling the effects of Prairie wetlands on streamflow
NASA Astrophysics Data System (ADS)
Shook, K.; Pomeroy, J. W.
2015-12-01
Recent research has demonstrated that the contributing areas of Prairie streams dominated by depressional (wetland) storage demonstrate hysteresis with respect to catchment water storage. As such contributing fractions can vary over time from a very small percentage of catchment area to the entire catchment during floods. However, catchments display complex memories of past storage states and their contributing fractions cannot be modelled accurately by any single-valued function. The Cold Regions Hydrological Modelling platform, CRHM, which is capable of modelling all of the hydrological processes of cold regions using a hydrological response unit discretization of the catchment, was used to further investigate dynamical contributing area response to hydrological processes. Contributing fraction in CRHM is also controlled by the episodic nature of runoff generation in this cold, sub-humid environment where runoff is dominated by snowmelt over frozen soils, snowdrifts define the contributing fraction in late spring, unfrozen soils have high water holding capacity and baseflow from sub-surface flow does not exist. CRHM was improved by adding a conceptual model of individual Prairie depression fill and spill runoff generation that displays hysteresis in the storage - contributing fraction relationship and memory of storage state. The contributing area estimated by CRHM shows strong sensitivity to hydrological inputs, storage and the threshold runoff rate chosen. The response of the contributing area to inputs from various runoff generating processes from snowmelt to rain-on-snow to rainfall with differing degrees of spatial variation was investigated as was the importance of the memory of storage states on streamflow generation. The importance of selecting hydrologically and ecologically meaningful runoff thresholds in estimating contributing area is emphasized.
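A toy fill-and-spill sketch of the depression-storage idea is given below: each depression stores water up to an assumed capacity, spills the excess, and retains its storage between events, which is what produces the hysteretic contributing fraction. Capacities and event depths are invented, not CRHM parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

class DepressionField:
    """A set of wetland depressions that fill, spill, and remember their storage."""

    def __init__(self, n=100, cap_range=(5.0, 50.0)):
        self.capacity = rng.uniform(*cap_range, size=n)   # mm
        self.storage = np.zeros(n)                        # mm currently stored

    def step(self, runoff_mm):
        """Add runoff; water above capacity spills and contributes to streamflow."""
        self.storage += runoff_mm
        spill = np.maximum(self.storage - self.capacity, 0.0)
        self.storage -= spill
        return spill.mean(), float(np.mean(spill > 0.0))  # mean spill, contributing fraction

field = DepressionField()
for event in (2.0, 10.0, 30.0, 5.0):
    outflow, frac = field.step(event)
    print(f"event {event:5.1f} mm -> spill {outflow:6.2f} mm, contributing fraction {frac:.2f}")
```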
NASA Technical Reports Server (NTRS)
Katz, Ira; Mandell, Myron; Roche, James C.; Purvis, Carolyn
1987-01-01
Secondary electrons control a spacecraft's response to a plasma environment. To accurately simulate spacecraft charging, the NASA Charging Analyzer Program (NASCAP) has mathematical models of the generation, emission and transport of secondary electrons. The importance of each of the processes and the physical basis for each of the NASCAP models are discussed. Calculations are presented which show that the NASCAP formulations are in good agreement with both laboratory and space experiments.
NASA Astrophysics Data System (ADS)
Vikharev, A. L.; Gorbachev, A. M.; Ivanov, O. A.; Kolisko, A. L.; Litvak, A. G.
1993-08-01
The plasma chemical processes in the corona discharge formed in air by a series of high voltage pulses of nanosecond duration are investigated experimentally. The experimental conditions (reduced electric field, duration and repetition frequency of the pulses, gas pressure in the chamber) modeled the regime of creation of the artificial ionized layer (AIL) in the upper atmosphere by a nanosecond microwave discharge. It was found that in a nanosecond microwave discharge ozone generation predominates, and that the production of nitrogen dioxide is not large. The energy expenditure for the generation of one O3 molecule was about 15 eV. On the basis of the experimental results, a prognosis of the efficiency of ozone generation in the AIL was made.
Friction-Stir Welding and Mathematical Modeling
NASA Technical Reports Server (NTRS)
Rostant, Victor D.
1999-01-01
The friction-stir welding process is a remarkable way of making butt and lap joints in aluminum alloys. The process operates by passing a rotating tool between two closely butted plates. This generates a great deal of heat, and the heated material is stirred from both sides of the plates, the outcome being a single high-quality weld. My research studied FSW through mathematical modeling, using modeling to better understand what takes place during the friction-stir weld.
The GEMS Model of Volunteer Administration.
ERIC Educational Resources Information Center
Culp, Ken, III; Deppe, Catherine A.; Castillo, Jaime X.; Wells, Betty J.
1998-01-01
Describes GEMS, a spiral model that profiles volunteer administration. Components include Generate, Educate, Mobilize, and Sustain, four sets of processes that span volunteer recruitment and selection to retention or disengagement. (SK)
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.
System, method and apparatus for generating phrases from a database
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term or a sequence of terms or multiple individual terms or multiple sequences of terms or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
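As a loose illustration of the idea (not the patented method), the sketch below builds a simple term-adjacency model from a toy text database and greedily extends a query term with its most frequent successors; the documents and extension rule are placeholders.

```python
from collections import defaultdict

documents = [
    "engine failure during climb",
    "engine fire warning during climb",
    "hydraulic failure during descent",
]

# Relational model: counts of which term follows which term.
follows = defaultdict(lambda: defaultdict(int))
for doc in documents:
    terms = doc.split()
    for a, b in zip(terms, terms[1:]):
        follows[a][b] += 1

def generate_phrase(query, length=3):
    """Greedily extend the query with its most frequent successor terms."""
    phrase = [query]
    while len(phrase) < length and follows[phrase[-1]]:
        successors = follows[phrase[-1]]
        phrase.append(max(successors, key=successors.get))
    return " ".join(phrase)

print(generate_phrase("engine"))    # e.g. "engine failure during"
print(generate_phrase("failure"))
```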
NASA Technical Reports Server (NTRS)
Starr, D. O'C.; Cox, S. K.
1981-01-01
A time-dependent, two-dimensional Eulerian model is presented whose purpose is to obtain more realistic parameterizations of extended high level cloudiness, and the results of a numerical experiment using the model are reported. The model is anelastic and the Boussinesq assumption is invoked. Unresolved subgrid scale processes are parameterized as eddy diffusion processes. Two phases of water are incorporated and equilibrium between them is assumed. The effects of infrared radiative processes are parametrically represented. Two simulations were conducted with identical initial conditions; in one of them, the radiation term was never turned on. The mean values of perturbation potential temperature at each level in the domain are plotted versus height after 15, 30, and 60 minutes of simulated time. The influence of the radiative term is seen to impose a cooling trend, leading to an increased generation of ice water and an increased generation of turbulent kinetic energy in the cloud layer.
Thermal Texture Generation and 3D Model Reconstruction Using SfM and GAN
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Mizginov, V. A.
2018-05-01
Realistic 3D models with textures representing thermal emission of the object are widely used in such fields as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from the infrared imagery is challenging due to an absence of the feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations such as a transformation of day to night and generation of imagery in a different spectral range. In this paper, we propose a novel method for generation of realistic 3D models with thermal textures using the SfM pipeline and GAN. The proposed method uses visible range images as an input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in thermal and visible range. The dataset is used to train the GAN network and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures proved that they are similar to the ground truth model in both thermal emissivity and geometrical shape.
Bryan, Rebecca; Nair, Prasanth B; Taylor, Mark
2009-09-18
Interpatient variability is often overlooked in orthopaedic computational studies due to the substantial challenges involved in sourcing and generating large numbers of bone models. A statistical model of the whole femur incorporating both geometric and material property variation was developed as a potential solution to this problem. The statistical model was constructed using principal component analysis, applied to 21 individual computed tomography scans. To test the ability of the statistical model to generate realistic, unique, finite element (FE) femur models it was used as a source of 1000 femurs to drive a study on femoral neck fracture risk. The study simulated the impact of an oblique fall to the side, a scenario known to account for a large proportion of hip fractures in the elderly and to have a lower fracture load than alternative loading approaches. FE model generation, application of subject specific loading and boundary conditions, FE processing and post processing of the solutions were completed automatically. The generated models were within the bounds of the training data used to create the statistical model with a high mesh quality, able to be used directly by the FE solver without remeshing. The results indicated that 28 of the 1000 femurs were at highest risk of fracture. Closer analysis revealed the percentage of cortical bone in the proximal femur to be a crucial differentiator between the failed and non-failed groups. The likely fracture location was indicated to be intertrochanteric. Comparison to previous computational, clinical and experimental work revealed support for these findings.
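A minimal Python sketch of the underlying technique, principal component analysis of training shape vectors followed by sampling of new instances within the training bounds, is shown below; the synthetic data stand in for the CT-derived femur geometry and material vectors and are not the authors' dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training set standing in for 21 femur geometry/density vectors
# (each row = one subject; columns = concatenated node coordinates, moduli, etc.)
n_subjects, n_features = 21, 300
latent = rng.normal(size=(n_subjects, 3))                 # 3 hidden modes of variation
basis = rng.normal(size=(3, n_features))
training = latent @ basis + 0.05 * rng.normal(size=(n_subjects, n_features))

# Principal component analysis via SVD of the mean-centred data
mean_shape = training.mean(axis=0)
U, s, Vt = np.linalg.svd(training - mean_shape, full_matrices=False)
modes = Vt                                                # principal modes (rows)
std_per_mode = s / np.sqrt(n_subjects - 1)                # std dev of each mode score

def sample_instance(n_modes=3, limit=2.0):
    """Draw mode scores within +/- limit standard deviations so that generated
    instances stay inside the bounds of the training data, as in the paper."""
    b = np.clip(rng.normal(size=n_modes), -limit, limit) * std_per_mode[:n_modes]
    return mean_shape + b @ modes[:n_modes]

new_femurs = np.array([sample_instance() for _ in range(1000)])
print(new_femurs.shape)   # (1000, 300): candidate inputs for automated FE model generation
```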
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
Banta, Edward R.; Provost, Alden M.
2008-01-01
This report documents HUFPrint, a computer program that extracts and displays information about model structure and hydraulic properties from the input data for a model built using the Hydrogeologic-Unit Flow (HUF) Package of the U.S. Geological Survey's MODFLOW program for modeling ground-water flow. HUFPrint reads the HUF Package and other MODFLOW input files, processes the data by hydrogeologic unit and by model layer, and generates text and graphics files useful for visualizing the data or for further processing. For hydrogeologic units, HUFPrint outputs such hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, vertical hydraulic conductivity or anisotropy, specific storage, specific yield, and hydraulic-conductivity depth-dependence coefficient. For model layers, HUFPrint outputs such effective hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, specific storage, primary direction of anisotropy, and vertical conductance. Text files tabulating hydraulic properties by hydrogeologic unit, by model layer, or in a specified vertical section may be generated. Graphics showing two-dimensional cross sections and one-dimensional vertical sections at specified locations also may be generated. HUFPrint reads input files designed for MODFLOW-2000 or MODFLOW-2005.
Democracy versus dictatorship in self-organized models of financial markets
NASA Astrophysics Data System (ADS)
D'Hulst, R.; Rodgers, G. J.
2000-06-01
Models to mimic the transmission of information in financial markets are introduced. As an attempt to generate the demand process, we distinguish between dictatorship associations, where groups of agents rely on one of them to make decisions, and democratic associations, where each agent takes part in the group decision. In the dictatorship model, agents segregate into two distinct populations, while the democratic model is driven towards a critical state where groups of agents of all sizes exist. Hence, both models display a level of organization, but only the democratic model is self-organized. We show that the dictatorship model generates less-volatile markets than the democratic model.
The standard-based open workflow system in GeoBrain (Invited)
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Zhao, P.; Deng, M.
2013-12-01
GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models the complex geoprocessing at two levels, the conceptual and the concrete. At the conceptual level, the workflows exist in the form of data and service types defined by ontologies. The workflows at the conceptual level are called geo-processing models and cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflow for the product generation. A provenance capturing service has been implemented to generate the ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119, transparent, translucent, and opaque, are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offering of the products after a proper peer review of the models is conducted. Automated workflow composition has been tested successfully based on ontologies and artificial intelligence technology. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.
Multiscale Processes in Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Surjalal Sharma, A.; Jain, Neeraj
The characteristic scales of the plasma processes in magnetic reconnection range from the electron skin-depth to the magnetohydrodynamic (MHD) scale, and cross-scale coupling among them plays a key role. Modeling these processes requires different physical models, viz. kinetic, electron-magnetohydrodynamics (EMHD), Hall-MHD, and MHD. The shortest scale processes are at the electron scale and these are modeled using an EMHD code, which provides many features of the multiscale behavior. In simulations using initial conditions consisting of perturbations with many scale sizes, the reconnection takes place at many sites and the plasma flows from these interact with each other. This leads to thin current sheets with length less than 10 electron skin depths. The plasma flows also generate current sheets with multiple peaks, as observed by Cluster. The quadrupole structure of the magnetic field during reconnection starts on the electron scale and the interaction of inflow to the secondary sites and outflow from the dominant site generates a nested structure. In the outflow regions, the interaction of the electron outflows generated at the neighboring sites leads to the development of electron vortices. A signature of the nested structure of the Hall field is seen in Cluster observations, and more details of these features are expected from MMS.
Ashrafi, Omid; Yerushalmi, Laleh; Haghighat, Fariborz
2013-03-01
Greenhouse gas (GHG) emission in wastewater treatment plants of the pulp-and-paper industry was estimated by using a dynamic mathematical model. Significant variations were shown in the magnitude of GHG generation in response to variations in operating parameters, demonstrating the limited capacity of steady-state models in predicting the time-dependent emissions of these harmful gases. The examined treatment systems used aerobic, anaerobic, and hybrid-anaerobic/aerobic-biological processes along with chemical coagulation/flocculation, anaerobic digester, nitrification and denitrification processes, and biogas recovery. The pertinent operating parameters included the influent substrate concentration, influent flow rate, and temperature. Although the average predictions by the dynamic model were only 10 % different from those of the steady-state model during 140 days of operation of the examined systems, the daily variations of GHG emissions differed by up to ± 30, ± 19, and ± 17 % in the aerobic, anaerobic, and hybrid systems, respectively. The variations of process variables caused fluctuations in energy generation from biogas recovery by ± 6, ± 7, and ± 4 % in the three examined systems, respectively. The lowest variations were observed in the hybrid system, showing the stability of this particular process design.
Magmatic Ascent and Eruption Processes on Mercury
NASA Astrophysics Data System (ADS)
Head, J. W.; Wilson, L.
2018-05-01
MESSENGER volcanic landform data and information on crustal composition allow us to model the generation, ascent, and eruption of magma; Mercury explosive and effusive eruption processes differ significantly from other terrestrial planetary bodies.
Zhu, Sha; Degnan, James H; Goldstien, Sharyn J; Eldon, Bjarki
2015-09-15
There has been increasing interest in coalescent models which admit multiple mergers of ancestral lineages, and in modeling hybridization and coalescence simultaneously. Hybrid-Lambda is a software package that simulates gene genealogies under multiple merger and Kingman's coalescent processes within species networks or species trees. Hybrid-Lambda allows different coalescent processes to be specified for different populations, and allows for time to be converted between generations and coalescent units, by specifying a population size for each population. In addition, Hybrid-Lambda can generate simulated datasets, assuming the infinitely many sites mutation model, and compute the F_ST statistic. As an illustration, we apply Hybrid-Lambda to infer the time of subdivision of certain marine invertebrates under different coalescent processes. Hybrid-Lambda makes it possible to investigate biogeographic concordance among high fecundity species exhibiting skewed offspring distribution.
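As a simple illustration of the kind of process Hybrid-Lambda simulates, the Python sketch below draws coalescence times for a single population under Kingman's coalescent and converts them from coalescent units to generations via the population size; it does not reproduce Hybrid-Lambda's multiple-merger models or species-network handling.

```python
import numpy as np

rng = np.random.default_rng(1)

def kingman_tree(n_samples, pop_size):
    """Simulate coalescence times for n_samples lineages under Kingman's coalescent.

    Times are drawn in coalescent units (exponential with rate k*(k-1)/2 while k
    lineages remain) and converted to generations by multiplying by pop_size,
    mirroring the unit conversion Hybrid-Lambda performs per population."""
    k = n_samples
    t_coal = 0.0
    events = []
    while k > 1:
        rate = k * (k - 1) / 2.0
        t_coal += rng.exponential(1.0 / rate)
        # indices of the two merging lineages among the currently active ones
        i, j = rng.choice(k, size=2, replace=False)
        events.append((t_coal, t_coal * pop_size, int(i), int(j)))
        k -= 1
    return events

for t_units, t_gens, i, j in kingman_tree(n_samples=6, pop_size=10_000):
    print(f"coalescence at {t_units:.3f} coalescent units "
          f"({t_gens:,.0f} generations): lineages {i} and {j}")
```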
Path generation algorithm for UML graphic modeling of aerospace test software
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software test engineers rely on their own work experience and on communication with software developers to describe the software under test and to write test cases manually, a process that is time-consuming, inefficient, and prone to gaps. With the high-reliability MBT tools developed by our company, a single modeling pass can automatically generate test case documents efficiently and accurately. Describing a process accurately with a UML model depends on the paths that can be reached; existing path generation algorithms are either too simple, unable to combine branch paths with loop paths, or too cumbersome, generating overly complicated path arrangements that are meaningless and superfluous for aerospace software testing. Drawing on our experience in aerospace software testing, we developed a tailored path generation algorithm for UML graphical models of aerospace test software.
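For illustration, a minimal path enumeration over a directed control-flow graph is sketched below in Python; bounding the number of times each edge may be traversed lets branch paths and loop paths be combined without an explosion of meaningless paths. The graph and the bound are assumptions, not the authors' algorithm.

```python
def enumerate_paths(graph, start, end, max_edge_visits=2):
    """Enumerate paths from start to end, allowing each edge at most
    max_edge_visits traversals so that loops are covered without exploding."""
    paths = []

    def dfs(node, path, edge_counts):
        if node == end:
            paths.append(path[:])
            return
        for nxt in graph.get(node, []):
            edge = (node, nxt)
            if edge_counts.get(edge, 0) < max_edge_visits:
                edge_counts[edge] = edge_counts.get(edge, 0) + 1
                dfs(nxt, path + [nxt], edge_counts)
                edge_counts[edge] -= 1

    dfs(start, [start], {})
    return paths

# Toy activity diagram: a branch at B and a loop D -> B
flow = {"A": ["B"], "B": ["C", "D"], "C": ["E"], "D": ["B", "E"]}
for p in enumerate_paths(flow, "A", "E"):
    print(" -> ".join(p))
```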
Ezawa, Kiyoshi; Innan, Hideki
2013-07-01
The population genetic behavior of mutations in sperm genes is theoretically investigated. We modeled the processes at two levels. One is the standard population genetic process, in which the population allele frequencies change generation by generation, depending on the difference in selective advantages. The other is the sperm competition during each genetic transmission from one generation to the next generation. For the sperm competition process, we formulate the situation where a huge number of sperm with alleles A and B, produced by a single heterozygous male, compete to fertilize a single egg. This "minimal model" demonstrates that a very slight difference in sperm performance amounts to quite a large difference between the alleles' winning probabilities. By incorporating this effect of paternity-sharing sperm competition into the standard population genetic process, we show that fierce sperm competition can enhance the fixation probability of a mutation with a very small phenotypic effect at the single-sperm level, suggesting a contribution of sperm competition to rapid amino acid substitutions in haploid-expressed sperm genes. Considering recent genome-wide demonstrations that a substantial fraction of the mammalian sperm genes are haploid expressed, our model could provide a potential explanation of rapid evolution of sperm genes with a wide variety of functions (as long as they are expressed in the haploid phase). Another advantage of our model is that it is applicable to a wide range of species, irrespective of whether the species is externally fertilizing, polygamous, or monogamous. The theoretical result was applied to mammalian data to estimate the selection intensity on nonsynonymous mutations in sperm genes.
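The qualitative effect can be illustrated with a toy Wright-Fisher simulation in which heterozygous parents transmit the mutant allele with probability k slightly above one half, a stand-in for the sperm-competition winning probability; unlike the paper's two-level model, the bias here is applied to every heterozygous parent rather than only through sperm, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def transmit(genotype, k):
    """Allele passed on by a parent (1 = mutant A, 0 = wild type B).
    Heterozygotes transmit the mutant with probability k; in the paper this bias
    arises from sperm competition and acts only through males, but for brevity
    it is applied to every heterozygous parent here."""
    a1, a2 = genotype
    if a1 == a2:
        return a1
    return 1 if rng.random() < k else 0

def fixation_probability(N=100, k=0.52, replicates=1000):
    """Fraction of replicates in which a single new mutant allele fixes in a
    diploid Wright-Fisher population of N individuals."""
    fixed = 0
    for _ in range(replicates):
        pop = [(1, 0)] + [(0, 0)] * (N - 1)          # one heterozygous carrier
        while True:
            count = sum(a for g in pop for a in g)
            if count == 0 or count == 2 * N:
                fixed += count == 2 * N
                break
            pop = [(transmit(pop[rng.integers(N)], k),
                    transmit(pop[rng.integers(N)], k)) for _ in range(N)]
    return fixed / replicates

print("neutral expectation (1/2N):", 1 / (2 * 100))
print("estimated with transmission bias k = 0.52:", fixation_probability())
```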
Generation of action potentials in a mathematical model of corticotrophs.
LeBeau, A P; Robson, A B; McKinnon, A E; Donald, R A; Sneyd, J
1997-01-01
Corticotropin-releasing hormone (CRH) is an important regulator of adrenocorticotropin (ACTH) secretion from pituitary corticotroph cells. The intracellular signaling system that underlies this process involves modulation of voltage-sensitive Ca2+ channel activity, which leads to the generation of Ca2+ action potentials and influx of Ca2+. However, the mechanisms by which Ca2+ channel activity is modulated in corticotrophs are not currently known. We investigated this process in a Hodgkin-Huxley-type mathematical model of corticotroph plasma membrane electrical responses. We found that an increase in the L-type Ca2+ current was sufficient to generate action potentials from a previously resting state of the model. The increase in the L-type current could be elicited by either a shift in the voltage dependence of the current toward more negative potentials, or by an increase in the conductance of the current. Although either of these mechanisms is potentially responsible for the generation of action potentials, previous experimental evidence favors the former mechanism, with the magnitude of the shift required being consistent with the experimental findings. The model also shows that the T-type Ca2+ current plays a role in setting the excitability of the plasma membrane, but does not appear to contribute in a dynamic manner to action potential generation. Inhibition of a K+ conductance that is active at rest also affects the excitability of the plasma membrane. PMID:9284294
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gongalsky, Maxim B., E-mail: mgongalsky@gmail.com; Timoshenko, Victor Yu.
2014-12-28
We propose a phenomenological model to explain photoluminescence degradation of silicon nanocrystals under singlet oxygen generation in gaseous and liquid systems. The model considers coupled rate equations, which take into account the exciton radiative recombination in silicon nanocrystals, photosensitization of singlet oxygen generation, defect formation on the surface of silicon nanocrystals as well as quenching processes for both excitons and singlet oxygen molecules. The model describes well the experimentally observed power law dependences of the photoluminescence intensity, singlet oxygen concentration, and lifetime versus photoexcitation time. The defect concentration in silicon nanocrystals increases by a power law with a fractional exponent, which depends on the singlet oxygen concentration and ambient conditions. The obtained results are discussed in view of optimizing the photosensitized singlet oxygen generation for biomedical applications.
Predictive models of radiative neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julio, J., E-mail: julio@lipi.go.id
2016-06-21
We discuss two models of radiative neutrino mass generation. The first model features a one-loop Zee model with Z_4 symmetry. The second model is a two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict some interesting rates for lepton flavor violation processes.
Klinar, Dušan
2016-04-01
Biochar, as a soil amendment and carbon sink, has recently become one of the most interesting products of slow pyrolysis. The simplest and most widely used industrial process arrangement produces biochar and heat at the same time. The proposed mass and heat balance model consists of heat consumers (the heat demand side) and a heat generation-supply side. Heat is provided by direct burning of all uncondensed volatiles generated from the biomass. Calculation of the mass and heat balance of both sides reveals the internal distribution of mass and energy within the process streams and units. Thermodynamic calculations verified not only the concept but also the numerical range of the results. Comparisons with recently published scientific and vendor data demonstrate its general applicability and reliability. The model opens the possibility for process efficiency innovations. Finally, the model was adapted to produce results more useful to investors and to fully support techno-economic assessments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Improvement of the model for surface process of tritium release from lithium oxide
NASA Astrophysics Data System (ADS)
Yamaki, Daiju; Iwamoto, Akira; Jitsukawa, Shiro
2000-12-01
Among the various tritium transport processes in lithium ceramics, the importance and the detailed mechanism of surface reactions remain to be elucidated. The dynamic adsorption and desorption model for tritium desorption from lithium ceramics, especially Li2O, was constructed. From the experimental results, it was considered that both H2 and H2O are dissociatively adsorbed on Li2O and generate OH- on the surface. In the first model developed in 1994, it was assumed that either the dissociative adsorption of H2 or H2O on Li2O generates two OH- on the surface. However, recent calculation results show that the generation of one OH- and one H- is more stable than that of two OH-s by the dissociative adsorption of H2. Therefore, the assumption of H2 adsorption and desorption in the first model is improved and the tritium release behavior from the Li2O surface is evaluated again using the improved model. The tritium residence time on the Li2O surface is calculated using the improved model, and the results are compared with the experimental results. The calculation results using the improved model agree better with the experimental results than those using the first model.
ERIC Educational Resources Information Center
Hartig, Nadine; Steigerwald, Fran
2007-01-01
This article examines the family roles and ethics of first-generation college students and their families through discussion of a case vignette. London's family roles applied to first-generation college students are discussed. Narrative therapy practices and an ethical model that examines the value process of counselors are explored as possible…
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
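A small Python sketch of the basic gaussian scale mixture generative step is given below: samples are gaussian vectors multiplied by a mixer drawn from a discrete set, and dividing by the (known) mixer restores gaussian statistics, which is the intuition behind the divisive-normalization connection. The mixer values, weights, and covariance are illustrative, and the inference machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)

n_samples, dim = 20_000, 2
mixers = np.array([0.5, 1.0, 4.0])            # possible mixer values
assign = rng.choice(len(mixers), size=n_samples, p=[0.5, 0.3, 0.2])
v = mixers[assign]                            # mixer variable per sample

cov = np.array([[1.0, 0.6], [0.6, 1.0]])      # local filter correlation
g = rng.multivariate_normal(np.zeros(dim), cov, size=n_samples)
x = v[:, None] * g                            # GSM observation: mixer times gaussian

print("excess kurtosis of raw responses:   ", kurtosis(x[:, 0]).round(2))
print("excess kurtosis after dividing by v:", kurtosis((x / v[:, None])[:, 0]).round(2))
```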
Feller Processes: The Next Generation in Modeling. Brownian Motion, Lévy Processes and Beyond
Böttcher, Björn
2010-01-01
We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian Motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian Motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving the very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular our simulation technique allows to apply Monte Carlo methods to Feller processes. PMID:21151931
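The construction lends itself to a simple simulation scheme: over each small time step, draw the increment from the Lévy process whose coefficients are "frozen" at the current state. The Python sketch below does this for an assumed state-dependent diffusion coefficient and jump intensity; the coefficient functions and step size are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def sigma(x):        # state-dependent volatility of the "local" Levy process
    return 0.2 + 0.1 * np.tanh(x)

def jump_rate(x):    # state-dependent intensity of compound-Poisson jumps
    return 1.0 + 0.5 * np.exp(-x**2)

def sample_path(x0=0.0, T=5.0, dt=1e-3):
    """Approximate a Feller process by, at each step, sampling an increment from
    the Levy process 'frozen' at the current state (state-dependent mixing)."""
    n = int(T / dt)
    path = np.empty(n + 1)
    path[0] = x0
    for i in range(n):
        x = path[i]
        diffusion = sigma(x) * np.sqrt(dt) * rng.normal()
        n_jumps = rng.poisson(jump_rate(x) * dt)
        jumps = rng.normal(0.0, 0.3, size=n_jumps).sum() if n_jumps else 0.0
        path[i + 1] = x + diffusion + jumps
    return path

paths = np.array([sample_path() for _ in range(200)])
print("Monte Carlo mean and std at T=5:",
      paths[:, -1].mean().round(3), paths[:, -1].std().round(3))
```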
Learning Orthographic Structure With Sequential Generative Neural Networks.
Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco
2016-04-01
Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
An integrative process model of leadership: examining loci, mechanisms, and event cycles.
Eberly, Marion B; Johnson, Michael D; Hernandez, Morela; Avolio, Bruce J
2013-09-01
Utilizing the locus (source) and mechanism (transmission) of leadership framework (Hernandez, Eberly, Avolio, & Johnson, 2011), we propose and examine the application of an integrative process model of leadership to help determine the psychological interactive processes that constitute leadership. In particular, we identify the various dynamics involved in generating leadership processes by modeling how the loci and mechanisms interact through a series of leadership event cycles. We discuss the major implications of this model for advancing an integrative understanding of what constitutes leadership and its current and future impact on the field of psychological theory, research, and practice. © 2013 APA, all rights reserved.
Cognitive Support During High-Consequence Episodes of Care in Cardiovascular Surgery.
Conboy, Heather M; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Christov, Stefan C; Goldman, Julian M; Yule, Steven J; Zenati, Marco A
2017-03-01
Despite significant efforts to reduce preventable adverse events in medical processes, such events continue to occur at unacceptable rates. This paper describes a computer science approach that uses formal process modeling to provide situationally aware monitoring and management support to medical professionals performing complex processes. These process models represent both normative and non-normative situations, and are validated by rigorous automated techniques such as model checking and fault tree analysis, in addition to careful review by experts. Context-aware Smart Checklists are then generated from the models, providing cognitive support during high-consequence surgical episodes. The approach is illustrated with a case study in cardiovascular surgery.
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
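As an illustration of on-demand, sample-based action selection (not the authors' model or planner), the Python sketch below evaluates admission actions for a toy single-resource MDP by Monte Carlo rollouts from the current state; capacities, arrival rates, and rewards are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

CAPACITY = 20          # beds for one resource
MAX_ADMIT = 5          # elective admissions we may schedule per day
GAMMA = 0.95

def step(occupied, admit):
    """Toy dynamics: emergencies arrive, some patients are discharged,
    reward favours treated patients and penalises overflow."""
    emergencies = rng.poisson(2)
    discharges = rng.binomial(occupied, 0.3)
    new_occ = occupied - discharges + admit + emergencies
    overflow = max(0, new_occ - CAPACITY)
    reward = admit + emergencies - 5 * overflow
    return min(new_occ, CAPACITY), reward

def rollout_value(occupied, depth=20):
    """Discounted return of a random admission policy, used to evaluate leaves."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        occupied, r = step(occupied, rng.integers(MAX_ADMIT + 1))
        total += discount * r
        discount *= GAMMA
    return total

def plan_action(occupied, n_rollouts=200):
    """Generate an action on demand: sample rollouts for each candidate action."""
    q = np.zeros(MAX_ADMIT + 1)
    for a in range(MAX_ADMIT + 1):
        for _ in range(n_rollouts):
            nxt, r = step(occupied, a)
            q[a] += r + GAMMA * rollout_value(nxt)
    return int(np.argmax(q / n_rollouts))

print("suggested admissions with 12 beds occupied:", plan_action(12))
```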
Sensitivity analysis of 1-D dynamical model for basin analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, S.
1987-01-01
Geological processes related to petroleum generation, migration and accumulation are very complicated in terms of time and variables involved, and it is very difficult to simulate these processes by laboratory experiments. For this reason, many mathematical/computer models have been developed to simulate these geological processes based on geological, geophysical and geochemical principles. The sensitivity analysis in this study is a comprehensive examination of how geological, geophysical and geochemical parameters influence the reconstructions of geohistory, thermal history and hydrocarbon generation history using the 1-D fluid flow/compaction model developed in the Basin Modeling Group at the University of South Carolina. This study shows the effects of some commonly used parameters such as depth, age, lithology, porosity, permeability, unconformity (eroded thickness and erosion time), temperature at sediment surface, bottom hole temperature, present day heat flow, thermal gradient, thermal conductivity and kerogen type and content on the evolutions of formation thickness, porosity, permeability, pressure with time and depth, heat flow with time, temperature with time and depth, vitrinite reflectance (Ro) and TTI with time and depth, and oil window in terms of time and depth, amount of hydrocarbons generated with time and depth. Lithology, present day heat flow and thermal conductivity are the most sensitive parameters in the reconstruction of temperature history.
NASA Technical Reports Server (NTRS)
Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.
2005-01-01
Space Shuttle Processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource Constrained - Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation Modeling is a useful tool in estimation of the makespan of the overall process. However, simulation requires a model to be developed, which itself is a labor and time consuming effort. With such a dynamic process, often the model would potentially be out of synchronization with the actual process, limiting the applicability of the simulation answers in solving the actual estimation problem. Integration of TEAMS model enabling software with our existing schedule program software is the basis of our solution. This paper explains the approach used to develop an auto-generated simulation model from planning and schedule efforts and available data.
Deriving Differential Equations from Process Algebra Models in Reagent-Centric Style
NASA Astrophysics Data System (ADS)
Hillston, Jane; Duguid, Adam
The reagent-centric style of modeling allows stochastic process algebra models of biochemical signaling pathways to be developed in an intuitive way. Furthermore, once constructed, the models are amenable to analysis by a number of different mathematical approaches including both stochastic simulation and coupled ordinary differential equations. In this chapter, we give a tutorial introduction to the reagent-centric style, in PEPA and Bio-PEPA, and the way in which such models can be used to generate systems of ordinary differential equations.
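The flavour of the translation can be illustrated with a plain mass-action sketch in Python: each reaction in a reagent-centric list contributes a flux that consumes its reactants and generates its products, yielding one ODE per reagent. PEPA and Bio-PEPA semantics are richer than this, so the sketch is only an analogy; the pathway and rate constants are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reagent-centric description: (rate constant, reactants, products)
reactions = [
    (0.02, {"S": 1, "E": 1}, {"SE": 1}),   # binding
    (0.10, {"SE": 1}, {"S": 1, "E": 1}),   # unbinding
    (0.05, {"SE": 1}, {"P": 1, "E": 1}),   # product formation
]
species = ["S", "E", "SE", "P"]
index = {s: i for i, s in enumerate(species)}

def rhs(t, y):
    """Each reaction contributes a mass-action flux; reactants are consumed
    and products generated, giving one coupled ODE per reagent."""
    dydt = np.zeros_like(y)
    for k, reactants, products in reactions:
        flux = k * np.prod([y[index[s]] ** n for s, n in reactants.items()])
        for s, n in reactants.items():
            dydt[index[s]] -= n * flux
        for s, n in products.items():
            dydt[index[s]] += n * flux
    return dydt

y0 = [100.0, 20.0, 0.0, 0.0]    # initial amounts of S, E, SE, P
sol = solve_ivp(rhs, (0.0, 200.0), y0)
print({s: round(sol.y[index[s], -1], 2) for s in species})
```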
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
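A compact Python sketch of the coupling idea follows: a Poisson rectangular pulse generator produces alternating dry spells and rectangular events, and each event total is disaggregated by a microcanonical cascade that conserves mass at every branching. The probabilities and distributions are illustrative, and the event-duration handling is simplified relative to the paper (no sigmoid parameterization, a fixed number of cascade levels).

```python
import numpy as np

rng = np.random.default_rng(2017)

def poisson_rectangular_pulses(n_events, mean_dry_h=30.0, mean_dur_h=6.0, mean_int=1.5):
    """Alternating inter-storm gaps and rectangular rainfall events
    (duration in hours, mean intensity in mm/h), all exponentially distributed."""
    events = []
    for _ in range(n_events):
        dry = rng.exponential(mean_dry_h)
        dur = rng.exponential(mean_dur_h)
        intensity = rng.exponential(mean_int)
        events.append((dry, dur, intensity))
    return events

def cascade_disaggregate(total_mm, levels=5, p_zero=0.3):
    """Microcanonical cascade: split the event total into 2**levels intervals.
    At each branching, with probability p_zero the mass goes entirely into one of
    the two halves (chosen with equal chance); otherwise it is split by a
    Beta-distributed weight W and 1-W, so mass is always conserved."""
    amounts = np.array([total_mm])
    for _ in range(levels):
        u = rng.random(len(amounts))
        w = np.where(u < p_zero / 2, 0.0,
             np.where(u < p_zero, 1.0, rng.beta(2.0, 2.0, len(amounts))))
        left = amounts * w
        amounts = np.column_stack([left, amounts - left]).ravel()
    return amounts

for dry, dur, intensity in poisson_rectangular_pulses(3):
    fine = cascade_disaggregate(dur * intensity)
    print(f"dry {dry:5.1f} h | event {dur:4.1f} h, {dur * intensity:6.1f} mm "
          f"-> {len(fine)} intervals, max {fine.max():.2f} mm")
```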
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
Observations of cross-Saharan transport of water vapour via cycle of cold pools and moist convection
NASA Astrophysics Data System (ADS)
Trzeciak, Tomasz; Garcia-Carreras, Luis; Marsham, John H.
2017-04-01
A scarcity of observational data has previously limited our ability to study meteorological processes in the Sahara. The Sahara is a key component of the West African monsoon and the world's largest dust source, but its representation is a major uncertainty in global models. Past studies have shown that there is a persistent warm and dry model bias throughout the Sahara, and this has been attributed to the lack of convectively-generated cold pools in the model, which can ventilate the central Sahara from its margins. Here we present an observed case from June 2012 which explains how cold pools are able to transport water vapour across a large area of the Sahara over a period of several days. A daily cycle is found to occur, where deep convection in the evening generates moist cold pools that then feed the next day's convection; the new convection in turn generates new cold pools, providing a vertical recycling of moisture. Trajectories driven by analyses can capture the general direction of transport, but not its full extent, especially at night when cold pools are most active, highlighting the difficulties for models to capture these processes. These results show the importance of cold pools for moisture transport, dust and clouds in the region, and demonstrate the need to include these processes in models to improve the representation of the Saharan atmosphere.
Tewari, Shivendra G.; Bugenhagen, Scott M.; Palmer, Bradley M.; Beard, Daniel A.
2015-01-01
Despite extensive study over the past six decades the coupling of chemical reaction and mechanical processes in muscle dynamics is not well understood. We lack a theoretical description of how chemical processes (metabolite binding, ATP hydrolysis) influence and are influenced by mechanical processes (deformation and force generation). To address this need, a mathematical model of the muscle cross-bridge (XB) cycle based on Huxley’s sliding filament theory is developed that explicitly accounts for the chemical transformation events and the influence of strain on state transitions. The model is identified based on elastic and viscous moduli data from mouse and rat myocardial strips over a range of perturbation frequencies, and MgATP and inorganic phosphate (Pi) concentrations. Simulations of the identified model reproduce the observed effects of MgATP and MgADP on the rate of force development. Furthermore, simulations reveal that the rate of force re-development measured in slack-restretch experiments is not directly proportional to the rate of XB cycling. For these experiments, the model predicts that the observed increase in the rate of force generation with increased Pi concentration is due to inhibition of cycle turnover by Pi. Finally, the model captures the observed phenomena of force yielding suggesting that it is a result of rapid detachment of stretched attached myosin heads. PMID:25681584
Nakajima, Toshiyuki
2017-12-01
Evolution by natural selection requires the following conditions: (1) a particular selective environment; (2) variation of traits in the population; (3) differential survival/reproduction among the types of organisms; and (4) heritable traits. However, the traditional (standard) model does not clearly explain how and why these conditions are generated or determined. What generates a selective environment? What generates new types? How does a certain type replace, or coexist with, others? In this paper, based on the holistic philosophy of Western and Eastern traditions, I focus on the ecosystem as a higher-level system and generator of conditions that induce the evolution of component populations; I also aim to identify the ecosystem processes that generate those conditions. In particular, I employ what I call the scientific principle of dependent-arising (SDA), which is tailored for scientific use and is based on Buddhism principle called "pratītya-samutpāda" in Sanskrit. The SDA principle asserts that there exists a higher-level system, or entity, which includes a focal process of a system as a part within it; this determines or generates the conditions required for the focal process to work in a particular way. I conclude that the ecosystem generates (1) selective environments for component species through ecosystem dynamics; (2) new genetic types through lateral gene transfer, hybridization, and symbiogenesis among the component species of the ecosystem; (3) mechanistic processes of replacement of an old type with a new one. The results of this study indicate that the ecological extension of the theoretical model of adaptive evolution is required for better understanding of adaptive evolution. Copyright © 2017 Elsevier Ltd. All rights reserved.
Generating structure from experience: A retrieval-based model of language processing.
Johns, Brendan T; Jones, Michael N
2015-09-01
Standard theories of language generally assume that some abstraction of linguistic input is necessary to create higher level representations of linguistic structures (e.g., a grammar). However, the importance of individual experiences with language has recently been emphasized by both usage-based theories (Tomasello, 2003) and grounded and situated theories (e.g., Zwaan & Madden, 2005). Following the usage-based approach, we present a formal exemplar model that stores instances of sentences across a natural language corpus, applying recent advances from models of semantic memory. In this model, an exemplar memory is used to generate expectations about the future structure of sentences, using a mechanism for prediction in language processing (Altmann & Mirković, 2009). The model successfully captures a broad range of behavioral effects-reduced relative clause processing (Reali & Christiansen, 2007), the role of contextual constraint (Rayner & Well, 1996), and event knowledge activation (Ferretti, Kutas, & McRae, 2007), among others. We further demonstrate how perceptual knowledge could be integrated into this exemplar-based framework, with the goal of grounding language processing in perception. Finally, we illustrate how an exemplar memory system could have been used in the cultural evolution of language. The model provides evidence that an impressive amount of language processing may be bottom-up in nature, built on the storage and retrieval of individual linguistic experiences. (c) 2015 APA, all rights reserved).
Modelling small-area inequality in premature mortality using years of life lost rates
NASA Astrophysics Data System (ADS)
Congdon, Peter
2013-04-01
Analysis of premature mortality variations via standardized expected years of life lost (SEYLL) measures raises questions about suitable modelling for mortality data, especially when developing SEYLL profiles for areas with small populations. Existing fixed effects estimation methods take no account of correlations in mortality levels over ages, causes, socio-ethnic groups or areas. They also do not specify an underlying data generating process, or a likelihood model that can include trends or correlations, and are likely to produce unstable estimates for small-areas. An alternative strategy involves a fully specified data generation process, and a random effects model which "borrows strength" to produce stable SEYLL estimates, allowing for correlations between ages, areas and socio-ethnic groups. The resulting modelling strategy is applied to gender-specific differences in SEYLL rates in small-areas in NE London, and to cause-specific mortality for leading causes of premature mortality in these areas.
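For reference, the basic quantity being modelled can be computed as in the short Python sketch below: deaths by age are weighted by a reference remaining life expectancy and then age-standardized; the life table values, weights, and counts are illustrative, and the paper's contribution is the random effects model placed on top of such rates rather than this arithmetic.

```python
import numpy as np

# Illustrative age groups, reference remaining life expectancy (years) at death,
# small-area death counts, population, and standard weights for age-standardization
age_groups   = ["0-14", "15-44", "45-64", "65-74", "75+"]
ref_expect   = np.array([80.0, 55.0, 30.0, 15.0, 7.0])
deaths       = np.array([1, 4, 12, 20, 35])
population   = np.array([4000, 9000, 6000, 2500, 1500])
standard_pop = np.array([0.18, 0.40, 0.25, 0.10, 0.07])   # standard weights (sum to 1)

yll_by_age = deaths * ref_expect                       # expected years of life lost
yll_rate   = yll_by_age / population * 1000            # YLL per 1000 person-years by age
seyll_rate = float(np.sum(standard_pop * yll_rate))    # age-standardized (SEYLL) rate

for g, y in zip(age_groups, yll_by_age):
    print(f"{g:>6}: {y:6.0f} years of life lost")
print("standardized YLL rate per 1000:", round(seyll_rate, 1))
```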
NASA Astrophysics Data System (ADS)
Hirano, Ryoichi; Iida, Susumu; Amano, Tsuyoshi; Watanabe, Hidehiro; Hatakeyama, Masahiro; Murakami, Takeshi; Yoshikawa, Shoji; Suematsu, Kenichi; Terao, Kenji
2015-07-01
High-sensitivity EUV mask pattern defect detection is one of the major issues in realizing device fabrication with EUV lithography. We have already designed a novel Projection Electron Microscope (PEM) optics that has been integrated into a new inspection system named EBEYE-V30 ("Model EBEYE" is EBARA's model code), which seems quite promising for 16 nm hp generation EUVL Patterned mask Inspection (PI). Defect inspection sensitivity was evaluated by capturing an electron image generated at the mask and focusing it onto an image sensor. Improving the performance of the novel PEM optics is not only a matter of a higher-resolution image sensor but also of better image processing to enhance the defect signal. In this paper, we describe the experimental results of EUV patterned mask inspection using the above-mentioned system. The performance of the system is measured in terms of defect detectability for an 11 nm hp generation EUV mask. Improving the inspection throughput for 11 nm hp generation defect detection would require a data processing rate greater than 1.5 Giga-Pixel-Per-Second (GPPS), which would realize less than eight hours of inspection time including the step-and-scan motion associated with the process. The aims of the development program are to attain a higher throughput and to enhance the defect detection sensitivity by using an adequate pixel size with sophisticated image processing, resulting in a higher processing rate.
Inflation, reheating, and dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardenas, Victor H.
2007-04-15
In a recent paper, Liddle and Urena-Lopez suggested that, to have a unified model of inflation and dark matter, it is imperative to have a proper reheating process in which part of the inflaton field remains. In this paper I propose a model where this is possible. I found that incorporating the effect of plasma masses generated by the inflaton products enables us to stop the process. A numerically estimated model is presented.
Next-generation concurrent engineering: developing models to complement point designs
NASA Technical Reports Server (NTRS)
Morse, Elizabeth; Leavens, Tracy; Cohanim, Babak; Harmon, Corey; Mahr, Eric; Lewis, Brian
2006-01-01
Concurrent Engineering Design (CED) teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a 'next-generation CED'; in addition to a point design, the Team develops a model of the local trade space. The process is a balance between the power of model-development tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission. This paper reviews the modeling method and its practical implementation in the CED environment. Example results illustrate the benefit of this approach.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaption and stochastic model modification of the deterministic model adaptation. Deterministic model adaption involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
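A minimal sketch of the monitoring step is shown below: Wald's sequential probability ratio test applied to Gaussian model residuals, with the trained neuro-analytic model stubbed out as a noise source. The shift size, error rates, and restart rule are assumptions for illustration, not the patented design.

```python
import numpy as np

rng = np.random.default_rng(9)

def sprt_monitor(residuals, sigma, shift, alpha=0.01, beta=0.01):
    """Wald SPRT on Gaussian residuals: H0 mean 0 vs H1 mean `shift`.
    Returns the sample index at which H1 (process change) is accepted, or None."""
    upper = np.log((1 - beta) / alpha)      # accept H1 above this
    lower = np.log(beta / (1 - alpha))      # accept H0 below this (then restart)
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += (shift * r - shift**2 / 2) / sigma**2   # log-likelihood ratio increment
        if llr >= upper:
            return i
        if llr <= lower:
            llr = 0.0                        # restart the test, keep monitoring
    return None

# Stub for the trained model's prediction error: noise only, then a fault at t=300
sigma = 0.5
residuals = rng.normal(0.0, sigma, 600)
residuals[300:] += 0.4                       # drift the plant away from the model

print("change declared at sample:", sprt_monitor(residuals, sigma, shift=0.4))
```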
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
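A sketch of the Fourier-synthesis generator is given below: prescribe a flat amplitude spectrum, draw uniformly random phases, and inverse-transform to obtain an approximately Gaussian white sequence. The scaling convention used here (amplitude proportional to the square root of the sequence length, giving unit variance) is one reasonable choice and not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(2017)

def fourier_white_noise(n, sigma=1.0):
    """Synthesize an n-sample, approximately Gaussian white noise sequence from a
    flat amplitude spectrum and uniformly distributed random phases."""
    n_freq = n // 2 + 1
    amplitude = np.full(n_freq, sigma * np.sqrt(n))       # flat spectrum, variance ~ sigma**2
    phase = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    spectrum = amplitude * np.exp(1j * phase)
    spectrum[0] = 0.0                                     # zero mean
    if n % 2 == 0:
        spectrum[-1] = amplitude[-1] * np.cos(phase[-1])  # Nyquist bin must be real
    return np.fft.irfft(spectrum, n)

x = fourier_white_noise(601)
print("mean %.3f  std %.3f" % (x.mean(), x.std()))
print("lag-1 autocorrelation %.3f" % np.corrcoef(x[:-1], x[1:])[0, 1])
```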
Toward a Model of Text Comprehension and Production.
ERIC Educational Resources Information Center
Kintsch, Walter; Van Dijk, Teun A.
1978-01-01
Described is the system of mental operations occurring in text comprehension and in recall and summarization. A processing model is outlined: 1) the meaning elements of a text become organized into a coherent whole, 2) the full meaning of the text is condensed into its gist, and 3) new texts are generated from the comprehension processes.…
Pathways to Co-Impact: Action Research and Community Organising
ERIC Educational Resources Information Center
Banks, Sarah; Herrington, Tracey; Carter, Kath
2017-01-01
This article introduces the concept of "co-impact" to characterise the complex and dynamic process of social and economic change generated by participatory action research (PAR). It argues that dominant models of research impact tend to see it as a linear process, based on a donor-recipient model, occurring at the end of a project…
Geospatial application of the Water Erosion Prediction Project (WEPP) Model
D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot
2011-01-01
The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltration, runoff, ET) component, which subsequently impacts the rest of the...
NASA Astrophysics Data System (ADS)
Wu, Xiaoling; Xiang, Xiaohua; Qiu, Chao; Li, Li
2018-06-01
In cold regions, precipitation, air temperature and snow cover significantly influence soil water, heat transfer, the freezing-thawing processes of the active soil layer, and runoff generation. Hydrological regimes of the world's major rivers in cold regions have changed remarkably since the 1960s, but the mechanisms underlying the changes have not yet been fully understood. Using the basic physical processes for water and heat balances and transfers in snow covered soil, a water-heat coupling model for snow cover and its underlying soil layers was established. We found that freezing-thawing processes can affect the thickness of the active layer, the storage capacity for liquid water, and subsequent surface runoff. Based on calculations of thawing-freezing processes, we investigated hydrological processes at Qumalai. The results show that the water-heat coupling model can be used in this region to provide an understanding of local hydrological regimes and their changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiCarlo, David; Huh, Chun; Johnston, Keith P.
2015-01-31
The goal of this project was to develop a new CO2 injection enhanced oil recovery (CO2-EOR) process using engineered nanoparticles with optimized surface coatings that has better volumetric sweep efficiency and a wider application range than conventional CO2-EOR processes. The main objectives of this project were to (1) identify the characteristics of the optimal nanoparticles that generate extremely stable CO2 foams in situ in reservoir regions without oil; (2) develop a novel method of mobility control using “self-guiding” foams with smart nanoparticles; and (3) extend the applicability of the new method to reservoirs having a wide range of salinity, temperatures, and heterogeneity. Concurrent with our experimental effort to understand the foam generation and transport processes and foam-induced mobility reduction, we also developed mathematical models to explain the underlying processes and mechanisms that govern the fate of nanoparticle-stabilized CO2 foams in porous media and applied these models to (1) simulate the results of foam generation and transport experiments conducted in beadpack and sandstone core systems, (2) analyze CO2 injection data received from a field operator, and (3) aid with the design of a foam injection pilot test. Our simulator is applicable to near-injection well field-scale foam injection problems and accounts for the effects due to layered heterogeneity in permeability field, foam stabilizing agents effects, oil presence, and shear-thinning on the generation and transport of nanoparticle-stabilized C/W foams. This report presents the details of our experimental and numerical modeling work and outlines the highlights of our findings.
Sensitivity of Attitude Determination on the Model Assumed for ISAR Radar Mappings
NASA Astrophysics Data System (ADS)
Lemmens, S.; Krag, H.
2013-09-01
Inverse synthetic aperture radars (ISAR) are valuable instruments for assessing the state of a large object in low Earth orbit. The images generated by these radars can reach a sufficient quality to be used during launch support or contingency operations, e.g. for confirming the deployment of structures, determining the structural integrity, or analysing the dynamic behaviour of an object. However, the direct interpretation of ISAR images can be a demanding task due to the nature of the range-Doppler space in which these images are produced. Recently, a tool has been developed by the European Space Agency's Space Debris Office to generate radar mappings of a target in orbit. Such mappings are a 3D-model based simulation of how an ideal ISAR image would be generated by a ground based radar under given processing conditions. These radar mappings can be used to support a data interpretation process. For example, by processing predefined attitude scenarios during an observation sequence and comparing them with actual observations, one can detect non-nominal behaviour. Vice versa, one can also estimate the attitude states of the target by fitting the radar mappings to the observations. It has been demonstrated for the latter use case that a coarse approximation of the target through a 3D-model is already sufficient to derive the attitude information from the generated mappings. The level of detail required for the 3D-model is determined by the process of generating ISAR images, which is based on the theory of scattering bodies. Therefore, a complex surface can return an intrinsically noisy ISAR image. For example, when many instruments on a satellite are visible to the observer, the ISAR image can suffer from multipath reflections. In this paper, we will further analyse the sensitivity of the attitude fitting algorithms to variations in the dimensions and the level of detail of the underlying 3D model. Moreover, we investigate the ability to estimate the orientations of different spacecraft components with respect to each other from the fitting procedure.
Computational approach on PEB process in EUV resist: multi-scale simulation
NASA Astrophysics Data System (ADS)
Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo
2017-03-01
For decades, downsizing has been a key issue for the high performance and low cost of semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve this goal. As the predominant process in extreme ultraviolet lithography for determining resolution and sensitivity, post exposure bake has mainly been studied by experimental groups, but the development of its photoresists has reached a bottleneck because the underlying mechanisms of the process remain unclear. Herein, we provide a theoretical approach to investigate the underlying mechanisms of the post exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculations (quantum mechanical simulation) were conducted to quantitatively predict the activation energies and probabilities of the chemical reactions, and these were applied to molecular dynamics simulation to construct a reliable computational model. Then, the overall chemical reactions were simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies the phenomena of both quantum and atomic scales during the post exposure bake process, and it will be helpful for understanding critical factors affecting the performance of the resulting photoresist and for designing next-generation materials.
Introducing the Equiangular Spiral by Using Logo to Model Nature.
ERIC Educational Resources Information Center
Boyadzhiev, Irina; Boyadzhiev, Khristo
1992-01-01
Describes the method for producing the equiangular spiral, the geometric curve generated by modeling an insect's orientation process to an illumination source, utilizing a LOGO Turtle program which is included. (JJK)
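The article's program is written in Logo and is not reproduced in the abstract. A rough Python-turtle analogue of the idea, in which a simulated insect keeps a fixed angle to a light source while stepping forward and thereby traces an equiangular (logarithmic) spiral, might look like the following; the light position, step size and offset angle are illustrative choices, not values from the article.

```python
import turtle
import math

# An "insect" keeps a constant angle to a light source while stepping forward,
# which traces an equiangular (logarithmic) spiral.
LIGHT = (0.0, 0.0)
STEP = 5.0
OFFSET_DEG = 80.0       # angle kept between heading and the light direction

bug = turtle.Turtle()
bug.penup(); bug.goto(300, 0); bug.pendown()
bug.speed(0)

for _ in range(300):
    dx = LIGHT[0] - bug.xcor()
    dy = LIGHT[1] - bug.ycor()
    toward_light = math.degrees(math.atan2(dy, dx))
    bug.setheading(toward_light + OFFSET_DEG)   # constant angle to the source
    bug.forward(STEP)

turtle.done()
```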
Murrell, Daniel S; Cortes-Ciriano, Isidro; van Westen, Gerard J P; Stott, Ian P; Bender, Andreas; Malliavin, Thérèse E; Glen, Robert C
2015-01-01
In silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process. camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), and techniques to create model ensembles (using the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2). Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed up model generation and to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models: a complete model building workflow in one package.
A Discrete Fracture Network Model with Stress-Driven Nucleation and Growth
NASA Astrophysics Data System (ADS)
Lavoine, E.; Darcel, C.; Munier, R.; Davy, P.
2017-12-01
The realism of Discrete Fracture Network (DFN) models, beyond the bulk statistical properties, relies on the spatial organization of fractures, which is not reproduced by purely stochastic DFN models. The realism can be improved by injecting prior information into DFNs from a better knowledge of the geological fracturing processes. We first develop a model using simple kinematic rules for mimicking the growth of fractures from nucleation to arrest, in order to evaluate the consequences of the DFN structure on the network connectivity and flow properties. The model generates fracture networks with power-law scaling distributions and a percentage of T-intersections that are consistent with field observations. Nevertheless, a larger complexity arising from the spatial variability of natural fracture positions cannot be explained by a random nucleation process. We propose to introduce a stress-driven nucleation in the timewise process of this kinematic model to study the correlations between nucleation, growth and existing fracture patterns. The method uses the stress field generated by existing fractures and remote stress as an input for a Monte-Carlo sampling of nuclei centers at each time step. Networks so generated are found to have correlations over a large range of scales, with a correlation dimension that varies with time and with the function that relates the nucleation probability to stress. A sensitivity analysis of input parameters has been performed in 3D to quantify the influence of fractures and remote stress field orientations.
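A heavily simplified sketch of the stress-driven nucleation step might look like the following: at each time step, a nucleation site is drawn by Monte-Carlo sampling with a probability that increases with the local stress, which is itself perturbed by the fractures already present. The 2D setting, the stress stand-in and the exponential weighting are placeholders, not the paper's 3D mechanical model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2D version of a stress-weighted nucleation step.
nx = ny = 100
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))

def stress_field(fractures):
    """Crude stand-in: remote stress plus a decaying perturbation around each
    existing fracture centre (stress amplification near fractures)."""
    s = np.ones_like(x)                       # remote stress (normalized)
    for (fx, fy) in fractures:
        r2 = (x - fx) ** 2 + (y - fy) ** 2
        s += 0.5 * np.exp(-r2 / 0.005)
    return s

fractures = []
beta = 3.0                                    # stress sensitivity of nucleation
for step in range(50):
    s = stress_field(fractures)
    prob = np.exp(beta * s)                   # nucleation weight per cell
    prob /= prob.sum()
    idx = rng.choice(prob.size, p=prob.ravel())
    iy, ix = np.unravel_index(idx, prob.shape)
    fractures.append((x[iy, ix], y[iy, ix]))  # nucleate; growth/arrest omitted

print(f"nucleated {len(fractures)} fracture centres")
```

Because each nucleus raises the stress around it, later nuclei tend to cluster near earlier ones, which is the kind of spatial correlation the abstract refers to.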
Newe, Axel
2015-01-01
The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable to communicate respective data, especially as regards scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines.
Protein classification using modified n-grams and skip-grams.
Islam, S M Ashiqul; Heil, Benjamin J; Kearney, Christopher Michel; Baker, Erich J
2018-05-01
Classification by supervised machine learning greatly facilitates the annotation of protein characteristics from their primary sequence. However, the feature generation step in this process requires detailed knowledge of attributes used to classify the proteins. Lack of this knowledge risks the selection of irrelevant features, resulting in a faulty model. In this study, we introduce a supervised protein classification method with a novel means of automating the work-intensive feature generation step via a Natural Language Processing (NLP)-dependent model, using a modified combination of n-grams and skip-grams (m-NGSG). A meta-comparison of cross-validation accuracy with twelve training datasets from nine different published studies demonstrates a consistent increase in accuracy of m-NGSG when compared to contemporary classification and feature generation models. We expect this model to accelerate the classification of proteins from primary sequence data and increase the accessibility of protein characteristic prediction to a broader range of scientists. m-NGSG is freely available at Bitbucket: https://bitbucket.org/sm_islam/mngsg/src. A web server is available at watson.ecs.baylor.edu/ngsg. erich_baker@baylor.edu. Supplementary data are available at Bioinformatics online.
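The exact m-NGSG feature scheme is described in the paper rather than the abstract; the following plain-Python snippet only illustrates what contiguous n-grams and skip-grams of a primary sequence look like, using a toy sequence.

```python
def ngrams(seq, n):
    """Contiguous n-grams of a sequence."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def skipgrams(seq, n=2, skip=1):
    """n-grams whose residues are separated by `skip` positions."""
    span = (n - 1) * (skip + 1) + 1
    return ["".join(seq[i + j * (skip + 1)] for j in range(n))
            for i in range(len(seq) - span + 1)]

protein = "MKTAYIAKQR"          # toy primary sequence
print(ngrams(protein, 3))       # ['MKT', 'KTA', 'TAY', ...]
print(skipgrams(protein, 2, 1)) # ['MT', 'KA', 'TY', ...]
```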
NASA Astrophysics Data System (ADS)
Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun
2018-04-01
A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during the basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as a function of process variables. A microscopic and macroscopic rate calculation methodology (micro-kinetics and macro-kinetics) was developed to estimate the total refining contributed by the recirculating metal droplets through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion. The mathematical models for the size distribution of initial droplets, kinetics of simultaneous refining of elements, the residence time in the emulsion, and dynamic interfacial area change were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top blowing converter and the simulated metal and slag compositions were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling FetO content in the slag and the kinetics of Mn and P in a BOF process.
NASA Astrophysics Data System (ADS)
Bethmann, F.; Jepping, C.; Luhmann, T.
2013-04-01
This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as they are used by common computer graphics programs. In contrast, the method is designed with main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. In the first instance the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.
NASA Astrophysics Data System (ADS)
Maurer, Thomas; Caviedes-Voullième, Daniel; Hinz, Christoph; Gerke, Horst H.
2017-04-01
Landscapes that are heavily disturbed or newly formed by either natural processes or human activity are in a state of disequilibrium. Their initial development is thus characterized by highly dynamic processes under all climatic conditions. The primary distribution and structure of the solid phase (i.e. mineral particles forming the pore space) is one of the decisive factors for the development of the hydrological behavior of the eco-hydrological system and therefore (co-)determines its more or less stable final state. The artificially constructed 'Hühnerwasser' catchment (a 6 ha area located in the open-cast lignite mine Welzow-Süd, southern Brandenburg, Germany) is a landscape laboratory where the initial eco-hydrological development has been observed since 2005. The specific formation (or construction) processes generated characteristic sediment structures and distributions, resulting in a spatially heterogeneous initial state of the catchment. We developed a structure generator that simulates the characteristic distribution of the solid phase for such constructed landscapes. The program is able to generate quasi-realistic structures and sediment compositions on multiple spatial levels (1 cm up to 100 m scale). The generated structures can be i) conditioned to actual measurement values (e.g., soil texture and bulk distribution); ii) stochastically generated, or iii) calculated deterministically according to the geology and technical processes at the excavation site. Results are visualized using the GOCAD software package and the free software Paraview. Based on the 3D-spatial sediment distributions, effective hydraulic van-Genuchten parameters are calculated using pedotransfer functions. The hydraulic behavior of different sediment distributions (i.e. versions or variations of the catchment's porous body) is calculated using a numerical model developed by one of us (Caviedes-Voullième). Observation data from catchment monitoring are available for i) determining the boundary conditions (e.g., precipitation) and ii) calibrating / validating the model (catchment discharge, groundwater). The analysis of multiple sediment distribution scenarios should allow the influence of the starting conditions on the initial development of hydrological behavior to be approximately determined. We present first flow modeling results for a reference (conditioned) catchment model and variations thereof. We also give an outlook on further methodological development of our approach.
Transforming BIM to BEM: Generation of Building Geometry for the NASA Ames Sustainability Base BIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Donnell, James T.; Maile, Tobias; Rose, Cody
Typical processes of whole Building Energy simulation Model (BEM) generation are subjective, labor intensive, time intensive and error prone. Essentially, these typical processes reproduce already existing data, i.e. building models already created by the architect. Accordingly, Lawrence Berkeley National Laboratory (LBNL) developed a semi-automated process that enables reproducible conversions of Building Information Model (BIM) representations of building geometry into a format required by building energy modeling (BEM) tools. This is a generic process that may be applied to all building energy modeling tools but to date has only been used for EnergyPlus. This report describes and demonstrates each stage in the semi-automated process for building geometry using the recently constructed NASA Ames Sustainability Base throughout. This example uses ArchiCAD (Graphisoft, 2012) as the originating CAD tool and EnergyPlus as the concluding whole building energy simulation tool. It is important to note that the process is also applicable for professionals that use other CAD tools such as Revit (“Revit Architecture,” 2012) and DProfiler (Beck Technology, 2012) and can be extended to provide geometry definitions for BEM tools other than EnergyPlus. Geometry Simplification Tool (GST) was used during the NASA Ames project and was the enabling software that facilitated semi-automated data transformations. GST has now been superseded by Space Boundary Tool (SBT-1) and will be referred to as SBT-1 throughout this report. The benefits of this semi-automated process are fourfold: 1) reduce the amount of time and cost required to develop a whole building energy simulation model, 2) enable rapid generation of design alternatives, 3) improve the accuracy of BEMs and 4) result in significantly better performing buildings with significantly lower energy consumption than those created using the traditional design process, especially if the simulation model was used as a predictive benchmark during operation. Developing BIM based criteria to support the semi-automated process should result in significant reliable improvements and time savings in the development of BEMs. In order to define successful BIMs, CAD export of IFC based BIMs for BEM must adhere to a standard Model View Definition (MVD) for simulation as provided by the concept design BIM MVD (buildingSMART, 2011). In order to ensure wide scale adoption, companies would also need to develop their own material libraries to support automated activities and undertake a pilot project to improve understanding of modeling conventions and design tool features and limitations.
A kinetic model for stress generation in thin films grown from energetic vapor fluxes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chason, E.; Karlson, M.; Colin, J. J.
We have developed a kinetic model for residual stress generation in thin films grown from energetic vapor fluxes, encountered, e.g., during sputter deposition. The new analytical model considers sub-surface point defects created by atomic peening, along with processes treated in already existing stress models for non-energetic deposition, i.e., thermally activated diffusion processes at the surface and the grain boundary. According to the new model, ballistically induced sub-surface defects can get incorporated as excess atoms at the grain boundary, remain trapped in the bulk, or annihilate at the free surface, resulting in a complex dependence of the steady-state stress on the grain size, the growth rate, as well as the energetics of the incoming particle flux. We compare calculations from the model with in situ stress measurements performed on a series of Mo films sputter-deposited at different conditions and having different grain sizes. The model is able to reproduce the observed increase of compressive stress with increasing growth rate, behavior that is the opposite of what is typically seen under non-energetic growth conditions. On a grander scale, this study is a step towards obtaining a comprehensive understanding of stress generation and evolution in vapor deposited polycrystalline thin films.
Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan
2005-08-01
This paper addresses an important issue raised for the clinical relevance of Computer-Assisted Surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform related to the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computer tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).
Stochastic Modeling of Airlines' Scheduled Services Revenue
NASA Technical Reports Server (NTRS)
Hamed, M. M.
1999-01-01
Airlines' revenue generated from scheduled services account for the major share in the total revenue. As such, predicting airlines' total scheduled services revenue is of great importance both to the governments (in case of national airlines) and private airlines. This importance stems from the need to formulate future airline strategic management policies, determine government subsidy levels, and formulate governmental air transportation policies. The prediction of the airlines' total scheduled services revenue is dealt with in this paper. Four key components of airline's scheduled services are considered. These include revenues generated from passenger, cargo, mail, and excess baggage. By addressing the revenue generated from each schedule service separately, air transportation planners and designers are able to enhance their ability to formulate specific strategies for each component. Estimation results clearly indicate that the four stochastic processes (scheduled services components) are represented by different Box-Jenkins ARIMA models. The results demonstrate the appropriateness of the developed models and their ability to provide air transportation planners with future information vital to the planning and design processes.
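A minimal sketch of the component-wise Box-Jenkins approach, using synthetic series in place of the airline data and an arbitrary ARIMA(1,1,1) order rather than the orders identified in the paper, could look like this (statsmodels is assumed to be available):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n_years = 25

# Synthetic stand-ins for the four revenue components (passenger, cargo,
# mail, excess baggage); the real study used observed airline series.
components = {
    "passenger": 100 + np.cumsum(rng.normal(5.0, 3.0, n_years)),
    "cargo":      20 + np.cumsum(rng.normal(1.0, 1.0, n_years)),
    "mail":        5 + np.cumsum(rng.normal(0.2, 0.3, n_years)),
    "baggage":     2 + np.cumsum(rng.normal(0.1, 0.2, n_years)),
}

total_forecast = np.zeros(5)
for name, series in components.items():
    # One Box-Jenkins model per component; the (1, 1, 1) order is an
    # illustrative choice, not the orders identified in the paper.
    fit = ARIMA(series, order=(1, 1, 1)).fit()
    total_forecast += fit.forecast(steps=5)

print("five-year-ahead total scheduled-services revenue:", total_forecast)
```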
3D molecular models of whole HIV-1 virions generated with cellPACK
Goodsell, David S.; Autin, Ludovic; Forli, Stefano; Sanner, Michel F.; Olson, Arthur J.
2014-01-01
As knowledge of individual biological processes grows, it becomes increasingly useful to frame new findings within their larger biological contexts in order to generate new systems-scale hypotheses. This report highlights two major iterations of a whole virus model of HIV-1, generated with the cellPACK software. cellPACK integrates structural and systems biology data with packing algorithms to assemble comprehensive 3D models of cell-scale structures in molecular detail. This report describes the biological data, modeling parameters and cellPACK methods used to specify and construct editable models for HIV-1. Anticipating that cellPACK interfaces under development will enable researchers from diverse backgrounds to critique and improve the biological models, we discuss how cellPACK can be used as a framework to unify different types of data across all scales of biology. PMID:25253262
Generating unstructured nuclear reactor core meshes in parallel
Jain, Rajeev; Tautges, Timothy J.
2014-10-24
Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Korean MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.
Theory of agent-based market models with controlled levels of greed and anxiety
NASA Astrophysics Data System (ADS)
Papadopoulos, P.; Coolen, A. C. C.
2010-01-01
We use generating functional analysis to study minority-game-type market models with generalized strategy valuation updates that control the psychology of agents' actions. The agents' choice between trend-following and contrarian trading, and their vigor in each, depends on the overall state of the market. Even in 'fake history' models, the theory now involves an effective overall bid process (coupled to the effective agent process) which can exhibit profound remanence effects and new phase transitions. For some models the bid process can be solved directly, others require Maxwell-construction-type approximations.
An engineering approach to automatic programming
NASA Technical Reports Server (NTRS)
Rubin, Stuart H.
1990-01-01
An exploratory study of the automatic generation and optimization of symbolic programs using DECOM, a prototypical requirement specification model implemented in pure LISP, was undertaken. It was concluded, on the basis of this study, that symbolic processing languages such as LISP can support a style of programming based upon formal transformation and dependent upon the expression of constraints in an object-oriented environment. Such languages can represent all aspects of the software generation process (including heuristic algorithms for effecting parallel search) as dynamic processes since data and program are represented in a uniform format.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
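A toy version of the parametric, simulation-based likelihood placed inside a conventional Metropolis-Hastings sampler is sketched below. The one-parameter Poisson "simulator", the single summary statistic, the flat prior and all tuning values are stand-ins; FORMIND itself and the paper's summary statistics are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(theta, n=200):
    """Toy stochastic simulator standing in for the forest model: it returns a
    single summary statistic (here, a mean count) for a parameter theta."""
    return rng.poisson(theta, size=n).mean()

observed = 12.3                 # "field" summary statistic (made up)

def approx_loglik(theta, n_reps=50):
    """Parametric likelihood approximation: fit a normal to the simulated
    summary statistics and evaluate the observed statistic under it."""
    sims = np.array([simulate(theta) for _ in range(n_reps)])
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-9
    return -0.5 * ((observed - mu) / sd) ** 2 - np.log(sd)

# Plain Metropolis-Hastings over theta using the simulated likelihood.
theta, ll = 8.0, approx_loglik(8.0)
chain = []
for _ in range(500):
    prop = theta + rng.normal(0.0, 0.5)
    if prop > 0:
        ll_prop = approx_loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:    # flat prior assumed
            theta, ll = prop, ll_prop
    chain.append(theta)

print(f"posterior mean of theta is roughly {np.mean(chain[100:]):.2f}")
```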
Atmospheric, Climatic, and Environmental Research
NASA Technical Reports Server (NTRS)
Broecker, Wallace S.; Gornitz, Vivien M.
1994-01-01
The climate and atmospheric modeling project involves analysis of basic climate processes, with special emphasis on studies of the atmospheric CO2 and H2O source/sink budgets and studies of the climatic role of CO2, trace gases and aerosols. These studies are carried out based in part on the use of simplified climate models and climate process models developed at GISS. The principal models currently employed are a variable resolution 3-D general circulation model (GCM), and an associated "tracer" model which simulates the advection of trace constituents using the winds generated by the GCM.
Exact solutions for network rewiring models
NASA Astrophysics Data System (ADS)
Evans, T. S.
2007-03-01
Evolving networks with a constant number of edges may be modelled using a rewiring process. These models are used to describe many real-world processes including the evolution of cultural artifacts such as family names, the evolution of gene variations, and the popularity of strategies in simple econophysics models such as the minority game. The model is closely related to urn models used for glasses, quantum gravity and wealth distributions. The full mean field equation for the degree distribution is found, and its exact solution and generating-function solution are given.
9th Annual Science and Engineering Technology Conference
2008-04-17
[Fragmentary conference-presentation text; recoverable topics include disks and composite technology, titanium aluminide processing, microstructure and properties, integrated materials and process models, and atomic layer deposition hermetic coatings for electronic components, with notes on establishing process capability, production specifications, and the supply chain.]
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
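The paper's spectral window selection and stacking algorithms are not specified in the abstract; the sketch below only illustrates the general idea on synthetic spectra: fit several PLS models on randomly selected wavelength windows and combine (stack) their predictions, here by simple averaging rather than the paper's combination rule.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

# Synthetic "spectra": 80 batches x 500 wavelengths, with the analyte
# concentration influencing two narrow spectral regions.
n_samples, n_wl = 80, 500
conc = rng.uniform(0.0, 10.0, n_samples)
spectra = rng.normal(0.0, 0.2, (n_samples, n_wl))
spectra[:, 120:140] += conc[:, None] * 0.05
spectra[:, 300:320] += conc[:, None] * 0.03

def random_windows(n_windows=3, width=25):
    """Pick a few contiguous wavelength windows (a stand-in for the SWS step)."""
    starts = rng.integers(0, n_wl - width, n_windows)
    cols = np.concatenate([np.arange(s, s + width) for s in starts])
    return np.unique(cols)

# Build several window-based PLS models and stack them by averaging.
train, test = slice(0, 60), slice(60, None)
preds = []
for _ in range(10):
    cols = random_windows()
    pls = PLSRegression(n_components=3)
    pls.fit(spectra[train][:, cols], conc[train])
    preds.append(pls.predict(spectra[test][:, cols]).ravel())

stacked = np.mean(preds, axis=0)
rmse = np.sqrt(np.mean((stacked - conc[test]) ** 2))
print(f"stacked-model RMSE: {rmse:.2f}")
```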
Regional stochastic generation of streamflows using an ARIMA (1,0,1) process and disaggregation
Armbruster, Jeffrey T.
1979-01-01
An ARIMA (1,0,1) model was calibrated and used to generate long annual flow sequences at three sites in the Juniata River basin, Pennsylvania. The model preserves the mean, variance, and cross correlations of the observed station data. In addition, it has a desirable blend of both high and low frequency characteristics and therefore is capable of preserving the Hurst coefficient, h. The generated annual flows are disaggregated into monthly sequences using a modification of the Valencia-Schaake model. The low-flow frequency and flow duration characteristics of the generated monthly flows, with length equal to the historical data, compare favorably with the historical data. Once the models were verified, 100-year sequences were generated and analyzed for their low flow characteristics. One-, three-, and six-month low-flow frequencies at recurrence intervals greater than 10 years are generally found to be lower than those computed from the historical flows. A method is proposed for synthesizing flows at ungaged sites. (Kosco-USGS)
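A minimal numpy sketch of the generation-plus-disaggregation idea is given below; the ARMA(1,1) parameters, flow statistics and fixed monthly fractions are placeholders rather than the calibrated Juniata River values, and the paper's modified Valencia-Schaake disaggregation (which also preserves cross- and serial correlations) is replaced by simple fixed fractions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Generate a long synthetic annual-flow sequence from an ARMA(1,1)
# (= ARIMA(1,0,1)) process, then disaggregate each year into months.
phi, theta_ma, mu, sd = 0.6, 0.3, 1000.0, 200.0   # AR, MA, mean, innovation sd
n_years = 100

flows = np.empty(n_years)
prev_dev, prev_eps = 0.0, 0.0
for t in range(n_years):
    eps = rng.normal(0.0, sd)
    dev = phi * prev_dev + eps + theta_ma * prev_eps
    flows[t] = mu + dev
    prev_dev, prev_eps = dev, eps

# Simple deterministic disaggregation into monthly values.
monthly_frac = np.array([4, 6, 10, 14, 13, 10, 7, 5, 5, 7, 9, 10]) / 100.0
monthly = np.outer(flows, monthly_frac)

print("annual mean %.0f, driest simulated month %.0f" %
      (flows.mean(), monthly.min()))
```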
Center for the Study of Rhythmic Processes.
1987-10-20
[Fragmentary report-documentation text; recoverable keywords include central pattern generators, neural networks, spinal cord, mathematical modeling, neuromodulators, regeneration, and sensory feedback. Cited references include a Trends in Neurosciences article (9: 432-437) and Marder, E. (1987), Neurotransmitters and neuromodulators, in Selverston, A.I. and Moulins, M. The report also notes results on the effects of neuromodulators on the output of the lobster stomatogastric central pattern generator.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, W.S.
Progress during the period includes completion of the SNAP 7C system tests, completion of safety analysis for the SNAP 7A and C systems, assembly and initial testing of SNAP 7A, assembly of a modified reliability model, and assembly of a 10-W generator. Other activities include completion of thermal and safety analyses for SNAP 7B and D generators and fuel processing for these generators. (J.R.D.)
[Vitamin K3-induced activation of molecular oxygen in glioma cells].
Krylova, N G; Kulagova, T A; Semenkova, G N; Cherenkevich, S N
2009-01-01
It has been shown by fluorescence analysis that the rate of hydrogen peroxide generation in human U251 glioma cells differs under the effect of lipophilic (menadione) or hydrophilic (vikasol) analogues of vitamin K3. From the experimental data we conclude that menadione undergoes one- and two-electron reduction by intracellular reductases in glioma cells. Reduced forms of menadione interact with molecular oxygen, leading to reactive oxygen species (ROS) generation. A theoretical model of ROS generation, including two competing processes of one- and two-electron reduction of menadione, has been proposed. Rate constants of ROS generation mediated by the one-electron reduction process have been estimated.
ERIC Educational Resources Information Center
Parks, Melissa
2014-01-01
Model-eliciting activities (MEAs) are not new to those in engineering or mathematics, but they were new to Melissa Parks. Model-eliciting activities are simulated real-world problems that integrate engineering, mathematical, and scientific thinking as students find solutions for specific scenarios. During this process, students generate solutions…
Two stochastic models useful in petroleum exploration
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1972-01-01
A model of the petroleum exploration process is proposed that empirically tests the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of the areal extent, geographic location, and shape of oil deposits.
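The "sampling without replacement with probability proportional to size" hypothesis for the discovery process can be sketched directly; in the hedged example below the lognormal deposit-size population and all parameter values are illustrative assumptions, not estimates from any basin.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discovery as sampling without replacement, with probability proportional to
# deposit size: larger deposits tend to be found earlier.
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=200)   # deposit sizes

remaining = list(range(sizes.size))
discovery_order = []
while remaining:
    w = sizes[remaining]
    pick = rng.choice(len(remaining), p=w / w.sum())
    discovery_order.append(remaining.pop(pick))

first10 = sizes[discovery_order[:10]].mean()
last10 = sizes[discovery_order[-10:]].mean()
print(f"mean size of first 10 discoveries: {first10:.1f}, last 10: {last10:.1f}")
```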
Nonparametric Bayesian models through probit stick-breaking processes.
Rodríguez, Abel; Dunson, David B
2011-03-01
We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.
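A truncated sketch of the probit stick-breaking construction is shown below: each stick proportion is the probit (normal CDF) transform of a normal variate, and the weights follow the usual stick-breaking recursion. The truncation level, base measure and parameter values are illustrative, not those used in the applications of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

def probit_stick_breaking_weights(n_atoms=25, mu=0.0):
    """Stick-breaking weights where each stick proportion is a probit
    transform of a normal variate (a truncated sketch of the prior)."""
    z = rng.normal(mu, 1.0, n_atoms)
    v = norm.cdf(z)                       # probit transformation: v_k in (0, 1)
    stick = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * stick                      # w_k = v_k * prod_{j<k} (1 - v_j)

w = probit_stick_breaking_weights()
atoms = rng.normal(0.0, 3.0, w.size)      # atoms of the discrete random measure
sample = rng.choice(atoms, size=5, p=w / w.sum())
print(w[:5], sample)
```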
DEVS Unified Process for Web-Centric Development and Testing of System of Systems
2008-05-20
[Fragmentary report text; recoverable content: methodologies have been developed to generate DEVS models from BPMN/BPEL-based and message-based requirement specifications. Among the specification styles listed are (3) BPMN/BPEL-based system specifications, where Business Process Modeling Notation (BPMN) or Business Process Execution Language (BPEL) information is stored in .wsdl and .bpel files for BPEL but in a proprietary format for BPMN, and (4) DoDAF-based requirement specifications.]
Katriel, G.; Yaari, R.; Huppert, A.; Roll, U.; Stone, L.
2011-01-01
This paper presents new computational and modelling tools for studying the dynamics of an epidemic in its initial stages that use both available incidence time series and data describing the population's infection network structure. The work is motivated by data collected at the beginning of the H1N1 pandemic outbreak in Israel in the summer of 2009. We formulated a new discrete-time stochastic epidemic SIR (susceptible-infected-recovered) model that explicitly takes into account the disease's specific generation-time distribution and the intrinsic demographic stochasticity inherent to the infection process. Moreover, in contrast with many other modelling approaches, the model allows direct analytical derivation of estimates for the effective reproductive number (Re) and of their credible intervals, by maximum likelihood and Bayesian methods. The basic model can be extended to include age–class structure, and a maximum likelihood methodology allows us to estimate the model's next-generation matrix by combining two types of data: (i) the incidence series of each age group, and (ii) infection network data that provide partial information of ‘who-infected-who’. Unlike other approaches for estimating the next-generation matrix, the method developed here does not require making a priori assumptions about the structure of the next-generation matrix. We show, using a simulation study, that even a relatively small amount of information about the infection network greatly improves the accuracy of estimation of the next-generation matrix. The method is applied in practice to estimate the next-generation matrix from the Israeli H1N1 pandemic data. The tools developed here should be of practical importance for future investigations of epidemics during their initial stages. However, they require the availability of data which represent a random sample of the real epidemic process. We discuss the conditions under which reporting rates may or may not influence our estimated quantities and the effects of bias. PMID:21247949
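A minimal sketch of a discrete-time stochastic renewal-type SIR model with an explicit generation-time distribution is given below; the reproductive number, generation-time weights, population size and seeding are illustrative values, not those estimated for the Israeli H1N1 data, and the age-class structure and next-generation-matrix estimation are omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

# New cases are Poisson with a mean set by the reproductive number, the
# generation-time distribution and the depletion of susceptibles.
R = 1.4
gen_time = np.array([0.1, 0.35, 0.3, 0.15, 0.1])   # P(generation interval = d days)
N = 100000
days = 60

incidence = np.zeros(days)
incidence[0] = 20.0
susceptible = N - incidence[0]

for t in range(1, days):
    past = incidence[max(0, t - len(gen_time)):t][::-1]   # most recent day first
    infectious_pressure = np.sum(past * gen_time[:len(past)])
    lam = R * infectious_pressure * susceptible / N
    new_cases = min(rng.poisson(lam), susceptible)
    incidence[t] = new_cases
    susceptible -= new_cases

print("epidemic peak on day", int(incidence.argmax()),
      "with", int(incidence.max()), "cases")
```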
Item Difficulty Modeling of Paragraph Comprehension Items
ERIC Educational Resources Information Center
Gorin, Joanna S.; Embretson, Susan E.
2006-01-01
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Triangle Geometry Processing for Surface Modeling and Cartesian Grid Generation
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J. (Inventor); Melton, John E. (Inventor); Berger, Marsha J. (Inventor)
2002-01-01
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
A better sequence-read simulator program for metagenomics.
Johnson, Stephen; Trost, Brett; Long, Jeffrey R; Pittet, Vanessa; Kusalik, Anthony
2014-01-01
There are many programs available for generating simulated whole-genome shotgun sequence reads. The data generated by many of these programs follow predefined models, which limits their use to the authors' original intentions. For example, many models assume that read lengths follow a uniform or normal distribution. Other programs generate models from actual sequencing data, but are limited to reads from single-genome studies. To our knowledge, there are no programs that allow a user to generate simulated data following non-parametric read-length distributions and quality profiles based on empirically-derived information from metagenomics sequencing data. We present BEAR (Better Emulation for Artificial Reads), a program that uses a machine-learning approach to generate reads with lengths and quality values that closely match empirically-derived distributions. BEAR can emulate reads from various sequencing platforms, including Illumina, 454, and Ion Torrent. BEAR requires minimal user input, as it automatically determines appropriate parameter settings from user-supplied data. BEAR also uses a unique method for deriving run-specific error rates, and extracts useful statistics from the metagenomic data itself, such as quality-error models. Many existing simulators are specific to a particular sequencing technology; however, BEAR is not restricted in this way. Because of its flexibility, BEAR is particularly useful for emulating the behaviour of technologies like Ion Torrent, for which no dedicated sequencing simulators are currently available. BEAR is also the first metagenomic sequencing simulator program that automates the process of generating abundances, which can be an arduous task. BEAR is useful for evaluating data processing tools in genomics. It has many advantages over existing comparable software, such as generating more realistic reads and being independent of sequencing technology, and has features particularly useful for metagenomics work.
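BEAR's own machine-learning models are not reproduced here; the snippet below only illustrates the underlying idea of drawing simulated read lengths from an empirically derived, non-parametric distribution instead of a uniform or normal model. The `observed_lengths` array is a synthetic stand-in for lengths parsed from a real FASTQ file.

```python
import numpy as np

rng = np.random.default_rng(10)

# Stand-in for read lengths parsed from real sequencing data.
observed_lengths = rng.choice([75, 98, 100, 150, 230, 250],
                              size=5000, p=[0.05, 0.1, 0.4, 0.2, 0.05, 0.2])

# Build the empirical (non-parametric) length distribution and sample from it.
lengths, counts = np.unique(observed_lengths, return_counts=True)
empirical_p = counts / counts.sum()

simulated_lengths = rng.choice(lengths, size=10, p=empirical_p)
print(simulated_lengths)
```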
Declarative Business Process Modelling and the Generation of ERP Systems
NASA Astrophysics Data System (ADS)
Schultz-Møller, Nicholas Poul; Hølmer, Christian; Hansen, Michael R.
We present an approach to the construction of Enterprise Resource Planning (ERP) Systems, which is based on the Resources, Events and Agents (REA) ontology. This framework deals with processes involving exchange and flow of resources in a declarative, graphically-based manner describing what the major entities are rather than how they engage in computations. We show how to develop a domain-specific language on the basis of REA, and a tool which can automatically generate running web applications. A main contribution is a proof-of-concept showing that business-domain experts can generate their own applications without worrying about implementation details.
Selective interference with image retention and generation: evidence for the workspace model.
van der Meulen, Marian; Logie, Robert H; Della Sala, Sergio
2009-08-01
We address three types of model of the relationship between working memory (WM) and long-term memory (LTM): (a) the gateway model, in which WM acts as a gateway between perceptual input and LTM; (b) the unitary model, in which WM is seen as the currently activated areas of LTM; and (c) the workspace model, in which perceptual input activates LTM, and WM acts as a separate workspace for processing and temporary retention of these activated traces. Predictions of these models were tested, focusing on visuospatial working memory and using dual-task methodology to combine two main tasks (visual short-term retention and image generation) with two interference tasks (irrelevant pictures and spatial tapping). The pictures selectively disrupted performance on the generation task, whereas the tapping selectively interfered with the retention task. Results are consistent with the predictions of the workspace model.
USING MM5 VERSION 2 WITH CMAQ AND MODELS-3, A USER'S GUIDE AND TUTORIAL
Meteorological data are important in many of the processes simulated in the Community Multi-Scale Air Quality (CMAQ) model and the Models-3 framework. The first meteorology model that has been selected and evaluated with CMAQ is the Fifth-Generation Pennsylvania State University...
NASA Astrophysics Data System (ADS)
Danáčová, Michaela; Valent, Peter; Výleta, Roman
2017-12-01
Nowadays, rainfall simulators are being used by many researchers in field or laboratory experiments. The main objective of most of these experiments is to better understand the underlying runoff generation processes, and to use the results in the process of calibration and validation of hydrological models. Many research groups have assembled their own rainfall simulators, which comply with their understanding of rainfall processes and the requirements of their experiments. Most often, the existing rainfall simulators differ mainly in the size of the irrigated area and the way they generate rain drops. They can be characterized by the accuracy with which they produce a rainfall of a given intensity, the size of the irrigated area, and the rain drop generating mechanism. Rainfall simulation experiments can provide valuable information about the genesis of surface runoff, infiltration of water into soil and rainfall erodibility. Apart from the impact of physical properties of soil, its moisture and compaction on the generation of surface runoff and the amount of eroded particles, some studies also investigate the impact of vegetation cover of the whole area of interest. In this study, the rainfall simulator was used to simulate the impact of the slope gradient of the irrigated area on the amount of generated runoff and sediment yield. In order to eliminate the impact of external factors and to improve the reproducibility of the initial conditions, the experiments were conducted in laboratory conditions. The laboratory experiments were carried out using a commercial rainfall simulator, which was connected to an external peristaltic pump. The pump maintained a constant and adjustable inflow of water, which made it possible to exceed the maximum simulated precipitation volume of 2.3 l imposed by the construction of the rainfall simulator, while maintaining constant characteristics of the simulated precipitation. In this study a 12-minute rainfall with a constant intensity of 5 mm/min was used to irrigate a disturbed soil sample. The experiment was undertaken for several different slopes, under the condition of no vegetation cover. The results of the rainfall simulation experiment confirmed the expected strong relationship between the slope gradient and the amount of surface runoff generated. The experiments with higher slope gradients were characterised by larger volumes of surface runoff generated, and by shorter times after which it occurred. The experiments with rainfall simulators in both laboratory and field conditions play an important role in better understanding of runoff generation processes. The results of such small scale experiments could be used to estimate some of the parameters of complex hydrological models, which are used to model rainfall-runoff and erosion processes at catchment scale.
System dynamics model for predicting floods from snowmelt in North American prairie watersheds
NASA Astrophysics Data System (ADS)
Li, L.; Simonovic, S. P.
2002-09-01
This study uses a system dynamics approach to explore hydrological processes in geographic locations where the main contribution to flooding comes from snowmelt. Temperature is identified as a critical factor that affects watershed hydrological processes. Based on the dynamic processes of the hydrologic cycle occurring in a watershed, the feedback relationships linking the watershed structure, as well as the climate factors, to streamflow generation were identified prior to the development of a system dynamics model. The model is used to simulate flood patterns generated by snowmelt under temperature change in the spring. The model structure captures a vertical water balance using five tanks representing snow, interception, surface, subsurface and groundwater storage. Calibration and verification results show that temperature change and snowmelt play a key role in flood generation. Results indicate that simulated values match observed data very well. The goodness-of-fit between simulated and observed peak flow data is measured using the coefficient of efficiency, the coefficient of determination and the square of the residual mass curve coefficient. For the Assiniboine River all three measures were in the interval between 0.92 and 0.96, and for the Red River between 0.89 and 0.97. The model is capable of capturing the essential dynamics of streamflow formation. Model input requires a set of initial values for all state variables and time series of daily temperature and precipitation information. Data from the Red River Basin, shared by Canada and the USA, are used in model development and testing.
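As a rough illustration of the goodness-of-fit statistics named above, the sketch below (an assumed form, not the authors' code) computes the coefficient of efficiency (Nash-Sutcliffe) and the coefficient of determination for a hypothetical pair of simulated and observed peak-flow series; the function names and numbers are placeholders.

```python
# Minimal sketch: goodness-of-fit statistics for simulated vs. observed peak flows.
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Coefficient of efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def coefficient_of_determination(observed, simulated):
    """Squared Pearson correlation between observed and simulated flows."""
    return np.corrcoef(observed, simulated)[0, 1] ** 2

# Hypothetical peak flows (m^3/s), for illustration only.
obs = np.array([310.0, 455.0, 120.0, 980.0, 640.0])
sim = np.array([295.0, 470.0, 140.0, 940.0, 655.0])
print(nash_sutcliffe(obs, sim), coefficient_of_determination(obs, sim))
```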
Failure detection system design methodology. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chow, E. Y.
1980-01-01
The design of a failure detection and identification system consists of designing a robust residual generation process and a high performance decision making process. The design of these two processes is examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.
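The parity-space idea behind residual generation can be illustrated with a minimal numerical sketch. The toy case below (three redundant sensors of one scalar state) is an assumption-laden illustration, not the thesis design: residuals are formed from the left null space of the observation matrix so that, ideally, they vanish for any true state and respond only to sensor faults and noise.

```python
# Minimal parity-space sketch: residuals r = V @ y with V spanning the left null space of C.
import numpy as np
from scipy.linalg import null_space

C = np.array([[1.0], [1.0], [1.0]])      # three redundant sensors of one scalar state
V = null_space(C.T).T                    # parity matrix: V @ C == 0

x = 5.0                                   # hypothetical true state
fault = np.array([0.0, 0.0, 2.0])         # bias fault on sensor 3
y = C[:, 0] * x + fault + 0.01 * np.random.randn(3)

residual = V @ y                          # insensitive to x, driven by the fault
print(residual)
```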
Woods, J
2001-01-01
The third generation cardiac institute will build on the successes of the past in structuring the service line, re-organizing to assimilate specialist interests, and re-positioning to expand cardiac services into cardiovascular services. To meet the challenges of an increasingly competitive marketplace and complex delivery system, the focus for this new model will shift away from improved structures, and toward improved processes. This shift will require a sound methodology for statistically measuring and sustaining process changes related to the optimization of cardiovascular care. In recent years, GE Medical Systems has successfully applied Six Sigma methodologies to enable cardiac centers to control key clinical and market development processes through its DMADV, DMAIC and Change Acceleration processes. Data indicates Six Sigma is having a positive impact within organizations across the United States, and when appropriately implemented, this approach can serve as a solid foundation for building the next generation cardiac institute.
Simulation of generation of new ideas for new product development and IT services
NASA Astrophysics Data System (ADS)
Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda
2015-02-01
This paper describes a dynamic model of the New Product Development (NPD) process. The model is derived from best practice observed in our research, conducted across a range of situations. It helps to identify an IT company's NPD activities and to place them within the frame of the overall NPD process [1]. It has been found to be a useful tool for organizing data on an IT company's NPD activities without enforcing an excessively restrictive research methodology on the NPD model. The framework that underpins the model will support research into the methods undertaken within an IT company's NPD process, thus promoting understanding and improvement of the simulation process [2]. IT companies have tested many techniques and several different practices designed to improve the validity and efficacy of their NPD process [3]. Supported by the model, this research examines how widely accepted the stated tactics are and what impact these tactics have on NPD performance. The main assumption of this study is that simulation of the generation of new ideas [4] will lead to greater NPD effectiveness and more successful products in IT companies. With the model implementation, practices concerning NPD implementation strategies (product selection, objectives, leadership, marketing strategy and customer satisfaction) are all more widely accepted than practices related to controlling the application of NPD (process control, measurements, results). In linking simulation with impact, our results indicate that product success depends on developing strong products and ensuring organizational emphasis through proper project selection. Project activities strengthen both product and project success. The success of IT products and services also depends on monitoring the NPD procedure through project management and on ensuring team consistency through group rewards. Sharing experiences between projects can positively influence the NPD process.
Nanoparticle transport and delivery in a heterogeneous pulmonary vasculature.
Sohrabi, Salman; Wang, Shunqiang; Tan, Jifu; Xu, Jiang; Yang, Jie; Liu, Yaling
2017-01-04
Quantitative understanding of nanoparticle delivery in complex vascular networks is very challenging because it involves an interplay of transport, hydrodynamic forces, and multivalent interactions across different scales. The heterogeneous pulmonary network includes up to 16 generations of vessels in its arterial tree. Modeling the complete pulmonary vascular system in 3D is computationally unrealistic. To save computational cost, a model reconstructed from MRI-scanned images is cut into an arbitrary pathway consisting of the upper 4 generations. The remaining generations are represented by an artificially rebuilt pathway. Physiological data such as branch information and a connectivity matrix are used for geometry reconstruction. A lumped model is used to represent the flow resistance of the branches that are cut off from the truncated pathway. Moreover, since the nanoparticle binding process is stochastic in nature, a binding probability function is used to simplify the carrier attachment and detachment processes. The stitched realistic and artificial geometries, coupled with the lumped model at the unresolved outlets, are used to resolve the flow field within the truncated arterial tree. Then, the biodistribution of 200 nm, 700 nm and 2 µm particles at different vessel generations is studied. Finally, 0.2-0.5% nanocarrier deposition is predicted during a single passage of drug carriers through the pulmonary vascular tree. Our truncated approach enabled us to efficiently model hemodynamics, and accordingly particle distribution, in a complex 3D vasculature, providing a simple yet efficient predictive tool to study drug delivery at the organ level.
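A binding probability function of the kind mentioned above can be sketched as a per-time-step Bernoulli attachment/detachment process. The snippet below is only an illustrative stand-in with made-up probabilities, not the paper's model.

```python
# Minimal sketch: stochastic carrier attachment/detachment and the deposited fraction.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 100_000, 50
p_attach, p_detach = 0.0002, 0.05        # hypothetical per-step probabilities

bound = np.zeros(n_particles, dtype=bool)
for _ in range(n_steps):
    attach = (~bound) & (rng.random(n_particles) < p_attach)
    detach = bound & (rng.random(n_particles) < p_detach)
    bound = (bound | attach) & ~detach

print("deposited fraction: %.3f%%" % (100.0 * bound.mean()))
```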
Moral judgment as information processing: an integrative review.
Guglielmo, Steve
2015-01-01
How do humans make moral judgments about others' behavior? This article reviews dominant models of moral judgment, organizing them within an overarching framework of information processing. This framework poses two distinct questions: (1) What input information guides moral judgments? and (2) What psychological processes generate these judgments? Information Models address the first question, identifying critical information elements (including causality, intentionality, and mental states) that shape moral judgments. A subclass of Biased Information Models holds that perceptions of these information elements are themselves driven by prior moral judgments. Processing Models address the second question, and existing models have focused on the relative contribution of intuitive versus deliberative processes. This review organizes existing moral judgment models within this framework and critically evaluates them on empirical and theoretical grounds; it then outlines a general integrative model grounded in information processing, and concludes with conceptual and methodological suggestions for future research. The information-processing framework provides a useful theoretical lens through which to organize extant and future work in the rapidly growing field of moral judgment.
NASA Astrophysics Data System (ADS)
Zhu, X. A.; Tsai, C. T.
2000-09-01
Dislocations in gallium arsenide (GaAs) crystals are generated by excessive thermal stresses induced during the crystal growth process. The presence of dislocations has adverse effects on the performance and reliability of the GaAs-based devices. It is well known that dislocation density can be significantly reduced by doping impurity atoms into a GaAs crystal during its growth process. A viscoplastic constitutive equation that couples the microscopic dislocation density with the macroscopic plastic deformation is employed in a crystallographic finite element model for calculating the dislocation density generated in the GaAs crystal during its growth process. The dislocation density is considered as an internal state variable and the drag stress caused by doping impurity is included in this constitutive equation. A GaAs crystal grown by the vertical Bridgman process is adopted as an example to study the influences of doping impurity and growth orientation on dislocation generation. The calculated results show that doping impurity can significantly reduce the dislocation density generated in the crystal. The level of reduction is also influenced by the growth orientation during the crystal growth process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharyya, D.; Turton, R.; Zitney, S.
In this presentation, the development of a plant-wide dynamic model of an advanced Integrated Gasification Combined Cycle (IGCC) plant with CO2 capture will be discussed. The IGCC reference plant generates 640 MWe of net power using Illinois No. 6 coal as the feed. The plant includes an entrained, downflow, General Electric Energy (GEE) gasifier with a radiant syngas cooler (RSC), a two-stage water gas shift (WGS) conversion process, and two advanced 'F' class combustion turbines partially integrated with an elevated-pressure air separation unit (ASU). A subcritical steam cycle is considered for heat recovery steam generation. Syngas is selectively cleaned by a SELEXOL acid gas removal (AGR) process. Sulfur is recovered using a two-train Claus unit with tail gas recycle to the AGR. A multistage intercooled compressor is used for compressing CO2 to the pressure required for sequestration. The plant-wide steady-state and dynamic IGCC simulations have been generated using the Aspen Plus® and Aspen Plus Dynamics® process simulators, respectively. The model is based on the Case 2 IGCC configuration detailed in the study available on the NETL website. The GEE gasifier is represented with a restricted equilibrium reactor model in which the temperature approach to equilibrium for individual reactions can be modified based on experimental data. In this radiant-only configuration, the syngas from the RSC is quenched in a scrubber. The blackwater from the scrubber bottom is further cleaned in the blackwater treatment plant. The cleaned water is returned to the scrubber and also used for slurry preparation. The acid gas from the sour water stripper (SWS) is sent to the Claus plant. The syngas from the scrubber passes through a sour shift process. The WGS reactors are modeled as adiabatic plug flow reactors with rigorous kinetics based on the mid-life activity of the shift catalyst. The SELEXOL unit consists of the H2S and CO2 absorbers, which are designed to meet the stringent environmental limits and the requirements of other associated units. The model also considers the stripper for recovering H2S, which is sent as a feed to a split-flow Claus unit. The tail gas from the Claus unit is recycled to the SELEXOL unit. The cleaned syngas is sent to the GE 7FB gas turbine. This turbine is modeled as per published data in the literature. Diluent N2 from the elevated-pressure ASU is used for reducing NOx formation. The heat recovery steam generator (HRSG) is modeled by considering generation of high-pressure, intermediate-pressure, and low-pressure steam. All of the vessels, reactors, heat exchangers, and columns have been sized. The basic IGCC process control structure has been synthesized following standard guidelines and existing practices. The steady-state simulation is solved in sequential-modular mode in Aspen Plus and consists of more than 300 unit operations, 33 design specs, and 16 calculator blocks. The equation-oriented dynamic simulation consists of more than 100,000 equations solved using a multi-step Gear's integrator in Aspen Plus Dynamics. The challenges faced in solving the dynamic model and key transient results from this dynamic model will also be discussed.
Integrated process modeling for the laser inertial fusion energy (LIFE) generation system
NASA Astrophysics Data System (ADS)
Meier, W. R.; Anklam, T. M.; Erlandson, A. C.; Miles, R. R.; Simon, A. J.; Sawicki, R.; Storm, E.
2010-08-01
A concept for a new fusion-fission hybrid technology is being developed at Lawrence Livermore National Laboratory. The primary application of this technology is base-load electrical power generation. However, variants of the baseline technology can be used to "burn" spent nuclear fuel from light water reactors or to perform selective transmutation of problematic fission products. The use of a fusion driver allows very high burn-up of the fission fuel, limited only by the radiation resistance of the fuel form and system structures. As a part of this process, integrated process models have been developed to aid in concept definition. Several models have been developed. A cost scaling model allows quick assessment of design changes or technology improvements on cost of electricity. System design models are being used to better understand system interactions and to do design trade-off and optimization studies. Here we describe the different systems models and present systems analysis results. Different market entry strategies are discussed along with potential benefits to US energy security and nuclear waste disposal. Advanced technology options are evaluated and potential benefits from additional R&D targeted at the different options is quantified.
Ferguson, Christobel M; Croke, Barry F W; Beatson, Peter J; Ashbolt, Nicholas J; Deere, Daniel A
2007-06-01
In drinking water catchments, reduction of pathogen loads delivered to reservoirs is an important priority for the management of raw source water quality. To assist with the evaluation of management options, a process-based mathematical model (pathogen catchment budgets - PCB) is developed to predict Cryptosporidium, Giardia and E. coli loads generated within and exported from drinking water catchments. The model quantifies the key processes affecting the generation and transport of microorganisms from humans and animals using land use and flow data, and catchment specific information including point sources such as sewage treatment plants and on-site systems. The resultant pathogen catchment budgets (PCB) can be used to prioritize the implementation of control measures for the reduction of pathogen risks to drinking water. The model is applied in the Wingecarribee catchment and used to rank those sub-catchments that would contribute the highest pathogen loads in dry weather, and in intermediate and large wet weather events. A sensitivity analysis of the model identifies that pathogen excretion rates from animals and humans, and manure mobilization rates are significant factors determining the output of the model and thus warrant further investigation.
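The budget logic described above can be illustrated with a toy calculation in which each sub-catchment's load is the sum of diffuse sources (animal density x area x excretion rate x mobilisation fraction) and point sources such as sewage treatment plants. All numbers, species, and sub-catchment names below are hypothetical, not values from the published PCB model.

```python
# Illustrative catchment budget sketch: diffuse plus point-source loads, then ranking.
catchments = {
    # name: (area_ha, {animal: (density_per_ha, organisms_per_animal_per_day)}, point_source_per_day)
    "Upper": (12_000, {"cattle": (0.8, 1e6), "sheep": (2.0, 5e4)}, 0.0),
    "Mid":   (8_000,  {"cattle": (0.3, 1e6)},                     1e8),   # includes an STP
    "Lower": (5_000,  {"sheep":  (3.5, 5e4)},                     0.0),
}
mobilisation = 0.01     # hypothetical wet-weather mobilisation fraction

def daily_load(area_ha, stock, point_source):
    diffuse = sum(density * area_ha * excretion for density, excretion in stock.values())
    return diffuse * mobilisation + point_source

ranked = sorted(catchments, key=lambda c: daily_load(*catchments[c]), reverse=True)
for name in ranked:
    print(f"{name}: {daily_load(*catchments[name]):.2e} organisms/day")
```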
Developing a semantic web model for medical differential diagnosis recommendation.
Mohammed, Osama; Benlamri, Rachid
2014-10-01
In this paper we describe a novel model for differential diagnosis designed to make recommendations by utilizing semantic web technologies. The model is a response to a number of requirements, ranging from incorporating essential clinical diagnostic semantics to the integration of data mining for the process of identifying candidate diseases that best explain a set of clinical features. We introduce two major components, which we find essential to the construction of an integral differential diagnosis recommendation model: the evidence-based recommender component and the proximity-based recommender component. Both approaches are driven by disease diagnosis ontologies designed specifically to enable the process of generating diagnostic recommendations. These ontologies are the disease symptom ontology and the patient ontology. The evidence-based diagnosis process develops dynamic rules based on standardized clinical pathways. The proximity-based component employs data mining to provide clinicians with diagnosis predictions, as well as generates new diagnosis rules from provided training datasets. This article describes the integration between these two components along with the developed diagnosis ontologies to form a novel medical differential diagnosis recommendation model. This article also provides test cases from the implementation of the overall model, which shows quite promising diagnostic recommendation results.
Clement, R; Schneider, J; Brambs, H-J; Wunderlich, A; Geiger, M; Sander, F G
2004-02-01
The paper demonstrates how to generate an individual 3D volume model of a human single-rooted tooth using an automatic workflow, which can be implemented in a finite element simulation. In several computational steps, computed tomography data of patients are used to obtain the global coordinates of the tooth's surface. First, the large amount of geometric data is processed with several self-developed algorithms to achieve a significant reduction, the most important task being to retain the geometrical information of the real tooth. The second main part covers the creation of the volume model for the tooth and the periodontal ligament (PDL). This is realized with a continuous free-form surface of the tooth based on the remaining points. Generating such irregular objects for numerical use in biomechanical research normally requires enormous manual effort and time. The finite element mesh of the tooth, consisting of hexahedral elements, is composed of different materials: dentin, PDL and the surrounding alveolar bone. It is capable of simulating tooth movement in a finite element analysis and may give valuable information for a clinical approach without the restrictions of tetrahedral elements. The mesh generator of the FE software ANSYS executed the meshing process for hexahedral elements successfully.
Testolin, Alberto; Zorzi, Marco
2016-01-01
Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262
NASA Astrophysics Data System (ADS)
Li, Jiangtao; Zhao, Zheng; Li, Longjie; He, Jiaxin; Li, Chenjie; Wang, Yifeng; Su, Can
2017-09-01
A transmission line transformer has potential advantages for nanosecond pulse generation, including an excellent frequency response and no leakage inductance. The wave propagation process in the secondary mode line cannot be neglected, because of the pronounced transient electromagnetic transition inside it in this scenario. The equivalent model of the transmission line transformer is crucial for predicting the output waveform and evaluating the effects of magnetic cores on output performance. However, traditional lumped parameter models are not sufficient for nanosecond pulse generation because, by the nature of the lumped parameter assumption, they neglect wave propagation in the secondary mode lines. In this paper, a distributed parameter model of the transmission line transformer was established to investigate wave propagation in the secondary mode line and its influential factors, through theoretical analysis and experimental verification. The wave propagation discontinuity in the secondary mode line induced by the magnetic cores is emphasized. The characteristics of the magnetic core under a nanosecond pulse were obtained by experiments. The distribution and formation of the secondary mode current were determined to reveal the essential wave propagation processes in the secondary mode lines. The output waveform and efficiency were found to be affected dramatically by the wave propagation discontinuity in the secondary mode lines induced by the magnetic cores. The proposed distributed parameter model was proved more suitable for nanosecond pulse generation in terms of secondary mode current, output efficiency, and output waveform. Through this research, an in-depth comprehension of the underlying mechanisms and a broader view of the working principle of the transmission line transformer for nanosecond pulse generation can be obtained.
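To make the contrast with a lumped model concrete, the following sketch integrates the lossless telegrapher's equations on a single line section with a leapfrog (FDTD) scheme, showing the distributed wave propagation that a lumped-parameter model cannot represent. The per-metre parameters, pulse shape, and boundary treatment are assumptions for illustration, not values from the paper.

```python
# Minimal distributed-parameter sketch: leapfrog FDTD of a lossless transmission line.
import numpy as np

nx, nt = 400, 800
L_per_m, C_per_m = 2.5e-7, 1.0e-10           # hypothetical per-metre inductance/capacitance
dx = 0.01
v_phase = 1.0 / np.sqrt(L_per_m * C_per_m)
dt = 0.9 * dx / v_phase                       # CFL-limited time step

V = np.zeros(nx)                              # node voltages
I = np.zeros(nx - 1)                          # branch currents
for n in range(nt):
    V[0] = np.exp(-((n * dt - 5e-9) / 1.5e-9) ** 2)   # nanosecond Gaussian pulse source
    I += -(dt / (L_per_m * dx)) * (V[1:] - V[:-1])
    V[1:-1] += -(dt / (C_per_m * dx)) * (I[1:] - I[:-1])
    V[-1] = V[-2]                                      # crude end condition (not a matched load)

print("peak voltage along the line:", V.max())
```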
Filament winding cylinders. II - Validation of the process model
NASA Technical Reports Server (NTRS)
Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.
1990-01-01
Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if Simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques which enable automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
On the generation and evolution of internal solitary waves in the southern Red Sea
NASA Astrophysics Data System (ADS)
Guo, Daquan; Zhan, Peng; Kartadikaria, Aditya; Akylas, Triantaphyllos; Hoteit, Ibrahim
2015-04-01
Satellite observations recently revealed the existence of trains of internal solitary waves in the southern Red Sea between 16.0°N and 16.5°N, propagating from the centre of the domain toward the continental shelf [Da Silva et al., 2012]. Given the relatively weak tidal velocity in this area and the waves' generation in the centre of the domain, Da Silva suggested three possible mechanisms behind their generation: resonance and disintegration of interfacial tides; generation of interfacial tides by impinging, remotely generated internal tidal beams; and geometrically focused and amplified internal tidal beams. Tide analysis based on tide station data and a barotropic tide model of the Red Sea shows that the tide is indeed very weak in the central part of the Red Sea, but relatively strong in the northern and southern parts (reaching up to 66 cm/s). Together with the extremely steep slopes along the deep trench, this provides favourable conditions for the generation of internal solitary waves in the southern Red Sea. To investigate the generation mechanisms and study the evolution of the internal waves in the off-shelf region of the southern Red Sea, we have implemented a 2-D, high-resolution, non-hydrostatic configuration of the MIT general circulation model (MITgcm). Our simulations reproduce the generation process of the internal solitary waves well. Analysis of the model's output suggests that the interaction between the topography and the tidal flow, together with nonlinear effects, is the main mechanism behind the generation of the internal solitary waves. Sensitivity experiments suggest that neither the tidal beam nor the resonance effect of the topography is an important factor in this process.
Energy recovery from solid waste. [production engineering model
NASA Technical Reports Server (NTRS)
Dalton, C.; Huang, C. J.
1974-01-01
A recent group study on the problem of solid waste disposal provided a decision making model for a community to use in determining the future for its solid waste. The model is a combination of the following factors: technology, legal, social, political, economic and environmental. An assessment of local or community needs determines what form of energy recovery is desirable. A market for low pressure steam or hot water would direct a community to recover energy from solid waste by incineration to generate steam. A fuel gas could be produced by a process known as pyrolysis if there is a local market for a low heating value gaseous fuel. Solid waste can also be used directly as a fuel supplemental to coal in a steam generator. An evaluation of these various processes is made.
An integrated 3D log processing optimization system for small sawmills in central Appalachia
Wenshu Lin; Jingxin Wang
2013-01-01
An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...
Real-time simulation of the retina allowing visualization of each processing stage
NASA Astrophysics Data System (ADS)
Teeters, Jeffrey L.; Werblin, Frank S.
1991-08-01
The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina, and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output which is displayed on a TV monitor. By changing the retina cell driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina which are fed into the brain. The dynamical aspects make these patterns very different from those generated by the common DOG (Difference of Gaussian) model of receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
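A crude stand-in for the spatial and temporal stages described above is a difference-of-Gaussians (DOG) centre-surround filter followed by a frame-difference "transient cell" stage. The sketch below is illustrative only and is not the PIPE implementation; the filter widths and test image are arbitrary.

```python
# Minimal sketch: DOG centre-surround filtering plus a change-detecting transient stage.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(frame, sigma_center=1.0, sigma_surround=3.0):
    """Centre-surround (DOG) response of one frame."""
    return gaussian_filter(frame, sigma_center) - gaussian_filter(frame, sigma_surround)

def transient(prev_frame, frame):
    """Change-detecting (transient ganglion cell-like) response between frames."""
    return np.abs(dog(frame) - dog(prev_frame))

rng = np.random.default_rng(1)
frame0 = rng.random((64, 64))
frame1 = frame0.copy()
frame1[20:30, 20:30] += 0.5            # a patch that appears between frames

print("max sustained response:", dog(frame1).max())
print("max transient response:", transient(frame0, frame1).max())
```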
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
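The route-search step can be pictured as path finding in a directed reaction graph whose edge weights encode a per-step cost. The sketch below uses a tiny, entirely hypothetical network (the intermediates and costs are placeholders, not Reaxys data) to show the idea.

```python
# Illustrative route search: reactions as weighted edges, routes as cheapest paths.
import networkx as nx

G = nx.DiGraph()
# (reactant, product, assumed step cost) -- all intermediates are placeholders
reactions = [
    ("limonene", "intermediate_A", 1.0),
    ("intermediate_A", "intermediate_B", 2.0),
    ("intermediate_B", "paracetamol", 1.5),
    ("limonene", "intermediate_C", 0.5),
    ("intermediate_C", "paracetamol", 3.5),
]
for reactant, product, cost in reactions:
    G.add_edge(reactant, product, weight=cost)

route = nx.shortest_path(G, "limonene", "paracetamol", weight="weight")
print(" -> ".join(route))
```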
New generation of meteorology cameras
NASA Astrophysics Data System (ADS)
Janout, Petr; Blažek, Martin; Páta, Petr
2017-12-01
A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. The development of the new generation of weather monitoring cameras responds to the demand for monitoring of sudden weather changes. The new WILLIAM cameras are ready to process acquired image data immediately, issue warnings against sudden torrential rains, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric pressure sensors. In this paper, we present the architecture and image data processing algorithms of the monitoring camera, together with a spatially-variant model of the imaging system aberrations based on Zernike polynomials.
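For orientation, a spatially-variant aberration description of this kind can be sketched as a wavefront built from a few low-order Zernike terms whose coefficients change across the field. The terms, normalisation, and coefficient values below are illustrative assumptions, not the WILLIAM calibration.

```python
# Minimal sketch: a wavefront from a few low-order Zernike terms on the unit pupil.
import numpy as np

def zernike_terms(rho, theta):
    """A few low-order Zernike polynomials (Noll-style normalisation)."""
    return {
        "defocus": np.sqrt(3) * (2 * rho**2 - 1),
        "astig_0": np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "coma_x":  np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),
    }

def wavefront(rho, theta, coeffs):
    terms = zernike_terms(rho, theta)
    return sum(coeffs[name] * terms[name] for name in coeffs)

# Pupil grid
y, x = np.mgrid[-1:1:128j, -1:1:128j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)

# Hypothetical field-dependent coefficients (waves): larger coma off-axis.
coeffs_on_axis  = {"defocus": 0.10, "astig_0": 0.02, "coma_x": 0.00}
coeffs_off_axis = {"defocus": 0.10, "astig_0": 0.05, "coma_x": 0.20}

for label, coeffs in (("on-axis", coeffs_on_axis), ("off-axis", coeffs_off_axis)):
    W = np.where(rho <= 1, wavefront(rho, theta, coeffs), np.nan)
    print(label, "RMS wavefront error (waves):", round(float(np.nanstd(W)), 3))
```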
NASA Technical Reports Server (NTRS)
1981-01-01
The development of a coal gasification system design and mass and energy balance simulation program for the TVA and other similar facilities is described. The materials-process-product model (MPPM) and the advanced system for process engineering (ASPEN) computer program were selected from available steady state and dynamic models. The MPPM was selected to serve as the basis for development of system level design model structure because it provided the capability for process block material and energy balance and high-level systems sizing and costing. The ASPEN simulation serves as the basis for assessing detailed component models for the system design modeling program. The ASPEN components were analyzed to identify particular process blocks and data packages (physical properties) which could be extracted and used in the system design modeling program. While ASPEN physical properties calculation routines are capable of generating physical properties required for process simulation, not all required physical property data are available, and must be user-entered.
Enhanced modeling and simulation of EO/IR sensor systems
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; May, Christopher
2015-05-01
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end to end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed in NV-IPM, modeled in NV-IPM, and then seamlessly input into the wargames for operational analysis. After theoretical design, prototype sensors can be measured by using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. The measurement process to high fidelity modeling and simulation can then be repeated again and again throughout the entire life cycle of an EO/IR sensor as needed, to include LRIP, full rate production, and even after Depot Level Maintenance. This is a prototypical example of how an engineering level model and higher level simulations can share models to mutual benefit.
3D deformable organ model based liver motion tracking in ultrasound videos
NASA Astrophysics Data System (ADS)
Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong
2013-03-01
This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling where we generate a personalized 3D organ model from high quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessel and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we resolve the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to respiration phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D positions are then inferred based on respiration phase. Testing our method on real patient data, we have found the accuracy of 3D position is within 3.79mm and processing time is 5.4ms during tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ringler, Todd; Ju, Lili; Gunzburger, Max
2008-11-14
During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear, shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
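The density-driven refinement idea can be sketched with a planar analogue of the SCVT construction: a Monte Carlo Lloyd iteration in which each generator moves to the density-weighted centroid of its Voronoi region. The density function and all parameters below are arbitrary choices for illustration, not the authors' code.

```python
# Planar analogue sketch: density-weighted Lloyd iteration (Monte Carlo centroids).
import numpy as np

rng = np.random.default_rng(2)

def density(p):
    """User-defined density: refine near the point (0.8, 0.8)."""
    return 1.0 + 50.0 * np.exp(-20.0 * np.sum((p - np.array([0.8, 0.8]))**2, axis=-1))

generators = rng.random((30, 2))
samples = rng.random((50_000, 2))
weights = density(samples)

for _ in range(40):                       # Lloyd iterations
    # assign each sample to its nearest generator (the Voronoi region it falls in)
    d2 = ((samples[:, None, :] - generators[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)
    for k in range(len(generators)):
        mask = owner == k
        if mask.any():
            w = weights[mask]
            generators[k] = (samples[mask] * w[:, None]).sum(0) / w.sum()

print(generators[:5])                     # generators cluster where the density is high
```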
Level-crossing statistics of the horizontal wind speed in the planetary surface boundary layer
NASA Astrophysics Data System (ADS)
Edwards, Paul J.; Hurst, Robert B.
2001-09-01
The probability density of the times for which the horizontal wind remains above or below a given threshold speed is of some interest in the fields of renewable energy generation and pollutant dispersal. However there appear to be no analytic or conceptual models which account for the observed power law form of the distribution of these episode lengths over a range of over three decades, from a few tens of seconds to a day or more. We reanalyze high resolution wind data and demonstrate the fractal character of the point process generated by the wind speed level crossings. We simulate the fluctuating wind speed by a Markov process which approximates the characteristics of the real (non-Markovian) wind and successfully generates a power law distribution of episode lengths. However, fundamental questions concerning the physical basis for this behavior and the connection between the properties of a continuous-time stochastic process and the fractal statistics of the point process generated by its level crossings remain unanswered.
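A minimal numerical stand-in for the simulation described above is an AR(1) (Markov) surrogate for the wind speed, from which the level-crossing point process and the distribution of episode lengths above a threshold can be extracted. The persistence parameter and threshold below are assumptions, not the paper's fitted values.

```python
# Minimal sketch: AR(1) surrogate "wind speed" and the run lengths above a threshold.
import numpy as np

rng = np.random.default_rng(3)
n, phi = 200_000, 0.999                   # hypothetical persistence of the surrogate
noise = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):                     # AR(1): x_t = phi * x_{t-1} + noise_t
    x[t] = phi * x[t - 1] + noise[t]

speed = np.abs(x)                         # crude non-negative "speed" proxy
above = speed > np.median(speed)          # threshold at the median speed

# episode lengths = run lengths of consecutive samples above the threshold
changes = np.flatnonzero(np.diff(above.astype(int)))
runs = np.diff(np.concatenate(([0], changes + 1, [n])))
episode_lengths = runs[0::2] if above[0] else runs[1::2]

counts, edges = np.histogram(episode_lengths, bins=np.logspace(0, 4, 20))
print(list(zip(edges[:-1].round(1), counts)))
```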
Audio-tactile integration and the influence of musical training.
Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo
2014-01-01
Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
Animal Models of Lymphangioleiomyomatosis (LAM) and Tuberous Sclerosis Complex (TSC)
2010-01-01
Animal models of lymphangioleiomyomatosis (LAM) and tuberous sclerosis complex (TSC) are highly desired to enable detailed investigation of the pathogenesis of these diseases. Multiple rats and mice have been generated in which a mutation similar to that occurring in TSC patients is present in an allele of Tsc1 or Tsc2. Unfortunately, these mice do not develop pathologic lesions that match those seen in LAM or TSC. However, these Tsc rodent models have been useful in confirming the two-hit model of tumor development in TSC, and in providing systems in which therapeutic trials (e.g., rapamycin) can be performed. In addition, conditional alleles of both Tsc1 and Tsc2 have provided the opportunity to target loss of these genes to specific tissues and organs, to probe the in vivo function of these genes, and attempt to generate better models. Efforts to generate an authentic LAM model are impeded by a lack of understanding of the cell of origin of this process. However, ongoing studies provide hope that such a model will be generated in the coming years. PMID:20235887
Theoretical and material studies on thin-film electroluminescent devices
NASA Technical Reports Server (NTRS)
Summers, C. J.; Brennan, K. F.
1986-01-01
Electroluminescent materials and device technology were assessed. The evaluation strongly suggests the need for a comprehensive theoretical and experimental study of both materials and device structures, particularly in the following areas: carrier generation and multiplication; radiative and nonradiative processes of luminescent centers; device modeling; new device concepts; and single crystal materials growth and characterization. Modeling of transport properties of hot electrons in ZnSe and the generation of device concepts were initiated.
Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution.
Djordjevic, Ivan B
2015-08-24
Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets and determined the quantum channel model suitable for studying the quantum biological channel capacity. However, this model is essentially memoryless, and it is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the process of creation of spontaneous, induced, and adaptive mutations and their propagation in time. The different biological channel models with memory proposed in this paper include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and the evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled.
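A purely classical fragment of this picture, the propagation of a per-generation substitution matrix over many generations, can be sketched as follows. The Jukes-Cantor-like matrix and mutation rate are assumed illustrative values, not the paper's quantum models.

```python
# Minimal classical Markov sketch: base composition drift across generations.
import numpy as np

bases = ["A", "C", "G", "T"]
mu = 1e-3                                  # hypothetical per-generation substitution rate
# simple symmetric (Jukes-Cantor-like) transition matrix
P = np.full((4, 4), mu / 3.0)
np.fill_diagonal(P, 1.0 - mu)

p0 = np.array([0.40, 0.10, 0.10, 0.40])    # hypothetical initial base composition
for generations in (1, 100, 10_000):
    p = p0 @ np.linalg.matrix_power(P, generations)
    print(generations, dict(zip(bases, p.round(3))))
```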
NASA Astrophysics Data System (ADS)
Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.
2014-12-01
A Digital Elevation Model (DEM) is a key tool for analyzing spatially dependent processes, including snow accumulation on slopes or glacier mass balance. Acquiring DEMs within short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure, taken from various points of view, are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints from Ground Control Points (GCPs), or by including the (GPS) positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called Micmac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climatic environment of the Arctic region considered (Ny Alesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows are a limitation because of the lack of colour dynamics in the digital cameras we used. A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty on the point cloud's absolute position in its coordinate system. The 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.
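The georeferencing step, fitting a similarity transform from GCP correspondences, can be sketched with the standard Umeyama/Procrustes estimator; this is a generic illustration, not the Micmac implementation, and the control point coordinates are invented.

```python
# Minimal sketch: scale/rotation/translation from GCP correspondences (Umeyama fit).
import numpy as np

def similarity_from_gcps(src, dst):
    """Return scale s, rotation R, translation t with dst ~= s * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sigma, Vt = np.linalg.svd(D.T @ S / len(src))     # cross-covariance SVD
    sign = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, sign]) @ Vt
    s = (sigma * [1.0, 1.0, sign]).sum() / S.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical GCPs: same points in the SfM frame (src) and in UTM-like coordinates (dst).
rng = np.random.default_rng(4)
src = rng.random((6, 3))
true_s, true_t = 25.0, np.array([4.3e5, 8.7e6, 120.0])
dst = true_s * src + true_t                               # identity rotation for simplicity

s, R, t = similarity_from_gcps(src, dst)
print(s, t)                                               # recovers the assumed scale and shift
```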
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized multiple view stereo publicly benchmark for commercial satellite imagery has been released by the John Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore the method that can generate accurate digital surface models from a large number of high resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data to digital surface models. As a pre-procedure, we filter all the possible image pairs according to the incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After the epipolar image pairs' generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply the median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, the completeness and the robustness are evaluated. The results show, that the point cloud reconstructs the surface with small structures and the fused DSM generated by our pipeline is accurate and robust.
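The fusion step can be illustrated by stacking per-pair DSMs on a common grid and taking a per-cell median, which suppresses isolated matching blunders while reporting completeness against a reference surface. The sketch below uses synthetic data and is not the authors' pipeline.

```python
# Minimal sketch: per-cell median fusion of several noisy, gappy DSMs.
import numpy as np

rng = np.random.default_rng(5)
reference = rng.normal(100.0, 5.0, size=(200, 200))          # hypothetical reference LiDAR DSM

# Simulate several pairwise DSMs: reference + noise + occasional blunders + gaps.
dsms = []
for _ in range(8):
    dsm = reference + rng.normal(0.0, 0.5, reference.shape)
    dsm[rng.random(reference.shape) < 0.02] += 30.0           # matching blunders
    dsm[rng.random(reference.shape) < 0.15] = np.nan          # missing cells
    dsms.append(dsm)

stack = np.stack(dsms)
fused = np.nanmedian(stack, axis=0)                            # robust per-cell fusion

error = fused - reference
valid = ~np.isnan(fused)
print("completeness: %.1f%%" % (100.0 * valid.mean()))
print("median abs. error: %.2f m" % np.nanmedian(np.abs(error)))
```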
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
An Aerodynamic Simulation Process for Iced Lifting Surfaces and Associated Issues
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Vickerman, Mary B.; Hackenberg, Anthony W.; Rigby, David L.
2003-01-01
This paper discusses technologies and software tools that are being implemented in a software toolkit currently under development at NASA Glenn Research Center. Its purpose is to help study the effects of icing on airfoil performance and assist with the aerodynamic simulation process which consists of characterization and modeling of ice geometry, application of block topology and grid generation, and flow simulation. Tools and technologies for each task have been carefully chosen based on their contribution to the overall process. For the geometry characterization and modeling, we have chosen an interactive rather than automatic process in order to handle numerous ice shapes. An Appendix presents features of a software toolkit developed to support the interactive process. Approaches taken for the generation of block topology and grids, and flow simulation, though not yet implemented in the software, are discussed with reasons for why particular methods are chosen. Some of the issues that need to be addressed and discussed by the icing community are also included.
Measurement of Neutrino-Induced Coherent Pion Production and the Diffractive Background in MINERvA
NASA Astrophysics Data System (ADS)
Gomez, Alicia; Minerva Collaboration
2015-04-01
Neutrino-induced coherent charged pion production is a unique neutrino-nucleus scattering process in which a muon and pion are produced while the nucleus is left in its ground state. The MINERvA experiment has made a model-independent differential cross section measurement of this process on carbon by selecting events with a muon and a pion, no evidence of nuclear break-up, and small momentum transfer to the nucleus |t|. A similar process which is a background to the measurement on carbon is diffractive pion production off the free protons in MINERvA's scintillator. This process is not modeled in the neutrino event generator GENIE. At low |t| these events have a similar final state to the aforementioned process. A study to quantify this diffractive event contribution to the background is done by emulating these diffractive events by reweighting all other GENIE-generated background events to the predicted |t| distribution of diffractive events, and then scaling to the diffractive cross section.
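The reweighting described above can be expressed as taking, for each background event, the ratio of the predicted diffractive |t| density to the simulated background |t| density in that event's bin. A small sketch follows; the binning and names are illustrative, not the MINERvA analysis code.

```python
import numpy as np

def t_reweights(t_background, t_edges, target_t_density):
    """Per-event weights that reshape a background sample to a target |t| shape.

    t_background     : array of |t| values for the GENIE-generated background events
    t_edges          : bin edges in |t|
    target_t_density : predicted diffractive |t| density per bin (normalised to sum to 1)
    """
    counts, _ = np.histogram(t_background, bins=t_edges)
    source_density = counts / counts.sum()
    # ratio of target to source shape, guarding against empty bins
    ratio = np.divide(target_t_density, source_density,
                      out=np.zeros_like(target_t_density, dtype=float),
                      where=source_density > 0)
    idx = np.clip(np.digitize(t_background, t_edges) - 1, 0, len(ratio) - 1)
    return ratio[idx]

# The weights can then be scaled so that their sum matches the expected number of
# diffractive events (cross section times exposure) before filling histograms.
```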
NASA Astrophysics Data System (ADS)
Galdos, L.; Saenz de Argandoña, E.; Mendiguren, J.; Silvestre, E.
2017-09-01
Roll levelling is a flattening process used to remove the residual stresses and imperfections of metal strips by means of plastic deformation. During the process, the metal sheet is subjected to cyclic tension-compression deformations leading to a flat product. The process is especially important for avoiding final geometrical errors when coils are cold formed or when thick plates are cut by laser. In recent years, and due to the appearance of high strength materials such as Ultra High Strength Steels, machine design engineers have been demanding reliable tools for the dimensioning of levelling facilities. As in other metal forming fields, finite element analysis seems to be the most widely used solution to understand the occurring phenomena and to calculate the processing loads. In this paper, the roll levelling process of the third-generation Fortiform 1050 steel is numerically analysed. The process has been studied using the MSC MARC software and two different material laws. A pure isotropic hardening law has been used and set as the baseline study. In the second part, tension-compression tests have been carried out to analyse the cyclic behaviour of the steel. With the obtained data, a new material model using a combined isotropic-kinematic hardening formulation has been fitted. Finally, the influence of the material model on the numerical results has been analysed by comparing the pure isotropic model and the latter combined isotropic-kinematic hardening model.
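For reference, combined hardening models of this kind are often written as a Voce-type isotropic law plus an Armstrong-Frederick back-stress evolution. The forms below are a common textbook parameterisation, not necessarily the exact laws fitted in the paper.

```latex
% Voce-type isotropic hardening of the yield stress
\sigma_y(\bar{\varepsilon}^p) = \sigma_0 + Q \left( 1 - e^{-b \bar{\varepsilon}^p} \right)

% Armstrong-Frederick evolution of the back stress (kinematic part)
\dot{\boldsymbol{\alpha}} = \tfrac{2}{3} C \, \dot{\boldsymbol{\varepsilon}}^p
                            - \gamma \, \boldsymbol{\alpha} \, \dot{\bar{\varepsilon}}^p

% von Mises yield condition with combined hardening
f = \sqrt{\tfrac{3}{2} \, (\mathbf{s} - \boldsymbol{\alpha}) : (\mathbf{s} - \boldsymbol{\alpha})}
    - \sigma_y(\bar{\varepsilon}^p) \le 0
```

Under cyclic tension-compression, the back stress alpha shifts the yield surface and reproduces the Bauschinger effect that a purely isotropic law cannot capture.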
Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks
NASA Astrophysics Data System (ADS)
Karpatne, A.; Kumar, V.
2017-12-01
Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g., groundwater flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
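As a rough illustration of the first idea (physics-model output fed to the generator), the following PyTorch sketch trains a generator to correct a physics-model field so that a discriminator cannot distinguish it from observations. All dimensions, network sizes, and the data themselves are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Placeholder dimensions: physics-model output, latent noise, observation vector
D_PHYS, D_NOISE, D_OBS = 16, 8, 16

G = nn.Sequential(nn.Linear(D_PHYS + D_NOISE, 64), nn.ReLU(), nn.Linear(64, D_OBS))
D = nn.Sequential(nn.Linear(D_OBS, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(physics_output, observations):
    """One adversarial step: physics_output and observations are (batch, D_*) tensors."""
    batch = physics_output.shape[0]
    noise = torch.randn(batch, D_NOISE)
    fake = G(torch.cat([physics_output, noise], dim=1))

    # Discriminator update: observations are "real", generator output is "fake"
    opt_d.zero_grad()
    loss_d = bce(D(observations), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make corrected fields look like observations
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```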
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
A review of predictive coding algorithms.
Spratling, M W
2017-03-01
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.
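For orientation, the linear flavour of predictive coding (in the spirit of Rao and Ballard) can be summarised by a generative model, an error-driven inference rule, and a Hebbian-like learning rule. The equations below are one common formulation, not a summary of all five algorithms reviewed.

```latex
% Generative model: sensory input x is explained by causes y through weights W
\mathbf{x} \approx W \mathbf{y}, \qquad
\mathbf{e} = \mathbf{x} - W \mathbf{y} \quad \text{(prediction error)}

% Inference: the causes are adjusted to reduce the prediction error
\dot{\mathbf{y}} \propto W^{\top} \mathbf{e}

% Learning: the weights follow a Hebbian-like update on errors and causes
\Delta W \propto \mathbf{e} \, \mathbf{y}^{\top}
```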
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies have been conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using these applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Radiative B decays with four generations
NASA Astrophysics Data System (ADS)
Hewett, Joanne L.
1987-07-01
We study the decay b-->sγ in the four-generation model including the effects of QCD corrections. We find the fourth-generation contributions to be quite significant, extending the range of allowed branching ratios, BR(b-->sγ), to lie both above and below the three-generation standard model value. The existence of a fourth family of quarks would make a prediction for the top-quark mass difficult to obtain from this process. The author would like to thank T. Rizzo and J. Trampetic for discussions on QCD corrections, G. Eilam for introducing the author to the subject of rare B decays, and the Center for Particle Theory for its hospitality while this work was completed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.
System Level Uncertainty Assessment for Collaborative RLV Design
NASA Technical Reports Server (NTRS)
Charania, A. C.; Bradford, John E.; Olds, John R.; Graham, Matthew
2002-01-01
A collaborative design process utilizing Probabilistic Data Assessment (PDA) is showcased. Given the limitation of financial resources in both government and industry, strategic decision makers need more than just traditional point designs; they need to be aware of the likelihood of these future designs to meet their objectives. This uncertainty, an ever-present character in the design process, can be embraced through a probabilistic design environment. A conceptual design process is presented that encapsulates the major engineering disciplines for a Third Generation Reusable Launch Vehicle (RLV). Toolsets consist of aerospace industry standard tools in disciplines such as trajectory, propulsion, mass properties, cost, operations, safety, and economics. Variations of the design process are presented that use different fidelities of tools. The disciplinary engineering models are used in a collaborative engineering framework utilizing Phoenix Integration's ModelCenter and AnalysisServer environment. These tools allow the designer to join disparate models and simulations together in a unified environment wherein each discipline can interact with any other discipline. The design process also uses probabilistic methods to generate the system-level output metrics of interest for a RLV conceptual design. The specific system being examined is the Advanced Concept Rocket Engine 92 (ACRE-92) RLV. Previous experience and knowledge (in terms of input uncertainty distributions from experts and modeling and simulation codes) can be coupled with Monte Carlo processes to best predict the chances of program success.
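A Monte Carlo treatment of that last step is conceptually simple: sample the expert-elicited input distributions, push each sample through the system model, and count how often the design meets its objectives. The numpy sketch below uses entirely hypothetical distributions, thresholds, and a placeholder response surface in place of the actual discipline tool chain.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical input uncertainty distributions (expert-elicited placeholders)
isp       = rng.triangular(430.0, 450.0, 460.0, N)                  # specific impulse, s
dry_mass  = rng.normal(95_000.0, 5_000.0, N)                        # kg
unit_cost = rng.lognormal(mean=np.log(900.0), sigma=0.15, size=N)   # $M

# Placeholder system-level response surface standing in for the full
# ModelCenter/AnalysisServer tool chain
payload = 0.6 * isp - 0.002 * dry_mass + rng.normal(0.0, 5.0, N)    # metric tons (toy)

# Probability that a sampled design meets both objectives
p_success = np.mean((payload >= 60.0) & (unit_cost <= 1_000.0))
print(f"P(meets payload and cost targets) = {p_success:.2%}")
```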
Energy recovery from thermal treatment of dewatered sludge in wastewater treatment plants.
Yang, Qingfeng; Dussan, Karla; Monaghan, Rory F D; Zhan, Xinmin
Sewage sludge is a by-product generated from municipal wastewater treatment (WWT) processes. This study examines the conversion of sludge via energy recovery from gasification/combustion for thermal treatment of dewatered sludge. The present analysis is based on a chemical equilibrium model of thermal conversion of previously dewatered sludge with moisture content of 60-80%. Prior to combustion/gasification, sludge is dried to a moisture content of 25-55% by two processes: (1) heat recovered from syngas/flue gas cooling and (2) heat recovered from syngas combustion. The electricity recovered from the combined heat and power process can be reused in syngas cleaning and in the WWT plant. Gas temperature, total heat and electricity recoverable are evaluated using the model. Results show that generation of electricity from dewatered sludge with low moisture content (≤ 70%) is feasible within a self-sufficient sludge treatment process. Optimal conditions for gasification correspond to an equivalence ratio of 2.3 and dried sludge moisture content of 25%. Net electricity generated from syngas combustion can account for 0.071 kWh/m(3) of wastewater treated, which is up to 25.4-28.4% of the WWT plant's total energy consumption.
Numerical investigation of the staged gasification of wet wood
NASA Astrophysics Data System (ADS)
Donskoi, I. G.; Kozlov, A. N.; Svishchev, D. A.; Shamanskii, V. A.
2017-04-01
Gasification of wooden biomass makes it possible to utilize forestry wastes and agricultural residues for the generation of heat and power in isolated small-scale power systems. In spite of the availability of a huge amount of cheap biomass, the implementation of the gasification process is impeded by the formation of tar products and poor thermal stability of the process. These factors reduce the competitiveness of gasification as compared with alternative technologies. The use of staged technologies enables certain disadvantages of conventional processes to be avoided. One of the previously proposed staged processes is investigated in this paper. For this purpose, mathematical models were developed for the individual stages of the process, such as pyrolysis, pyrolysis gas combustion, and semicoke gasification. The effect of controlling parameters on the efficiency of fuel conversion into combustible gases is studied numerically using these models. The controlling parameters selected are the heat input to the pyrolysis reactor, the excess of oxidizer during gas combustion, and the wood moisture content. The process efficiency criterion is the gasification chemical efficiency accounting for the input of external heat (used for fuel drying and pyrolysis). The generated regime diagrams represent the gasification efficiency as a function of the controlling parameters. Modeling results demonstrate that an increase in the fraction of heat supplied from an external source can result in an adequate efficiency of wood gasification through the use of steam generated during drying. There are regions where it is feasible to perform incomplete combustion of the pyrolysis gas prior to gasification. The calculated chemical efficiency of the staged gasification is as high as 80-85%, which is 10-20% higher than in conventional single-stage processes.
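A chemical (cold-gas) efficiency that charges the external heat to the process can be written, for example, as

```latex
\eta_{\mathrm{chem}} \;=\;
\frac{\dot{V}_{\mathrm{gas}}\,\mathrm{LHV}_{\mathrm{gas}}}
     {\dot{m}_{\mathrm{fuel}}\,\mathrm{LHV}_{\mathrm{fuel}} + Q_{\mathrm{ext}}}
```

where Q_ext is the external heat used for fuel drying and pyrolysis. This is a common way of writing such a criterion and may differ in detail from the paper's exact definition.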
Automatic 3d Building Model Generations with Airborne LiDAR Data
NASA Astrophysics Data System (ADS)
Yastikli, N.; Cetin, Z.
2017-11-01
LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for the many studies which include building modelling. In this study, the automatic generation of 3D building models from airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes the automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification includes hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve the classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The obtained results of this research on the study area verified that automatic 3D building models can be generated successfully using raw LiDAR point cloud data.
Experience With Bayesian Image Based Surface Modeling
NASA Technical Reports Server (NTRS)
Stutz, John C.
2005-01-01
Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.
Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang
1999-01-01
Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
NASA Astrophysics Data System (ADS)
Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.
2014-05-01
In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules - the "Component Description", the "Expert System" for the synthetisation of several process chains and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model by a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling, and forming processes. The dependencies between the component and the applied manufacturing processes as well as between the processes themselves need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as the representative value. Finally, the process chain which is capable of manufacturing a functionally graded component in an optimal way with regard to the property distributions of the component description is presented by means of a dedicated specification technique.
The class of L ∩ D and its application to renewal reward process
NASA Astrophysics Data System (ADS)
Kamışlık, Aslı Bektaş; Kesemen, Tülay; Khaniyev, Tahir
2018-01-01
The class L ∩ D is generated by the intersection of two important subclasses of heavy-tailed distributions: the long-tailed distributions and the dominated varying distributions. This class is itself an important member of the heavy-tailed distributions and has principal application areas, especially in renewal, renewal reward and random walk processes. The aim of this study is to observe some well and less known results on renewal functions generated by the class L ∩ D and apply them to a special renewal reward process known in the literature as a semi-Markovian inventory model of type (s, S). In particular, we focused on the Pareto distribution, which belongs to the L ∩ D subclass of heavy-tailed distributions. As a first step we obtained asymptotic results for the renewal function generated by the Pareto distribution from the class L ∩ D, using some well-known results by Embrechts and Omey [1]. Then we applied the results obtained for the Pareto distribution to renewal reward processes. As an application we investigate the inventory model of type (s, S) when demands have a Pareto distribution from the class L ∩ D. We obtained an asymptotic expansion for the ergodic distribution function and finally reached an asymptotic expansion for the nth order moments of the distribution of this process.
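For readers unfamiliar with these classes, the defining tail conditions are standard and can be stated as follows, with F-bar = 1 - F denoting the survival function.

```latex
% Long-tailed class \mathcal{L}
F \in \mathcal{L} \iff \lim_{x \to \infty} \frac{\bar{F}(x + y)}{\bar{F}(x)} = 1
\quad \text{for every fixed } y > 0

% Dominated-varying class \mathcal{D}
F \in \mathcal{D} \iff \limsup_{x \to \infty} \frac{\bar{F}(x/2)}{\bar{F}(x)} < \infty

% The Pareto tail \bar{F}(x) = (k/x)^{\alpha}, \; x \ge k, \; \alpha > 0,
% satisfies both conditions and therefore lies in \mathcal{L} \cap \mathcal{D}.
```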
Sengupta, Biswa; Laughlin, Simon Barry; Niven, Jeremy Edward
2014-01-01
Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na(+) and K(+) channels, with generator potential and graded potential models lacking voltage-gated Na(+) channels. We identify three causes of information loss in the generator potential that are the by-product of action potential generation: (1) the voltage-gated Na(+) channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a 'footprint' in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of the lower information rates of generator potentials they are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital; information loss and cost inflation.
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent, primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were “non-task-specific” (NS) neurons that served as noise generators to “task-specific” neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors. PMID:27536235
NASA Astrophysics Data System (ADS)
Radev, Dimitar; Lokshina, Izabella
2010-11-01
The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
A Supervised Learning Process to Validate Online Disease Reports for Use in Predictive Models.
Patching, Helena M M; Hudson, Laurence M; Cooke, Warrick; Garcia, Andres J; Hay, Simon I; Roberts, Mark; Moyes, Catherine L
2015-12-01
Pathogen distribution models that predict spatial variation in disease occurrence require data from a large number of geographic locations to generate disease risk maps. Traditionally, this process has used data from public health reporting systems; however, using online reports of new infections could speed up the process dramatically. Data from both public health systems and online sources must be validated before they can be used, but no mechanisms exist to validate data from online media reports. We have developed a supervised learning process to validate geolocated disease outbreak data in a timely manner. The process uses three input features, the data source and two metrics derived from the location of each disease occurrence. The location of disease occurrence provides information on the probability of disease occurrence at that location based on environmental and socioeconomic factors and the distance within or outside the current known disease extent. The process also uses validation scores, generated by disease experts who review a subset of the data, to build a training data set. The aim of the supervised learning process is to generate validation scores that can be used as weights going into the pathogen distribution model. After analyzing the three input features and testing the performance of alternative processes, we selected a cascade of ensembles comprising logistic regressors. Parameter values for the training data subset size, number of predictors, and number of layers in the cascade were tested before the process was deployed. The final configuration was tested using data for two contrasting diseases (dengue and cholera), and 66%-79% of data points were assigned a validation score. The remaining data points are scored by the experts, and the results inform the training data set for the next set of predictors, as well as going to the pathogen distribution model. The new supervised learning process has been implemented within our live site and is being used to validate the data that our system uses to produce updated predictive disease maps on a weekly basis.
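A stripped-down version of such a cascade, with a single logistic-regression layer and a confidence threshold deciding which points go back to the experts, might look like the sketch below. The three features and all numbers are hypothetical, not the published configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical features per geolocated report: data source type, environmental
# suitability at the location, and distance to the known disease extent
X = rng.random((500, 3))
y = (X[:, 1] - 0.5 * X[:, 2] + 0.2 * rng.standard_normal(500) > 0.2).astype(int)

# The first layer is trained on the expert-scored subset; confident predictions
# are accepted as validation scores, the rest are routed back to the experts
expert_idx = rng.choice(len(X), size=100, replace=False)
stage1 = LogisticRegression().fit(X[expert_idx], y[expert_idx])

proba = stage1.predict_proba(X)[:, 1]
confident = (proba < 0.2) | (proba > 0.8)
validation_score = np.where(confident, proba, np.nan)   # NaN -> expert review
print(f"auto-scored: {np.mean(confident):.0%}, to experts: {np.mean(~confident):.0%}")
```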
Knowledge environments representing molecular entities for the virtual physiological human.
Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M
2008-09-13
In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.
PROGRESS REPORT: COFIRING PROJECTS FOR WILLOW ISLAND AND ALBRIGHT GENERATING STATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
K. Payette; D. Tillman
During the period April 1, 2001--June 30, 2001, Allegheny Energy Supply Co., LLC (Allegheny) accelerated construction of the Willow Island cofiring project, completed the installation of foundations for the fuel storage facility, the fuel receiving facility, and the processing building. Allegheny received all processing equipment to be installed at Willow Island. Allegheny completed the combustion modeling for the Willow Island project. During this time period construction of the Albright Generating Station cofiring facility was completed, with few items left for final action. The facility was dedicated at a ceremony on June 29. Initial testing of cofiring at the facility commenced. This report summarizes the activities associated with the Designer Opportunity Fuel program, and demonstrations at Willow Island and Albright Generating Stations. It details the construction activities at both sites along with the combustion modeling at the Willow Island site.
The Vulnerability Framework Integrates Various Models of Generating Surplus Revenue
ERIC Educational Resources Information Center
Maniaci, Vincent
2004-01-01
Budgets operationalize the strategic planning process, and institutions must have surplus revenue to be able to cope with future operations. There are three approaches to generate surplus revenue: increased revenue, decreased cost, and reallocation of resources. Extending their earlier work, where they established strategic benchmarks for annual…
Direct Scaling of Leaf-Resolving Biophysical Models from Leaves to Canopies
NASA Astrophysics Data System (ADS)
Bailey, B.; Mahaffee, W.; Hernandez Ochoa, M.
2017-12-01
Recent advances in the development of biophysical models and high-performance computing have enabled rapid increases in the level of detail that can be represented by simulations of plant systems. However, increasingly detailed models typically require increasingly detailed inputs, which can be a challenge to accurately specify. In this work, we explore the use of terrestrial LiDAR scanning data to accurately specify geometric inputs for high-resolution biophysical models that enables direct up-scaling of leaf-level biophysical processes. Terrestrial LiDAR scans generate "clouds" of millions of points that map out the geometric structure of the area of interest. However, points alone are often not particularly useful in generating geometric model inputs, as additional data processing techniques are required to provide necessary information regarding vegetation structure. A new method was developed that directly reconstructs as many leaves as possible that are in view of the LiDAR instrument, and uses a statistical backfilling technique to ensure that the overall leaf area and orientation distribution matches that of the actual vegetation being measured. This detailed structural data is used to provide inputs for leaf-resolving models of radiation, microclimate, evapotranspiration, and photosynthesis. Model complexity is afforded by utilizing graphics processing units (GPUs), which allows for simulations that resolve scales ranging from leaves to canopies. The model system was used to explore how heterogeneity in canopy architecture at various scales affects scaling of biophysical processes from leaves to canopies.
NASA Astrophysics Data System (ADS)
Kuras, P. K.; Weiler, M.; Alila, Y.; Spittlehouse, D.; Winkler, R.
2006-12-01
Hydrologic models have been increasingly used in forest hydrology to overcome the limitations of paired watershed experiments, where vegetative recovery and natural variability obscure the inferences and conclusions that can be drawn from such studies. Models, however, are also plagued by uncertainty stemming from a limited understanding of hydrological processes in forested catchments, and parameter equifinality is a common concern. This has created the necessity to improve our understanding of how hydrological systems work, through the development of hydrological measures, analyses and models that address the question: are we getting the right answers for the right reasons? Hence, physically based, spatially distributed hydrologic models should be validated with high-quality experimental data describing multiple concurrent internal catchment processes under a range of hydrologic regimes. The distributed hydrology soil vegetation model (DHSVM) frequently used in forest management applications is an example of a process-based model used to address the aforementioned circumstances, and this study takes a novel approach to collectively examining the ability of a pre-calibrated model application to realistically simulate outlet flows along with the spatial-temporal variation of internal catchment processes, including: continuous groundwater dynamics at 9 locations, stream and road network flow at 67 locations for six individual days throughout the freshet, and pre-melt season snow distribution. Model efficiency was improved over prior evaluations due to continuous efforts in improving the quality of meteorological data in the watershed. Road and stream network flows were very well simulated for a range of hydrological conditions, and the spatial distribution of the pre-melt season snowpack was in general agreement with observed values. The model was effective in simulating the spatial variability of subsurface flow generation, except at locations where strong stream-groundwater interactions existed, as the model is not capable of simulating such processes and subsurface flows always drain to the stream network. The model has proven overall to be quite capable of realistically simulating internal catchment processes in the watershed, which creates more confidence in future model applications exploring the effects of various forest management scenarios on the watershed's hydrological processes.
NASA Astrophysics Data System (ADS)
Kozak, J.; Gulbinowicz, D.; Gulbinowicz, Z.
2009-05-01
The need for complex and accurate three-dimensional (3-D) microcomponents is increasing rapidly for many industrial and consumer products. The electrochemical machining process (ECM) has the potential of generating the desired crack-free and stress-free surfaces of microcomponents. This paper reports a study of pulse electrochemical micromachining (PECMM) using ultrashort (nanosecond) pulses for generating complex 3-D microstructures of high accuracy. A mathematical model of the microshaping process, taking into consideration unsteady phenomena in the electrical double layer, has been developed. Software for computer simulation of PECM has been developed, and the effects of machining parameters on anodic localization and the final shape of the machined surface are presented.
NASA Astrophysics Data System (ADS)
Miyauchi, T.; Machimura, T.
2014-12-01
GCMs are generally used to produce input weather data for the simulation of carbon and water cycles by ecosystem process-based models under climate change; however, their temporal resolution is sometimes incompatible with the models' requirements. A weather generator (WG) is used for temporal downscaling of input weather data for models, where the effect of WG algorithms on the reproducibility of ecosystem model outputs must be assessed. In this study, carbon and water cycles simulated by the Biome-BGC model using weather data measured and generated by the CLIMGEN weather generator were compared. The measured weather data (daily precipitation, maximum and minimum air temperature) at a few sites for 30 years were collected from NNDC Online weather data. The generated weather data were produced by CLIMGEN parameterized using the measured weather data. NPP, heterotrophic respiration (HR), NEE and water outflow were simulated by Biome-BGC using measured and generated weather data. In the case of deciduous broadleaf forest in Lushi, Henan Province, China, the 30-year average monthly NPP by WG was 10% larger than that by measured weather in the growing season. HR by WG was larger than that by measured weather in all months, by 15% on average. NEE by WG was more negative in winter and was close to that by measured weather in summer. These differences in the carbon cycle arose because the soil water content by WG was larger than that by measured weather. The difference between monthly water outflow by WG and by measured weather was large and variable, and annual outflow by WG was 50% of that by measured weather. The inconsistency in carbon and water cycles by WG and measured weather was suggested to be affected by the difference in the temporal concentration of precipitation, which was assessed.
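Daily weather generators of this family typically model precipitation occurrence with a two-state Markov chain and wet-day amounts with a gamma distribution. The sketch below illustrates that general idea with made-up parameters; it is not the CLIMGEN algorithm or its parameter set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters for one site and month: wet-after-dry and wet-after-wet
# transition probabilities, and gamma shape/scale for wet-day amounts (mm).
# A real generator such as CLIMGEN fits these from observed records, per month.
p_wd, p_ww = 0.25, 0.65
shape, scale = 0.8, 6.0

def generate_precip(n_days):
    """Generate a daily precipitation series from the two-state Markov chain."""
    precip = np.zeros(n_days)
    wet = False
    for d in range(n_days):
        p = p_ww if wet else p_wd
        wet = rng.random() < p
        if wet:
            precip[d] = rng.gamma(shape, scale)
    return precip

daily = generate_precip(30 * 365)
print(f"mean annual precipitation: {daily.sum() / 30:.0f} mm")
```

The temporal concentration of precipitation mentioned in the abstract is governed by exactly these occurrence parameters, which is why two series with the same monthly totals can still drive different soil moisture and outflow responses.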
Weighted Watson-Crick automata
NASA Astrophysics Data System (ADS)
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-01
There are tremendous works in biotechnology especially in area of DNA molecules. The computer society is attempting to develop smaller computing devices through computational models which are based on the operations performed on the DNA molecules. A Watson-Crick automaton, a theoretical model for DNA based computation, has two reading heads, and works on double-stranded sequences of the input related by a complementarity relation similar with the Watson-Crick complementarity of DNA nucleotides. Over the time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase their generative power.
USDA-ARS?s Scientific Manuscript database
Process-based computer models have been proposed as a tool to generate data for phosphorus-index assessment and development. Although models are commonly used to simulate phosphorus (P) loss from agriculture using managements that are different from the calibration data, this use of models has not ...
Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.
2012-01-01
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
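To make the phenotype idea concrete, the sketch below generates omission, repetition, jump, and intrusion variants of a flat normative action sequence. The task names are hypothetical, and the real method operates on structured task-analytic models and composes phenotypes into higher-order errors rather than enumerating flat sequences.

```python
import random

def zero_order_phenotypes(normative):
    """Generate erroneous variants of a normative action sequence.

    Simplified versions of Hollnagel's zero-order phenotypes: omission of an
    action, repetition of an action, a jump that skips the following action,
    and intrusion of a foreign action.
    """
    variants = []
    for i, act in enumerate(normative):
        variants.append(normative[:i] + normative[i + 1:])              # omission
        variants.append(normative[:i + 1] + [act] + normative[i + 1:])  # repetition
        variants.append(normative[:i] + normative[i + 2:])              # jump over next
        variants.append(normative[:i] + ["INTRUDED_ACTION"] + normative[i:])  # intrusion
    return variants

# Hypothetical normative procedure, for illustration only
normative_task = ["select_mode", "enter_dose", "confirm", "fire_beam"]
for v in random.sample(zero_order_phenotypes(normative_task), 3):
    print(v)
```

In the paper's approach, equivalent erroneous transitions are embedded in the formal task model itself so that a model checker can search all of them exhaustively against the system's safety properties.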
Buon, Marine; Seara-Cardoso, Ana; Viding, Essi
2016-12-01
Findings in the field of experimental psychology and cognitive neuroscience have shed new light on our understanding of the psychological and biological bases of morality. Although a lot of attention has been devoted to understanding the processes that underlie complex moral dilemmas, attempts to represent the way in which individuals generate moral judgments when processing basic harmful actions are rare. Here, we will outline a model of morality which proposes that the evaluation of basic harmful actions relies on complex interactions between emotional arousal, Theory of Mind (ToM) capacities, and inhibitory control resources. This model makes clear predictions regarding the cognitive processes underlying the development of and ability to generate moral judgments. We draw on data from developmental and cognitive psychology, cognitive neuroscience, and psychopathology research to evaluate the model and propose several conceptual and methodological improvements that are needed to further advance our understanding of moral cognition and its development.
Predictive process simulation of cryogenic implants for leading edge transistor design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh
2012-11-06
Two cryogenic implant TCAD-modules have been developed: (i) A continuum-based compact model targeted towards a TCAD production environment calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of ion-exposure that a particular spot on the wafer experiences, as well as the resulting temperature vs. time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time-structure of the beam for the amorphization process: Assuming an average dose-rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.
Sulis, William H
2017-10-01
Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.
NASA Astrophysics Data System (ADS)
Efimov, A. E.; Maksarov, V. V.; Timofeev, D. Y.
2018-03-01
The present paper examines the impact of the technological system on the workpiece's roughness and shape accuracy via simulation modelling. For this purpose, a theory was formulated and a mathematical model was generated to account for self-oscillations in the system. A method of oscillation elimination, based on high-energy laser irradiation of the workpiece prior to further processing, was suggested in compliance with the adopted theory and model. Modelling the behaviour of the system under the transient phenomenon indicated a tendency toward reduced self-oscillations in unstable processing modes, which has a positive practical effect on the workpiece's roughness and accuracy.
A cascade model of information processing and encoding for retinal prosthesis.
Pei, Zhi-Jun; Gao, Guan-Xin; Hao, Bo; Qiao, Qing-Li; Ai, Hui-Jian
2016-04-01
Retinal prostheses offer a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Some retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection and reconstruction, and do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and Poisson coding of neurons, a cascade model of the retina comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding was brought forward, which integrates both the anatomic connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to the stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling and then Poisson spike generation. The simulated results suggest that such a cascade model can recreate the visual information processing and encoding functionalities of the retina, which is helpful in developing an artificial retina for the retinally blind.
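The four-step cascade maps directly onto the classical linear-nonlinear-Poisson structure. The sketch below reproduces that structure for a one-dimensional stimulus in numpy (the paper's implementation is in MATLAB and includes radial sampling, which is omitted here); all filter shapes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def retina_cascade(stimulus, dt=0.001, max_rate=100.0):
    """Toy linear-nonlinear-Poisson cascade over a 1-D row of photoreceptor inputs.

    stimulus : (T, N) luminance over time at N locations.
    Steps mirror the abstract's cascade: linear spatiotemporal filtering
    (centre-surround in space, exponential decay in time), static nonlinear
    rectification, then Poisson-like spike generation.
    """
    T, N = stimulus.shape
    centre_surround = np.array([-0.25, -0.25, 1.0, -0.25, -0.25])
    spatial = np.vstack([np.convolve(stimulus[t], centre_surround, mode="same")
                         for t in range(T)])
    tau, klen = 0.02, 50
    temporal_kernel = np.exp(-np.arange(klen) * dt / tau)
    temporal_kernel /= temporal_kernel.sum()
    drive = np.vstack([np.convolve(spatial[:, n], temporal_kernel, mode="full")[:T]
                       for n in range(N)]).T
    rate = max_rate / (1.0 + np.exp(-4.0 * drive))      # static nonlinearity
    return rng.random((T, N)) < rate * dt               # Poisson-like spike draw

stimulus = rng.random((1000, 16))        # 1 s of random flicker at 16 locations
spikes = retina_cascade(stimulus)
print("mean firing rate [Hz]:", spikes.mean() / 0.001)
```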
Enforcing elemental mass and energy balances for reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Agarwal, K.; Sharma, P.
2012-01-01
Development of economically feasible gasification and carbon capture, utilization and storage (CCUS) technologies requires a variety of software tools to optimize the designs of not only the key devices involved (e.g., gasifier, CO2 adsorber) but also the entire power generation system. High-fidelity models such as Computational Fluid Dynamics (CFD) models are capable of accurately simulating the detailed flow dynamics, heat transfer, and chemistry inside the key devices. However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges due to the scale differences in both time and length, as well as the high computational cost. A reduced order model (ROM) generated from a high-fidelity model can serve as a bridge between the models of different scales. While high-fidelity models are built upon the principles of mass, momentum, and energy conservation, ROMs are usually developed based on regression-type equations and hence their predictions may violate the mass and energy conservation laws. A high-fidelity model may also have mass and energy balance problems if it is not tightly converged. Conservation of mass and energy is important when a ROM is integrated into a flowsheet for the process simulation of an entire chemical or power generation system, especially when recycle streams are connected to the modeled device. As a part of the Carbon Capture Simulation Initiative (CCSI) project supported by the U.S. Department of Energy, we developed a software framework for generating ROMs from CFD simulations and integrating them with Process Modeling Environments (PMEs) for system-wide optimization. This paper presents a method to correct the results of a high-fidelity model or a ROM such that the elemental mass and energy are conserved perfectly. Correction factors for the flow rates of individual species in the product streams are solved using a minimization algorithm based on the Lagrangian multiplier method. Enthalpies of product streams are also modified to enforce the energy balance. The approach is illustrated for two ROMs, one based on a CFD model of an entrained-flow gasifier and the other based on the CFD model of a multiphase CO2 adsorber.
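The correction can be posed as a small constrained least-squares problem: minimise the relative change to the ROM's predicted species flows subject to exact elemental balances. The scipy sketch below illustrates the idea on a made-up five-species gasifier outlet; the actual CCSI implementation solves the Lagrangian system directly and also adjusts stream enthalpies.

```python
import numpy as np
from scipy.optimize import minimize

# Toy example: correct the outlet species flows of a gasifier ROM so that the
# elemental C, H, O balances close exactly.  Species order: CO, CO2, H2, H2O, CH4.
# Rows of E give atoms of C, H, O per mole of each species.
E = np.array([[1, 1, 0, 0, 1],     # carbon
              [0, 0, 2, 2, 4],     # hydrogen
              [1, 2, 0, 1, 0]])    # oxygen
inlet_elements = np.array([10.0, 24.0, 12.0])        # mol of C, H, O entering
raw_outlet = np.array([7.4, 2.1, 10.5, 0.5, 0.4])    # ROM prediction, slightly off

def objective(x):
    # stay as close as possible to the ROM prediction, in relative terms
    return np.sum(((x - raw_outlet) / raw_outlet) ** 2)

constraints = {"type": "eq", "fun": lambda x: E @ x - inlet_elements}
result = minimize(objective, raw_outlet, constraints=constraints,
                  bounds=[(0, None)] * len(raw_outlet))
print("corrected outlet flows:", np.round(result.x, 3))
print("element closure:", np.round(E @ result.x - inlet_elements, 6))
```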
Kornecki, Martin; Strube, Jochen
2018-03-16
Productivity improvements of mammalian cell culture in the production of recombinant proteins have been made by optimizing cell lines, media, and process operation. This led to enhanced titers and process robustness without increasing the cost of the upstream processing (USP); however, a downstream bottleneck remains. In terms of process control improvement, the process analytical technology (PAT) initiative, initiated by the American Food and Drug Administration (FDA), aims to measure, analyze, monitor, and ultimately control all important attributes of a bioprocess. Especially, spectroscopic methods such as Raman or near-infrared spectroscopy enable one to meet these analytical requirements, preferably in-situ. In combination with chemometric techniques like partial least square (PLS) or principal component analysis (PCA), it is possible to generate soft sensors, which estimate process variables based on process and measurement models for the enhanced control of bioprocesses. Macroscopic kinetic models can be used to simulate cell metabolism. These models are able to enhance the process understanding by predicting the dynamic of cells during cultivation. In this article, in-situ turbidity (transmission, 880 nm) and ex-situ Raman spectroscopy (785 nm) measurements are combined with an offline macroscopic Monod kinetic model in order to predict substrate concentrations. Experimental data of Chinese hamster ovary cultivations in bioreactors show a sufficiently linear correlation (R² ≥ 0.97) between turbidity and total cell concentration. PLS regression of Raman spectra generates a prediction model, which was validated via offline viable cell concentration measurement (RMSE ≤ 13.82, R² ≥ 0.92). Based on these measurements, the macroscopic Monod model can be used to determine different process attributes, e.g., glucose concentration. In consequence, it is possible to approximately calculate (R² ≥ 0.96) glucose concentration based on online cell concentration measurements using turbidity or Raman spectroscopy. Future approaches will use these online substrate concentration measurements with turbidity and Raman measurements, in combination with the kinetic model, in order to control the bioprocess in terms of feeding strategies, by employing an open platform communication (OPC) network-either in fed-batch or perfusion mode, integrated into a continuous operation of upstream and downstream.
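A brief sketch of the PLS soft-sensor idea described above, using scikit-learn on synthetic stand-in spectra; the data, preprocessing and component count are assumptions rather than the study's calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: 40 Raman spectra (1,000 channels) and offline viable cell
# concentration references; real spectra would be preprocessed (e.g. baseline, SNV).
n_samples, n_channels = 40, 1000
true_vcc = np.linspace(0.5, 12.0, n_samples)                     # 1e6 cells/mL (assumed scale)
spectra = np.outer(true_vcc, rng.normal(1.0, 0.05, n_channels))  # concentration-dependent signal
spectra += rng.normal(0.0, 0.2, spectra.shape)                   # measurement noise

pls = PLSRegression(n_components=3)
pls.fit(spectra[::2], true_vcc[::2])              # calibrate on every other sample
pred = pls.predict(spectra[1::2]).ravel()         # predict on the held-out samples

rmse = np.sqrt(np.mean((true_vcc[1::2] - pred) ** 2))
print(f"RMSEP = {rmse:.2f}, R2 = {r2_score(true_vcc[1::2], pred):.3f}")
```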
NASA Astrophysics Data System (ADS)
Aydogan Yenmez, Arzu; Erbas, Ayhan Kursat; Cakiroglu, Erdinc; Alacaci, Cengiz; Cetinkaya, Bulent
2017-08-01
Applications and modelling have gained a prominent role in mathematics education reform documents and curricula. Thus, there is a growing need for studies focusing on the effective use of mathematical modelling in classrooms. Assessment is an integral part of using modelling activities in classrooms, since it allows teachers to identify and manage problems that arise in various stages of the modelling process. However, teachers' difficulties in assessing student modelling work are a challenge to be considered when implementing modelling in the classroom. Thus, the purpose of this study was to investigate how teachers' knowledge of generating assessment criteria for assessing student competence in mathematical modelling evolved through a professional development programme based on a lesson study approach and a modelling perspective. The data were collected from four teachers at two public high schools over a five-month period. The professional development programme included a cyclical process, with each cycle consisting of an introductory meeting, the implementation of a model-eliciting activity with students, and a follow-up meeting. The results showed that the professional development programme contributed to teachers' knowledge for generating assessment criteria on the products and on the observable actions that affect the modelling cycle.
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
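A minimal python-libsbml sketch of the kind of SBML artifact this workflow produces, assuming only that a model document with a compartment and species is the end product; it does not reproduce the SBOL enrichment or the iBioSim conversion itself, and the species names are hypothetical.

```python
import libsbml

# Build a bare SBML Level 3 document of the kind generated at the end of the workflow
# (illustration only; real iBioSim output carries SBOL-derived annotations and kinetics).
doc = libsbml.SBMLDocument(3, 1)
model = doc.createModel()
model.setId("and_sensor_sketch")

comp = model.createCompartment()
comp.setId("cell")
comp.setConstant(True)
comp.setSize(1.0)

for sid in ["input_A", "input_B", "output_GFP"]:   # hypothetical species for a 2-input AND design
    s = model.createSpecies()
    s.setId(sid)
    s.setCompartment("cell")
    s.setInitialAmount(0.0)
    s.setHasOnlySubstanceUnits(True)
    s.setBoundaryCondition(False)
    s.setConstant(False)

print(libsbml.writeSBMLToString(doc)[:200])        # serialized SBML, ready for a simulator
```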
NASA Astrophysics Data System (ADS)
Weiler, M.
2016-12-01
Heavy-rain-induced flash floods are still a serious hazard and cause high damage in urban areas. In particular, in spatially complex urban areas, the temporal and spatial patterns of runoff generation processes across a wide spatial range during extreme rainfall events need to be predicted, including the specific effects of green infrastructure and urban forests. In addition, the initial conditions (soil moisture pattern, water storage of green infrastructure) and the effect of lateral redistribution of water (run-on effects and re-infiltration) have to be included in order to realistically predict flash flood generation. We further developed the distributed, process-based model RoGeR (Runoff Generation Research) to include the relevant features and processes in urban areas in order to test the effects of different settings, initial conditions and the lateral redistribution of water on the predicted flood response. The uncalibrated RoGeR model runs at a spatial resolution of 1 m × 1 m (LiDAR, degree of sealing, land use), with soil properties and geology at a scale of 1:50,000. In addition, different green infrastructures are included in the model as well as the effect of trees on interception and transpiration. A hydraulic model was included in RoGeR to predict surface runoff, water redistribution, and re-infiltration. During rainfall events, RoGeR predicts at 5 min temporal resolution, but the model also simulates evapotranspiration and groundwater recharge during rain-free periods at a longer time step. The model framework was applied to several case studies in Germany where intense rainfall events produced flash floods causing high damage in urban areas, and to a long-term research catchment in an urban setting (Vauban, Freiburg), where a variety of green infrastructures dominates the hydrology. Urban-RoGeR allowed us to study the effects of different green infrastructures on reducing the flood peak, but also their effect on the water balance (evapotranspiration and groundwater recharge). We could also show that infiltration of surface runoff coming from areas with low infiltration capacity (lateral redistribution) reduces the flood peaks by over 90% in certain areas and situations. Finally, we also evaluated the model against long-term runoff observations (surface runoff, ET, roof runoff) and against flood marks in the selected case studies.
Dynamics and Stability of Acoustic Wavefronts in the Ocean
2014-09-30
processes on underwater acoustic fields. The 3-D HWT algorithm was also applied to investigate long-range propagation of infrasound in the atmosphere... oceanographic processes on underwater sound propagation and also has been demonstrated to be an efficient and robust technique for modeling infrasound... algorithm by modeling propagation of infrasound generated by Eyjafjallajökull volcano in southern Iceland. Eruptions of this volcano were recorded by
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Hirschi, M.; Spirig, C.
2014-12-01
To quantify the impact of climate change on a specific pest (or any weather-dependent process) at a specific site, we may use a site-calibrated pest (or other) model and compare its outputs obtained with site-specific weather data representing present vs. perturbed climates. The input weather data may be produced by a stochastic weather generator. Apart from the quality of the pest model, the reliability of the results obtained in such an experiment depends on the ability of the generator to represent the statistical structure of real-world weather series, and on the sensitivity of the pest model to possible imperfections of the generator. This contribution deals with the multivariate HOWGH weather generator, which is based on a combination of parametric and non-parametric statistical methods. Here, HOWGH is used to generate synthetic hourly series of three weather variables (solar radiation, temperature and precipitation) required by the dynamic pest model SOPRA to simulate the development of codling moth. The contribution presents results of the direct and indirect validation of HOWGH. In the direct validation, the synthetic series generated by HOWGH (various settings of its underlying model are assumed) are validated in terms of multiple climatic characteristics, focusing on subdaily wet/dry and hot/cold spells. In the indirect validation, we assess the generator in terms of characteristics derived from the outputs of the SOPRA model fed with the observed vs. synthetic series. The weather generator may be used to produce weather series representing present and future climates. In the latter case, the parameters of the generator may be modified by climate change scenarios based on Global or Regional Climate Models. To demonstrate this feature, results of codling moth simulations for a future climate will be shown. Acknowledgements: The weather generator is developed and validated within the frame of the projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
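A toy single-site generator illustrating the general idea of stochastic weather generation (Markov-chain precipitation occurrence plus an AR(1) temperature anomaly around a diurnal cycle); the structure and parameters are illustrative assumptions and do not represent HOWGH's parametric/non-parametric scheme.

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_hourly_weather(n_hours=24 * 30,
                            p_wet_given_dry=0.05, p_wet_given_wet=0.6,
                            mean_wet_intensity=0.8,            # mm/h, assumed
                            t_mean=15.0, t_amp=5.0, ar1=0.9, t_sigma=0.8):
    """Markov-chain wet/dry occurrence, exponential wet-hour intensities,
    and AR(1) temperature anomalies superposed on a diurnal cycle."""
    wet = np.zeros(n_hours, dtype=bool)
    precip = np.zeros(n_hours)
    temp = np.zeros(n_hours)
    anom = 0.0
    for t in range(1, n_hours):
        p_wet = p_wet_given_wet if wet[t - 1] else p_wet_given_dry
        wet[t] = rng.random() < p_wet
        if wet[t]:
            precip[t] = rng.exponential(mean_wet_intensity)
        anom = ar1 * anom + rng.normal(0.0, t_sigma)
        diurnal = t_amp * np.sin(2 * np.pi * (t % 24) / 24.0)
        temp[t] = t_mean + diurnal + anom
    return precip, temp

precip, temp = generate_hourly_weather()
print(f"wet-hour fraction: {np.mean(precip > 0):.2f}, mean temperature: {temp.mean():.1f} degC")
```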
Automatic generation of computable implementation guides from clinical information models.
Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat
2015-06-01
Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. Copyright © 2015 Elsevier Inc. All rights reserved.
The Importance of Long Wavelength Processes in Generating Landscapes
NASA Astrophysics Data System (ADS)
Roberts, Gareth G.; White, Nicky
2017-04-01
The processes responsible for generating landscapes observed on Earth and elsewhere are poorly understood. For example, the relative importance of long (>10 km) and short wavelength erosional processes in determining the evolution of topography is debated. Much work has focused on developing an observational and theoretical framework for the evolution of longitudinal river profiles (i.e. elevation as a function of streamwise distance), which probably sets the pace of erosion in low-mid latitude continents. A large number of geomorphic studies emphasize the importance of short wavelength processes in sculpting topography (e.g. waterfall migration, interaction of biota and the solid Earth, hill slope evolution). However, it is not clear if these processes scale to generate topography observed at longer (>10 km) wavelengths. At wavelengths of tens to thousands of kilometers topography is generated by modification of the lithosphere (e.g. shortening, extension, flexure) and by sub-plate processes (e.g. dynamic support). Inversion of drainage patterns suggests that uplift rate histories can be reliably recovered at these long wavelengths using simple erosional models (e.g. stream power). Calculated uplift and erosion rate histories are insensitive to short wavelength (<10 km) or rapid (<100 ka) environmental changes (e.g. biota, precipitation, lithology). One way to examine the relative importance of short and long wavelength processes in generating topography is to transform river profiles into distance-frequency space. We calculate the wavelet power spectrum of a suite of river profiles and examine their spectral content. Big rivers in North America (e.g. Colorado, Rio Grande) and Africa (e.g. Niger, Orange) have a red noise spectrum (i.e. power inversely proportional to wavenumber squared) at wavelengths > 100 km. More than 90% of river profile elevations in our inventory are determined at these wavelengths. At shorter wavelengths spectra more closely resemble pink noise (power inversely proportional to wavenumber). These observations suggest that short wavelength processes do not simply scale to generate the long wavelength changes in elevation. Instead we suggest that long wavelength processes (e.g. regional uplift, knickzone migration) determine the shape and evolution of nearly all topography. These results suggest that the erosional complexity observed in local geomorphic studies and the relative simplicity of erosional models required to fit continental-scale drainage patterns are not mutually exclusive. Rather, the problem of fluvial erosion is being tackled at different, and probably unrelated, scales.
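A short sketch of how a spectral slope can be estimated from a profile. The study uses a wavelet power spectrum; this example uses a plain periodogram on a synthetic random-walk profile, which has the red-noise (wavenumber-squared) spectrum mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "river profile": elevation sampled every 1 km along 2,048 km (illustrative).
dx_km = 1.0
n = 2048
profile = np.cumsum(rng.normal(size=n))          # a random walk has a k^-2 (red-noise) spectrum

# Periodogram; the original work uses a wavelet power spectrum, this only illustrates
# how a log-log spectral slope is estimated.
freqs = np.fft.rfftfreq(n, d=dx_km)[1:]          # cycles per km, drop the zero frequency
power = np.abs(np.fft.rfft(profile - profile.mean()))[1:] ** 2

# Fit the slope at long wavelengths (> 100 km, i.e. wavenumber < 0.01 cycles/km)
mask = freqs < 1.0 / 100.0
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
print(f"spectral slope at wavelengths > 100 km: {slope:.2f} (about -2 indicates red noise)")
```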
Modeling discrete and rhythmic movements through motor primitives: a review.
Degallier, Sarah; Ijspeert, Auke
2010-10-01
Rhythmic and discrete movements are frequently considered separately in motor control, probably because different techniques are commonly used to study and model them. Yet the increasing interest in finding a comprehensive model for movement generation requires bridging the different perspectives arising from the study of those two types of movements. In this article, we consider discrete and rhythmic movements within the framework of motor primitives, i.e., of modular generation of movements. In this way we hope to gain an insight into the functional relationships between discrete and rhythmic movements and thus into a suitable representation for both of them. Within this framework we can define four possible categories of modeling for discrete and rhythmic movements depending on the required command signals and on the spinal processes involved in the generation of the movements. These categories are first discussed in terms of biological concepts such as force fields and central pattern generators and then illustrated by several mathematical models based on dynamical systems theory. A discussion of the plausibility of these models concludes the work.
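A small sketch of the two canonical primitive types discussed above, under the assumption that a point attractor stands in for a discrete primitive and a Hopf limit-cycle oscillator for a rhythmic one; the parameters are illustrative, not drawn from any specific model in the review.

```python
import numpy as np

def simulate_primitives(duration=5.0, dt=1e-3,
                        goal=1.0, tau=0.3,            # discrete primitive: point attractor
                        mu=1.0, omega=2 * np.pi):     # rhythmic primitive: Hopf limit cycle
    """Euler integration of a point-attractor (discrete) and a Hopf oscillator (rhythmic) primitive."""
    n = int(duration / dt)
    x = np.zeros(n)                                   # discrete output
    r = np.zeros((n, 2)); r[0] = [0.1, 0.0]           # rhythmic state (x, y)
    for k in range(1, n):
        # Discrete movement: first-order convergence to the goal position
        x[k] = x[k - 1] + dt * (goal - x[k - 1]) / tau
        # Rhythmic movement: Hopf oscillator converging to a limit cycle of radius sqrt(mu)
        xr, yr = r[k - 1]
        rho2 = xr**2 + yr**2
        r[k, 0] = xr + dt * ((mu - rho2) * xr - omega * yr)
        r[k, 1] = yr + dt * ((mu - rho2) * yr + omega * xr)
    return x, r

x, r = simulate_primitives()
print(f"discrete primitive end position: {x[-1]:.3f}, rhythmic amplitude: {np.hypot(*r[-1]):.3f}")
```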
On compensatory strategies and computational models: the case of pure alexia.
Shallice, Tim
2014-01-01
The article is concerned with inferences from the behaviour of neurological patients to models of normal function. It takes the letter-by-letter reading strategy common in pure alexic patients as an example of the methodological problems that compensatory strategies produce when making such inferences. The evidence is discussed on three possible ways the letter-by-letter reading process might operate: "reversed spelling"; the use of the phonological input buffer as a temporary holding store during word building; and the use of serial input to the visual word-form system entirely within the visual-orthographic domain such as in the model of Plaut [1999. A connectionist approach to word reading and acquired dyslexia: Extension to sequential processing. Cognitive Science, 23, 543-568]. The compensatory strategy used by at least one pure alexic patient does not fit with the third of these possibilities. On the more general question, it is argued that even if compensatory strategies are being used, the behaviour of neurological patients can be useful for the development and assessment of first-generation information-processing models of normal function, but they are not likely to be useful for the development and assessment of second-generation computational models.
Secondary dispersal driven by overland flow in drylands: Review and mechanistic model development.
Thompson, Sally E; Assouline, Shmuel; Chen, Li; Trahktenbrot, Ana; Svoray, Tal; Katul, Gabriel G
2014-01-01
Seed dispersal alters gene flow, reproduction, migration and ultimately spatial organization of dryland ecosystems. Because many seeds in drylands lack adaptations for long-distance dispersal, seed transport by secondary processes such as tumbling in the wind or mobilization in overland flow plays a dominant role in determining where seeds ultimately germinate. Here, recent developments in modeling runoff generation in spatially complex dryland ecosystems are reviewed with the aim of proposing improvements to mechanistic modeling of seed dispersal processes. The objective is to develop a physically-based yet operational framework for determining seed dispersal due to surface runoff, a process that has gained recent experimental attention. A Buoyant OBject Coupled Eulerian - Lagrangian Closure model (BOB-CELC) is proposed to represent seed movement in shallow surface flows. The BOB-CELC is then employed to investigate the sensitivity of seed transport to landscape and storm properties and to the spatial configuration of vegetation patches interspersed within bare earth. The potential to simplify seed transport outcomes by considering the limiting behavior of multiple runoff events is briefly considered, as is the potential for developing highly mechanistic, spatially explicit models that link seed transport, vegetation structure and water movement across multiple generations of dryland plants.
A blueprint for using climate change predictions in an eco-hydrological study
NASA Astrophysics Data System (ADS)
Caporali, E.; Fatichi, S.; Ivanov, V. Y.
2009-12-01
There is growing interest in extending climate change predictions to smaller, catchment-size scales and identifying their implications for hydrological and ecological processes. Small-scale processes are, in fact, expected to mediate climate changes, producing local effects and feedbacks that can interact with the principal consequences of the change. This is particularly applicable when a complex interaction, such as the inter-relationship between the hydrological cycle and vegetation dynamics, is considered. This study presents a blueprint methodology for studying climate change impacts, as inferred from climate models, on eco-hydrological dynamics at the catchment scale. Climate conditions, present or future, are imposed through input hydrometeorological variables for hydrological and eco-hydrological models. These variables are simulated with an hourly weather generator as an outcome of a stochastic downscaling technique. The generator is parameterized to reproduce the climate of southwestern Arizona for present (1961-2000) and future (2081-2100) conditions. The methodology provides the capability to generate ensemble realizations for the future that take into account the heterogeneous nature of climate predictions from different models. The generated time series of meteorological variables for the two scenarios corresponding to the current and mean expected future climates serve as input to a coupled hydrological and vegetation dynamics model, “Tethys-Chloris”. The hydrological model reproduces essential components of the land-surface hydrological cycle, solving the mass and energy budget equations. The vegetation model parsimoniously parameterizes essential plant life-cycle processes, including photosynthesis, phenology, carbon allocation, and tissue turnover. The results for the two mean scenarios are compared and discussed in terms of changes in the hydrological balance components, energy fluxes, and indices of vegetation productivity. The need to account for uncertainties in projections of future climate is discussed, and a methodology for propagating these uncertainties into the probability density functions of changes in eco-hydrological variables is presented.
Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Son, Woo-Sung
2015-01-01
This study aimed to examine the correlation between skeletal maturation status and parameters from the odontoid process/body of the second cervical vertebra and the bodies of the third and fourth cervical vertebrae, and to build multiple regression models for estimating skeletal maturation status in Korean girls. Hand-wrist radiographs and cone beam computed tomography (CBCT) images were obtained from 74 Korean girls (6-18 years of age). CBCT-generated cervical vertebral maturation (CVM) was used to demarcate the odontoid process and the body of the second cervical vertebra, based on the dentocentral synchondrosis. Correlation coefficient analysis and multiple linear regression analysis were used for each parameter of the cervical vertebrae (P < 0.05). Forty-seven of 64 parameters from CBCT-generated CVM (independent variables) exhibited statistically significant correlations (P < 0.05). The multiple regression model with the greatest R² had six parameters (PH2/W2, UW2/W2, (OH+AH2)/LW2, UW3/LW3, D3, and H4/W4) as independent variables, with a variance inflation factor (VIF) of <2. CBCT-generated CVM was able to include parameters from the second cervical vertebral body and odontoid process, respectively, for the multiple regression models. This suggests that quantitative analysis might be used to estimate skeletal maturation status.
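A brief statsmodels sketch of the model-building step: ordinary least squares with a variance inflation factor check. The column names echo the abstract's parameters, but the data are simulated stand-ins, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(11)

# Simulated stand-ins for CBCT-derived vertebral ratios and a maturation score (n = 74 girls).
n = 74
X = pd.DataFrame({
    "PH2_W2":  rng.normal(0.8, 0.1, n),
    "UW2_W2":  rng.normal(0.9, 0.1, n),
    "UW3_LW3": rng.normal(1.0, 0.1, n),
    "H4_W4":   rng.normal(0.7, 0.1, n),
})
maturation = 2.0 * X["PH2_W2"] + 1.5 * X["H4_W4"] + rng.normal(0, 0.1, n)

Xc = sm.add_constant(X)
model = sm.OLS(maturation, Xc).fit()
print(f"R2 = {model.rsquared:.3f}")

# Variance inflation factors for the predictors (constant excluded from the report);
# VIF < 2 was the multicollinearity criterion used in the study.
for i, name in enumerate(X.columns, start=1):
    print(name, round(variance_inflation_factor(Xc.values, i), 2))
```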
Microstructure Modeling of 3rd Generation Disk Alloys
NASA Technical Reports Server (NTRS)
Jou, Herng-Jeng
2010-01-01
The objective of this program is to model, validate, and predict the precipitation microstructure evolution, using PrecipiCalc (QuesTek Innovations LLC) software, for 3rd generation Ni-based gas turbine disc superalloys during processing and service, with a set of logical and consistent experiments and characterizations. Furthermore, within this program, the originally research-oriented microstructure simulation tool will be further improved and implemented to be a useful and user-friendly engineering tool. In this report, the key accomplishment achieved during the second year (2008) of the program is summarized. The activities of this year include final selection of multicomponent thermodynamics and mobility databases, precipitate surface energy determination from nucleation experiment, multiscale comparison of predicted versus measured intragrain precipitation microstructure in quench samples showing good agreement, isothermal coarsening experiment and interaction of grain boundary and intergrain precipitates, primary microstructure of subsolvus treatment, and finally the software implementation plan for the third year of the project. In the following year, the calibrated models and simulation tools will be validated against an independently developed experimental data set, with actual disc heat treatment process conditions. Furthermore, software integration and implementation will be developed to provide material engineers valuable information in order to optimize the processing of the 3rd generation gas turbine disc alloys.
Acts of Writing: A Compilation of Six Models That Define the Processes of Writing
ERIC Educational Resources Information Center
Sharp, Laurie A.
2016-01-01
Writing is a developmental and flexible process. Using a prescribed process for acts of writing during instruction does not take into account individual differences of writers and generates writing instruction that is narrow, rigid, and inflexible. Preservice teachers receive limited training with theory and pedagogy for writing, which potentially…
Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S
The National Renewable Energy Laboratory (NREL) in collaboration with Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) High-quality, realistic, synthetic distribution network models, and 2) Advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and co-ordination with GRID DATA repository teams to house these datasets for public access will also be discussed.
Ezawa, Kiyoshi; Innan, Hideki
2013-01-01
The population genetic behavior of mutations in sperm genes is theoretically investigated. We modeled the processes at two levels. One is the standard population genetic process, in which the population allele frequencies change generation by generation, depending on the difference in selective advantages. The other is the sperm competition during each genetic transmission from one generation to the next generation. For the sperm competition process, we formulate the situation where a huge number of sperm with alleles A and B, produced by a single heterozygous male, compete to fertilize a single egg. This “minimal model” demonstrates that a very slight difference in sperm performance amounts to quite a large difference between the alleles’ winning probabilities. By incorporating this effect of paternity-sharing sperm competition into the standard population genetic process, we show that fierce sperm competition can enhance the fixation probability of a mutation with a very small phenotypic effect at the single-sperm level, suggesting a contribution of sperm competition to rapid amino acid substitutions in haploid-expressed sperm genes. Considering recent genome-wide demonstrations that a substantial fraction of the mammalian sperm genes are haploid expressed, our model could provide a potential explanation of rapid evolution of sperm genes with a wide variety of functions (as long as they are expressed in the haploid phase). Another advantage of our model is that it is applicable to a wide range of species, irrespective of whether the species is externally fertilizing, polygamous, or monogamous. The theoretical result was applied to mammalian data to estimate the selection intensity on nonsynonymous mutations in sperm genes. PMID:23666936
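A rough simulation sketch of the mechanism described above: a Wright-Fisher population in which heterozygous fathers transmit the favored allele with an amplified probability. The logistic amplification function is an assumed stand-in for the paper's minimal sperm-competition model, not its actual derivation.

```python
import numpy as np

rng = np.random.default_rng(5)

def winning_prob(s_sperm, n_sperm=1e7):
    """Probability that an A-bearing sperm from an A/B heterozygote fertilizes the egg.
    Assumed functional form: a tiny per-sperm advantage is amplified by the huge number
    of competing sperm (this is an illustrative stand-in, not the paper's formula)."""
    return 1.0 / (1.0 + np.exp(-s_sperm * np.log(n_sperm)))

def fixation_fraction(pop_size=1000, s_sperm=0.01, n_reps=200):
    """Wright-Fisher style simulation: heterozygotes transmit allele A with probability
    winning_prob(s_sperm) instead of 1/2; track how often a single new mutant fixes."""
    k = winning_prob(s_sperm)
    fixed = 0
    for _ in range(n_reps):
        p = 1.0 / (2 * pop_size)                       # single new mutant copy
        while 0.0 < p < 1.0:
            p_eff = p + 2 * p * (1 - p) * (k - 0.5)    # bias acts only via heterozygotes
            p = rng.binomial(2 * pop_size, min(max(p_eff, 0.0), 1.0)) / (2 * pop_size)
        fixed += p == 1.0
    return fixed / n_reps

print("fixation fraction with sperm competition:", fixation_fraction())
print("neutral expectation:", 1.0 / (2 * 1000))
```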
Ong, Olivia X H; Seow, Yi-Xin; Ong, Peter K C; Zhou, Weibiao
2015-09-01
Application of high intensity ultrasound has shown potential in the production of Maillard reaction odor-active flavor compounds in model systems. The impact of initial pH, sonication duration, and ultrasound intensity on the production of Maillard reaction products (MRPs) by ultrasound processing in a cysteine-xylose model system were evaluated using Response Surface Methodology (RSM) with a modified mathematical model. Generation of selected MRPs, 2-methylthiophene and tetramethyl pyrazine, was optimal at an initial pH of 6.00, accompanied with 78.1 min of processing at an ultrasound intensity of 19.8 W cm(-2). However, identification of volatiles using gas chromatography-mass spectrometry (GC/MS) revealed that ultrasound-assisted Maillard reactions generated fewer sulfur-containing volatile flavor compounds as compared to conventional heat treatment of the model system. Likely reasons for this difference in flavor profile include the expulsion of H2S due to ultrasonic degassing and inefficient transmission of ultrasonic energy. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert
2005-01-01
Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.
Relation between social information processing and intimate partner violence in dating couples.
Setchell, Sarah; Fritz, Patti Timmons; Glasgow, Jillian
2017-07-01
We used couple-level data to predict physical acts of intimate partner violence (IPV) from self-reported negative emotions and social information-processing (SIP) abilities among 100 dating couples (n = 200; mean age = 21.45 years). Participants read a series of hypothetical conflict situation vignettes and responded to questionnaires to assess negative emotions and various facets of SIP including attributions for partner behavior, generation of response alternatives, and response selection. We conducted a series of negative binomial mixed-model regressions based on the actor-partner interdependence model (APIM; Kenny, Kashy, & Cook, 2006, Dyadic data analysis. New York, NY: Guilford Press). There were significant results for the response generation and negative emotion models. Participants who generated fewer coping response alternatives were at greater risk of victimization (actor effect). Women were at greater risk of victimization if they had partners who generated fewer coping response alternatives (sex by partner interaction effect). Generation of less competent coping response alternatives predicted greater risk of perpetration among men, whereas generation of more competent coping response alternatives predicted greater risk of victimization among women (sex by actor interaction effects). Two significant actor by partner interaction effects were found for the negative emotion models. Participants who reported discrepant levels of negative emotions from their partners were at greatest risk of perpetration. Participants who reported high levels of negative emotions were at greatest risk of victimization if they had partners who reported low levels of negative emotions. This research has implications for researchers and clinicians interested in addressing the problem of IPV. Aggr. Behav. 43:329-341, 2017. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Modeling laser velocimeter signals as triply stochastic Poisson processes
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1976-01-01
Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
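A compact sketch of the signal class described above: photon counts drawn from an inhomogeneous Poisson process whose intensity is a Gaussian-envelope Doppler burst. The rates, fringe visibility and burst parameters are assumed values for illustration, not measured LDV settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def ldv_burst_photon_counts(duration=50e-6, dt=1e-8,
                            mean_rate=5e6,          # photons/s at burst peak (assumed)
                            doppler_freq=1e6,       # Hz, set by particle velocity (assumed)
                            burst_center=25e-6, burst_width=8e-6, visibility=0.9):
    """Photon counts per time bin for a single dual-scatter burst, modeled as an
    inhomogeneous Poisson process with a Gaussian-envelope, fringe-modulated intensity."""
    t = np.arange(0.0, duration, dt)
    envelope = np.exp(-0.5 * ((t - burst_center) / burst_width) ** 2)
    intensity = mean_rate * envelope * (1.0 + visibility * np.cos(2 * np.pi * doppler_freq * t))
    return t, rng.poisson(intensity * dt)

t, counts = ldv_burst_photon_counts()
print(f"total detected photons in burst: {counts.sum()}")
```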
NASA Technical Reports Server (NTRS)
Baez, Marivell; Vickerman, Mary; Choo, Yung
2000-01-01
SmaggIce (Surface Modeling And Grid Generation for Iced Airfoils) is one of NASA's aircraft icing research codes developed at the Glenn Research Center. It is a software toolkit used in the process of aerodynamic performance prediction of iced airfoils. It includes tools which complement the 2D grid-based Computational Fluid Dynamics (CFD) process: geometry probing and surface preparation for gridding, including smoothing and re-discretization of geometry. Future releases will also include support for all aspects of gridding: domain decomposition, perimeter discretization, and grid generation and modification.
Automated system for generation of soil moisture products for agricultural drought assessment
NASA Astrophysics Data System (ADS)
Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are being used globally for drought forecasting/early warning and for monitoring drought prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources such as space-based data, ground data and collateral data in short intervals of time, where there may be limitations in terms of processing power, availability of domain expertise, and specialized models and tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. The soil water balance bucket model is widely used to derive soil moisture products and is popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open-source libraries for the best possible automation, fulfilling the need for a standard procedure for preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for the generation of soil moisture products, allowing users to concentrate on further enhancements and on applying these parameters in related areas of research without re-discovering the established models. The architecture relies mainly on available open-source libraries for GIS and raster I/O operations on different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system is automated to the extent of user-free operation if required, with inbuilt chain processing for daily generation of products at specified intervals. The operational software has inbuilt capabilities to automatically download requisite input parameters such as rainfall and potential evapotranspiration (PET) from the respective servers. It can import file formats like .grd, .hdf, .img, generic binary, etc., perform geometric correction and re-project the files to the native projection system. The software takes into account the weather, crop and soil parameters to run the designed soil water balance model. The software also has additional features such as time compositing of outputs to generate weekly and fortnightly profiles for further analysis. A tool to generate "Area Favorable for Crop Sowing" from the daily soil moisture, with a highly customizable parameter interface, has also been provided. A whole-India analysis now takes a mere 20 seconds to generate soil moisture products, a task that would normally take one hour per day using commercial software.
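A minimal sketch of a daily single-bucket soil water balance of the kind mentioned above; the capacity, initial storage and ET scaling are illustrative assumptions, not the operational system's configuration.

```python
import numpy as np

def bucket_soil_moisture(rain_mm, pet_mm, capacity_mm=150.0, sm0_mm=75.0):
    """Daily single-bucket soil water balance: actual ET is PET scaled by relative
    wetness, and water above the bucket capacity becomes runoff/drainage."""
    sm = sm0_mm
    series = []
    for p, pet in zip(rain_mm, pet_mm):
        aet = pet * (sm / capacity_mm)          # actual ET limited by available moisture
        sm = sm + p - aet
        sm = min(max(sm, 0.0), capacity_mm)     # excess leaves the bucket
        series.append(sm)
    return np.array(series)

rng = np.random.default_rng(4)
rain = rng.gamma(shape=0.3, scale=10.0, size=30)   # 30 days of synthetic rainfall (mm)
pet = np.full(30, 4.0)                             # constant PET of 4 mm/day (assumed)
sm = bucket_soil_moisture(rain, pet)
print(f"soil moisture on day 30: {sm[-1]:.1f} mm of 150 mm capacity")
```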
Molina, Manuel; Mota, Manuel; Ramos, Alfonso
2015-01-01
This work deals with mathematical modeling through branching processes. We consider sexually reproducing animal populations where, in each generation, the number of progenitor couples is determined in a non-predictable environment. By using a class of two-sex branching processes, we describe their demographic dynamics and provide several probabilistic and inferential contributions. They include results about the extinction of the population and the estimation of the offspring distribution and its main moments. We also present an application to salmonid populations.
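A toy simulation in the spirit of a two-sex branching process in a random environment; the Poisson offspring law, the mating rule and the environmental couple-formation rates are illustrative assumptions, not the processes analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_two_sex_bp(n_generations=30, couples0=10,
                        mean_offspring=3.0, p_female=0.5, env_couple_rates=(0.6, 0.9)):
    """Each couple produces a Poisson number of offspring, each offspring is female with
    probability p_female, and the number of progenitor couples in the next generation is
    binomial in min(females, males) with a rate drawn from env_couple_rates (the
    'non-predictable environment')."""
    couples = couples0
    history = [couples]
    for _ in range(n_generations):
        offspring = rng.poisson(mean_offspring * couples)
        females = rng.binomial(offspring, p_female)
        males = offspring - females
        rate = rng.choice(env_couple_rates)          # random environment per generation
        couples = rng.binomial(min(females, males), rate)
        history.append(int(couples))
        if couples == 0:                             # extinction of the population
            break
    return history

print("couples per generation:", simulate_two_sex_bp())
```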
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into the neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
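A hedged scikit-learn sketch of the training idea: a stand-in forward model generates resonance-peak features from a physical-condition parameter, and a small neural network learns the inverse mapping used for monitoring. The forward model and feature choice are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

def toy_spectral_features(condition):
    """Hypothetical forward model: resonance peak frequency and amplitude as functions of
    a degradation parameter 'condition' in [0, 1] (stands in for the mathematical model)."""
    peak_freq = 100.0 - 20.0 * condition
    peak_amp = 1.0 + 0.5 * condition
    return np.array([peak_freq, peak_amp])

# Training set generated from the forward model over a sweep of input parameters
conditions = rng.uniform(0.0, 1.0, 500)
features = np.array([toy_spectral_features(c) for c in conditions])
features_noisy = features + rng.normal(0.0, 0.05, features.shape)

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(features_noisy, conditions)

# "Monitoring": infer the physical condition from features extracted from a measured spectrum
measured = toy_spectral_features(0.7) + rng.normal(0.0, 0.05, 2)
print("estimated physical condition:", float(net.predict(measured.reshape(1, -1))[0]))
```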
Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.
Hazell, J W; Jastreboff, P J
1990-02-01
A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus it must result from a generator in the auditory system which undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus and the way in which they respond to changes in the environment are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and interaction with the prefrontal cortex and limbic system is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.
Meson exchange current (MEC) models in neutrino interaction generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katori, Teppei
2015-05-15
Understanding of the so-called 2 particle-2 hole (2p-2h) effect is an urgent program in neutrino interaction physics for current and future oscillation experiments. Such processes are believed to be responsible for the event excesses observed by recent neutrino experiments. The 2p-2h effect is dominated by the meson exchange current (MEC), and is accompanied by a 2-nucleon emission from the primary vertex, instead of a single nucleon emission from the charged-current quasi-elastic (CCQE) interaction. Current and future high resolution experiments can potentially nail down this effect. For this reason, there are worldwide efforts to model and implement this process in neutrino interaction simulations. In these proceedings, I would like to describe how this channel is modeled in neutrino interaction generators.
Automated knowledge generation
NASA Technical Reports Server (NTRS)
Myler, Harley R.; Gonzalez, Avelino J.
1988-01-01
The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is in the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on object-oriented language (Flavors).
NASA Astrophysics Data System (ADS)
Han, Xuesong; Li, Haiyan; Zhao, Fu
2017-07-01
Particle-fluid based surface generation has already become one of the most important materials processing technologies for many advanced materials such as optical crystals, ceramics and so on. Most particle-fluid based surface generation technologies involve two key processes: a chemical reaction, which is responsible for surface softening, and a physical behavior, which is responsible for material removal/deformation. Presently, researchers cannot give a reasonable explanation of the complex process in particle-fluid based surface generation technology because of the small temporal-spatial scale and the concurrent influence of physical and chemical processes. The molecular dynamics (MD) method has already been proved to be a promising approach for constructing effective models of atomic-scale phenomena and can serve as a predictive simulation tool for analyzing the complex surface generation mechanism; it is employed in this research to study the essence of surface generation. The deformation and pile-up of water molecules is induced by the feeding of the abrasive particle, which reflects the change of water properties at the nanometer scale. There is little silica aggregation or material removal because the water layer greatly reduces the strength of the mechanical interaction between the particle and the material surface and minimizes the stress concentration. Furthermore, a chemical effect is also observed at the interface: stable chemical bonds are generated between water and silica, which leads to the formation of silanol, and the reaction rate changes with the amount of water molecules in the local environment. A novel ring structure is observed in the silica surface and is shown to be favorable for chemical reaction with water molecules. The siloxane bond formation process quickly strengthens across the interface with the feeding of the abrasive particle because of the compressive stress resulting from the impacting behavior.
Self-Assembly of Human Serum Albumin: A Simplex Phenomenon
Thakur, Garima; Prashanthi, Kovur; Jiang, Keren; Thundat, Thomas
2017-01-01
Spontaneous self-assemblies of biomolecules can generate geometrical patterns. Our findings provide an insight into the mechanism of self-assembled ring pattern generation by human serum albumin (HSA). The self-assembly is a process guided by kinetic and thermodynamic parameters. The generated protein ring patterns display a behavior which is geometrically related to an n-simplex model and is explained through thermodynamics and chemical kinetics. PMID:28930179
Functional identification of spike-processing neural circuits.
Lazar, Aurel A; Slutskiy, Yevgeniy B
2014-02-01
We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.
Solid waste forecasting using modified ANFIS modeling.
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; K N A, Maulud
2015-10-01
Solid waste prediction is crucial for sustainable solid waste management. Usually, accurate waste generation records are a challenge in developing countries, which complicates the modelling process. Solid waste generation is related to demographic, economic, and social factors. However, these factors are highly varied due to population and economic growth. The objective of this research is to determine the most influential demographic and economic factors that affect solid waste generation using a systematic approach, and then to develop a model to forecast solid waste generation using a modified Adaptive Neural Inference System (MANFIS). The model evaluation was performed using Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and the coefficient of determination (R²). The results show that the best input variables are the population age groups 0-14, 15-64, and above 65 years, and the best model structure is 3 triangular fuzzy membership functions and 27 fuzzy rules. The model has been validated using testing data, and the resulting training RMSE, MAE and R² were 0.2678, 0.045 and 0.99, respectively, while for the testing phase RMSE = 3.986, MAE = 0.673 and R² = 0.98. To date, only a few attempts have been made to predict annual solid waste generation in developing countries. This paper presents modeling of annual solid waste generation using modified ANFIS; it is a systematic approach to search for the most influential factors and then modify the ANFIS structure to simplify the model. The proposed method can be used to forecast waste generation in developing countries where accurate, reliable data are not always available. Moreover, annual solid waste prediction is essential for sustainable planning.
NASA Astrophysics Data System (ADS)
Li, Qiaoling; Ishidaira, Hiroshi
2012-01-01
The biosphere and hydrosphere are intrinsically coupled. The scientific question is: if there is a substantial change in one component such as vegetation cover, how will the other components such as transpiration and runoff generation respond, especially under climate change conditions? Stand-alone hydrological models have a detailed description of hydrological processes but do not sufficiently parameterize vegetation as a dynamic component. Dynamic global vegetation models (DGVMs) are able to simulate transient structural changes in major vegetation types but do not simulate runoff generation reliably. Therefore, both hydrological models and DGVMs have their limitations as well as advantages for addressing this question. In this study a biosphere hydrological model (LPJH) is developed by coupling a prominent DGVM (the Lund-Potsdam-Jena model, referred to as LPJ) with a stand-alone hydrological model (HYMOD), with the objective of analyzing the role of vegetation in the hydrological processes at basin scale and evaluating the impact of vegetation change on the hydrological processes under climate change. The application and validation of the LPJH model to four basins representing a variety of climate and vegetation conditions shows that the performance of LPJH is much better than that of the original LPJ and is similar to that of stand-alone hydrological models for monthly and daily runoff simulation at the basin scale. It is argued that the LPJH model gives a more reasonable hydrological simulation since it considers both the spatial variability of soil moisture and vegetation dynamics, which make the runoff generation mechanism more reliable. As an example, it is shown that changing atmospheric CO2 content alone would result in runoff increases in humid basins and decreases in arid basins. These changes are mainly attributable to changes in transpiration driven by vegetation dynamics, which are not simulated in stand-alone hydrological models. Therefore LPJH potentially provides a powerful tool for simulating vegetation response to climate changes in the biosphere hydrological cycle.
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
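A short NumPy sketch of the cross-section generation step: a discrete Karhunen-Loève expansion of a Gaussian process with exponential covariance, followed by an exponential (memory-less) transform to a lognormal field. The correlation length, variance and truncation criterion are assumed values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Gaussian process with exponential covariance on a 1-D grid (illustrative parameters)
n_cells, length, corr_len = 200, 10.0, 1.0
sigma_g, mean_log_xs = 0.5, np.log(1.0)          # log-space std dev and mean cross section

x = np.linspace(0.0, length, n_cells)
cov = sigma_g**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigvals, eigvecs = np.linalg.eigh(cov)            # discrete KL modes
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_modes = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1  # keep 95% of variance

def sample_cross_section():
    """One lognormal cross-section realization from the truncated KL expansion."""
    xi = rng.standard_normal(n_modes)
    gaussian = eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)
    return np.exp(mean_log_xs + gaussian)         # strictly positive field

realizations = np.array([sample_cross_section() for _ in range(1000)])
print(f"{n_modes} KL modes retained; mean xs = {realizations.mean():.3f}, "
      f"variance = {realizations.var():.3f}")
```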
Iterating between Tools to Create and Edit Visualizations.
Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah
2017-01-01
A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and the changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.
NASA Astrophysics Data System (ADS)
Aljoaba, Sharif; Dillon, Oscar; Khraisheh, Marwan; Jawahir, I. S.
2012-07-01
The ability to generate nano-sized grains is one of the advantages of friction stir processing (FSP). However, the high temperatures generated during the stirring process within the processing zone stimulate the grains to grow after recrystallization. Therefore, maintaining the small grains becomes a critical issue when using FSP. In reported work, coolants are applied to the fixture and/or processed material in order to reduce the temperature and hence grain growth. Most of the reported data in the literature concerning cooling techniques are experimental. We have seen no reports that attempt to predict these quantities when using coolants while the material is undergoing FSP. Therefore, there is a need to develop a model that predicts the resulting grain size when using coolants, which is an important step toward designing the material microstructure. In this study, two three-dimensional computational fluid dynamics (CFD) models are reported which simulate FSP with and without coolant application, using the STAR CCM+ commercial CFD software. In the model with the coolant application, the fixture (backing plate) is modeled, while it is not in the other model. User-defined subroutines were incorporated in the software and implemented to investigate the effects of changing process parameters on temperature, strain rate and material velocity fields in, and around, the processed nugget. In addition, a correlation between these parameters and the Zener-Hollomon parameter used in materials science was developed to predict the grain size distribution. Different stirring conditions were incorporated in this study to investigate their effects on material flow and microstructural modification. A comparison of the results obtained by using each of the models on the processed microstructure is also presented for the case of Mg AZ31B-O alloy. The predicted results are also compared with the available experimental data and generally show good agreement.
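A small sketch of the Zener-Hollomon correlation idea: compute Z from strain rate and temperature and map it to a recrystallized grain size with an empirical power law. The activation energy and power-law coefficients are illustrative placeholders, not the fitted constants of this study.

```python
import numpy as np

def zener_hollomon(strain_rate, temperature_K, Q=135e3, R=8.314):
    """Z = strain_rate * exp(Q / (R*T)). Q is an assumed value of the order reported
    for Mg alloys, not a constant taken from the paper."""
    return strain_rate * np.exp(Q / (R * temperature_K))

def recrystallized_grain_size(Z, a=1e4, b=-0.27):
    """Empirical power law d = a * Z**b relating grain size (um) to the Zener-Hollomon
    parameter; coefficients are illustrative placeholders."""
    return a * Z**b

for eps_rate, T in [(10.0, 650.0), (10.0, 550.0), (100.0, 650.0)]:
    Z = zener_hollomon(eps_rate, T)
    print(f"strain rate {eps_rate:5.1f} 1/s, T = {T:.0f} K -> Z = {Z:.2e}, "
          f"predicted grain size = {recrystallized_grain_size(Z):.2f} um")
```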
Integration of Point Clouds Dataset from Different Sensors
NASA Astrophysics Data System (ADS)
Abdullah, C. K. A. F. Che Ku; Baharuddin, N. Z. S.; Ariff, M. F. M.; Majid, Z.; Lau, C. L.; Yusoff, A. R.; Idris, K. M.; Aspuri, A.
2017-02-01
Laser scanner technology has become an option for data collection nowadays, comprising Airborne Laser Scanning (ALS) and Terrestrial Laser Scanning (TLS). An ALS such as the Phoenix AL3-32 can provide accurate information from the rooftop viewpoint, while a TLS such as the Leica C10 can provide complete data for the building facade. If both are integrated, however, more accurate data can be produced. The focus of this study is to integrate both types of data acquisition, ALS and TLS, and to determine the accuracy of the data obtained. The final results are used to generate three-dimensional (3D) building models. The scope of this study is the data acquisition of the UTM Eco-Home by laser scanning methods, with the ALS scanning the roof and the TLS scanning the building facade. Both devices are used to ensure that no part of the building is left unscanned. In the data integration process, both datasets are registered using selected points on man-made features that are clearly visible, in the Cyclone 7.3 software. The accuracy of the integrated data is determined by an accuracy assessment carried out using the man-made feature registration method. The integration process achieves an accuracy below 0.04 m. The integrated data are then used to generate a 3D model of the UTM Eco-Home building in the SketchUp software. In conclusion, the integration of ALS and TLS data acquisition produces accurate integrated data that can be used to generate a 3D model of the UTM Eco-Home. For visualization purposes, the generated 3D building model is prepared at Level of Detail 3 (LOD3), as recommended by the City Geography Markup Language (CityGML).
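As a small illustration of the kind of accuracy assessment described (not the study's actual workflow), the sketch below computes per-point residuals and the RMSE between corresponding man-made tie points identified in the registered ALS and TLS clouds; the coordinates are placeholders, and real points would come from the registration software.

```python
# Hedged sketch: checking registration accuracy from corresponding tie points
# picked in both clouds. Coordinates below are placeholders.
import numpy as np

als_pts = np.array([[10.02, 5.01, 3.00],
                    [20.11, 7.52, 3.05],
                    [15.48, 9.98, 6.01]])   # ALS coordinates of tie points (m)
tls_pts = np.array([[10.00, 5.00, 3.02],
                    [20.08, 7.50, 3.03],
                    [15.50, 10.00, 6.00]])  # TLS coordinates of the same points (m)

residuals = np.linalg.norm(als_pts - tls_pts, axis=1)   # 3D distance per tie point
rmse = np.sqrt(np.mean(residuals**2))
print(f"per-point residuals (m): {residuals}, RMSE (m): {rmse:.3f}")
```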
Incremental terrain processing for large digital elevation models
NASA Astrophysics Data System (ADS)
Ye, Z.
2012-12-01
Zichuan Ye, Dean Djokic, Lori Armstrong (Esri, 380 New York Street, Redlands, CA 92373, USA; e-mail: zye@esri.com, ddjokic@esri.com, larmstrong@esri.com). Efficient analysis of large digital elevation models (DEMs) requires the generation of additional DEM artifacts such as flow direction, flow accumulation and other DEM derivatives. When the DEMs to be analyzed have a very large number of grid cells (usually > 1,000,000,000), generating these derivatives is either impractical (it takes too long) or impossible (the software cannot process that many cells). Different strategies and algorithms can be put in place to alleviate this situation. This paper describes an approach in which the overall DEM is partitioned into smaller processing units that can be processed efficiently. The processed DEM derivatives for each partition can then either be mosaicked back into a single large entity or managed at the partition level. For dendritic terrain morphologies, the way in which partitions are derived and the order in which they are processed depend on the river and catchment patterns. These patterns are not available until the flow pattern of the whole region is created, which in turn cannot be established up front because of the size issues. This paper describes a procedure that solves this problem: (1) resample the original large DEM so that the total number of cells is reduced to a level for which the drainage pattern can be established; (2) run standard terrain preprocessing operations on the resampled DEM to generate the river and catchment system; (3) define the processing units and their processing order based on the river and catchment system created in step (2); (4) based on the processing order, apply the analysis (e.g., the flow accumulation operation) to each processing unit on the full-resolution DEM; (5) as each processing unit is processed in the order defined in step (3), compare the resulting drainage pattern with the drainage pattern established at the coarser scale and adjust the drainage boundaries and rivers if necessary.
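A minimal sketch of step (1), the coarse resampling that makes drainage derivation tractable, is given below; the array sizes and aggregation factor are arbitrary illustrative values, and a real workflow would operate on tiled rasters rather than in-memory arrays.

```python
# Illustrative sketch of step (1): block-average resampling of a large DEM
# so that the drainage pattern can be derived on a coarser grid first.
import numpy as np

def block_resample(dem, factor):
    """Aggregate a 2D DEM by averaging non-overlapping factor x factor blocks."""
    rows, cols = dem.shape
    rows_c, cols_c = rows // factor, cols // factor
    trimmed = dem[:rows_c * factor, :cols_c * factor]          # drop ragged edges
    return trimmed.reshape(rows_c, factor, cols_c, factor).mean(axis=(1, 3))

dem = np.random.rand(2000, 2000) * 100.0     # stand-in for a full-resolution DEM
coarse = block_resample(dem, factor=10)      # 200 x 200 grid for drainage derivation
print(dem.shape, "->", coarse.shape)
```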
Gulf Coast megaregion evacuation traffic simulation modeling and analysis.
DOT National Transportation Integrated Search
2015-12-01
This paper describes a project to develop a micro-level traffic simulation for a megaregion. To accomplish this, a mass evacuation event was modeled using a traffic demand generation process that created a spatial and temporal distribution of dep...
Structural Counterfactuals: A Brief Introduction
ERIC Educational Resources Information Center
Pearl, Judea
2013-01-01
Recent advances in causal reasoning have given rise to a computational model that emulates the process by which humans generate, evaluate, and distinguish counterfactual sentences. Contrasted with the "possible worlds" account of counterfactuals, this "structural" model enjoys the advantages of representational economy,…
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
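To illustrate what one stage of such a sensor chain can look like in code, the hedged sketch below applies a Gaussian-shaped MTF as a blur to a photoelectron image and adds Poisson shot noise, followed by a simple saturation clip; the gain, blur width and well capacity are invented for illustration and are not DIRSIG or LLL-model parameters.

```python
# Hedged sketch of one stage in a low-light-level sensor chain: Gaussian MTF
# (as a blur), Poisson shot noise, and a saturation clip. Values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
radiance = rng.uniform(0.0, 1.0, size=(256, 256))    # stand-in for a radiance field

photoelectrons = radiance * 50.0                      # assumed photons-to-electrons gain
blurred = gaussian_filter(photoelectrons, sigma=1.5)  # stage MTF approximated as Gaussian
noisy = rng.poisson(blurred).astype(float)            # shot noise at this stage

full_well = 200.0                                     # illustrative saturation level
out = np.clip(noisy, 0.0, full_well)
print(out.min(), out.max())
```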
A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.
Eikenberry, Steffen E; Marmarelis, Vasilis Z
2013-02-01
We propose a new variant of the Volterra-type model with a nonlinear auto-regressive (NAR) component that is a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever the membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.
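The sketch below illustrates the general flavour of a dual-input, data-derived model of this kind (it is not the authors' estimator): the supra-threshold past output is treated as a second input alongside the exogenous input, and first-order kernels plus a single cross term are fit by least squares to a toy system; the memory length, threshold and data are assumptions.

```python
# Minimal sketch of a dual-input discrete model: exogenous input u plus a
# second "input" v built from the past output whenever it exceeded a threshold.
import numpy as np

rng = np.random.default_rng(0)
n, mem, thresh = 2000, 20, 1.0
u = rng.standard_normal(n)                      # exogenous input (e.g., injected current)
y = np.zeros(n)
for t in range(1, n):                           # toy system standing in for the H-H output
    y[t] = 0.8 * y[t-1] + 0.3 * u[t-1] - 0.2 * max(y[t-1] - thresh, 0.0)

v = np.where(y > thresh, y, 0.0)                # autoregressive input: supra-threshold output

rows, targets = [], []
for t in range(mem, n):
    pu = u[t-mem:t][::-1]                       # past input lags
    pv = v[t-mem:t][::-1]                       # past supra-threshold output lags
    rows.append(np.concatenate(([1.0], pu, pv, [pu[0] * pv[0]])))   # one cross term
    targets.append(y[t])

X, yt = np.array(rows), np.array(targets)
coef, *_ = np.linalg.lstsq(X, yt, rcond=None)   # least-squares kernel estimates
pred = X @ coef
print("fit R^2:", 1 - np.sum((yt - pred)**2) / np.sum((yt - yt.mean())**2))
```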
Discussion on the Modelling and Processing of Signals from an Acousto-Optic Spectrum Analyzer.
1987-06-01
Defence Research Establishment Ottawa report discussing the modelling and processing of signals generated by an Acousto-Optic Spectrum Analyzer (AOSA); it also shows how this calculation can be related to pulse-modulated signals.
Modeling the filament winding process
NASA Technical Reports Server (NTRS)
Calius, E. P.; Springer, G. S.
1985-01-01
A model is presented which can be used to determine the appropriate values of the process variables for filament winding a cylinder. The model provides the cylinder temperature, viscosity, degree of cure, fiber position and fiber tension as functions of position and time during the filament winding and subsequent cure, and the residual stresses and strains within the cylinder during and after the cure. A computer code was developed to obtain quantitative results. Sample results are given which illustrate the information that can be generated with this code.
Design of structure and simulation of the three-zone gasifier of dense layer of the inverted process
NASA Astrophysics Data System (ADS)
Zagrutdinov, R. Sh; Negutorov, V. N.; Maliykhin, D. G.; Nikishanin, M. S.; Senachin, P. K.
2017-11-01
Experts of LLC "New Energy Technologies" have developed gasifier designs implementing the three-zone gasification method, which satisfy the following conditions: 1) the generated gas must be free from tar, soot and hydrocarbons, with a given CO/H2 ratio; 2) the fuel source can be a wide range of low-grade, low-value solid fuels, including biomass and various kinds of carbonaceous wastes; 3) the gasifiers must be highly reliable in operation, must not require qualified operating personnel, must be relatively inexpensive to produce, and must use steam-air blowing instead of expensive steam-oxygen blowing; 4) the line of standard sizes should be sufficiently wide (with a single-unit fuel capacity from 1 to 50-70 MW). Two models of gas generators of the inverted gasification process with three combustion zones operating under pressure have been adopted for design: 1) a gas generator with a remote combustion chamber of type GOP-VKS (two-block version) and 2) a gas generator with a common combustion chamber of type GOP-OK (single-block version), which is an almost ideal model for increasing the unit capacity. Various schemes have been worked out for the preparation of briquettes from practically the entire spectrum of low-grade fuel: high-ash and high-moisture coals, peat and biomass, including all types of waste (municipal solid waste, crop, livestock, poultry, etc.). The gas generators gasify cylindrical briquettes with a diameter of 20-25 mm and a length of 25-35 mm. A mathematical model and a computer code have been developed for numerical simulation of the synthesis-gas generation processes in a dense-layer, inverted-process gasifier under steam-air blast, including: continuity equations for the 8 gas-phase components and for the solid phase; the heat-balance equation for the entire heterogeneous system; the Darcy-law equation (for porous media); equations of state for the 8 gas-phase components; equations for the rates of 3 gas-phase and 4 heterogeneous reactions; a macrokinetic law of coke combustion; and other equations and boundary conditions.
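Among the governing relations listed, the Darcy-law closure for gas flow through the porous briquette bed has the familiar generic form (a standard statement for orientation; the paper's exact notation and coefficients are not reproduced here):

\mathbf{u} = -\frac{k}{\mu}\,\nabla p

where \mathbf{u} is the superficial gas velocity, k the bed permeability, \mu the dynamic viscosity of the gas mixture, and p the pressure.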
NASA Astrophysics Data System (ADS)
Sobolev, Stephan; Muldashev, Iskander
2016-04-01
The key achievement of the geodynamic modelling community, to which the work of Evgenii Burov and his students greatly contributed, is the application of "realistic", mineral-physics-based non-linear rheological models to simulate deformation processes in the crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon with time-scales spanning from the geological to the earthquake scale, with the seismic cycle in between. In this study we test the possibility of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) with a single cross-scale thermomechanical model that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology, and rate-and-state friction plasticity. First we generate a thermo-mechanical model of a subduction zone at the geological time-scale, including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce into the same model the classic rate-and-state friction law in the subduction channel, leading to stick-slip instability. This model generates a spontaneous earthquake sequence. In order to follow the deformation process in detail during the entire seismic cycle, and over multiple seismic cycles, we use an adaptive time-step algorithm that changes the step from 40 s during the earthquake to minutes-5 years during the postseismic and interseismic periods. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge already from hours to days after great (M>9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-4-year time range.
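The rate-and-state friction law referred to in the abstract is commonly written in the Dieterich (aging-law) form below; this is a standard statement for orientation, and the abstract does not specify which variant or parameter values the model uses:

\mu = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}

where \mu is the friction coefficient, V the slip velocity, \theta the state variable, D_c the characteristic slip distance, and a, b, \mu_0, V_0 empirical constants.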
Modeling Signal-Noise Processes Supports Student Construction of a Hierarchical Image of Sample
ERIC Educational Resources Information Center
Lehrer, Richard
2017-01-01
Grade 6 (modal age 11) students invented and revised models of the variability generated as each measured the perimeter of a table in their classroom. To construct models, students represented variability as a linear composite of true measure (signal) and multiple sources of random error. Students revised models by developing sampling…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-30
... remain subject to USML control are modeling or simulation tools that model or simulate the environments... USML revision process, the public is asked to provide specific examples of nuclear-related items whose...) Modeling or simulation tools that model or simulate the environments generated by nuclear detonations or...
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
NASTRAN analysis of Tokamak vacuum vessel using interactive graphics
NASA Technical Reports Server (NTRS)
Miller, A.; Badrian, M.
1978-01-01
Isoparametric quadrilateral and triangular elements were used to represent the vacuum vessel shell structure. For toroidally symmetric loadings, MPCs were employed across model boundaries and rigid format 24 was invoked. Nonsymmetric loadings required the use of the cyclic symmetry analysis available with rigid format 49. NASTRAN served as an important analysis tool in the Tokamak design effort by providing a reliable means for assessing structural integrity. Interactive graphics were employed in the finite element model generation and in the post-processing of results. It was felt that model generation and checkout with interactive graphics reduced the modelling effort and debugging man-hours significantly.
C code generation from Petri-net-based logic controller specification
NASA Astrophysics Data System (ADS)
Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei
2017-08-01
The article focuses on the programming of logic controllers. It is important that the program code of a logic controller executes flawlessly according to the primary specification. In the presented approach we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control-interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal transformation rules ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
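To make the idea concrete, the hedged sketch below (in Python, not the article's toolchain) shows how a small rule-based logical model might be turned into a C scan-cycle function for a microcontroller; the rules, signal names and generated function are invented for illustration.

```python
# Hedged sketch of the general idea (not the article's actual transformation):
# emitting C code for a scan cycle from a rule-based logical model, where each
# rule fires an output assignment when its guard over input bits holds.

rules = [
    {"guard": "in_start && !in_stop", "set": "out_motor = 1;"},
    {"guard": "in_stop",              "set": "out_motor = 0;"},
    {"guard": "in_level_high",        "set": "out_valve = 0;"},
]

def emit_c(rules):
    lines = [
        "#include <stdint.h>",
        "",
        "/* One scan cycle of the generated logic controller. */",
        "void logic_step(uint8_t in_start, uint8_t in_stop, uint8_t in_level_high,",
        "                uint8_t *out_motor_p, uint8_t *out_valve_p) {",
        "    uint8_t out_motor = *out_motor_p, out_valve = *out_valve_p;",
    ]
    for i, r in enumerate(rules):
        lines.append(f"    if ({r['guard']}) {{ {r['set']} }}  /* rule {i} */")
    lines += [
        "    *out_motor_p = out_motor;",
        "    *out_valve_p = out_valve;",
        "}",
    ]
    return "\n".join(lines)

print(emit_c(rules))
```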
Routes to the past: neural substrates of direct and generative autobiographical memory retrieval.
Addis, Donna Rose; Knapp, Katie; Roberts, Reece P; Schacter, Daniel L
2012-02-01
Models of autobiographical memory propose two routes to retrieval depending on cue specificity. When available cues are specific and personally-relevant, a memory can be directly accessed. However, when available cues are generic, one must engage a generative retrieval process to produce more specific cues to successfully access a relevant memory. The current study sought to characterize the neural bases of these retrieval processes. During functional magnetic resonance imaging (fMRI), participants were shown personally-relevant cues to elicit direct retrieval, or generic cues (nouns) to elicit generative retrieval. We used spatiotemporal partial least squares to characterize the spatial and temporal characteristics of the networks associated with direct and generative retrieval. Both retrieval tasks engaged regions comprising the autobiographical retrieval network, including hippocampus, and medial prefrontal and parietal cortices. However, some key neural differences emerged. Generative retrieval differentially recruited lateral prefrontal and temporal regions early on during the retrieval process, likely supporting the strategic search operations and initial recovery of generic autobiographical information. However, many regions were activated more strongly during direct versus generative retrieval, even when we time-locked the analysis to the successful recovery of events in both conditions. This result suggests that there may be fundamental differences between memories that are accessed directly and those that are recovered via the iterative search and retrieval process that characterizes generative retrieval. Copyright © 2011 Elsevier Inc. All rights reserved.
Routes to the past: Neural substrates of direct and generative autobiographical memory retrieval
Addis, Donna Rose; Knapp, Katie; Roberts, Reece P.; Schacter, Daniel L.
2011-01-01
Models of autobiographical memory propose two routes to retrieval depending on cue specificity. When available cues are specific and personally-relevant, a memory can be directly accessed. However, when available cues are generic, one must engage a generative retrieval process to produce more specific cues to successfully access a relevant memory. The current study sought to characterize the neural bases of these retrieval processes. During functional magnetic resonance imaging (fMRI), participants were shown personally-relevant cues to elicit direct retrieval, or generic cues (nouns) to elicit generative retrieval. We used spatiotemporal partial least squares to characterize the spatial and temporal characteristics of the networks associated with direct and generative retrieval. Both retrieval tasks engaged regions comprising the autobiographical retrieval network, including hippocampus, and medial prefrontal and parietal cortices. However, some key neural differences emerged. Generative retrieval differentially recruited lateral prefrontal and temporal regions early on during the retrieval process, likely supporting the strategic search operations and initial recovery of generic autobiographical information. However, many regions were activated more strongly during direct versus generative retrieval, even when we time-locked the analysis to the successful recovery of events in both conditions. This result suggests that there may be fundamental differences between memories that are accessed directly and those that are recovered via the iterative search and retrieval process that characterizes generative retrieval. PMID:22001264
40 CFR 98.463 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...
40 CFR 98.463 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... generation using Equation TT-1 of this section. ER29NO11.004 Where: GCH4 = Modeled methane generation in... = Methane correction factor (fraction). Use the default value of 1 unless there is active aeration of waste... paragraphs (a)(2)(ii)(A) and (B) of this section when historical production or processing data are available...
Bias Reduction in Quasi-Experiments with Little Selection Theory but Many Covariates
ERIC Educational Resources Information Center
Steiner, Peter M.; Cook, Thomas D.; Li, Wei; Clark, M. H.
2015-01-01
In observational studies, selection bias will be completely removed only if the selection mechanism is ignorable, namely, all confounders of treatment selection and potential outcomes are reliably measured. Ideally, well-grounded substantive theories about the selection process and outcome-generating model are used to generate the sample of…
The Cerebellum Generates Motor-to-Auditory Predictions: ERP Lesion Evidence
ERIC Educational Resources Information Center
Knolle, Franziska; Schroger, Erich; Baess, Pamela; Kotz, Sonja A.
2012-01-01
Forward predictions are crucial in motor action (e.g., catching a ball, or being tickled) but may also apply to sensory or cognitive processes (e.g., listening to distorted speech or to a foreign accent). According to the "internal forward model," the cerebellum generates predictions about somatosensory consequences of movements. These predictions…
NASA Astrophysics Data System (ADS)
Plainaki, Christina; Mura, Alessandro; Milillo, Anna; Orsini, Stefano; Livi, Stefano; Mangano, Valeria; Massetti, Stefano; Rispoli, Rosanna; De Angelis, Elisabetta
2017-06-01
The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) observations of the seasonal variability of Mercury's Ca exosphere are consistent with the general idea that the Ca atoms originate from the bombardment of the surface by particles from comet 2P/Encke. The generating mechanism is believed to be a combination of different processes including the release of atomic and molecular surface particles and the photodissociation of exospheric molecules. Considering different generation and loss mechanisms, we perform simulations with a 3-D Monte Carlo model based on the exosphere generation model by Mura et al. (2009). We present for the first time the 3-D spatial distribution of the CaO and Ca exospheres generated through the process of micrometeoroid impact vaporization, and we show that the morphology of the latter is consistent with the available MESSENGER/Mercury Atmospheric and Surface Composition Spectrometer observations. The results presented in this paper can be useful in the exosphere observations planning for BepiColombo, the upcoming European Space Agency-Japanese Aerospace Exploration Agency mission to Mercury.
Multilevel perspective on high-order harmonic generation in solids
NASA Astrophysics Data System (ADS)
Wu, Mengxi; Browne, Dana A.; Schafer, Kenneth J.; Gaarde, Mette B.
2016-12-01
We investigate high-order harmonic generation in a solid, modeled as a multilevel system dressed by a strong infrared laser field. We show that the cutoff energies and the relative strengths of the multiple plateaus that emerge in the harmonic spectrum can be understood both qualitatively and quantitatively by considering a combination of adiabatic and diabatic processes driven by the strong field. Such a model was recently used to interpret the multiple plateaus exhibited in harmonic spectra generated by solid argon and krypton [G. Ndabashimiye et al., Nature 534, 520 (2016), 10.1038/nature17660]. We also show that when the multilevel system originates from the Bloch state at the Γ point of the band structure, the laser-dressed states are equivalent to the Houston states [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986), 10.1103/PhysRevB.33.5494] and will therefore map out the band structure away from the Γ point as the laser field increases. This leads to a semiclassical three-step picture in momentum space that describes the high-order harmonic generation process in a solid.
A Tri-network Model of Human Semantic Processing
Xu, Yangwen; He, Yong; Bi, Yanchao
2017-01-01
Humans process the meaning of the world via both verbal and nonverbal modalities. It has been established that widely distributed cortical regions are involved in semantic processing, yet the global wiring pattern of this brain system has not been considered in the current neurocognitive semantic models. We review evidence from the brain-network perspective, which shows that the semantic system is topologically segregated into three brain modules. Revisiting previous region-based evidence in light of these new network findings, we postulate that these three modules support multimodal experiential representation, language-supported representation, and semantic control. A tri-network neurocognitive model of semantic processing is proposed, which generates new hypotheses regarding the network basis of different types of semantic processes. PMID:28955266
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
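For orientation, the neutral-model benchmark against which this excess would be measured is the classical expectation for the number of segregating sites S in a sample of n genes (a standard result of coalescent theory, not a formula taken from this paper):

E[S] = \theta \sum_{i=1}^{n-1} \frac{1}{i}, \qquad \theta = 4N\mu

where N is the effective population size and \mu the neutral mutation rate per locus per generation.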
NASA Astrophysics Data System (ADS)
Zbiciak, M.; Grabowik, C.; Janik, W.
2015-11-01
Nowadays the design constructional process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature, and these routine design tasks are highly susceptible to automation. Design automation is usually achieved with API tools which allow building original software that aids different engineering activities. In this paper, original software worked out in order to automate engineering tasks at the stage of product geometrical shape design is presented. The elaborated software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with application of the .NET technology and the NX SNAP library. The software functionality allows designing and modelling of spur and helical involute gears; moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn with the specialized standard tools of CAD systems. This comes from the fact that in CAD systems an involute curve is usually drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear, and the base circle, respectively. In the Generator module the involute curve is drawn through 11 involute points located on and above the base and addendum circles, and therefore the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, so that the gear wheel modelling time is reduced to several seconds. During the conducted research an analysis of the differences between the standard 3-point and the 11-point involutes was made. The results and conclusions drawn from this analysis are shown in detail.
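The difference between a 3-point and an 11-point involute can be illustrated with the standard parametric form of the involute of a circle; in the sketch below the gear dimensions (module, tooth count, pressure angle) are arbitrary example values rather than data from the paper.

```python
# Illustrative sketch of sampling an involute profile by 11 points, as the
# Generator module is described to do; gear dimensions are example values.
import numpy as np

m, z, alpha_deg = 2.0, 20, 20.0                 # module (mm), tooth count, pressure angle
r_pitch = 0.5 * m * z                           # reference (pitch) radius
r_base = r_pitch * np.cos(np.radians(alpha_deg))
r_add = r_pitch + m                             # addendum radius (standard proportions)

t_add = np.sqrt((r_add / r_base) ** 2 - 1.0)    # roll angle reaching the addendum circle
t = np.linspace(0.0, 1.05 * t_add, 11)          # 11 points from the base circle to slightly above the addendum

x = r_base * (np.cos(t) + t * np.sin(t))        # involute of a circle, parametric form
y = r_base * (np.sin(t) - t * np.cos(t))
radii = np.hypot(x, y)                          # equals r_base * sqrt(1 + t^2)
for ti, r in zip(t, radii):
    print(f"roll angle {ti:5.3f} rad -> radius {r:6.3f} mm")
```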
NASA Astrophysics Data System (ADS)
Genxu, W.
2017-12-01
There is a lack of knowledge about how to quantify runoff generation and the hydrological processes operating in permafrost-dominated catchments. To understand the mechanism of runoff generation processes in permafrost catchments, a typical headwater catchment with continuous permafrost on the Tibetan Plateau was monitored. A new approach is presented in this study to account for runoff processes during the spring thawing period and the autumn freezing period, when runoff generation clearly differs from that of non-permafrost catchments. This approach introduces a soil-temperature-based water saturation function and modifies the soil water storage curve with a soil temperature threshold. The results show that surface-soil-thaw-induced saturation-excess runoff and subsurface interflow account for approximately 66-86% and 14-34% of total spring runoff, respectively, and that soil temperature significantly affects the runoff generation pattern, the runoff composition and the runoff coefficient as the active layer deepens. The suprapermafrost groundwater discharge decreases exponentially as the active layer freezes during the autumn runoff recession, whereas the ratio of groundwater discharge to total runoff and the direct surface runoff coefficient simultaneously increase. The bidirectional freezing of the active layer controls and changes the autumn runoff processes and runoff composition. The new approach could be used to further develop hydrological models of cold regions dominated by permafrost.
SWANN: The Snow Water Artificial Neural Network Modelling System
NASA Astrophysics Data System (ADS)
Broxton, P. D.; van Leeuwen, W.; Biederman, J. A.
2017-12-01
Snowmelt from mountain forests is important for water supply and ecosystem health. Along Arizona's Mogollon Rim, snowmelt contributes to rivers and streams that provide a significant water supply for hydro-electric power generation, agriculture, and human consumption in central Arizona. In this project, we are building a snow monitoring system for the Salt River Project (SRP), which supplies water and power to millions of customers in the Phoenix metropolitan area. We are using process-based hydrological models and artificial neural networks (ANNs) to generate information about both snow water equivalent (SWE) and snow cover. The snow-cover data are generated with ANNs applied to Landsat and MODIS satellite reflectance data. The SWE data are generated using a combination of gridded SWE estimates from process-based snow models and ANNs that account for variations in topography, forest cover, and solar radiation. The models are trained and evaluated with snow data from SNOTEL stations, with aerial LiDAR and field data that we collected this past winter in northern Arizona, and with similar data from other sites in the Southwest US. These snow data are produced in near-real time, and we have built a prototype decision support tool to deliver them to SRP. This tool is designed to provide daily-to-annual operational monitoring of spatial and temporal changes in SWE and snow cover conditions over the entire Salt River Watershed (covering 17,000 km2), and features advanced web mapping capabilities and watershed analytics displayed as graphical data.
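A hedged sketch of the ANN regression component is shown below: a small multilayer perceptron mapping terrain, forest-cover and radiation predictors to SWE, trained here on synthetic data; the feature set, network size and data are assumptions for illustration and do not reflect the SWANN configuration.

```python
# Hedged sketch: a small MLP mapping terrain/forest/radiation predictors to SWE.
# Features, network size and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(1500, 3000, n),    # elevation (m)
    rng.uniform(0.0, 1.0, n),      # fractional forest cover
    rng.uniform(100, 300, n),      # mean incoming solar radiation (W m^-2)
])
# Synthetic SWE (m) loosely increasing with elevation and decreasing with radiation
y = (0.4 * (X[:, 0] - 1500) / 1500 + 0.1 * X[:, 1]
     - 0.2 * (X[:, 2] - 100) / 200 + rng.normal(0, 0.02, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```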
Singer, Burton
2018-01-01
Abduction is the process of generating and choosing models, hypotheses and data analyzed in response to surprising findings. All good empirical economists abduct. Explanations usually evolve as studies evolve. The abductive approach challenges economists to step outside the framework of received notions about the “identification problem” that rigidly separates the act of model and hypothesis creation from the act of inference from data. It asks the analyst to engage models and data in an iterative dynamic process, using multiple models and sources of data in a back and forth where both models and data are augmented as learning evolves. PMID:29430020
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
Modeling language and cognition with deep unsupervised learning: a tutorial overview
Zorzi, Marco; Testolin, Alberto; Stoianov, Ivilin P.
2013-01-01
Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition. PMID:23970869
Modeling language and cognition with deep unsupervised learning: a tutorial overview.
Zorzi, Marco; Testolin, Alberto; Stoianov, Ivilin P
2013-01-01
Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.
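As a concrete illustration of one building block of such deep generative networks, the sketch below trains a binary restricted Boltzmann machine with one-step contrastive divergence (CD-1) on toy data; layer sizes, learning rate and data are assumptions, and a deep network would stack several such layers.

```python
# Minimal sketch: a binary restricted Boltzmann machine trained with CD-1.
# All sizes, rates and the toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr, epochs = 64, 32, 0.05, 200
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

data = (rng.random((500, n_visible)) < 0.2).astype(float)   # toy binary "sensory" data

for _ in range(epochs):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_h)                 # hidden activation probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)               # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)
    # CD-1 updates: data-driven minus reconstruction-driven statistics
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(data)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

print("reconstruction error:", np.mean((data - pv1) ** 2))
```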
Regional brain activation/deactivation during word generation in schizophrenia: fMRI study.
John, John P; Halahalli, Harsha N; Vasudev, Mandapati K; Jayakumar, Peruvumba N; Jain, Sanjeev
2011-03-01
Examination of the brain regions that show aberrant activations and/or deactivations during semantic word generation could pave the way for a better understanding of the neurobiology of cognitive dysfunction in schizophrenia. The aim was to examine the pattern of functional magnetic resonance imaging blood-oxygen-level-dependent activations and deactivations during semantic word generation in schizophrenia. Functional magnetic resonance imaging was performed on 24 participants with schizophrenia and 24 matched healthy controls during an overt, paced 'semantic category word generation' condition and a baseline 'word repetition' condition that modelled all the lead-in/associated processes involved in the performance of the generation task. The brain regions activated during word generation in healthy individuals were replicated with minimal redundancies in participants with schizophrenia. The individuals with schizophrenia showed additional activations of temporo-parieto-occipital cortical regions as well as subcortical regions, despite significantly poorer behavioural performance than the healthy participants. Importantly, the extensive deactivations in other brain regions during word generation in healthy individuals could not be replicated in those with schizophrenia. More widespread activations and deficient deactivations in the poorly performing participants with schizophrenia may reflect an inability to inhibit competing cognitive processes, which in turn could constitute the core information-processing deficit underlying impaired word generation in schizophrenia.
Using explanatory crop models to develop simple tools for Advanced Life Support system studies
NASA Technical Reports Server (NTRS)
Cavazzoni, J.
2004-01-01
System-level analyses for Advanced Life Support require mathematical models for various processes, such as for biomass production and waste management, which would ideally be integrated into overall system models. Explanatory models (also referred to as mechanistic or process models) would provide the basis for a more robust system model, as these would be based on an understanding of specific processes. However, implementing such models at the system level may not always be practicable because of their complexity. For the area of biomass production, explanatory models were used to generate parameters and multivariable polynomial equations for basic models that are suitable for estimating the direction and magnitude of daily changes in canopy gas-exchange, harvest index, and production scheduling for both nominal and off-nominal growing conditions. c2004 COSPAR. Published by Elsevier Ltd. All rights reserved.